Fractal compression
Fractal compression is a lossy compression method for digital images, based on fractals. The method is best suited for textures and natural images, relying on the fact that parts of an image often resemble other parts of the same image.[citation needed] Fractal algorithms convert these parts into mathematical data called "fractal codes" which are used to recreate the encoded image.
Iterated function systems
Main article: Iterated function system
Fractal image representation may be described mathematically as an iterated function system (IFS).[1]
For binary images
We begin with the representation of a binary image, where the image may be thought of as a subset of $\mathbb{R}^2$. An IFS is a set of contraction mappings $f_1, \ldots, f_N$,

$$f_i : \mathbb{R}^2 \to \mathbb{R}^2.$$
According to these mapping functions, the IFS describes a two-dimensional set S as the fixed point of the Hutchinson operator
$$H(A) = \bigcup_{i=1}^{N} f_i(A), \quad A \subset \mathbb{R}^2.$$
That is, H is an operator mapping sets to sets, and S is the unique set satisfying H(S) = S. The idea is to construct the IFS such that this set S is the input binary image. The set S can be recovered from the IFS by fixed point iteration: for any nonempty compact initial set A0, the iteration Ak+1 = H(Ak) converges to S.
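As a concrete illustration of this fixed-point decoding, here is a minimal Python sketch that iterates the Hutchinson operator on a boolean pixel grid. It assumes a stock three-map Sierpinski-triangle IFS, a standard textbook example rather than maps derived from any particular image.

```python
import numpy as np

# Stock Sierpinski-triangle IFS: three maps f_i(x) = 0.5*x + b_i.
# (An illustrative example; the maps are not derived from any image.)
OFFSETS = [(0.0, 0.0), (0.5, 0.0), (0.25, 0.5)]

def hutchinson(A):
    """Apply H(A) = f_1(A) ∪ ... ∪ f_N(A) to a boolean image A."""
    n = A.shape[0]
    ys, xs = np.nonzero(A)              # pixel coordinates of the set A
    out = np.zeros_like(A)
    for bx, by in OFFSETS:
        out[(ys * 0.5 + by * n).astype(int),
            (xs * 0.5 + bx * n).astype(int)] = True
    return out

A = np.ones((256, 256), dtype=bool)     # any nonempty compact A0 works
for _ in range(10):                     # A_{k+1} = H(A_k) converges to S
    A = hutchinson(A)
```

Because the attractor is defined by the maps rather than by pixels, running the same iteration on a larger grid renders the same set at a higher resolution, which is the resolution independence discussed later in this article.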
The set S is self-similar because H(S) = S implies that S is a union of mapped copies of itself:
$$S = f_1(S) \cup f_2(S) \cup \cdots \cup f_N(S).$$
So we see the IFS is a fractal representation of S.
Extension to grayscale
IFS representation can be extended to a grayscale image by considering the image's graph as a subset of $\mathbb{R}^3$. For a grayscale image $u(x,y)$, consider the set $S = \{(x, y, u(x,y))\}$. Then, similarly to the binary case, $S$ is described by an IFS using a set of contraction mappings $f_1, \ldots, f_N$, but in $\mathbb{R}^3$.
Encoding
A challenging problem of ongoing research in fractal image representation is how to choose the $f_1, \ldots, f_N$ such that the fixed point of the resulting IFS approximates the input image, and how to do so efficiently.
A simple approach[1] for doing so is the following partitioned iterated function system (PIFS):
Partition the image domain into range blocks Ri of size s×s.
For each Ri, search the image to find a block Di of size 2s×2s that is very similar to Ri.
Select the mapping functions such that H(Di) = Ri for each i.
In the second step, it is important to find a similar block so that the IFS accurately represents the input image, so a sufficient number of candidate blocks for Di must be considered. On the other hand, a large search considering many blocks is computationally costly. This bottleneck of searching for similar blocks is why PIFS fractal encoding is much slower than, for example, DCT- and wavelet-based image representations.
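As a concrete illustration of these three steps, here is a minimal Python sketch of a Jacquin-style encoder, assuming the simplest setup: 2×2-averaged domain blocks, an affine intensity map r ≈ a·d + b fitted by least squares, no block isometries, and image dimensions divisible by the block size. It is meant to show where the cost goes, not to be a production encoder.

```python
import numpy as np

def encode_pifs(img, s=8):
    """Brute-force PIFS encoding sketch: for each s x s range block, find
    the 2s x 2s domain block (downsampled to s x s) that best matches it
    under an affine intensity map r ~ a*d + b. Assumes the image
    dimensions are multiples of s."""
    h, w = img.shape
    # Downsample each candidate domain block once (coarse grid, step s).
    domains = []
    for y in range(0, h - 2 * s + 1, s):
        for x in range(0, w - 2 * s + 1, s):
            d = img[y:y + 2 * s, x:x + 2 * s].astype(float)
            d = d.reshape(s, 2, s, 2).mean(axis=(1, 3))   # 2x2 averaging
            domains.append((y, x, d))
    codes = []
    for ry in range(0, h, s):
        for rx in range(0, w, s):
            r = img[ry:ry + s, rx:rx + s].astype(float)
            best = None
            for dy, dx, d in domains:                 # the costly search
                a, b = np.polyfit(d.ravel(), r.ravel(), 1)
                a = np.clip(a, -1.0, 1.0)             # keep it contractive
                err = np.sum((a * d + b - r) ** 2)
                if best is None or err < best[0]:
                    best = (err, dy, dx, a, b)
            codes.append((ry, rx) + best[1:])
    return codes
```

Decoding iterates the stored block maps from an arbitrary starting image until convergence, exactly as in the binary case above. The triple loop over range and domain blocks is the search bottleneck described in the previous paragraph.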
The initial square partitioning and brute-force search algorithm presented by Jacquin provide a starting point for further research and extensions in many possible directions: different ways of partitioning the image into range blocks of various sizes and shapes; fast techniques for finding a close-enough matching domain block for each range block rather than brute-force searching, such as fast motion estimation algorithms; and different ways of encoding the mapping from the domain block to the range block.[2]
Other researchers have attempted to find algorithms that automatically encode an arbitrary image as an RIFS (recurrent iterated function system) or a global IFS, rather than a PIFS, as well as algorithms for fractal video compression, including motion compensation and three-dimensional iterated function systems.[3][4]
Fractal image compression has many similarities to vector quantization image compression.[5]
Features
With fractal compression, encoding is extremely computationally expensive because of the search used to find the self-similarities. Decoding, however, is quite fast. While this asymmetry has so far made fractal compression impractical for real-time applications, it becomes more competitive when video is archived for distribution from disk storage or file downloads.[6][7]
At common compression ratios, up to about 50:1, fractal compression provides results similar to those of DCT-based algorithms such as JPEG.[8] At high compression ratios, fractal compression may offer superior quality. For satellite imagery, ratios of over 170:1[9] have been achieved with acceptable results. Fractal video compression ratios of 25:1–244:1 have been achieved in reasonable compression times (2.4 to 66 s/frame).[10]
Compression efficiency increases with image complexity and color depth; complex color images compress relatively better than simple grayscale images.
Resolution independence and fractal scaling
An inherent feature of fractal compression is that images become resolution independent[11] after being converted to fractal code. This is because the iterated function systems in the compressed file scale indefinitely. This indefinite scaling property of a fractal is known as "fractal scaling".
Fractal interpolation
The resolution independence of a fractal-encoded image can be used to increase the display resolution of an image. This process is also known as "fractal interpolation". In fractal interpolation, an image is encoded into fractal codes via fractal compression and subsequently decompressed at a higher resolution. The result is an up-sampled image in which iterated function systems have been used as the interpolant.[12] Fractal interpolation maintains geometric detail very well compared to traditional interpolation methods such as bilinear and bicubic interpolation.[13][14][15] However, since interpolation cannot reverse Shannon entropy, it ends up sharpening the image by adding random rather than meaningful detail. One cannot, for example, enlarge an image of a crowd where each person's face is one or two pixels and hope to identify them.
History
Michael Barnsley led the development of fractal compression in 1987 and was granted several patents on the technology.[16] The most widely known practical fractal compression algorithm was invented by Barnsley and Alan Sloan. Barnsley's graduate student Arnaud Jacquin implemented the first automatic algorithm in software in 1992.[17][18] All methods are based on the fractal transform using iterated function systems. Michael Barnsley and Alan Sloan formed Iterated Systems Inc.[19] in 1987, which was granted over 20 additional patents related to fractal compression.
A major breakthrough for Iterated Systems Inc. was the automatic fractal transform process which eliminated the need for human intervention during compression as was the case in early experimentation with fractal compression technology. In 1992, Iterated Systems Inc. received a US$2.1 million government grant[20] to develop a prototype digital image storage and decompression chip using fractal transform image compression technology.
Fractal image compression has been used in a number of commercial applications: onOne Software developed, under license from Iterated Systems Inc., Genuine Fractals 5,[21] a Photoshop plugin capable of saving files in the compressed FIF (Fractal Image Format). To date, the most successful use of still fractal image compression is by Microsoft in its Encarta multimedia encyclopedia,[22] also under license.
Iterated Systems Inc. supplied a shareware encoder (Fractal Imager), a stand-alone decoder, a Netscape plug-in decoder, and a development package for use under Windows. As wavelet-based methods of image compression improved and were more easily licensed by commercial software vendors, adoption of the Fractal Image Format stalled.[citation needed] Redistribution of the "decompressor DLL" provided by the ColorBox III SDK was governed by restrictive per-disk or year-by-year licensing regimes for proprietary software vendors, and by a discretionary scheme that entailed promotion of the Iterated Systems products for certain classes of other users.[23]
During the 1990s Iterated Systems Inc. and its partners expended considerable resources to bring fractal compression to video. While compression results were promising, computer hardware of that time lacked the processing power for fractal video compression to be practical beyond a few select usages. Up to 15 hours were required to compress a single minute of video.
ClearVideo – also known as RealVideo (Fractal) – and SoftVideo were early fractal video compression products. ClearFusion was Iterated's freely distributed streaming video plugin for web browsers. In 1994, SoftVideo was licensed to Spectrum Holobyte for use in its CD-ROM games, including Falcon Gold and Star Trek: The Next Generation "A Final Unity".[24]
In 1996, Iterated Systems Inc. announced[25] an alliance with the Mitsubishi Corporation to market ClearVideo to their Japanese customers. The original ClearVideo 1.2 decoder driver is still supported[26] by Microsoft in Windows Media Player although the encoder is no longer supported.
Two firms, Total Multimedia Inc. and Dimension, both claim to own, or to have an exclusive licence to, Iterated's video technology, but neither has yet released a working product. The technology basis appears to be Dimension's U.S. patents 8639053 and 8351509, which have been analyzed in detail.[27] In summary, it is a simple quadtree block-copying system with neither the bandwidth efficiency nor the PSNR quality of traditional DCT-based codecs. In January 2016, TMMI announced that it was abandoning fractal-based technology altogether.
Numerous research papers have been published during the past few years discussing possible solutions to improve fractal algorithms and encoding hardware.[28][29][30][31][32][33][34][35][36]
Implementations
A library called Fiasco was created by Ullrich Hafner. In 2001, Fiasco was covered in the Linux Journal.[37] According to the 2000-04 Fiasco manual, Fiasco can be used for video compression.[38] The Netpbm library includes the Fiasco library.[39][40]
Femtosoft developed an implementation of fractal image compression in Object Pascal and Java.[41]
See also
Iterated function system
Wavelet
References
^ a b Fischer, Yuval (1992-08-12). Przemyslaw Prusinkiewicz (ed.). SIGGRAPH'92 course notes - Fractal Image Compression (PDF). SIGGRAPH. Fractals - From Folk Art to Hyperreality. ACM SIGGRAPH.
^ Dietmar Saupe, Raouf Hamzaoui. "A Review of the Fractal Image Compression Literature". 1994. doi: 10.1145/193234.193246
^ Bruno Lacroix. "Fractal Image Compression". 1998.
^ Yuval Fisher. "Fractal Image Compression: Theory and Application". 2012. p. 300
^ Henry Xiao. "Fractal Compression". 2004.
^ John R. Jensen, "Remote Sensing Textbooks", Image Compression Alternatives and Media Storage Considerations (reference to compression/decompression time), University of South Carolina, archived from the original on 2008-03-03
^ Steve Heath (23 August 1999). Multimedia and communications technology. Focal Press. pp. 120–123. ISBN 978-0-240-51529-8. Focal Press link
^ Sayood, Khalid (2006). Introduction to Data Compression, Third Edition. Morgan Kaufmann Publishers. pp. 560–569. ISBN 978-0-12-620862-7.
^ Wee Meng Woon; Anthony Tung Shuen Ho; Tao Yu; Siu Chung Tam; Siong Chai Tan; Lian Teck Yap (2000), "IGARSS 2000. IEEE 2000 International Geoscience and Remote Sensing Symposium. Taking the Pulse of the Planet: The Role of Remote Sensing in Managing the Environment. Proceedings (Cat. No.00CH37120)", Geoscience and Remote Sensing Symposium paper, IGARSS 2000, 2, pp. 609–611, doi:10.1109/IGARSS.2000.861646, ISBN 978-0-7803-6359-5, Achieving high data compression of self-similar satellite images using fractal
^ "Fractal encoding of video sequences". inist.fr. Retrieved 18 April 2018.
^ Walking, Talking Web Archived 2008-01-06 at the Wayback Machine Byte Magazine article on fractal compression/resolution independence
^ Interpolation decoding method with variable parameters for fractal image compression College of Mathematics and Physics, Chongqing University, China
^ Smooth fractal interpolation Departamento de Matemáticas, Universidad de Zaragoza, Campus Plaza de San Francisco, Zaragoza, Spain
^ A Note on Expansion Technique for Self-Affine Fractal Objects Using Extended Fractal Interpolation Functions Archived 2011-01-01 at the Wayback Machine Hokkaido Univ., Graduate School of Engineering, JPN
^ Studies on Scaling Factor for Fractal Image Coding Archived 2008-01-27 at the Wayback Machine Nagasaki University, Faculty of Engineering
^ U.S. Patent 4,941,193 – Barnsley and Sloan's first iterated function system patent, filed in October 1987
^ Using Fractal Coding to Index Image Content for a Digital Library Tech report
^ Arnaud E. Jacquin. Image Coding Based on a Fractal Theory of Iterated Contractive Image Transformations. IEEE Transactions on Image Processing, 1(1), 1992.
^ Iterated Systems Inc. changed its name to MediaBin Inc. in 2001 and was in turn bought out by Interwoven, Inc. in 2003.
^ NIST SP950-3, "Capturing and Integrating Patient Healthcare Information to Improve Accessibility"; see page 36, "MediaBin Fractal-Based Technology to Compress Digital Image Files" Archived 2015-09-23 at the Wayback Machine
^ Genuine Fractals Product Review
^ "MAW 1998: Theme Essay". www.mathaware.org. Retrieved 18 April 2018.
^ Aitken, William (May 1994). "The big squeeze". Personal Computer World.
^ 1994 Manual specifying on page 11 SoftVideo under license to Spectrum Holobyte
^ Business Library (8 July 2012). "Mitsubishi Corporation Inks Agreement With Iterated Systems". findarticles.com. Archived from the original on 8 July 2012. Retrieved 18 April 2018.
^ Microsoft ClearVideo support
^ "April - 2014 - Due Diligence Study of Fractal Video Technology". paulschlessinger.wordpress.com. Retrieved 18 April 2018.
^ Kominek, John (1 July 1997). "Advances in fractal compression for multimedia applications". Multimedia Systems. 5 (4): 255–270. CiteSeerX 10.1.1.47.3709. doi:10.1007/s005300050059. Retrieved 18 April 2018 – via dl.acm.org.
^ "Refdoc". cat.inist.fr. Retrieved 18 April 2018.
^ Rajkumar, Wathap Sapankumar; Kulkarni, M.V.; Dhore, M.L.; Mali, S.N. (2006). "Fractal image compression performance synthesis through HV partitioning". Fractal image compression performance synthesis through HV partitioning - IEEE Conference Publication. pp. 636–637. doi:10.1109/ADCOM.2006.4289976. ISBN 978-1-4244-0715-6.
^ Simple and Fast Fractal Image Compression Circuits, Signals, and Systems - 2003
^ Schema genetic algorithm for fractal image compression Department of Electrical Engineering, National Sun Yat-Sen University, Kaohsiung, Taiwan
^ A fast fractal image encoding method based on intelligent search of standard deviation Department of Electrical and Computer Engineering, The University of Alabama
^ Novel fractal image-encoding algorithm based on a full-binary-tree searchless iterated function system[permanent dead link] Department of Electrical and Computer Engineering, The University of Alabama
^ Fast classification method for fractal image compression Proc. SPIE Vol. 4122, p. 190-193, Mathematics and Applications of Data/Image Coding, Compression, and Encryption III, Mark S. Schmalz; Ed
^ Toward Real Time Fractal Image Compression Using Graphics Hardware Dipartimento di Informatica e Applicazioni, Università degli Studi di Salerno
^ Hafner, Ullrich (2001). "FIASCO - An Open-Source Fractal Image and Sequence Codec". Linux Journal (81). Retrieved February 19, 2013.
^ "Manpage of fiasco". castor.am.gdynia.pl. Archived from the original on 9 March 2012. Retrieved 18 April 2018.
^ "Pnmtofiasco User Manual". netpbm.sourceforge.net. Retrieved 18 April 2018.
^ "Fiascotopnm User Manual". netpbm.sourceforge.net. Retrieved 18 April 2018.
External links
Pulcini and Verrando's Compressor
Keith Howell's 1993 M.Sc. dissertation Fractal Image Compression for Spaceborne Transputers
My Main Squeeze: Fractal Compression, Nov 1993, Wired.
Fractal Basics description at FileFormat.Info
Superfractals website devoted to fractals by the inventor of fractal compression
19F MRI Probes with Tunable Chemical Switches
Kazuya Kikuchi
Tatsuya Nakamura
Activatable 19F MRI small-molecule probes have been developed to detect calcium ions, pH changes, enzyme activity, etc. However, small-molecule-based probes are not applicable in vivo owing to their low sensitivity. Although PFC-encapsulated nanoparticles are highly sensitive, activatable PFC-encapsulated nanoparticles (switching OFF/ON-type probes) have not been reported. Thus, activatable PFC nanoparticles are highly desirable in order to realize various applications.
This chapter describes the development of switching OFF/ON-type nanoparticle probes for detecting the biological environment and biological functions. To develop activatable 19F MRI probes, the author used FLAME as a highly sensitive contrast agent and the PRE effect to modulate 19F NMR/MRI signals. The PRE effect of Gd3+ complexes was efficient in decreasing the 19F NMR/MRI signals of fluorine compounds in FLAME. Based on this finding, the author attempted to develop an activatable 19F MRI probe (switching OFF/ON-type probe) for the detection of reducing environments.
The online version of this chapter ( https://doi.org/10.1007/978-981-13-7908-6_7) contains supplementary material, which is available to authorized users.
7.1 Magnetic Resonance Imaging
MRI is an imaging technique based on the nuclear magnetic resonance (NMR) phenomenon. MRI offers high resolution, deep-tissue imaging, and no radiation exposure (Louie et al. 2000). To acquire high-contrast images, contrast agents such as Gd3+ complexes and superparamagnetic iron oxide nanoparticles (SPIOs) are widely used in clinical and research settings (Fig. 7.1) (Lee et al. 2008). Gd3+ complexes shorten the longitudinal relaxation time (T1), resulting in enhancement of MRI signals. SPIOs shorten the transverse relaxation time (T2), resulting in attenuation of MRI signal intensities. Figure 7.2 shows switching OFF/ON-type probes based on Gd3+ complexes and SPIOs (Perez et al. 2002). However, 1H MRI often suffers from high background signals derived from water, lipids, etc. This places a limit on the monitoring of biological signals.
Fig. 7.1
(a) Clinically utilized T1 contrast agent, Dotarem®, and T1 relaxation. (b) Clinically utilized T2 contrast agent, Resovist, and T2 relaxation
Fig. 7.2
The switching OFF/ON-type probes based on (a) Gd3+ complexes and (b) SPIOs
Recently, heteronuclear MRI has attracted considerable attention as an alternative to 1H MRI. Several non-proton MRI nuclei, such as 13C, 15N, 19F, 29Si, 31P, and 129Xe, have been utilized in biological analyses (Table 7.1) (Cassidy et al. 2013). Among these, 19F MRI has attracted particular attention because fluorine has 100% natural abundance and a high gyromagnetic ratio (Ahrens et al. 2005). In our bodies, there are large amounts of fluorine atoms in bones and teeth and almost none in soft tissues. However, these fluorine atoms are immobilized in a solid state and exhibit very short T2, which renders them invisible to MRI. Therefore, 19F MRI can acquire images without background signals.
Table 7.1
NMR-observable nuclei and their sensitivities (columns: resonant frequency (MHz·T−1), relative sensitivity, natural abundance (%), and NMR sensitivity; the data rows are omitted here)
Toward this end, 19F MRI contrast agents (always-ON-type probes) have been utilized for the visualization of disease foci and for cell tracking (Ahrens et al. 2005; Thurecht et al. 2010; Srinivas et al. 2007). In particular, perfluorocarbon (PFC)-encapsulated nanoemulsions have attracted significant attention as highly sensitive 19F MRI contrast agents (Srinivas et al. 2010) and have been utilized for cell tracking and oxygen delivery. Recently, several activatable 19F MRI probes (switching OFF/ON-type probes) have also been developed. However, there are only a few examples of in vivo applications owing to the low sensitivity of such probes.
7.2 Perfluorocarbon Encapsulated in Silica Nanoparticle (FLAME)
In the author's research group, novel nanomaterials with a unique architecture, perfluoro-15-crown-5 ether (PFCE)-encapsulated silica nanoparticles called FLAMEs (FLuorine Accumulated silica nanoparticles for MRI contrast Enhancement), were developed (Fig. 7.3) (Matsushita et al. 2014). FLAMEs are composed of liquid PFCE, which shows high molecular mobility and thus achieves a long T2, and a silica shell, which can easily be surface-modified for various functionalizations. Although Ahrens et al. reported lipid-based PFCE nanoemulsions as 19F MRI contrast agents for immune cell tracking (Ahrens et al. 2005; Srinivas et al. 2007), chemical modification of the lipid emulsion surface is limited owing to its instability in organic solvents. In contrast, the silica shell fulfills many demands, such as high hydrophilicity, high stability in both aqueous and organic solutions, and a chemically modifiable surface. In fact, various surface functionalizations of FLAMEs were achieved, and the functionalized FLAMEs were useful for monitoring reporter protein expression in living cells and for in vivo detection of tumors. These biological applications represent only a fraction of the forthcoming applications.
Fig. 7.3
Illustration and transmission electron microscope image of FLAME. The molecular motion of the PFC is highly retained, and thus the sensitivity of the nanoparticles is high
7.3 Paramagnetic Relaxation Enhancement (PRE) Effect
There are three types of paramagnetic effects: the paramagnetic relaxation enhancement (PRE) effect, pseudocontact shifts (PCSs), and residual dipolar couplings (RDCs) (Clore and Iwahara 2009). Since PCSs and RDCs are observed only in anisotropic electron systems, only the PRE is effective in the case of SPIOs and Gd3+ complexes (Keizer et al. 2007). The PRE decreases the spin-spin relaxation time (T2), resulting in broadening of the NMR signals and a decrease in the MRI signals. There are two relaxation mechanisms for the PRE effect: PRE through dipole-dipole interactions and PRE through Curie-spin relaxation. The PRE effect of Gd3+ complexes occurs through dipole-dipole interactions. The transverse (Γ2) PRE rate for Gd3+ is described by the Solomon-Bloembergen (SB) equations (Solomon 1955; Bloembergen and Morgan 1961; Lipari and Szabo 1982):
$$ {\varGamma}_2=\frac{1}{15}{\left(\frac{\mu_0}{4\pi}\right)}^2{\gamma}_{\mathrm{I}}^2{g}^2{\mu}_{\mathrm{B}}^2S\left(S+1\right)\left\{4{J}_{\mathrm{SB}}(0)+3{J}_{\mathrm{SB}}\left({\omega}_{\mathrm{I}}\right)\right\} $$
where μ0 is the permeability of free space, μB is the magnetic moment of the free electron, γI is the fluorine gyromagnetic ratio, g is the electron g-factor, S is the electron spin quantum number, and ωI/2π is the Larmor frequency of the fluorine compound. JSB(ω) is the spectral density function:
$$ {J}_{\mathrm{SB}}\left(\omega \right)={r}^{-6}\frac{\tau_C}{1+{\left(\omega {\tau}_C\right)}^2} $$
τC is the correlation time, defined as $(\tau_r^{-1} + \tau_s^{-1})^{-1}$; τr is the rotational correlation time of the molecule, and τs is the effective electron relaxation time.
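As a numerical illustration, the SB expression can be evaluated directly. The parameter values in the Python sketch below (field strength, Gd3+ to 19F distance, and correlation times) are hypothetical choices for demonstration only and are not taken from this chapter.

```python
import numpy as np
from scipy import constants as c

# Hypothetical, illustrative parameters (not values from this chapter).
gamma_F = 2.518e8            # 19F gyromagnetic ratio (rad s^-1 T^-1)
g, S = 2.0, 3.5              # electron g-factor; Gd3+ has S = 7/2
B0 = 9.4                     # magnetic field (T)
r = 2.2e-9                   # Gd3+ to 19F distance (m), i.e. 22 A
tau_r, tau_s = 1e-9, 1e-8    # rotational / electron relaxation times (s)

mu_B = c.physical_constants['Bohr magneton'][0]
omega_I = gamma_F * B0                     # 19F Larmor frequency (rad/s)
tau_c = 1.0 / (1.0 / tau_r + 1.0 / tau_s)  # correlation time

def J_SB(omega):
    """Spectral density J_SB(omega) = r^-6 * tau_c / (1 + (omega*tau_c)^2)."""
    return r ** -6 * tau_c / (1.0 + (omega * tau_c) ** 2)

Gamma2 = ((1.0 / 15.0) * (c.mu_0 / (4 * np.pi)) ** 2
          * gamma_F ** 2 * g ** 2 * mu_B ** 2 * S * (S + 1)
          * (4 * J_SB(0.0) + 3 * J_SB(omega_I)))
print(f"Gamma_2 = {Gamma2:.3e} s^-1")      # PRE contribution to 1/T2
```

The r**-6 factor inside J_SB makes explicit why the effect falls off so steeply with distance.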
In contrast, Curie-spin relaxation arises from the dipole-dipole interaction between an observable nuclide and the magnetization of the electron. The PRE effect of SPIOs is governed by Curie-spin relaxation owing to their high magnetic susceptibility. The Γ2 PRE rates for Curie-spin relaxation are given by (Bertini et al. 2002):
$$ {\varGamma}_2=\frac{1}{5}{\left(\frac{\mu_0}{4\pi}\right)}^2\frac{\omega_I{g}^4{\mu}_{\mathrm{B}}^4{S}^2{\left(S+1\right)}^2}{{\left(3{k}_BT\right)}^2{r}^6}\left(4{\tau}_r+\frac{3{\tau}_r}{1+{\omega_I}^2{\tau_r}^2}\right) $$
where kB is the Boltzmann constant and T is the temperature.
In both cases, the PRE effect is effective over short distances owing to its r−6 dependency, where r is the distance between the NMR-observable nucleus and the paramagnetic center. When the T2 relaxivity of SPIOs is compared with that of Gd3+ complexes, SPIOs show the higher T2 relaxivity (Table 7.2). Thus, SPIOs are more efficient than Gd3+ complexes at decreasing the 19F NMR/MRI signals of PFCE near the FLAME core.
Table 7.2
Relaxivities (mM−1 s−1) of paramagnetic contrast agents in H2O at 37 °C (Rohrer et al. 2005). The agents compared are the Gd3+ complexes Magnevist, Gadovist, ProHance, MultiHance, Dotarem, Teslascan, and Optimark, and the SPIOs Resovist and Feridex; the r1, r2, and r2/r1 values are omitted here
7.4 Gadolinium-Based 19F MRI Nanoprobe for Monitoring Reducing Environments
The PRE effect is effective over short distances owing to its r−6 dependency, where r is the distance between the NMR-observable nucleus and the paramagnetic center (Clore and Iwahara 2009; Iwahara and Clore 2006). The author's research group previously employed the PRE effect to develop activatable 19F MRI small-molecule probes for the detection of enzyme activity (Mizukami et al. 2008). The probes consist of a fluorine compound, an enzyme substrate, and a Gd3+ complex; the Gd3+ complex was conjugated to the fluorine compound through the enzyme substrate. The distance between the fluorine compound and the Gd3+ complex was approximately 2.2 nm, as determined by a molecular mechanics method. Since the PRE effect is effective at such a close distance, the 19F NMR/MRI signals of the probes were suppressed. Upon addition of the enzyme, the Gd3+ complexes were released from the fluorine compounds, resulting in large 19F NMR/MRI signal enhancements.
In the case of FLAME, most of the PFCE molecules are more than 50 Å away from the surface-modified Gd3+ complexes owing to the thickness of the silica shell. Thus, it was assumed that the PRE effect might not sufficiently attenuate the 19F NMR/MRI signals of FLAME.
The authors first confirmed whether the PRE from Gd3+ complexes on the FLAME surface was effective. Different concentrations of Gd3+ diethylenetriaminepentaacetate (DTPA) complexes were attached to FLAME to yield FLAME-DTPA-Gd1–2 (Scheme 7.1). The 19F NMR spectrum of FLAME-DTPA without Gd3+ exhibited a sharp single peak (T2 = 420 ms), whereas that of FLAME-DTPA-Gd broadened as the Gd3+ concentration increased (Fig. 7.4a). The T2 of the FLAME-DTPA-Gds decreased in a Gd3+ concentration-dependent manner (T2 = 68 and 40 ms for FLAME-DTPA-Gd1 and 2, respectively). Although the 19F MRI signal of FLAME-DTPA was observed owing to its long T2, that of FLAME-DTPA-Gd decreased with increasing Gd3+ concentration (Fig. 7.4b). These results indicated that the 19F NMR/MRI signals of PFCE in FLAME were affected by the PRE from the surface-modified Gd3+ complexes. Therefore, the author expected that activatable 19F MRI probes with high 19F MRI signal enhancement could be achieved by introducing a cleavable linker between FLAME and the surface-modified Gd3+ complexes.
Scheme 7.1
Preparation of FLAME-DTPA-Gd. (a) diethylenetriaminepentaacetic acid dianhydride, TEA, DMF; (b) GdCl3·6H2O, methanol
Fig. 7.4
19F NMR spectra and 19F MRI phantom images of FLAME-DTPA and FLAME-DTPA-Gd. For 19F NMR, CPFCE = 0.6 mM, and the accumulation time was 1 min 22 s. For 19F MRI (Rapid Acquisition with Refocused Echoes (RARE) method): TR was 3000 ms, TE,eff was 12 ms, the NEX was 64, and the acquisition time was 12 min 48 s
This result can be explained by molecular mobility on the NMR/MRI measurement time scale. Iwahara et al. reported that the PRE effect was efficient in spite of a long average distance when the NMR-observable nuclei could occasionally enter the effective range of the PRE effect (Lee et al. 2008). The long T2 indicates that the PFCE in FLAME maintains high molecular mobility even within the nanoparticle structure (Matsushita et al. 2014). Although the PFCE at the center of the FLAME core is about 250 Å away from the surface Gd3+ complexes (where the PRE is not efficient), the fluorine compounds can access the inner shell of FLAME on the measurement time scale. Near the inner shell, although the contribution of any single Gd3+ complex to the PRE effect is small, the PRE effects from multiple surface Gd3+ complexes combine, and thus the T2 of PFCE is efficiently decreased (Fig. 7.5). Although Grüll et al. observed the PRE of PFCE in Gd3+-modified nanoemulsions, where the distance between the Gd3+ complexes and the fluorine core was less than 22 Å (De Vries et al. 2014), we confirmed for the first time that the PRE is effective at such a long distance.
Fig. 7.5
Proposed relaxation mechanism of fluorine compounds in FLAME
Next, the authors designed activatable FLAMEs, FLAME-SS-Gd3+ (FSG), to image reducing environments. Gd3+ complexes were attached to the FLAME surface via disulfide linkers to reduce the T2 of the fluorine compounds by the PRE effect, which attenuates the 19F NMR/MRI signals (Fig. 7.6). When the disulfide of FSG was reduced, the Gd3+ complexes were cleaved from the FLAME surface. Then, the T2 of the encapsulated PFCE would be elongated and the 19F NMR/MRI signal intensity would increase.
Fig. 7.6
Design of the activatable FLAME, FLAME-SS-Gd3+ (FSG)
To optimize the amount of Gd3+ complexes on the surface of FLAMEs, three types of FSGs with different concentrations of Gd3+ were prepared (Scheme 7.2). The synthetic intermediate FLAME-Py was prepared by the reaction of FLAME with different amounts of 2-((3-(trimethoxysilyl)propyl)dithio)pyridine (1 eq. for FSG1, 10 eq. for FSG2, and 100 eq. for FSG3). Then, 1 eq., 10 eq., or 100 eq. of Gd3+ complexes were conjugated to the FLAMEs via a thiol-disulfide exchange reaction to afford FSG1–3, respectively.
Scheme 7.2
Preparation of FLAME-SS-Gd3+ (FSG). (a) 2-((3-(trimethoxysilyl)propyl)dithio)pyridine, isopropanol; (b) Gd-DOTA-SH, MeOH
Next, the numbers of fluorine atoms and Gd3+ ions per nanoparticle were calculated as n19F and nGd, respectively (Table 7.3). The quantity of attached Gd3+ ions was measured by inductively coupled plasma atomic emission spectrometry (ICP-AES), and the amount of fluorine atoms was quantified by 19F NMR against an internal standard, sodium trifluoroacetate. The average diameter of FLAME was 53.4 nm with a 5-nm-thick silica shell, as measured by transmission electron microscopy. Assuming FLAME has a uniform diameter of 53.4 nm, the moles of PFCE per nanoparticle (mPFCE) can be calculated as follows:
$$ {m}_{\mathrm{PFCE}}=\frac{w_{\mathrm{PFCE}}}{M{W}_{\mathrm{PFCE}}}=\frac{d_{\mathrm{PFCE}}\times {V}_{\mathrm{core}}}{M{W}_{\mathrm{PFCE}}}=\frac{d_{\mathrm{PFCE}}\times \frac{4}{3}\pi {r}_{\mathrm{core}}^3}{M{W}_{\mathrm{PFCE}}}\approx 1.4\times {10}^{-19}\left(\mathrm{mol}/\mathrm{particle}\right) $$
where wPFCE is the weight of PFCE in FLAME, MWPFCE is the molecular weight of PFCE, dPFCE is the density of PFCE (1.86 g/cm3), Vcore is the volume of PFCE in FLAME, and rcore is the radius of the FLAME core (21.7 nm). Thus, the number of fluorine atoms per one nanoparticle (n19F) was calculated as:
$$ {n}_{{}{}^{19}\mathrm{F}}={m}_{\mathrm{PFCE}}\times 20\times {N}_{\mathrm{A}}\approx 1.7\times {10}^6\left({}{}^{19}\mathrm{F}\;\mathrm{atom}/\mathrm{particle}\right) $$
where NA is Avogadro's constant. Since the amount of the Gd3+ ions was measured by ICP-AES, the molar ratio of the Gd3+ ions to PFCE for FSG1, FSG2, and FSG3 was calculated to be 0.011, 0.026, and 0.038, respectively. Therefore, the number of Gd3+ ions per nanoparticle (nGd) was calculated as:
$$ {\displaystyle \begin{array}{l}\mathrm{FSG}1:{m}_{{\mathrm{Gd}}^{3+}}/{m}_{\mathrm{PFCE}}=0.011\\ {}{n}_{Gd}={m}_{{\mathrm{Gd}}^{3+}}\times {N}_{\mathrm{A}}=0.011\times {m}_{\mathrm{PFCE}}\times {N}_{\mathrm{A}}\approx 9.1\times {10}^2\left({\mathrm{particle}}^{-1}\right)\\ {}\mathrm{FSG}2:{m}_{{\mathrm{Gd}}^{3+}}/{m}_{\mathrm{PFCE}}=0.026\\ {}{n}_{Gd}={m}_{{\mathrm{Gd}}^{3+}}\times {N}_{\mathrm{A}}=0.026\times {m}_{\mathrm{PFCE}}\times {N}_{\mathrm{A}}\approx 2.1\times {10}^3\left({\mathrm{particle}}^{-1}\right)\\ {}\mathrm{FSG}3:{m}_{{\mathrm{Gd}}^{3+}}/{m}_{\mathrm{PFCE}}=0.038\\ {}{n}_{\mathrm{Gd}}={m}_{{\mathrm{Gd}}^{3+}}\times {N}_{\mathrm{A}}=0.038\times {m}_{\mathrm{PFCE}}\times {N}_{\mathrm{A}}\approx 3.1\times {10}^3\left({\mathrm{particle}}^{-1}\right)\end{array}} $$
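These per-particle quantities are straightforward to reproduce. The sketch below re-derives mPFCE, n19F, and nGd from the values given in the text; the molar mass of PFCE (C10F20O5, 580 g/mol) is supplied by me as an assumption, though it is consistent with the factor of 20 fluorine atoms per molecule used above.

```python
import math

N_A = 6.022e23                # Avogadro's constant (mol^-1)
MW_PFCE = 580.0               # molar mass of PFCE, C10F20O5 (g/mol); my value
d_PFCE = 1.86                 # density of PFCE (g/cm^3), from the text
r_core = 21.7e-7              # FLAME core radius: 21.7 nm expressed in cm

V_core = (4.0 / 3.0) * math.pi * r_core ** 3        # core volume (cm^3)
m_PFCE = d_PFCE * V_core / MW_PFCE                  # mol PFCE per particle
n_19F = m_PFCE * 20 * N_A                           # 20 x 19F per PFCE
print(f"m_PFCE ~ {m_PFCE:.1e} mol, n_19F ~ {n_19F:.1e}")  # ~1.4e-19, ~1.7e6

for name, ratio in [("FSG1", 0.011), ("FSG2", 0.026), ("FSG3", 0.038)]:
    n_Gd = ratio * m_PFCE * N_A                     # Gd3+ ions per particle
    print(f"{name}: n_Gd ~ {n_Gd:.1e}")             # ~9.1e2, 2.1e3, 3.1e3
```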
Table 7.3
Physical properties of FLAME and FSGs (columns: ζ-potential/mV, n19Fa, nGda, n19F/nGda, T2 without TCEP/ms, and T2 with TCEP/ms; rows: FLAME and FSG1–3). Only the FLAME row is recoverable here: ζ-potential −24.8 ± 1.7 mV, n19F = 1.7 × 106, nGd not measured
n19F: the number of 19F atoms in one nanoparticle; nGd: the number of Gd3+ ions in one nanoparticle
aThese values were predicted assuming that FSG has a single size of 53.4 nm (diameter)
bNot measured
The ζ-potentials of the FSGs gradually shifted in the positive direction with increasing amounts of surface Gd3+ ions (Table 7.3). This is because the slightly electronegative silanol groups on the FLAME surface were reduced in number by coupling with 2-((3-(trimethoxysilyl)propyl)dithio)pyridine. The nGd and ζ-potential data indicated that different concentrations of Gd3+ complexes were successfully introduced onto the FLAME surface.
The 19F NMR spectrum of FLAME without paramagnetic ions exhibited a sharp peak. In contrast, the 19F NMR peaks of the FSGs decreased and broadened with increasing surface Gd3+ concentration, on account of the PRE effect (Fig. 7.7a). Although the 19F NMR of FSG1 exhibited a sharp peak, the T2 of FSG1 (120 ms) was shorter than that of FLAME (420 ms) (Table 7.3). The T2 values of FSG2 and FSG3 were 66 ms and 27 ms, respectively. As such, the PRE effect was observed in all FSGs.
Fig. 7.7
(a) 19F NMR spectra of FSGs incubated with or without TCEP. CPFCE: 0.6 mM, CTCEP: 1.0 mM, incubation time: 4 h, accumulation time: 10 min 55 s. (b) 19F NMR signal-to-noise ratio of FSGs in the presence of TCEP (blue: FSG1, red: FSG2, green: FSG3). CPFCE: 0.15 mM
19F NMR spectra and T2 of FSGs were measured after treatment with a reducing agent, tris(2-carboxyethyl)phosphine (TCEP) (Fig. 7.7). Addition of TCEP made the 19F NMR peaks of all FSGs sharper and taller as compared to those before the addition. The T2 values of FSG1–3 were significantly increased upon addition of TCEP within 2 h, and were comparable to that of FLAME. All Gd3+ complexes were cleaved upon addition of more than 2 mM TCEP (Fig. 7.7b). The highest 19F NMR SNR of FSG1–3 was obtained at 2 mM TCEP, and the values were 16.2 for FSG1, 19.5 for FSG2, and 17.9 for FSG3. The signal enhancement factors in response to the reductant were 3.1, 9.7, and 12.7 for FSG1–3, respectively. Thus, FSG3 was the most sensitive 19F NMR probe in the detection of the reducing environment.
The 19F NMR signals of the FSGs increased upon addition of other reducing agents such as glutathione, cysteine, and dithiothreitol (Fig. 7.8). In particular, addition of glutathione induced the greatest 19F NMR signal enhancement. Although there are some concerns about the stability of reduction-triggered nanoparticles in normal tissues, rational optimization of the disulfide linkage will lead to practical in vivo applications.
Fig. 7.8
19F NMR spectra of FSG2 (CPFCE = 0.15 mM) incubated with several thiol-based reducing agents (3 mM). Left to right: control (without reductant), glutathione (GSH), cysteine (Cys), and 1,4-dithiothreitol (DTT). The accumulation time was 1 min 22 s; the incubation time was 4 h
Finally, 19F MR phantom images of FSG solutions with or without TCEP were obtained by varying TE,eff. In general, the MRI signal of a long-T2 component is well observed at both short and long TE,eff. In contrast, the MRI signal of samples with moderately short T2 is visible only at short TE,eff, and that of an extremely short T2 component is not observed even at short TE,eff. As expected from the 19F NMR results, almost no 19F MRI signals of FSG2 and FSG3 were detected without TCEP at any TE,eff, owing to the strong PRE effect (Fig. 7.9a, b). In contrast, the 19F MRI signals of FSG1 were observed at TE,eff ≤ 84 ms because of its moderately short T2. However, measuring FSG1 without TCEP at TE,eff ≥ 108 ms extinguished the undesired 19F MRI signals. Reductive reactions induced a noticeable 19F MRI signal enhancement in FSG1–3 at every TE,eff (filled circles). At TE,eff = 12 ms, approximately 60- and 40-fold increases were observed for FSG2 and FSG3, respectively. Although the signal enhancement of FSG1 was only two-fold at TE,eff = 12 ms, a 50-fold increase was observed at TE,eff = 108 ms. These results indicated that FSG2 was the most effective probe for detecting reducing environments. One advantage of the FSGs is their high sensitivity: the 19F NMR/MRI signals of 1.7 × 106 fluorine atoms in the core were quenched by ca. 1.0 × 103 Gd3+ complexes on the FLAME surface. The ratios of fluorine atoms to Gd3+ complexes (Table 7.3) are the highest among known PRE-based probes, for which the ratios were in the single digits. This high ratio led to the high signal amplification.
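The TE,eff behavior described here follows from approximately mono-exponential transverse decay, S(TE) ≈ S0·exp(−TE/T2). The sketch below, a simplification that ignores T1 and diffusion effects, uses the T2 values reported above to show why the Gd3+-quenched FSGs vanish at long TE,eff while a long-T2 state (FLAME, or an FSG after reduction) stays visible.

```python
import math

# T2 values (ms) reported above for FLAME and the intact (unreduced) FSGs.
T2_ms = {"FLAME": 420, "FSG1": 120, "FSG2": 66, "FSG3": 27}

for te in (12, 36, 84, 108):                       # effective echo times (ms)
    rel = {k: math.exp(-te / t2) for k, t2 in T2_ms.items()}
    print(te, {k: round(v, 2) for k, v in rel.items()})
# At TE = 12 ms every sample retains some signal; by TE = 108 ms only the
# long-T2 component remains visible, matching the phantom-image behavior.
```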
Fig. 7.9
19F MRI signal enhancement of FSGs by TCEP. (a) 19F MRI phantom images of FSG1–3 with or without TCEP. (b) Plot of the 19F MRI signal intensity of FSG1–3 at different TE,eff with (filled circles) or without (open circles) TCEP. 19F MRI RARE method: the matrix size was 128 × 64 and the slice thickness was 30 mm. TR was 3000 ms. The NEX was 64. The acquisition time was 25 min 36 s
Video S1-6
(MP4 508249 kb)
References
Ahrens ET, Flores R, Xu H, Morel PA (2005) In vivo imaging platform for tracking immunotherapeutic cells. Nat Biotechnol 23:983–987
Bertini I, Luchinat C, Parigi G (2002) Magnetic susceptibility in paramagnetic NMR. Prog Nucl Magn Reson Spectrosc 40:249–273
Bloembergen N, Morgan LO (1961) Proton relaxation times in paramagnetic solutions. Effects of electron spin relaxation. J Chem Phys 34:842–850
Cassidy MC, Chan HR, Ross BD, Bhattacharya PK, Marcus CM (2013) In vivo magnetic resonance imaging of hyperpolarized silicon particles. Nat Nanotechnol 8:363–368
Clore GM, Iwahara J (2009) Theory, practice, and applications of paramagnetic relaxation enhancement for the characterization of transient low-population states of biological macromolecules and their complexes. Chem Rev 109:4108–4139
De Vries A, Moonen R, Yildirim M, Langereis S, Lamerichs R, Pikkemaat JA, Baroni S, Terreno E, Nicolay K, Strijkers GJ, Grüll H (2014) Relaxometric studies of gadolinium-functionalized perfluorocarbon nanoparticles for MR imaging. Contrast Media Mol Imaging 9:83–91
Iwahara J, Clore GM (2006) Detecting transient intermediates in macromolecular binding by paramagnetic NMR. Nature 440:1227–1230
Keizer PHJ, Desreux JF, Overhand M, Ubbink M (2007) Increased paramagnetic effect of a lanthanide protein probe by two-point attachment. J Am Chem Soc 129:9292–9293
Lee H, Sun E, Ham D, Weissleder R (2008) Chip-NMR biosensor for detection and molecular analysis of cells. Nat Med 14:869–873
Lipari G, Szabo A (1982) Model-free approach to the interpretation of nuclear magnetic resonance relaxation in macromolecules. 1. Theory and range of validity. J Am Chem Soc 104:4546–4559
Louie AY, Hüber MM, Ahrens ET, Rothbächer U, Moats R, Jacobs RE, Fraser SE, Meade TJ (2000) In vivo visualization of gene expression using magnetic resonance imaging. Nat Biotechnol 18:321–325
Matsushita H, Mizukami S, Sugihara F, Nakanishi Y, Yoshioka Y, Kikuchi K (2014) Multifunctional core-shell silica nanoparticles for highly sensitive 19F magnetic resonance imaging. Angew Chem Int Ed 53:1008–1011
Mizukami S, Takikawa R, Sugihara F, Hori Y, Tochio H, Wälchli M, Shirakawa M, Kikuchi K (2008) Paramagnetic relaxation-based 19F MRI probe to detect protease activity. J Am Chem Soc 130:794–795
Perez JM, Josephson L, O'Loughlin T, Högemann D, Weissleder R (2002) Magnetic relaxation switches capable of sensing molecular interactions. Nat Biotechnol 20:816–820
Rohrer M, Bauer H, Mintorovitch J, Requardt M, Weinmann H-J (2005) Comparison of magnetic properties of MRI contrast media solutions at different magnetic field strengths. Investig Radiol 40:715–724
Solomon I (1955) Relaxation processes in a system of two spins. Phys Rev 99:559–595
Srinivas M, Morel PA, Ernst LA, Laidlaw DH, Ahrens ET (2007) Fluorine-19 MRI for visualization and quantification of cell migration in a diabetes model. Magn Reson Med 58:725–734
Srinivas M, Cruz LJ, Bonetto F, Heerschap A, Figdor CG, de Vries IJM (2010) Customizable, multi-functional fluorocarbon nanoparticles for quantitative in vivo imaging using 19F MRI and optical imaging. Biomaterials 31:7070–7077
Thurecht KJ, Blakey I, Peng H, Squires O, Hsu S, Alexander C, Whittaker AK (2010) Functional hyperbranched polymers: toward targeted in vivo 19F magnetic resonance imaging using designed macromolecules. J Am Chem Soc 132:5336–5337
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
1. Graduate School of Engineering, Osaka University, Suita City, Japan
Kikuchi K., Nakamura T. (2020) 19F MRI Probes with Tunable Chemical Switches. In: Toyama Y., Miyawaki A., Nakamura M., Jinzaki M. (eds) Make Life Visible. Springer, Singapore
First Online 02 October 2019
DOI https://doi.org/10.1007/978-981-13-7908-6_7
Target sequencing reveals genetic diversity, population structure, core-SNP markers, and fruit shape-associated loci in pepper varieties
Heshan Du1,2, Jingjing Yang1,2, Bin Chen1,2, Xiaofen Zhang1,2, Jian Zhang1,2, Kun Yang3, Sansheng Geng1,2 & Changlong Wen1,2 (Heshan Du and Jingjing Yang contributed equally to this work.)
BMC Plant Biology volume 19, Article number: 578 (2019)
The widely cultivated pepper (Capsicum spp.) is one of the most diverse vegetables; however, little research has focused on characterizing the genetic diversity and relatedness of commercial varieties grown in China. In this study, a panel of 92 perfect single-nucleotide polymorphisms (SNPs) was identified using re-sequencing data from 35 different C. annuum lines. Based on this panel, a Target SNP-seq genotyping method was designed, which combined multiplex amplification of perfect SNPs with Illumina sequencing, to detect polymorphisms across 271 commercial pepper varieties.
The perfect SNP panel had a high discriminating capacity, with average values of polymorphism information content, observed heterozygosity, expected heterozygosity, and minor allele frequency of 0.31, 0.28, 0.4, and 0.31, respectively. The studied pepper varieties were morphologically categorized by fruit shape as blocky-, long horn-, short horn-, and linear-fruited. The long horn-fruited population exhibited the most genetic diversity, followed by the short horn-, linear-, and blocky-fruited populations. A set of 35 core SNPs was then used as kompetitive allele-specific PCR (KASPar) markers, another robust genotyping technique for variety identification. Analysis of genetic relatedness using principal component analysis and phylogenetic tree construction indicated that the four fruit shape populations clustered separately with limited overlap. Based on STRUCTURE clustering, it was possible to divide the varieties into five subpopulations, which correlated with fruit shape. Furthermore, the subpopulations were statistically different according to a randomization test and Fst statistics. Nine loci, located on chromosomes 1, 2, 3, 4, 6, and 12, were identified as significantly associated with the fruit shape index (p < 0.0001).
The Target SNP-seq method developed in this study appears to be an efficient and powerful tool for examining genetic diversity and population relatedness and for molecular breeding in pepper. Moreover, this study demonstrates that the genetic structure of Chinese pepper varieties has been significantly influenced by breeding programs focused on fruit shape.
Peppers are members of the genus Capsicum, which originated in South America, and represent one of the most economically important vegetable crops worldwide [1,2,3]. To date, 38 species of Capsicum have been reported (USDA-ARS, 2011). Of these, C. annuum, C. frutescens, C. chinense, C. baccatum, and C. pubescens are thought to have been domesticated [4]. Globally, the most predominant species is C. annuum, which has numerous commercial varieties varying greatly in size, shape, pungency, and color.
As the seed trade has developed and globalized, the commercial quality of seeds, which is based on authenticity and purity, has become increasingly important [5]. Traditionally, cultivar characterization was completed by field investigation of morphological traits; however, this process is time-consuming and labor-intensive and thus not suitable for modern inspection demands [6]. A more high-throughput approach to distinguishing varieties is the use of molecular markers [5]. Indeed, genetic markers have been used for DNA fingerprinting, diversity analysis, variety identification, and marker-assisted breeding of multiple commercial crops [7, 8]. Moreover, several PCR-based tools have been used to detect genetic diversity in peppers, including random amplified polymorphic DNA (RAPD), restriction fragment length polymorphism (RFLP), and amplified fragment length polymorphism (AFLP) [9,10,11,12].
Recently, the genomes of two C. annuum cultivars, Zunla-1 and CM334, were sequenced [3, 13], which provided an important platform for the detection and development of genome-wide simple sequence repeat (SSR) and insertion/deletion (InDel) markers [14,15,16,17,18,19,20]. Although a large number of SSR and InDel markers have become available, these technologies are not suitable for large-scale germplasm characterization. Thus, there is an unmet need for an efficient, rapid, and high-throughput system capable of characterizing thousands of germplasm accessions.
One approach to meeting such high standards is the use of single-nucleotide polymorphisms (SNPs), which are good markers for genotyping because of their whole-genome coverage and primarily biallelic nature. Accordingly, multiple high-throughput SNP genotyping platforms have been developed, including the GoldenGate [21], Infinium [22], TaqMan [23], and KASPar (KBiosciences, www.kbioscience.co.uk) platforms. In recent years, high-throughput transcriptome sequencing and genotyping-by-sequencing (GBS) have been successfully used in pepper, generating highly informative genome-wide SNP data [24,25,26,27,28,29,30]. However, SNP marker genotyping is considered expensive, as it requires a comprehensive technical platform and special equipment and reagents.
Genotyping by target sequencing (GBTS) is a targeted sequence-capture strategy that can genotype thousands of SSRs or SNPs using high-throughput sequencing technology. The two main types of GBTS are multiplex PCR-based and probe-in-solution-based target sequencing; the technology has been commercialized as AmpliSeq [31], NimbleGen [32], SureSelect [33], GenoBaits, and GenoPlexs [34]. To date, this technology has been widely used for medical applications but has rarely been used for agricultural species. However, a Target SSR-seq technique, a multiplex PCR-based approach, was successfully applied to the study of genetic diversity and structure in 382 cucumber varieties [35]. The results of this study demonstrated that GBTS is a customizable, flexible, high-throughput, low-cost, and accurate sequencing tool.
Peppers from China constitute one-third of the world's pepper production [36]. Until now, the genetic diversity of pepper accessions in China has primarily been investigated using SSR markers, but these surveys examined either only a few Chinese germplasm accessions (up to 32) [37] or a small number of SSR markers (up to 28) [36]. High-throughput SNP platforms for genotyping and identifying pepper varieties have lagged significantly behind those for SSRs, and the genetic diversity among Chinese pepper varieties has not yet been extensively analyzed. Therefore, the main objectives of the present work were: 1) to develop a Target SNP-seq technique suitable for genotyping pepper varieties; 2) to characterize composite core-SNP markers for use with the KASPar platform to maximize variety identification; and 3) to examine the level of genetic diversity, structure, and differentiation within 271 pepper varieties. This study demonstrated that the novel Target SNP-seq can be used as a rapid and efficient tool for genotyping peppers and that the genetic structure of these cultivated varieties has been strongly shaped by breeding programs that select for fruit shape.
Genome-wide perfect SNPs used for target SNP-seq
Re-sequencing of the 31 pepper lines (C. annuum) in this study generated a total of 872 Gb of paired-end sequence data, at an average depth of approximately 8.4×. After mapping to the Zunla-1 genome [3], 40,700,040 SNPs were detected across the genomic sequences of the 31 re-sequenced lines and four previously published cultivars (Dempsey, Zunla-1, Perennial, and Chiltepin) [3, 13]. Approximately 11.3% of the C. annuum genome contains variable SNP sites. A total of 21,237,194 SNPs with minor allele frequency (MAF) > 5% and missing data < 10% were considered high-quality SNPs for downstream analyses. Using C. annuum's progenitor cultivar, Chiltepin, as an outgroup, the phylogenetic tree showed that the pepper lines could generally be classified according to fruit shape, except for three long horn-fruited lines that grouped with the linear-fruited lines. Based on genetic distance, the transitions in fruit shape ran from Chiltepin-like peppers through the linear-fruited, short horn-fruited, and long horn-fruited types to the blocky-fruited peppers, which were the furthest from the Chiltepin-like peppers (Fig. 1a). Furthermore, the 35 lines could be divided into two major groups based on the optimal number of clusters (K = 2) determined by STRUCTURE (Fig. 1b): Group 1 consisted of the nine bell-fruited lines and ten of the long horn-fruited lines, whereas the remaining peppers, including three long horn-fruited, all the linear-fruited, and all the short horn-fruited lines, as well as the two cultivar progenitors Perennial and Chiltepin, were assigned to Group 2. The clustering of these pepper lines appeared to be more related to fruit type when K = 5: Group 1 was divided into Subgroup 1 (mostly blocky-fruited) and Subgroup 2 (long horn-fruited), whereas Group 2 was composed of Subgroup 3 (admixture, mostly short horn-fruited), Subgroup 4 (linear-fruited), and Subgroup 5 (cultivar progenitors with small fruit).
Population structure across pepper lines. Phylogenetic relationships (a) and population structure (b) based on the total SNPs of the 31 pepper inbred lines sequenced in this study and the previously sequenced C. annuum cultivars Zunla-1, Chiltepin [3], Perennial, and Dempsey [13]. Fruit shapes are presented as colored shapes
Given that pepper genomes are highly repetitive, strict criteria were used to identify the perfect SNPs (see Methods). In total, 521 perfect SNPs were identified, and 92 of them, distributed across the genome (Fig. 2; Additional file 9: Table S2), were selected as multiplex PCR targets. Based on the previous annotation [3], 83 and 9 of these perfect SNPs fall within intergenic and genic regions, respectively. The nearest flanking annotated genes for each perfect SNP are shown in Additional file 9: Table S2.
Characteristics of the perfect SNPs used to genotype pepper varieties by Target SNP-seq. a Distribution of the 92 perfect SNPs in the ideogram of the genome of C. annuum Zunla-1 [3]. b Observed heterozygosity (Ho) per SNP locus, colored in red. c Expected heterozygosity (He) per SNP locus is presented in green. d Polymorphism information content (PIC) per SNP locus is presented in blue. e Minor allele frequency (MAF) per SNP locus is given in yellow. This figure was generated using Circos (http://circos.ca/) with the SNP region magnified to 2 Mb
Genotyping analysis of pepper varieties using the target SNP-seq
In total, 271 pepper varieties, including 90 blocky-, 113 long horn-, 25 short horn-, and 43 linear-fruited varieties, were genotyped using Target SNP-seq (Additional file 8: Table S1). A total of 55.9 million reads were generated from the 271 varieties, with an average target read depth of 2064×, and approximately 82% of the samples were sequenced at a depth greater than 1000× (Additional file 2: Figure S2A). Among the 271 varieties, 238 (87.8%) aligned to the Zunla-1 genome [3] at a rate of more than 90% (Additional file 2: Figure S2B). Of these, 221 varieties (81.5%) exhibited an alignment rate to the target SNP regions of over 80% (Additional file 2: Figure S2C). Furthermore, the Target SNP-seq uniformity index, defined as the proportion of loci with coverage above 10% of the mean depth value for each variety, was analyzed. The average uniformity index in this study was 93.68% (Additional file 2: Figure S2D), indicating a high uniformity of sequencing depth among the 92 SNPs.
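The uniformity index as defined here is simple to compute per variety. The sketch below is my formulation of the stated definition, not the authors' code.

```python
import numpy as np

def uniformity_index(depths):
    """Proportion of target SNP loci whose read depth exceeds 10% of
    the variety's mean depth (my reading of the definition above)."""
    d = np.asarray(depths, dtype=float)
    return float(np.mean(d > 0.1 * d.mean()))

# Example: depths at six target loci for one variety.
print(uniformity_index([2100, 1800, 150, 2500, 90, 2300]))
```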
Perfect SNPs in 271 pepper varieties
The genetic parameters MAF, Ho, He, and PIC for each perfect SNP are given in Additional file 10: Table S3. MAF is a measure of the discriminating ability of a marker; the closer the MAF is to 0.5 for a biallelic marker, the better its discriminatory properties. In this study, 28.26% of the perfect SNPs showed a MAF between 0.4 and 0.5, whereas only four SNPs had a MAF below 0.1 (Additional file 3: Figure S3A). The Ho value of each SNP ranged from 0.01 (CaSNP079) to 0.59 (CaSNP009), with an average of 0.28, and 11 SNPs exhibited high Ho (> 0.4) (Additional file 3: Figure S3B; Additional file 10: Table S3). Furthermore, the He values ranged from 0.01 (CaSNP079) to 0.5 (CaSNP043 and CaSNP094) (Additional file 3: Figure S3C; Additional file 10: Table S3), whereas the PIC values varied from 0.01 (CaSNP079) to 0.38 (CaSNP043, CaSNP094, and CaSNP117), with a mean of 0.31 (Additional file 3: Figure S3D; Additional file 10: Table S3). Of the perfect SNPs, 71.74% had PIC values greater than 0.30, whereas only four SNPs showed PIC values below 0.2. These values indicate that the perfect SNP panel has a high capacity for discriminating varieties and that CaSNP043, CaSNP094, CaSNP117, and CaSNP009 were the best at discriminating between varieties. Overall, the results indicate that Target SNP-seq can be used as a rapid tool for genotyping peppers.
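The statistics used throughout this section follow standard definitions for biallelic markers (He = 2pq; PIC as in Botstein et al. 1980). The sketch below, which is not code from the paper, computes them from 0/1/2-coded genotype calls.

```python
import numpy as np

def snp_stats(genotypes):
    """Diversity statistics for one biallelic SNP, with genotypes coded
    as the count (0, 1, or 2) of the alternate allele per variety."""
    g = np.asarray(genotypes, dtype=float)
    p = g.mean() / 2.0                             # alternate allele frequency
    q = 1.0 - p
    maf = min(p, q)                                # minor allele frequency
    ho = float(np.mean(g == 1))                    # observed heterozygosity
    he = 2.0 * p * q                               # expected heterozygosity
    pic = 1.0 - (p**2 + q**2) - 2.0 * p**2 * q**2  # Botstein et al. PIC
    return maf, ho, he, pic

# Example: ten varieties genotyped at one SNP.
print(snp_stats([0, 1, 2, 1, 0, 0, 1, 2, 2, 1]))
```

For a biallelic marker the PIC tops out at 0.375 when p = q = 0.5, which is consistent with the maximum of 0.38 reported above.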
Perfect SNPs across the fruit shapes
The average values of the genetic parameters across the four fruit shape populations were also compared. The blocky-fruited population had the lowest average He (0.18), Ho (0.16), and PIC (0.15) (Table 1), indicating the lowest genetic diversity within this population. In contrast, the long horn-fruited population exhibited the highest genetic diversity, with the highest average He (0.39), Ho (0.36), and PIC (0.31).
Table 1 Genetic diversity in fruit shape populations and across all varieties
A total of 21 SNP loci showed no diversity (PIC = 0) within at least one fruit shape population: 16, 1, 3, and 5 loci in the blocky-, long horn-, short horn-, and linear-fruited populations, respectively (Additional file 10: Table S3). These fruit shape-specific loci may have been under selection during breeding, or may be linked to genes that determine fruit traits.
Identification of a core-SNP set
The perfect SNP panel distinguished 97.7% of the 271 pepper varieties (Fig. 3); the remaining varieties displayed identical multilocus genotypes and were also difficult to distinguish by field phenotype. Given that some varieties may be marketed under multiple names, varieties with identical genotypes were considered potentially redundant and were collapsed to build a non-redundant set of genotyped varieties. A minimum of 27 perfect SNPs could distinguish all non-redundant varieties (Fig. 3).
Discriminating saturation curve of 92 perfect SNPs in pepper varieties. The maximum discrimination power was 97.7% across all 271 varieties using 35 perfect SNPs, and 100% across non-redundant varieties using 27 perfect SNPs
To develop a core-SNP set for the KASPar platform, each perfect SNP marker was tested on a set of 23 to 95 pepper varieties using two allele-specific forward primers and one common reverse primer. Thirty-five SNP primer sets (Additional file 11: Table S4; Additional file 4: Figure S4) produced results consistent and repeatable with Target SNP-seq. These 35 SNPs, with a discrimination power of 97.7% across all varieties and 100% across non-redundant varieties, were therefore proposed as a core SNP set for use with the KASPar platform (Fig. 3 and Additional file 4: Figure S4; Additional file 11: Table S4).
Genetic structure in pepper varieties
Principal component analysis (PCA) was performed using the 92 perfect SNPs to investigate population clusters across the 271 varieties (Fig. 4a). The PCA plot indicates that the four fruit shape populations generally clustered separately. The distribution of the blocky-fruited varieties was very concentrated, whereas that of the long horn-fruited varieties was relatively dispersed. Linear-fruited varieties were more closely related to the short horn-fruited varieties than to either the long horn- or blocky-fruited varieties. The linear- and blocky-fruited populations were the most divergent, and their clusters did not overlap, suggesting considerable genetic divergence throughout their breeding history. Notably, a subset of both long horn- and short horn-fruited varieties showed close relatedness to the linear-fruited population.
Population structure across pepper varieties. a Principal component analysis (PCA). b Population structure inferred using STRUCTURE. All varieties were divided into two main populations (Pop1 and Pop2) when K = 2, which was the optimal K. The populations were subdivided into five subpopulations, Subpop1~Subpop5, which correlated with fruit shape. c Phylogenetic tree analysis. The tree was produced using the neighbor-joining method based on the 92 perfect SNPs. The scale bar indicates simple matching distance
The population structure of the 271 varieties was further inferred using the clustering program STRUCTURE, testing numbers of clusters (K) from 2 to 5. Evanno's delta K method [38] peaked at K = 2, suggesting the presence of two main populations, denoted Pop1 and Pop2. Pop1 comprised 160 varieties (59.0%), containing all blocky-fruited varieties, 60.2% of long horn-fruited varieties, and only two linear-fruited varieties (Fig. 4b; Additional file 8: Table S1). The remaining 111 varieties (41.0%) were assigned to Pop2, which included all short horn- and linear-fruited varieties as well as 39.8% of long horn-fruited varieties (Fig. 4b; Additional file 8: Table S1). At K = 3, Pop1 was subdivided into two clusters corresponding to the blocky- and long horn-fruited types. At K = 4, a mixture of 56% of the short horn-fruited varieties, 15 long horn-fruited varieties, and two linear-fruited varieties formed a new cluster within Pop2; at K = 5, these short horn- and long horn-fruited groups were assigned to independent clusters. Of note, the linear-fruited types were never assigned to an independent cluster as K increased. Because the classification of populations appeared highly correlated with fruit type at K = 5, the two main populations were further subdivided into five subpopulations (Subpop1~Subpop5; Fig. 4b; Additional file 8: Table S1). Subpop1, 2, 3, and 4 showed a clear-cut structure with no or very few admixtures. Subpop1 comprised 98 varieties, 90 of which were blocky-fruited and the remaining eight long horn-fruited. Long horn-fruited varieties were members of both Subpop2 and Subpop3, which is not surprising given that they were distributed across both Pop1 and Pop2. Subpop2 comprised 44 long horn-fruited varieties. Subpop3 comprised 24 varieties, 22 of which were long horn-fruited and the remaining two linear-fruited. Subpop4 comprised 14 short horn-fruited varieties. Consistent with the PCA results, admixed varieties were mostly located in Subpop5, which contained 41 linear-fruited varieties as well as a minority of short horn- and long horn-fruited varieties.
The unrooted phylogenetic tree (Fig. 4c) was consistent with the PCA and model-based population structure described above and indicated a clear distinction among the four fruit shapes, despite some admixture. Images of representative varieties, selected based on the lowest average genetic distance to other varieties within the corresponding subpopulation, are presented in Fig. 4c. The representative images of two long horn-fruited varieties from Subpop2 and Subpop3 clearly show distinct morphologies.
In summary, three independent analysis methods strongly supported the division of the pepper varieties into five well-differentiated genetic populations correlated with distinct fruit shapes, indicating that the genetic structure of these cultivated varieties may have been strongly shaped by fruit shape selection in breeding practice.
Genetic variation assessment of pepper populations
Analysis of molecular variance (AMOVA) between Pop1 and Pop2 revealed that 33.04% of the total genetic variation was partitioned among Pops, 8.47% among varieties within Pops, and the remaining 58.49% within varieties (Table 2). AMOVA of the five Subpops further indicated that the maximum variation (63.83%) occurred within varieties, the minimum (3.54%) among varieties within Subpops, and 32.63% among Subpops (Table 2), suggesting moderate differentiation among Subpops.
Table 2 Analysis of molecular variance (AMOVA) among Pops and Subpops
To test the significance of the variation between Pops and among Subpops, a randomization test was performed (Additional file 5: Figure S5). The output comprised six histograms representing the distributions of the randomized strata, and the observed values indicated significant differentiation at all levels of the Pop and Subpop strata (Additional file 5: Figure S5). These results further support the separation of the varieties into two Pops and five Subpops. Pairwise estimates of Fst showed that differentiation between Pop1 and Pop2 was high (Fst = 0.35). Pairwise Fst between the five Subpops ranged from 0.13 between Subpop2 and Subpop3 (both consisting largely of long horn-fruited varieties) to 0.48 between Subpop1 (mostly blocky-fruited varieties) and Subpop4 (short horn-fruited varieties) (Table 3). Notably, high genetic differentiation (Fst = 0.43) was observed between Subpop1 and Subpop5 (mostly linear-fruited varieties), whereas differentiation between Subpop4 and Subpop5 was lower (Fst = 0.14).
Table 3 Pairwise F statistics (Fst) estimates among subpopulations
Identification of the loci associated with fruit shape
A wide range of variation was observed for the fruit shape index (FSI) in the 271 pepper varieties (Additional file 12: Table S5). The average FSI was 1.34, 4.98, 4.70, and 16.56 in the blocky-, long horn-, short horn-, and linear-fruited populations, respectively. Significant differences were observed among the blocky-, horn-, and linear-shaped populations (p < 0.01), but no difference was detected between the long horn- and short horn-fruited populations. FSI values of more than 9.5 are typical of linear fruits.
Having observed concordance between the population structure and fruit shape (Fig. 4), we next performed association analyses between FSI in the 271 varieties and 165 genetic loci, comprising the 92 SNPs and an additional 73 SSRs, all detected by target sequencing (Additional file 6: Figure S6). Using the K + Q mixed linear model (MLM), nine loci (CaSSR013, CaSSR090, CaSSR105, CaSSR091, CaSSR039, CaSSR044, CaSSR107, CaSSR077, and CaSNP112) were identified as significantly associated with FSI at a threshold p-value of 0.0001 (Additional file 7: Figure S7; Additional file 13: Table S6). To pair these associations with previously identified quantitative trait loci (QTL), the physical positions of the nine loci in both the Zunla-1 [3] and CM334 [13] reference genomes are provided in Table 4. Loci CaSSR091 and CaSSR039 lie within 820 kb of each other on the same chromosome and were considered a single unit. The nine loci therefore define eight chromosomal regions on six chromosomes (1, 2, 3, 4, 6, and 12), with the phenotypic variation explained by each locus ranging from 7.9 to 12.7%. Two loci, CaSSR044 and CaSSR107, spanning approximately 39 Mb on chromosome 6, explained the highest phenotypic variation, 12.4 and 12.7%, respectively (Table 4 and Additional file 13: Table S6; Additional file 7: Figure S7).
Table 4 Loci significantly associated with fruit shape index as identified by association analysis
High-throughput genotyping by Target SNP-seq
High-throughput genotyping technology has become essential for effective crop breeding programs. Target SSR-seq, which combines multiplexed amplification of perfect SSRs with high-throughput sequencing, was recently developed and applied to the identification of cucumber varieties, leading to the characterization of a core SSR set [35]. This technology can acquire thousands of data points in under 72 h, costs less than $7 per sample, and achieves genotyping accuracy of up to 100% owing to the high coverage. The cost of the Target SNP-seq developed in this study was similar to that of Target SSR-seq because both technologies use the same target library construction procedure.
In this study, re-sequencing data were used to identify 92 perfect SNPs from the genomes of 35 C. annuum lines based on strict screening criteria. Only 9.8% of the perfect SNPs fell within genic regions, in agreement with the previous finding that variant density is significantly lower in genic than in intergenic regions [30]. The identified perfect SNPs were then used for target SNP sequencing to assess genetic diversity across 271 pepper varieties popular in China. The perfect SNP panel had a high discriminating capacity for varieties, as 71.74% of the perfect SNPs had PIC values of > 0.30 (Additional file 10: Table S3), and a minimum of 27 perfect SNPs could distinguish all non-redundant varieties (Fig. 3). Notably, the mean PIC value of 0.31 is lower than the values reported in studies using SSR markers [17, 39]. This discrepancy may be explained by the nature of the marker types: SSRs are multiallelic and therefore more polymorphic than biallelic SNP markers [40]. Another contributing factor may be that commercial varieties tend to be less variable than landraces or wide germplasm collections.
A set of 35 core SNPs with the same discrimination power as the 92 perfect SNPs was successfully converted into KASPar markers, providing another robust genotyping option for pepper varieties (Fig. 3; Additional file 11: Table S4). Unlike SSR markers, SNP markers do not require reference cultivars to be included in each experiment and also avoid inter-laboratory confusion over SSR allele calls.
Population structure among inbred C. annuum lines
Since their initial domestication in Mexico, peppers have been under strong selection for fruit shape and size [56]. Consumption habits and pepper type preferences vary globally; in the US alone, more than 20 market types are recognized and consumed [57]. In China, most commercially cultivated pepper varieties belong to the species C. annuum, and the market types are classified by fruit shape, such as the popular blocky, long horn, short horn, and linear fruits [58, 59]. To date, most studies have evaluated the genetic relationships among several Capsicum species [29, 30, 36, 37, 60,61,62] or the genetic diversity of C. chinense and C. baccatum germplasm from relatively restricted regions [26, 63]. Phylogenetic analyses based on molecular markers, pan-genome sequencing, and GBS have confirmed that C. chinense and C. frutescens are more closely related to each other than to C. annuum [29, 61, 64]. Several studies have attempted to characterize the population relatedness of cultivated C. annuum in restricted geographical areas [29, 36, 40,41,42, 65]. They revealed that the population structures of C. annuum accessions were mainly associated with distinct cultivar types as defined by plant and fruit descriptors, and thus largely result from human selection for cultivar types in line with consumption modes and adaptation to highly diversified agro-climatic conditions. Notably, the relationships among the 35 re-sequenced C. annuum lines described in this study align with previous reports grouping C. annuum according to fruit traits [29, 41, 65]. Furthermore, the clustering of blocky-fruited peppers at positions furthest from small, hot, Chiltepin-like types has also been observed in previous studies [29, 41, 65].
Genetic structure among C. annuum varieties
Although previous work has shown that C. annuum landraces in China cluster according to cultivar type [36], the relationships among commercially important C. annuum varieties from different companies had not been investigated with a fine set of genetic markers. In the present study, the relationships among four fruit shape populations were assessed across a broad range of pepper varieties cultivated in China. Comparison of the genetic parameters showed that the lowest Ho occurred within the blocky-fruited population, while the highest was detected in the long horn-fruited population (Table 1). These findings agree with earlier studies that found reduced diversity in non-pungent blocky-fruited lines relative to pungent lines [41,42,43,44]. The narrow genetic diversity of the blocky-fruited varieties may be a consequence of inbreeding within a limited gene pool.
Additionally, the PCA and phylogenetic tree demonstrated that the four fruit shape populations clustered separately with little or no overlap. This aligns with the fruit shape classification system and demonstrates that the genetic structure of pepper varieties in China has been significantly influenced by breeding programs that select for fruit shape. Similarly, STRUCTURE analysis grouped the varieties into two main populations, Pop1 and Pop2, which were further divided into five subpopulations, Subpop1 to Subpop5 (Fig. 4b), and these subpopulations correlated with fruit shape. Notably, Subpop1, Subpop4, and Subpop5 corresponded to the blocky-, short horn-, and linear-fruited varieties, respectively. However, the majority of the long horn-fruited varieties were divided into two subpopulations, Subpop2 and Subpop3, which were statistically distinct (Additional file 5: Figure S5). The best-fit genetic structures of both the pepper lines and the varieties in this study comprised two groups, unlike the 368 Chinese C. annuum accessions analyzed by Zhang et al. [36, 18], in which 28 SSR markers structured the accessions into three STRUCTURE groups. These differences may be attributed to the different types of pepper material and the number of markers used in the two studies. Nevertheless, the clustering of fruit types in both studies appears broadly similar, although different fruit shape classifications were used. For example, Group 1 mainly included rectangular, square, and triangular fruit types [36], which mostly clustered in Pop1 of our study, while Group 3 mainly comprised cultivars with small, long fruits characterized by a very high fruit length:width ratio [36], characteristic of the linear-fruited and some short horn-fruited varieties in Pop2 of this study. In summary, our study provides valuable insight into the population structure underlying the fruit shapes of pepper varieties and confirms the strong effect of fruit shape selection by breeders on the genetic structure of Chinese pepper varieties.
Identification of associated loci for fruit shape
Fruit shape is an important trait in pepper breeding programs. A number of QTLs controlling FSI have been identified in intraspecific and interspecific populations derived from crosses between bell peppers and small-fruited hot peppers [45,46,47,48,49,50,51,52,53]. The first FSI QTL in pepper, fs3.1, was detected on linkage group 3 [48] and was subsequently detected in other linkage analyses [46, 47, 51]. Using genome-wide association in 373 pepper accessions, Colonna et al. (2019) identified SNP 3:183386147 on chromosome 3, located in an exon of the gene CA03g16080, as significantly associated with FSI [30]. In the current study, the FSI-associated locus CaSSR105 and its nearest upstream marker, CaSNP029, delimit an interval of approximately 26 Mb on chromosome 3 (Additional file 13: Table S6). This region covers the reported FSI-associated SNP 3:183386147 in the Longifolia 1-like gene (CA03g16080) [30]. Zygier et al. (2005) mapped a fruit shape QTL (fs2.1) on chromosome 2 [50]; this QTL was also detected in other studies and found to be close to the Ovate gene (CA02g22830) at approximately 158 Mb of the CM334 genome [52, 53]. In the current study, the FSI-associated locus CaSSR090 and its nearest downstream locus, CaSSR024, delimit an interval of approximately 12.6 Mb on chromosome 2 that covers the reported Ovate gene (CA02g22830). The Ovate gene was initially discovered in tomato, where it controls the transformation from round to pear-shaped fruit [54, 55]. A single FSI QTL (fs4.2) was detected at the end of chromosome 4, explaining 26.1% of the phenotypic variation [50]. We found that two FSI-associated loci, CaSSR091 and CaSSR039, were located between CaSNP041 and the end of chromosome 4, covering approximately 12.9 Mb (Additional file 13: Table S6). FSI-associated loci on chromosomes 1, 6, and 12 were also detected in this study (Additional file 6: Figure S6 and Additional file 7: Figure S7; Table 4 and Additional file 13: Table S6). After screening the protein functions at these associated loci, we found that two Ovate genes, CA06g21580 and CA12g07370, could be considered candidate genes, as Ovate genes have significant effects on fruit shape.
Future directions of target SNP-seq in pepper
Of note, foreground selection markers suitable for specific primer design could also be added to the perfect SNP panel used in our Target SNP-seq. For example, based on the functional sites of the Tobamovirus resistance genes L3 and L4 [66], the Phytophthora capsici resistance genes CaDMR1 and Phyto5NBS1 [67, 68], the bacterial spot resistance gene Bs3 [69], and the potato virus Y resistance gene pvr1 [70], specific primers flanking each functional site have been developed (Additional file 14: Table S7) and added to the perfect SNP panel. These functional resistance gene loci, combined with the perfect SNP and SSR markers, can be detected simultaneously across hundreds of pepper accessions through Target SNP-seq. Commercial application of this technique has the potential to increase the efficiency of marker-assisted selection programs, as well as aiding variety identification.
The Target SNP-seq developed in this study is a high-throughput and reliable tool for investigating genetic diversity, identifying varieties, and characterizing population structure in pepper. PCA, phylogenetic tree analysis, and STRUCTURE revealed that the genetic structure of commercially available pepper varieties in China has been significantly influenced by fruit shape selection during breeding. Finally, association analysis with a limited number of markers identified both previously reported and novel genomic regions that control fruit shape.
Plant materials, fruit shape categorization, and DNA extraction
A total of 271 pepper varieties, which were kindly supplied by 60 different breeding companies in China, were analyzed in this study. Information on these hybrid seeds, including variety name and source, is available in Additional file 8: Table S1. Fruit trait investigation and genetic identification were carried out by the pepper genetic breeding group and high-throughput molecular breeding platform at the Beijing Vegetable Research Center (BVRC).
Varieties were planted under greenhouse conditions at the Vegetable Varieties Exhibition Center in the Tongzhou District of Beijing. The greenhouse temperature ranged from 25 to 30 °C (08:00–20:00) and 20 to 25 °C (20:00–08:00), with natural light. Each variety consisted of at least four plants. The fruits were categorized into one of four shapes: blocky-, long horn-, short horn-, and linear-fruited types (Additional file 8: Table S1). Examples of the fruit shape classification are presented in Additional file 1: Figure S1. Four to ten ripe fruits from each plant were measured for maximum height and width using a Vernier caliper (Hangzhou Tool and Measuring Tool Company, Hangzhou, China).
DNA was extracted from four young plantlets randomly selected from individuals of each variety using a CTAB-based method [71]. DNA integrity was assessed using 1.5% (w/v) agarose gel electrophoresis, and the concentration was determined using a NanoDrop 2000 spectrophotometer (Thermo Fisher Scientific, DE, USA).
Re-sequencing and perfect SNP identification
In total, 31 diverse pepper lines (C. annuum), comprising 30 inbred lines from our ongoing breeding programs at BVRC and PI640446 provided by the U.S. National Plant Germplasm System, were selected for re-sequencing on the Illumina X Ten platform at Shanghai Majorbio Biopharm Technology Co. Ltd. (Shanghai, China). The 31 pepper lines had diverse genetic backgrounds and horticultural traits, including eight blocky-fruited, 13 long horn-fruited, five short horn-fruited, and five linear-fruited lines (Fig. 1).
The raw reads of the 31 re-sequenced lines and four previously sequenced cultivars, Zunla-1 (C. annuum) and its wild progenitor Chiltepin (C. annuum var. glabriusculum) [3], C. annuum cv. Perennial, and C. annuum cv. Dempsey [13], were filtered into clean data using Trimmomatic [72]. The clean reads were then mapped to the Zunla-1 reference genome, chromosome version 2.0 [3], using the Burrows-Wheeler Alignment Tool (BWA) with default parameters, and SNPs were called using the Genome Analysis Toolkit (GATK, v2.4-7g5e89f01) [73]. SNPs with MAF > 5% and missing data < 10% were imported into MEGA to build a rooted phylogenetic tree with the neighbor-joining method, using the wild progenitor Chiltepin as an outgroup [74]. Population structure analysis was performed using STRUCTURE v2.3. The number of populations (K) was determined following the standard procedure [75], with a burn-in period of 100,000 iterations followed by 100,000 Markov chain Monte Carlo iterations. Twenty independent runs were performed for K from 1 to 15, and the optimal K was defined according to Evanno's delta K method [38].
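As an illustration of Evanno's method, delta K can be computed from the mean and standard deviation of the ln P(D) values over the replicate STRUCTURE runs at each K. The Python sketch below implements the standard formula; the run values are hypothetical, not the actual STRUCTURE output of this study.

```python
import numpy as np

def evanno_delta_k(lnP):
    """Evanno's delta K from replicate STRUCTURE log-likelihoods.

    lnP maps each tested K to the ln P(D) values of its replicate runs.
    delta K(K) = |mean L(K+1) - 2 mean L(K) + mean L(K-1)| / sd(L(K));
    the K with the largest delta K is taken as the optimal K.
    """
    ks = sorted(lnP)
    mean = {k: np.mean(lnP[k]) for k in ks}
    sd = {k: np.std(lnP[k], ddof=1) for k in ks}
    # delta K is undefined at the smallest and largest tested K
    return {k: abs(mean[k + 1] - 2 * mean[k] + mean[k - 1]) / sd[k]
            for k in ks[1:-1]}

# Hypothetical ln P(D) values from three replicate runs per K:
lnP = {1: [-9100, -9105, -9098], 2: [-8205, -8210, -8195],
       3: [-8150, -8160, -8170], 4: [-8145, -8155, -8150]}
delta = evanno_delta_k(lnP)
print(delta, max(delta, key=delta.get))  # the peak here falls at K = 2
```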
To acquire a dataset of genome-wide SNPs for subsequent Target SNP-seq analysis, perfect SNPs were identified using the following criteria: (i) MAF > 0.4, to filter out uninformative SNPs; (ii) missing rate < 0.2; (iii) heterozygosity < 0.2; (iv) no sequence variation in the 100 bp flanking the SNP locus; and (v) exactly two alleles per locus.
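A minimal sketch of this screen is shown below (Python; the study used its own re-sequencing pipeline). The per-locus summary fields are hypothetical stand-ins for statistics computed across the 35 genomes.

```python
def is_perfect_snp(rec):
    """Apply the five perfect-SNP criteria to one candidate locus.

    `rec` is a hypothetical per-locus summary with the fields:
      maf             minor allele frequency across the 35 lines
      miss_rate       fraction of lines with missing genotype calls
      het_rate        fraction of heterozygous calls
      n_alleles       number of distinct alleles observed
      flank_variants  other variants within the 100 bp flanking sequence
    """
    return (rec["maf"] > 0.4                # (i) informative SNPs only
            and rec["miss_rate"] < 0.2      # (ii)
            and rec["het_rate"] < 0.2       # (iii)
            and rec["flank_variants"] == 0  # (iv) clean flanking sequence
            and rec["n_alleles"] == 2)      # (v) strictly biallelic

candidates = [
    {"maf": 0.45, "miss_rate": 0.05, "het_rate": 0.10,
     "n_alleles": 2, "flank_variants": 0},   # passes all five criteria
    {"maf": 0.45, "miss_rate": 0.05, "het_rate": 0.10,
     "n_alleles": 2, "flank_variants": 3},   # fails criterion (iv)
]
print(sum(is_perfect_snp(rec) for rec in candidates))  # 1
```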
Target SNP-seq
The Target SNP-seq procedure was completed as previously described, using the SNPs identified above [35]. In brief, library construction consisted of two rounds of PCR: the first round amplified and captured the target SNPs in the DNA samples using the multiplexed panel of perfect SNP primers (Additional file 9: Table S2); the second round added a unique barcode to the capture product of each DNA sample, so that samples could be distinguished by their barcodes. The multiplexed PCR was conducted in a 30 μl reaction mixture containing 50 ng genomic DNA template, 8 μl of the multiplexed SNP-capture panel primers (10 μM), and 10 μl of 3 M enzyme (Molbreeding Biotechnology Company, Shijiazhuang, China). The PCR mixtures were heated at 95 °C for 5 min, followed by 17 cycles of 95 °C for 30 s, 60 °C for 4 min, and 72 °C for 4 min, with a final 4 min extension at 72 °C. The PCR products were purified using a magnetic bead suspension and 80% alcohol. The second PCR amplification was performed in a 30 μl reaction volume containing 11 μl of purified product from the first round, 10 μl of 3 M Taq enzyme (Molbreeding Biotechnology Company, Shijiazhuang, China), 8 μl nuclease-free water, and 1 μl of primers with the following sequences: forward 5′-AATGATACGGCGACCACCGAGATCTACACTCTTTCCCTACACGACGCTCTTCCG-3′ and reverse 5′-CAAGCAGAAGACGGCATACGAGATXXXXXXXXGTGACTGGAGTTCCTTGGCACCCGAGA-3′ (the sample barcode is indicated by XXXXXXXX). The PCR program was 95 °C for 3 min, followed by 7 cycles of 95 °C for 15 s, 58 °C for 15 s, and 72 °C for 30 s, with a final 4 min extension at 72 °C. The PCR products were then purified with 100 μl of 80% alcohol and eluted in 23 μl Tris-HCl buffer (10 mM, pH 8.0–8.5). The Target SNP-seq library was sequenced on an Illumina X Ten platform at Molbreeding Biotechnology Company (Shijiazhuang, China).
SNP genotype analysis of target SNP-seq
The raw data from Target SNP-seq were de-multiplexed into per-variety reads based on the sample-specific barcodes using the Illumina bcl2fastq pipeline (Illumina, San Diego, CA, USA). Reads were cleaned using Trimmomatic, and the clean reads of each variety were mapped to the Zunla-1 reference genome [3] using BWA with default parameters. Sequence depth, alignment rate, target alignment rate, and uniformity for each variety were calculated as follows to evaluate the targeted sequencing results.
$$ \text{Alignment rate} = \frac{\text{number of reads aligned to the genome}}{\text{total reads}} $$

$$ \text{Target alignment rate} = \frac{\text{number of reads aligned to the target region}}{\text{total reads}} $$

Uniformity refers to the proportion of SNPs with a depth greater than 10% of the average depth.

$$ \text{Depth of each SNP} = \frac{\text{total bases generated at the perfect SNP}}{\text{read length}} $$

$$ \text{Average depth} = \frac{S}{M \times N \times L}, $$

where S is the total number of bases generated by Target SNP-seq, M is the total number of varieties, N is the total number of SNPs, and L is the average read length. Similarly,

$$ \text{Sequence depth for each variety} = \frac{\text{total bases generated from the variety}}{\text{total length of the 92 targeted genome regions}}. $$

SNP genotypes were called using GATK. Based on the high-throughput sequencing results, the alleles with the highest and second-highest read counts were treated as the major and minor alleles at each target SNP locus. When the read frequency of the major allele exceeded 0.7, the locus was called homozygous; when the read frequencies of the major and minor alleles were both above 0.35, the locus was called heterozygous.
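The Python sketch below illustrates the genotype-calling thresholds and the uniformity statistic defined above. It is a simplified illustration of the rules as stated, not the GATK-based pipeline actually used.

```python
def call_genotype(allele_reads):
    """Call one target SNP genotype from per-allele read counts.

    Major-allele frequency > 0.7 gives a homozygous call; major and minor
    frequencies both > 0.35 give a heterozygous call; anything else is
    treated here as ambiguous (missing).
    """
    total = sum(allele_reads.values())
    if total == 0:
        return None
    ranked = sorted(allele_reads.items(), key=lambda kv: kv[1], reverse=True)
    a1, n1 = ranked[0]                                   # major allele
    a2, n2 = ranked[1] if len(ranked) > 1 else (a1, 0)   # minor allele
    if n1 / total > 0.7:
        return (a1, a1)
    if n1 / total > 0.35 and n2 / total > 0.35:
        return (a1, a2)
    return None

def uniformity(depths):
    """Proportion of SNPs whose depth exceeds 10% of the mean depth."""
    mean = sum(depths) / len(depths)
    return sum(d > 0.1 * mean for d in depths) / len(depths)

print(call_genotype({"A": 950, "G": 50}))   # ('A', 'A'), homozygous
print(call_genotype({"A": 520, "G": 480}))  # ('A', 'G'), heterozygous
print(uniformity([2100, 1900, 2300, 150]))  # 0.75
```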
Determination of genetic parameters for each perfect SNP
Genetic parameters of the perfect SNPs, including observed heterozygosity (Ho), expected heterozygosity (He), and polymorphism information content (PIC) [76], were calculated using a Perl script; PIC was computed with the following equation:
$$ \mathrm{PIC} = 1 - \sum_{i=1}^{l} P_i^2 - \sum_{i=1}^{l-1} \sum_{j=i+1}^{l} 2 P_i^2 P_j^2, $$
where l is the number of alleles at the locus, and Pi and Pj are the population frequencies of the ith and jth alleles. The chromosomal distribution of the perfect SNPs was mapped using Circos (http://circos.ca/) with the SNP region magnified to 2 Mb.
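For illustration, the sketch below computes MAF, Ho, He, and the PIC formula above from the diploid genotype calls at a single locus (Python; the study used a Perl script). The example genotypes are invented.

```python
import numpy as np

def snp_stats(genotypes):
    """MAF, Ho, He, and PIC for one locus from diploid genotype calls.

    genotypes: list of 2-tuples of alleles, e.g. [("A", "A"), ("A", "G")].
    """
    alleles = [a for g in genotypes for a in g]
    _, counts = np.unique(alleles, return_counts=True)
    p = counts / counts.sum()                    # allele frequencies P_i
    maf = np.sort(p)[-2] if len(p) > 1 else 0.0  # 2nd most common allele
    ho = sum(g[0] != g[1] for g in genotypes) / len(genotypes)
    he = 1.0 - float(np.sum(p ** 2))
    # PIC = 1 - sum_i P_i^2 - sum_{i<j} 2 P_i^2 P_j^2
    pic = he - sum(2 * p[i] ** 2 * p[j] ** 2
                   for i in range(len(p) - 1)
                   for j in range(i + 1, len(p)))
    return maf, ho, he, pic

calls = [("A", "A")] * 5 + [("A", "G")] * 6 + [("G", "G")] * 4
print([round(v, 3) for v in snp_stats(calls)])  # [0.467, 0.4, 0.498, 0.374]
```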
Genetic structure analysis
Genetic relationships among varieties were investigated using three methods: PCA, STRUCTURE, and phylogenetic tree analysis. PCA was carried out using the FactoMineR package in R [77]. The Bayesian model-based procedure implemented in STRUCTURE v2.3 [75, 78] was used to determine population structure, with the number of populations (K) determined as described above. The unrooted phylogenetic tree was constructed with the ape and poppr packages in R using the neighbor-joining method and viewed in MEGA v5.1 [79, 80].
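As a minimal illustration of this step, the sketch below runs a PCA on a genotype matrix coded as minor-allele counts. The study used FactoMineR in R; here the matrix is randomly generated, so only the mechanics carry over.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Hypothetical 271 varieties x 92 SNPs, coded 0/1/2 (minor-allele counts)
X = rng.integers(0, 3, size=(271, 92)).astype(float)

pca = PCA(n_components=2)        # sklearn centers each column internally
coords = pca.fit_transform(X)    # PC1/PC2 coordinates for each variety
print(coords.shape, pca.explained_variance_ratio_)
```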
Population diversity analysis
Ho, He, PIC, and MAF for the different fruit shape populations, as well as for the subpopulations inferred from STRUCTURE, were calculated using the methods described above. To measure genetic differences among populations and subpopulations, AMOVA and pairwise Fst were computed using the poppr R package and the pairwise.neifst function in the hierfstat R package, respectively [81, 82]. Randomization tests of the significance of differentiation were performed using the randtest function in the ade4 package [81]. Detailed instructions for the AMOVA and randomization tests are available at https://grunwaldlab.github.io/Population_Genetics_in_R/AMOVA.html and https://rdrr.io/cran/poppr/man/poppr.amova.html.
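The sketch below illustrates a simplified Nei-style pairwise Fst for biallelic loci, computed as (Ht − Hs)/Ht averaged over loci. The study used the more elaborate pairwise.neifst estimator in hierfstat, so this is a conceptual sketch only, with invented allele frequencies.

```python
import numpy as np

def pairwise_fst(freq_a, freq_b):
    """Simplified Nei-style Fst between two populations (biallelic loci).

    freq_a, freq_b: per-locus reference-allele frequencies in each
    population. Hs is the mean within-population expected heterozygosity
    and Ht the expected heterozygosity from pooled allele frequencies.
    """
    p_a, p_b = np.asarray(freq_a, float), np.asarray(freq_b, float)
    hs = 0.5 * (2 * p_a * (1 - p_a) + 2 * p_b * (1 - p_b))
    p_bar = 0.5 * (p_a + p_b)
    ht = 2 * p_bar * (1 - p_bar)
    keep = ht > 0                 # skip loci monomorphic in both populations
    return float(np.mean((ht[keep] - hs[keep]) / ht[keep]))

print(round(pairwise_fst([0.9, 0.8, 0.7], [0.2, 0.3, 0.4]), 3))  # 0.279
```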
Discrimination power of the perfect SNPs
To determine the minimal number of SNPs distinguishing the maximum number of pepper varieties, a Perl script was developed to compute the best discrimination power for 1 to 92 perfect SNPs according to the following algorithm.
1) Selection of the first SNP: a) pairwise comparisons between varieties were conducted for each SNP, giving 36,585 comparisons per SNP; b) Xij = 1 if a genotype difference existed for the jth pairwise comparison at the ith SNP (i = 1, 2, 3, ..., 92; j = 1, 2, 3, ..., 36,585), and Xij = 0 otherwise; c) the SNP with the maximum value of \( \sum_{j=1}^{36585} X_{ij} \) among the 92 SNPs was selected as the first SNP.

2) Selection of the best two SNPs: a) 91 SNP combinations were formed from the first selected SNP and each of the remaining 91 SNPs; b) 36,585 pairwise comparisons were conducted for each of the 91 combinations; c) Xmj = 1 if a genotype difference existed for the jth pairwise comparison of the mth SNP combination (m = 1, 2, 3, ..., 91; j = 1, 2, 3, ..., 36,585), and Xmj = 0 otherwise; d) the combination with the maximum value of \( \sum_{j=1}^{36585} X_{mj} \) was selected as the best two SNPs. If several combinations tied, the combination whose second SNP was located on a different chromosome from the first SNP was preferred.

3) Selection of the best three SNPs: a) 90 combinations were formed from the best two SNPs and each of the remaining 90 SNPs; steps b), c), and d) were then conducted as in step 2). If several combinations tied, the combination whose third SNP was located on a different chromosome from the first and second SNPs was preferred.

The best 4 to 92 SNPs were selected iteratively as in step 3). The discrimination power of the best 1 to 92 SNPs was calculated as: discrimination power = number of varieties showing unique genotypes / 271. The saturation curve was plotted from the discrimination power of the best 1 to 92 SNPs (Fig. 3); high discrimination power corresponds to high saturation and high SNP discernibility. A minimal sketch of this greedy selection is given after this list.
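The sketch below is a minimal Python rendering of this greedy forward selection (the study used a Perl script). The chromosome-based tie-breaking rule is omitted for brevity, and the genotype matrix is a small random toy rather than the real 271 × 92 panel.

```python
import numpy as np
from itertools import combinations

def greedy_snp_selection(G):
    """Greedy forward selection of SNPs by discrimination power.

    G: (n_varieties x n_snps) matrix of genotype codes. Each step adds
    the SNP that distinguishes the most not-yet-distinguished pairs.
    """
    n, m = G.shape
    pairs = list(combinations(range(n), 2))
    done = np.zeros(len(pairs), dtype=bool)  # pair already distinguished?
    selected = []
    while len(selected) < m and not done.all():
        gains = []
        for s in range(m):
            if s in selected:
                gains.append(-1)
            else:
                gains.append(sum(1 for j, (a, b) in enumerate(pairs)
                                 if not done[j] and G[a, s] != G[b, s]))
        best = int(np.argmax(gains))
        selected.append(best)
        for j, (a, b) in enumerate(pairs):
            if G[a, best] != G[b, best]:
                done[j] = True
        # discrimination power = share of varieties with a unique profile
        profiles = [tuple(G[i, selected]) for i in range(n)]
        power = sum(profiles.count(p) == 1 for p in profiles) / n
        print(f"{len(selected)} SNPs: discrimination power = {power:.3f}")
    return selected

rng = np.random.default_rng(1)   # toy example: 20 varieties, 10 SNPs
greedy_snp_selection(rng.integers(0, 3, size=(20, 10)))
```

Because each step retains all previously selected SNPs and adds the single most informative new one, the printed values trace out a saturation curve of the kind shown in Fig. 3.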
Core SNPs set for variety discrimination
To develop a set of core SNPs for discriminating varieties on the KASPar platform, two allele-specific forward primers and one common reverse primer were designed for each perfect SNP marker. Sets of 23 to 95 commercial varieties were then used to assess the utility of the SNP markers on the KASPar platform, with fluorescence detected as previously described [83]. Detailed instructions are available at www.kbioscience.co.uk. The Perl script described above was used to select a core-SNP set from the successfully verified SNP markers; the markers giving the maximum variety discrimination (highest saturation value) were identified as the core-SNP set. The primer sequences of the core-SNP markers are shown in Additional file 11: Table S4.
Association analysis
The FSI of each variety was calculated as the ratio of maximum fruit height to maximum width. Ninety-two SNP loci (Additional file 9: Table S2) and 73 SSR loci (Additional file 6: Figure S6), all developed in this study and detected by target sequencing across the 271 varieties, were used for association analysis. The methods used for SSR target library construction and detection were the same as those used in Target SNP-seq. TASSEL 5.2.25 was used for the association analysis. An MLM incorporating both the fruit shape populations (Q matrix) and the kinship matrix (K matrix), and a general linear model (GLM) using the fruit shape populations (Q matrix) as a fixed factor, were used to identify loci conferring fruit shape. Marker-trait associations were considered significant at p < 10−4. Because the MLM + Q + K model has repeatedly been shown to be more effective than other models at detecting loci [84, 85], only data from the MLM + Q + K model are presented in this study. The phenotypic variation explained by each locus was taken as the R2 value from the MLM. Candidate genes between the nearest up- and downstream SNP loci flanking each significantly associated locus were identified from the published protein annotation of the CM334 genome [13].
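To illustrate the model comparison behind such marker-trait tests, the sketch below fits the simpler GLM (Q) model for one marker by ordinary least squares and tests the marker term with an F-test. The study itself used TASSEL's MLM with both the Q and K matrices; everything here, including the phenotypes and Q matrix, is simulated for demonstration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 271
# Hypothetical Q matrix (population memberships; one column dropped so the
# design stays full rank), a marker coded as minor-allele count, and FSI.
q = rng.dirichlet([1.0, 1.0, 1.0, 1.0], size=n)[:, :3]
marker = rng.integers(0, 3, size=n).astype(float)
fsi = 2.0 + 1.5 * marker + q @ np.array([3.0, -1.0, 0.5]) + rng.normal(size=n)

null = sm.OLS(fsi, sm.add_constant(q)).fit()                  # Q only
full = sm.OLS(fsi, sm.add_constant(np.column_stack([q, marker]))).fit()
f_value, p_value, _ = full.compare_f_test(null)               # marker term
print(p_value < 1e-4)                       # significant at the threshold
print(full.rsquared - null.rsquared)        # crude analogue of explained PV
```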
The raw sequence data reported in this paper have been deposited in the Genome Sequence Archive in BIG Data Center (BIG data center members, 2019), Beijing Institute of Genomics (BIG), Chinese Academy of Sciences, under accession number CRA001576. The data are publicly accessible at http://bigd.big.ac.cn/gsa.
AFLP:
Amplified fragment length polymorphism
AMOVA:
Analysis of molecular variance
FSI:
Fruit shape index
GATK:
Genome Analysis Toolkit
GBS:
Genotyping-by-sequencing
GBTS:
Genotyping by target sequencing
GLM:
General linear model
He :
Expected heterozygosity
Ho :
Observed heterozygosity
InDel:
Insertion or deletion
KASPar:
Kompetitive allele-specific PCR
MAF:
Minor allele frequency
MLM:
Mixed linear model
PCA:
Principal component analysis
PIC:
Polymorphism information content
RAPD:
Random amplified polymorphic DNA
RFLP:
Restriction fragment length polymorphism
SNP:
Single-nucleotide polymorphism
SSR:
Simple sequence repeats
Moscone EA, Scaldaferro MA, Grabiele M, Cecchini NM, Sánchez García Y, Jarret R, Daviña JR, Ducasse DA, Barboza GE, Ehrendorfer F. The evolution of chili peppers (Capsicum - Solanaceae): a cytogenetic perspective. Acta Hortic. 2007;745:137–70. https://doi.org/10.17660/ActaHortic.2007.745.5.
Olmstead RG, Bohs L, Migid HA, Santiago-Valentin E, Garcia VF, Collier SM. A molecular phylogeny of the Solanaceae. Taxon. 2008;57:1159–81. https://doi.org/10.1002/tax.574010.
Qin C, Yu CS, Shen YO, Fang XD, Chen L, Min JM, Cheng JW, Zhao SC, Xu M, Luo Y, et al. Whole-genome sequencing of cultivated and wild peppers provides insights into Capsicum domestication and specialization. Proc Natl Acad Sci USA. 2014;111:5135–40. https://doi.org/10.1073/pnas.1400975111.
Andrews J. Peppers: the domesticated Capsicums. Austin: University of Texas Press; 1984.
Gao P, Ma H, Luan F, Song H. DNA fingerprinting of Chinese melon provides evidentiary support of seed quality appraisal. PLoS One. 2012;7:e52431. https://doi.org/10.1371/journal.pone.0052431.
Tian HL, Wang FG, Zhao JR, Yi HM, Wang L, Wang R, Yang Y, Song W. Development of maizeSNP3072, a high-throughput compatible SNP array, for DNA fingerprinting identification of Chinese maize varieties. Mol Breed. 2015;35:136. https://doi.org/10.1007/s11032-015-0335-0.
McCouch SR, Chen XL, Panaud O, Temnykh S, Xu YB, Cho YG, Huang N, Ishii T, Blair M. Microsatellite marker development, mapping and applications in rice genetics and breeding. Plant Mol Biol. 1997;35:89–99. https://doi.org/10.1023/a:1005711431474.
Nagaraju J, Kathirvel M, Kumar RR, Siddiq EA, Hasnain SE. Genetic analysis of traditional and evolved Basmati and non-Basmati rice varieties by using fluorescence-based ISSR-PCR and SSR markers (correction to vol. 99, p. 5836, 2002). Proc Natl Acad Sci USA. 2002;99:13357. https://doi.org/10.1073/pnas.212463799.
Darine T, Allagui MB, Rouaissi M, Boudabbous A. Pathogenicity and RAPD analysis of Phytophthora nicotianae pathogenic to pepper in Tunisia. Physiol Mol Plan Pathol. 2007;70:142–8. https://doi.org/10.1016/j.pmpp.2007.08.002.
Lanteri S, Acquadro A, Quagliotti L, Portis E. RAPD and AFLP assessment of genetic variation in a landrace of pepper (Capsicum annuum L.), grown in North-West Italy. Gen Res Crop Evol. 2003;50:723–35. https://doi.org/10.1023/a:1025075118200.
Lefebvre V, Palloix A, Rives M. Nuclear RFLP between pepper cultivars (Capsicum annuum L). Euphytica. 1993;71:189–99. https://doi.org/10.1007/BF00040408.
Tanksley SD, Bernatzky R, Lapitan NL, Prince JP. Conservation of gene repertoire but not gene order in pepper and tomato. Proc Natl Acad Sci USA. 1988;85:6419–23. https://doi.org/10.1073/pnas.85.17.6419.
Kim S, Park M, Yeom SI, Kim YM, Lee JM, Lee HA, Seo E, Choi J, Cheong K, Kim KT, et al. Genome sequence of the hot pepper provides insights into the evolution of pungency in Capsicum species. Nat Genet. 2014;46:270–8. https://doi.org/10.1038/ng.2877.
Guo GJ, Zhang GL, Pan BG, Diao WP, Liu JB, Ge W, Gao CZ, Zhang Y, Jiang C, Wang SB. Development and application of InDel markers for Capsicum spp. based on whole-genome re-sequencing. Sci Rep. 2019;9:3691. https://doi.org/10.1038/s41598-019-40244-y.
Li WP, Cheng JW, Wu ZM, Qin C, Tan S, Tang X, Cui JJ, Zhang L, Hu KL. An InDel-based linkage map of hot pepper (Capsicum annuum). Mol Breed. 2015;35:32. https://doi.org/10.1007/s11032-015-0219-3.
Tan S, Cheng JW, Zhang L, Qin C, Nong DG, Li WP, Tang X, Wu ZM, Hu KL. Construction of an interspecific genetic map based on InDel and SSR for mapping the QTLs affecting the initiation of flower primordia in pepper (Capsicum spp.). PLoS One. 2015;10:e0119389. https://doi.org/10.1371/journal.pone.0119389.
Yumnam JS, Tyagi W, Pandey A, Meetei NT, Rai M. Evaluation of genetic diversity of chilli landraces from North Eastern India based on morphology, SSR markers and the Pun1 locus. Plant Mol Biol Report. 2012;30:1470–9. https://doi.org/10.1007/s11105-012-0466-y.
Zhang XF, Sun HH, Xu Y, Chen B, Yu SC, Geng SS, Wang Q. Development of a large number of SSR and InDel markers and construction of a high-density genetic map based on a RIL population of pepper (Capsicum annuum L.). Mol Breed. 2016;36:92.
Aguilar-Meléndez A, Morrell PL, Roose ML, Kim SC. Genetic diversity and structure in semiwild and domesticated chiles (Capsicum annuum; Solanaceae) from Mexico. Am J Bot. 2009;96:1190–202. https://doi.org/10.3732/ajb.0800155.
Ibiza VP, Blanca J, Canizares J, Nuez F. Taxonomy and genetic diversity of domesticated Capsicum species in the Andean region. Gen Res Crop Evol. 2012;59:1077–88. https://doi.org/10.1007/s10722-011-9744-z.
Fan JB, Oliphant A, Shen R, Kermani BG, Garcia F, Gunderson KL, Hansen M, Steemers F, Butler SL, Deloukas P, et al. Highly parallel SNP genotyping. Cold Spring Harb Symp Quant Biol. 2003;68:69–78. https://doi.org/10.1101/sqb.2003.68.69.
Steemers FJ, Gunderson KL. Whole genome genotyping technologies on the BeadArray™ platform. Biotechnol J. 2007;2:41–9. https://doi.org/10.1002/biot.200600213.
Livak KJ, Flood SJA, Marmaro J, Giusti W, Deetz K. Oligonucleotides with fluorescent dyes at opposite ends provide a quenched probe system useful for detecting PCR product and nucleic acid hybridization. Genome Res. 1995;4:357–62. https://doi.org/10.1101/gr.4.6.357.
Kang JH, Yang HB, Jeong HS, Cheo P, Kwon JK, Kang BC. Single nucleotide polymorphism marker discovery from transcriptome sequencing for marker-assisted backcrossing in Capsicum. Kor J Hortic Sci Technol. 2014;32:535–43. https://doi.org/10.7235/hort.2014.14109.
Taranto F, D'Agostino N, Greco B, Cardi T, Tripodi P. Genome-wide SNP discovery and population structure analysis in pepper (Capsicum annuum) using genotyping by sequencing. BMC Genomics. 2016;17:943. https://doi.org/10.1186/s12864-016-3297-7.
Nimmakayala P, Abburi VL, Saminathan T, Almeida A, Davenport B, Davidson J, Reddy CV, Hankins G, Ebert A, Choi D, Stommel J. Genome-wide divergence and linkage disequilibrium analyses for Capsicum baccatum revealed by genome-anchored single nucleotide polymorphisms. Front Plant Sci. 2016;7:1646. https://doi.org/10.3389/fpls.2016.01646.
Nimmakayala P, Abburi VL, Saminathan T, Alaparthi SB, Almeida A, Davenport B, Nadimi M, Davidson J, Tonapi K, Yadav L, Malkaram S, Vajja G, Hankins G, Harris R, Park M, Choi D, Stommel J, Reddy UK. Genome-wide diversity and association mapping for Capsaicinoids and fruit weight in Capsicum annuum L. Sci Rep. 2016;6:38081. https://doi.org/10.1038/srep38081.
Taitano N, Bernau V, Jardón-Barbolla L, Leckie B, Mazourek M, Mercer K, McHale L, Michel A, Baumler D, Kantar M, van der Knaap E. Genome-wide genotyping of a novel Mexican Chile Pepper collection illuminates the history of landrace differentiation after Capsicum annuum L. domestication. Evol Appl. 2018;12:78–92. https://doi.org/10.1111/eva.12651.
Pereira-Dias L, Vilanova S, Fita A, Prohens J, Rodríguez-Burruezo A. Genetic diversity, population structure, and relationships in a collection of pepper (Capsicum spp.) landraces from the Spanish centre of diversity revealed by genotyping-by-sequencing (GBS). Horticulture Res. 2019;6:54. https://doi.org/10.1038/s41438-019-0132-8.
Colonna V, D'Agostino N, Garrison E, Albrechtsen A, Meisner J, Facchiano A, Cardi T, Tripodi P. Genomic diversity and novel genome-wide association with fruit morphology in Capsicum, from 746k polymorphic sites. Sci Rep. 2019;9:10067. https://doi.org/10.1038/s41598-019-46136-5.
Li L, Fang ZW, Zhou JF, Chen H, Hu ZF, Gao LF, Chen LH, Ren S, Ma HY, Lu L, Zhang WX, Peng H. An accurate and efficient method for large-scale SSR genotyping and applications. Nucleic Acids Res. 2017;45. https://doi.org/10.1093/nar/gkx093.
Krasileva KV, Vasquez-Gross HA, Howell T, Bailey P, Paraiso F, Clissold L, Simmonds J, Ramirez-Gonzalez RH, Wang XD, Borrill P, Fosker C, Ayling S, Phillips AL, Uauy C, Dubcovsky J. Uncovering hidden variation in polyploid wheat. Proc Natl Acad Sci USA. 2017;114:913–21. https://doi.org/10.1073/pnas.1619268114.
Jiang L, Liu X, Yang J, Wang HF, Jiang JC, Liu LL, He S, Ding XD, Liu JF, Zhang Q. Targeted resequencing of GWAS loci reveals novel genetic variants for milk production traits. BMC Genomics. 2014;15:1105. https://doi.org/10.1186/1471-2164-15-1105.
Guo ZF, Wang HW, Tao JJ, Ren YH, Xu C, Wu KS, Zou C, Zhang JN, Xu YB. Development of multiple SNP marker panels affordable to breeders through genotyping by target sequencing (GBTS) in maize. Mol Breed. 2019;39:37. https://doi.org/10.1007/s11032-019-0940-4.
Yang JJ, Zhang J, Han RX, Zhang F, Mao AJ, Luo J, Dong BB, Liu H, Tang H, Zhang JN, Wen CL. Target SSR-seq: a novel SSR genotyping technology associate with perfect SSRs in genetic analysis of cucumber varieties. Front Plant Sci. 2019;10:531. https://doi.org/10.3389/fpls.2019.00531.
Zhang XM, Zhang ZH, Gu XZ, Mao SL, Li XX, Chadoeuf J, Palloix A, Wang LH, Zhang BX. Genetic diversity of pepper (Capsicum spp.) germplasm resources in China reflects selection for cultivar types and spatial distribution. J Integr Agric. 2016;15:1991–2001. https://doi.org/10.1016/S2095-3119(16)61364-3.
Meng CY, Wei XC, Zhao YY, Yuan YX, Yang SJ, Wang ZY, Zhang XW, Sun JW, Zheng XL, Yao QJ, Zhang Q. Genetic diversity analysis of Capsicum genus by SSR markers. Mol Plant Breed. 2017;8:70–8. https://doi.org/10.5376/mpb.2017.08.0008.
Evanno G, Regnaut S, Goudet J. Detecting the number of clusters of individuals using the software STRUCTURE: a simulation study. Mol Ecol. 2005;14:2611–20. https://doi.org/10.1111/j.1365-294X.2005.02553.x.
Lee JM, Nahm SH, Kim YM, Kim BD. Characterization and molecular genetic mapping of microsatellite loci in pepper. Theor Appl Genet. 2004;108:619–27. https://doi.org/10.1007/s00122-003-1467-x.
Taranto F, D'Agostino N, Greco B, Cardi T, Tripodi P. Genome-wide SNP discovery and population structure analysis in pepper (Capsicum annuum) using genotyping by sequencing. BMC Genomics. 2016;17. https://doi.org/10.1186/s12864-016-3297-7.
Hill TA, Ashrafi H, Reyes-Chin-Wo S, Yao JQ, Stoffel K, Truco MJ, Kozik A, Michelmore RW, Van Deynze A. Characterization of Capsicum annuum genetic diversity and population structure based on parallel polymorphism discovery with a 30K unigene Pepper GeneChip. PLoS One. 2013;8:e56200. https://doi.org/10.1371/journal.pone.0056200.
Solomon AM, Han K, Lee J-H, Lee H-Y, Jang S, Kang B-C. Genetic diversity and population structure of Ethiopian Capsicum germplasms. PLoS One. 2019;14:e0216886. https://doi.org/10.1371/journal.pone.0216886.
Nicolai M, Cantet M, Lefebvre V, Sage-Palloix AM, Palloix A. Genotyping a large collection of pepper (Capsicum spp.) with SSR loci brings new evidence for the wild origin of cultivated C. annuum and the structuring of genetic diversity by human selection of cultivar types. Gen Res Crop Evol. 2013;60:2375–90. https://doi.org/10.1007/s10722-013-0006-0.
Tam SM, Lefebvre V, Palloix A, Sage-Palloix AM, Mhiri C, Grandbastien MA. LTR-retrotransposons Tnt1 and T135 markers reveal genetic diversity and evolutionary relationships of domesticated peppers. Theor Appl Genet. 2009;119:973–89. https://doi.org/10.1007/s00122-009-1102-6.
Chaim AB, Paran I, Grube RC, Jahn M, van Wijk R, Peleman J. QTL mapping of fruit-related traits in pepper (Capsicum annuum). Theor Appl Genet. 2001;102:1016–28. https://doi.org/10.1007/s001220000461.
Han K, Jeong HJ, Yang HB, Kang SM, Kwon JK, Kim S, Choi D, Kang BC. An ultra-high-density bin map facilitates high-throughput QTL mapping of horticultural traits in pepper (Capsicum annuum). DNA Res. 2016;23:81–91. https://doi.org/10.1093/dnares/dsv038.
Yarnes SC, Ashrafi H, Reyes-Chin-Wo S, Hill TA, Stoffel KM, VanDeynze A. Identification of QTLs for capsaicinoids, fruit quality, and plant architecture-related traits in an interspecific Capsicum RIL population. Genome. 2013;56:61–74. https://doi.org/10.1139/gen-2012-0083.
Chaim AB, Borovsky Y, De Jong W, Paran I. Linkage of the A locus for the presence of anthocyanin and fs10.1, a major fruit-shape QTL in pepper. Theor Appl Genet. 2003;106:889–94. https://doi.org/10.1007/s00122-002-1132-9.
Rao GU, Chaim AB, Borovsky Y, Paran I. Mapping of yield-related QTL in pepper in an interspecific cross of Capsicum annuum and C. frutescens. Theor Appl Genet. 2003;106:1457–66. https://doi.org/10.1007/s00122-003-1204-5.
Zygier S, Chaim AB, Efrati A, Kaluzky G, Borovsky Y, Paran I. QTL mapping for fruit size and shape in chromosomes 2 and 4 in pepper and a comparison of the pepper QTL map with that of tomato. Theor Appl Genet. 2005;111:437–45. https://doi.org/10.1007/s00122-005-2015-7.
Barchi L, Lefebvre V, Sage-Palloix AM, Lanteri S, Palloix A. QTL analysis of plant development and fruit traits in pepper and performance of selective phenotyping. Theor Appl Genet. 2009;118:1157–71. https://doi.org/10.1007/s00122-009-0970-0.
Hill TA, Chunthawodtiporn J, Ashrafi H, Stoffel K, Weir A, Van Deynze A. Regions underlying population structure and the genomics of organ size determination in Capsicum annuum. Plant Genome. 2017. https://doi.org/10.3835/plantgenome2017.03.0026.
Chunthawodtiporn J, Hill T, Stoffel K, Van Deynze A. Quantitative trait loci controlling fruit size and other horticultural traits in bell pepper (Capsicum annuum). Plant Genome. 2018;11:160125. https://doi.org/10.3835/plantgenome2016.12.0125.
van der Knaap E, Chakrabarti M, Chu YH, Clevenger JP, Illa-Berenguer E, Huang ZJ, Keyhaninejad N, Mu Q, Sun L, Wang YP, Wu S. What lies beyond the eye: the molecular mechanisms regulating tomato fruit weight and shape. Front Plant Sci. 2014;5:227. https://doi.org/10.3389/fpls.2014.00227.
Wu S, Zhang BY, Keyhaninejad N, Rodriguez GR, Kim HJ, Chakrabarti M, Illa-Berenguer E, Taitano NK, Gonzalo MJ, Diaz A, Pan YP, Leisner CP, Halterman D, Buell CR, Weng YQ, Jansky SH, van Eck H, Willemsen J, Monforte AJ, Meulia T, van der Knaap E. A common genetic mechanism underlies morphological diversity in fruits and other plant organs. Nat Commun. 2018;9:4734. https://doi.org/10.1038/s41467-018-07216-8.
Kraft KH, Brown CH, Nabhan GP, Luedeling E, Ruiz JDL, d'Eeckenbrugge GC, Hijmans RJ, Gepts P. Multiple lines of evidence for the origin of domesticated chili pepper, Capsicum annuum, in Mexico. Proc Natl Acad Sci USA. 2014;111:6165–70. https://doi.org/10.1073/pnas.1308933111.
Bosland PW, Votava E. Peppers: vegetable and spice Capsicums. Wallingford, Oxford: CABI; 2000.
Geng SS, Chen B, Zhang XF, Sun JT. Hot pepper breeding development and its varieties's distribution in China. J China Capsicum. 2011;1:1–5 (In Chinese). Available from: https://www.ifabiao.com/lj/201103/15393478.html.
Geng SS, Chen B, Zhang XF, Du HS. The trend of market demand and breeding strategies of pepper varieties in China. China Vegetables. 2015;3:1–5 (In Chinese) Available from: http://www.cnveg.org/UserFiles/File/3-1.pdf.
Moreira AFP, Ruas PM, Ruas CD, Baba VY, Giordani W, Arruda IM, Rodrigues R, Goncalves LSA. Genetic diversity, population structure and genetic parameters of fruit traits in Capsicum chinense. Sci Hortic. 2018;236:1–9. https://doi.org/10.1016/j.scienta.2018.03.012.
Ou LJ, Li D, Lv JH, Chen WC, Zhang ZQ, Li XF, Yang BZ, Zhou SD, Yang S, Li WG, et al. Pan-genome of cultivated pepper (Capsicum) and its use in gene presence-absence variation analyses. New Phytol. 2018;220:360–3. https://doi.org/10.1111/nph.15413.
Lee HY, Ro NY, Jeong HJ, Kwon JK, Jo J, Ha Y, Jung A, Han JW, Venkatesh J, Kang BC. Genetic diversity and population structure analysis to construct a core collection from a large Capsicum germplasm. BMC Genet. 2016;17:142. https://doi.org/10.1186/s12863-016-0452-8.
Moses M, Umaharan P, Dayanandan S. Microsatellite based analysis of the genetic structure and diversity of Capsicum chinense in the Neotropics. Gen Res Crop Evol. 2014;61:741–55. https://doi.org/10.1007/s10722-013-0069-y.
Baral JB, Bosland PW. Unraveling the species dilemma in Capsicum frutescens and C. chinense (Solanaceae): a multiple evidence approach using morphology, molecular analysis, and sexual compatibility. J Amer Soc Hort Sci. 2004;129:826–32. https://doi.org/10.21273/JASHS.129.6.0826.
Gonzalez-Perez S, Garces-Claver A, Mallor C, de Miera LES, Fayos O, Pomar F, Merino F, Silvar C. New insights into Capsicum spp relatedness and the diversification process of Capsicum annuum in Spain. PLoS One. 2014;9:e116276. https://doi.org/10.1371/journal.pone.0116276.
Yang HB, Liu WY, Kang WH, Kim JH, Cho HJ, Yoo JH, Kang BC. Development and validation of L allele-specific markers in Capsicum. Mol Breed. 2012;30:819–29. https://doi.org/10.1007/s11032-011-9666-7.
Rehrig WZ, Ashrafi H, Hill T, Prince J, Van Deynze A. CaDMR1 Cosegregates with QTL Pc5.1 for resistance to Phytophthora capsici in pepper (Capsicum annuum). Plant Genome. 2014;7:1–12. https://doi.org/10.3835/plantgenome2014.03.0011.
Liu WY, Kang JH, Jeong HS, Choi HJ, Yang HB, Kim KT, Choi D, Choi GJ, Jahn M, Kang BC. Combined use of bulked segregant analysis and microarrays reveals SNP markers pinpointing a major QTL for resistance to Phytophthora capsici in pepper. Theor Appl Genet. 2014;127:2503–13. https://doi.org/10.1007/s00122-014-2394-8.
Romer P, Hahn S, Jordan T, Strauss T, Bonas U, Lahaye T. Plant pathogen recognition mediated by promoter activation of the pepper Bs3 resistance gene. Science. 2007;318:645–8. https://doi.org/10.1126/science.1144958.
Yeam I, Kang BC, Lindeman W, Frantz JD, Faber N, Jahn MM. Allele-specific CAPS markers based on point mutations in resistance alleles at the pvr1 locus encoding eIF4E in Capsicum. Theor Appl Genet. 2005;112:178–86. https://doi.org/10.1007/s00122-005-0120-2.
Fulton TM, Chunwongse J, Tanksley SD. Microprep protocol for extraction of DNA from tomato and other herbaceous plants. Plant Mol Biol Report. 1995;13:207–9. https://doi.org/10.1007/bf02670897.
Bolger AM, Lohse M, Usadel B. Trimmomatic: a flexible trimmer for Illumina sequence data. Bioinformatics. 2014;30:2114–20. https://doi.org/10.1093/bioinformatics/btu170.
McKenna A, Hanna M, Banks E, Sivachenko A, Cibulskis K, Kernytsky A, Garimella K, Altshuler D, Gabriel S, Daly M, DePristo MA. The Genome Analysis Toolkit: A mapreduce framework for analyzing next-generation DNA sequencing data. Genome Res. 2010;20:1297–303. https://doi.org/10.1101/gr.107524.110.
Price MN, Dehal PS, Arkin AP. FastTree 2-approximately maximum-likelihood trees for large alignments. PLoS One. 2010;5:e9490. https://doi.org/10.1371/journal.pone.0009490.
Pritchard JK, Stephens M, Donnelly P. Inference of population structure using multilocus genotype data. Genetics. 2000;155:945–59 Available from: https://www.ncbi.nlm.nih.gov/pubmed/10835412.
Botstein D, White RL, Skolnick M, Davis RW. Construction of a genetic linkage map in man using restriction fragment length polymorphisms. Am J Hum Genet. 1980;32:314–31 Available from: https://www.ncbi.nlm.nih.gov/pubmed/6247908.
Husson F, Josse J, Pages J. Principal component methods - hierarchical clustering - partitional clustering: why would we need to choose for visualizing data? Technical report, Agrocampus, Applied Mathematics Department. 2010. Available from: http://www.agrocampus-ouest.fr/math/
Falush D, Stephens M, Pritchard JK. Inference of population structure using multilocus genotype data: linked loci and correlated allele frequencies. Genetics. 2003;164:1567–87 Available from: https://www.ncbi.nlm.nih.gov/pubmed/12930761.
Nei M. Estimation of average heterozygosity and genetic distance from a small number of individuals. Genetics. 1978;89:583–90 Available from: https://www.ncbi.nlm.nih.gov/pubmed/17248844.
Kamvar ZN, Tabima JF, Grünwald NJ. Poppr: an R package for genetic analysis of populations with clonal, partially clonal, and/or sexual reproduction. PeerJ. 2014;2:e281. https://doi.org/10.7717/peerj.281.
Excoffier L, Smouse PE, Quattro JM. Analysis of molecular variance inferred from metric distances among DNA haplotypes: application to human mitochondrial DNA restriction data. Genetics. 1992;131:479–91.
de Meeus T, Goudet J. A step-by-step tutorial to use HierFstat to analyse populations hierarchically structured at multiple levels. Infect Genet Evol. 2007;7:731–5. https://doi.org/10.1016/j.meegid.2007.07.005.
Su TB, Li PR, Yang JJ, Sui GL, Yu YJ, Zhang DS, Zhao XY, Wang WH, Wen CL, Yu SC, Zhang FL. Development of cost-effective single nucleotide polymorphism marker assays for genetic diversity analysis in Brassica rapa. Mol Breed. 2018;38:42. https://doi.org/10.1007/s11032-018-0795-0.
Pace J, Gardner C, Romay C, Ganapathysubramanian B, Lubberstedt T. Genome-wide association analysis of seedling root development in maize (Zea mays L.). BMC Genomics. 2015;16:47. https://doi.org/10.1186/s12864-015-1226-9.
Sim SC, Robbins MD, Wijeratne S, Wang H, Yang WC, Francis DM. Association analysis for bacterial spot resistance in a directionally selected complex breeding population of tomato. Phytopathology. 2015;105:1437–45. https://doi.org/10.3835/plantgenome2017.03.0026.
The authors thank Dr. Jianan Zhang for assistance with the bioinformatics analysis.
This work was partially financed by the National Key Research and Development of China (Grant No. 2017YFD0102004 to CW, 2017YFD0101901 to SG, and 2016YFD0101704 to BC), National Science Foundation of China (Grant No. 31701913) to HD, Beijing Nova Program (Grant No. Z181100006218060) to CW, Beijing Municipal Department of Organization (Grant No. 2016000021223ZK22) to CW, Ministry of Agriculture and Rural Affairs, China (Grant No. 11162130109236051) to CW, and Beijing Academy of Agricultural and Forestry Sciences (Grant No. KJCX20170402, KJCX20161503, QNJJ201810, and KJCX2017102) to CW. The funding bodies had no role in the design of the study and collection, analysis, and interpretation of data or in writing the manuscript.
Heshan Du and Jingjing Yang contributed equally to this work.
Beijing Vegetable Research Center (BVRC), Beijing Academy of Agricultural and Forestry Sciences, Beijing, 100097, China
Heshan Du, Jingjing Yang, Bin Chen, Xiaofen Zhang, Jian Zhang, Sansheng Geng & Changlong Wen
Beijing Key Laboratory of Vegetable Germplasm Improvement, National Engineering Research Center for Vegetables, Beijing, 100097, China
Institute of Vegetables and Flowers, Chinese Academy of Agricultural Sciences, Beijing, 100081, China
Kun Yang
Heshan Du
Jingjing Yang
Bin Chen
Xiaofen Zhang
Jian Zhang
Sansheng Geng
Changlong Wen
CW and SG designed the research. HD and JY performed the bioinformatics analysis. KY, BC, and XZ contributed materials and helped with data analysis. HD, BC, XZ, and JZ performed the experiments. HD drafted the manuscript. All authors read and approved the final version of the manuscript.
Correspondence to Sansheng Geng or Changlong Wen.
Examples of fruit shape classification. Fruit shapes were categorized into four types as (A) blocky-fruited: blocky shape, 5.0–12.5 cm wide at the shoulder, 7.0–18 cm long, 3–4 lobes, including Fang Jiao, Chang Fang Jiao, and Ma La Jiao, as named in China; (B) long horn-fruited: long horn shape, 3.0–8.0 cm wide at the shoulder, 10.0–35.0 cm long, without lobe, including Niu Jiao Jiao, Yang Jiao Jiao, and Luo Si Jiao, as named in China; (C) short horn-fruited: cone-shaped, medium-hot, 1.0–3.0 cm in diameter at the base, 3.5–10.0 cm in length, and with very thin pericarp, including Gan Jiao and Chao Tian Jiao, as named in China; (D) linear-fruited: cayenne type, 1.0–3.0 cm wide by 10.0–35.0 cm long, without shoulder and lobe, including Xian Jiao, Tiao Jiao and Mei Ren Jiao, as named in China.
Target SNP-seq genotyping analysis results. Distribution of the average read depths (A), reads alignment rate to the pepper reference genome (B), target region alignment rate (C), and uniformity for 271 pepper varieties (D).
Genetic diversity analysis for the 92 perfect SNPs across 271 pepper varieties. Minor allele frequency (MAF; A), observed heterozygosity (Ho; B), expected heterozygosity (He; C), and polymorphism information content (PIC; D).
Kompetitive allele-specific PCR (KASPar) results of the 35 core SNP markers genotyped across 23 to 95 pepper varieties.
Significance testing of differentiation between Pops and among Subpops. The graphs show significant population differentiation at all levels given that the observed line (black) does not fall within the distribution expected of the permutation.
Chromosomal map of a subset of markers used in association analysis with fruit shape index (FSI). The physical position of each marker on 12 chromosomes of C. annuum Zunla-1 [3] are shown between brackets. Significantly associated markers are shown in red color.
Manhattan plots (A) and quantile-quantile plots (B) of fruit shape index (FSI) in the 271 pepper varieties. Red dashed line indicates high probability of associated loci with FSI.
Classification of and information on the pepper varieties used in this study.
Multiplexed primers panel of the 92 perfect SNPs used for Target SNP-seq.
Additional file 10: Table S3.
Characteristics of the 92 perfect SNPs and the diversity detected in the 271 pepper varieties and four fruit shape populations.
Primer sequences of the 35 core-SNP markers developed in this study.
Range, mean, and standard deviations (SD) collected for fruit shape index (FSI) in the pepper varieties.
Association regions with fruit shape index (FSI) in the reference genomes of CM334 and Zunla-1.
Specific primers used to detect functional resistance loci against four pepper diseases through Target SNP-seq.
Du, H., Yang, J., Chen, B. et al. Target sequencing reveals genetic diversity, population structure, core-SNP markers, and fruit shape-associated loci in pepper varieties. BMC Plant Biol 19, 578 (2019). https://doi.org/10.1186/s12870-019-2122-2
Title: SU(3) sphaleron: Numerical solution
Authors: F.R. Klinkhamer, P. Nagel
(Submitted on 25 Apr 2017 (v1), last revised 12 Jul 2017 (this version, v5))
Abstract: We complete the construction of the sphaleron $\widehat{S}$ in $SU(3)$ Yang-Mills-Higgs theory with a single Higgs triplet by solving the reduced field equations numerically. The energy of the $SU(3)$ sphaleron $\widehat{S}$ is found to be of the same order as the energy of a previously known solution, the embedded $SU(2)\times U(1)$ sphaleron $S$. In addition, we discuss $\widehat{S}$ in an extended $SU(3)$ Yang-Mills-Higgs theory with three Higgs triplets, where all eight gauge bosons get an equal mass in the vacuum. This extended $SU(3)$ Yang-Mills-Higgs theory may be considered as a toy model of quantum chromodynamics without quark fields and we conjecture that the $\widehat{S}$ gauge fields play a significant role in the nonperturbative dynamics of quantum chromodynamics (which does not have fundamental scalar fields but gets a mass scale from quantum effects).
Comments: 36 pages, 6 figures, v5: published version
Subjects: High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Theory (hep-th)
Journal reference: Phys. Rev. D 96, 016006 (2017)
Report number: KA-TP-18-2017
From: Frans Klinkhamer
[v1] Tue, 25 Apr 2017 15:58:52 UTC (559 KB)
[v2] Tue, 2 May 2017 17:23:41 UTC (568 KB)
[v3] Mon, 8 May 2017 17:46:56 UTC (229 KB)
[v4] Tue, 6 Jun 2017 13:13:33 UTC (232 KB)
[v5] Wed, 12 Jul 2017 16:52:06 UTC (231 KB)
Module 4: Graph Theory
Hamiltonian Circuits
Determine whether a graph has an Euler path and/or circuit
Use Fleury's algorithm to find an Euler circuit
Add edges to a graph to create an Euler circuit if one doesn't exist
Identify whether a graph has a Hamiltonian circuit or path
Find the optimal Hamiltonian circuit for a graph using the brute force algorithm, the nearest neighbor algorithm, and the sorted edges algorithm
Identify a connected graph that is a spanning tree
Use Kruskal's algorithm to form a spanning tree, and a minimum cost spanning tree
Hamiltonian Circuits and the Traveling Salesman Problem
In the last section, we considered optimizing a walking route for a postal carrier. How is this different than the requirements of a package delivery driver? While the postal carrier needed to walk down every street (edge) to deliver the mail, the package delivery driver instead needs to visit every one of a set of delivery locations. Instead of looking for a circuit that covers every edge once, the package deliverer is interested in a circuit that visits every vertex once.
Hamiltonian Circuits and Paths
A Hamiltonian circuit is a circuit that visits every vertex once with no repeats. Being a circuit, it must start and end at the same vertex. A Hamiltonian path also visits every vertex once with no repeats, but does not have to start and end at the same vertex.
Hamiltonian circuits are named for William Rowan Hamilton who studied them in the 1800's.
One Hamiltonian circuit is shown on the graph below. There are several other Hamiltonian circuits possible on this graph. Notice that the circuit only has to visit every vertex once; it does not need to use every edge.
This circuit could be notated by the sequence of vertices visited, starting and ending at the same vertex: ABFGCDHMLKJEA. Notice that the same circuit could be written in reverse order, or starting and ending at a different vertex.
Unlike with Euler circuits, there is no nice theorem that allows us to instantly determine whether or not a Hamiltonian circuit exists for all graphs.[1]
Does a Hamiltonian path or circuit exist on the graph below?
We can see that once we travel to vertex E there is no way to leave without returning to C, so there is no possibility of a Hamiltonian circuit. If we start at vertex E we can find several Hamiltonian paths, such as ECDAB and ECABD.
With Hamiltonian circuits, our focus will not be on existence, but on the question of optimization: given a graph where the edges have weights, can we find the optimal Hamiltonian circuit, the one with the lowest total weight?
Watch this video to see the examples above worked out.
This problem is called the Traveling salesman problem (TSP) because the question can be framed like this: Suppose a salesman needs to give sales pitches in four cities. He looks up the airfares between each city, and puts the costs in a graph. In what order should he travel to visit each city once then return home with the lowest cost?
To answer this question of how to find the lowest cost Hamiltonian circuit, we will consider some possible approaches. The first option that might come to mind is to just try all different possible circuits.
Brute Force Algorithm (a.k.a. exhaustive search)
1. List all possible Hamiltonian circuits
2. Find the length of each circuit by adding the edge weights
3. Select the circuit with minimal total weight.
Apply the Brute force algorithm to find the minimum cost Hamiltonian circuit on the graph below.
To apply the Brute force algorithm, we list all possible Hamiltonian circuits and calculate their weight:
Circuit Weight
ABCDA 4+13+8+1 = 26
ABDCA 4+9+8+2 = 23
ACBDA 2+13+9+1 = 25
Note: These are the unique circuits on this graph. All other possible circuits are the reverse of the listed ones or start at a different vertex, but result in the same weights.
From this we can see that the second circuit, ABDCA, is the optimal circuit.
Watch these examples worked again in the following video.
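For readers who want to experiment, here is a minimal Python sketch of the brute force search on the example graph above; the edge weights (AB = 4, AC = 2, AD = 1, BC = 13, BD = 9, CD = 8) are read off the circuit sums in the table.

```python
from itertools import permutations

# Edge weights from the example graph (symmetric).
weights = {
    ("A", "B"): 4, ("A", "C"): 2, ("A", "D"): 1,
    ("B", "C"): 13, ("B", "D"): 9, ("C", "D"): 8,
}

def w(u, v):
    """Weight of the edge between u and v, in either order."""
    return weights.get((u, v)) or weights[(v, u)]

def brute_force_tsp(vertices, start="A"):
    """Try every Hamiltonian circuit from `start` and keep the lightest."""
    others = [v for v in vertices if v != start]
    best_circuit, best_weight = None, float("inf")
    for order in permutations(others):
        circuit = (start,) + order + (start,)
        total = sum(w(circuit[i], circuit[i + 1]) for i in range(len(circuit) - 1))
        if total < best_weight:
            best_circuit, best_weight = circuit, total
    return best_circuit, best_weight

print(brute_force_tsp(["A", "B", "C", "D"]))
# (('A', 'B', 'D', 'C', 'A'), 23) -- the optimal circuit ABDCA found above
```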
The Brute force algorithm is optimal; it will always produce the Hamiltonian circuit with minimum weight. Is it efficient? To answer that question, we need to consider how many Hamiltonian circuits a graph could have. For simplicity, let's look at the worst-case possibility, where every vertex is connected to every other vertex. This is called a complete graph.
Suppose we had a complete graph with five vertices like the air travel graph above. From Seattle there are four cities we can visit first. From each of those, there are three choices. From each of those cities, there are two possible cities to visit next. There is then only one choice for the last city before returning home.
This can be shown visually:
Counting the number of routes, we can see there are [latex]4\cdot{3}\cdot{2}\cdot{1}=24[/latex] routes. For six cities there would be [latex]5\cdot{4}\cdot{3}\cdot{2}\cdot{1}=120[/latex] routes.
Number of Possible Circuits
For n vertices in a complete graph, there will be [latex](n-1)!=(n-1)(n-2)(n-3)\dots{3}\cdot{2}\cdot{1}[/latex] routes. Half of these are duplicates in reverse order, so there are [latex]\frac{(n-1)!}{2}[/latex] unique circuits.
The exclamation symbol, !, is read "factorial" and is shorthand for the product shown.
How many circuits would a complete graph with 8 vertices have?
A complete graph with 8 vertices would have [latex](8-1)!=7!=5040[/latex] possible Hamiltonian circuits. Half of the circuits are duplicates of other circuits but in reverse order, leaving 2520 unique routes.
While this is a lot, it doesn't seem unreasonably huge. But consider what happens as the number of cities increase:
Cities Unique Hamiltonian Circuits
9 8!/2 = 20,160
10 9!/2 = 181,440
11 10!/2 = 1,814,400
15 14!/2 = 43,589,145,600
20 19!/2 = 60,822,550,204,416,000
As you can see the number of circuits is growing extremely quickly. If a computer looked at one billion circuits a second, it would still take almost two years to examine all the possible circuits with only 20 cities! Certainly Brute Force is not an efficient algorithm.
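A few lines of Python confirm both the table above and the two-year estimate (the one-billion-circuits-per-second rate is the assumption from the text):

```python
from math import factorial

# Unique Hamiltonian circuits in a complete graph with n vertices: (n-1)!/2
for n in [9, 10, 11, 15, 20]:
    print(n, factorial(n - 1) // 2)

# At one billion circuits per second, checking all circuits for 20 cities:
seconds = factorial(19) / 2 / 1e9
print(seconds / (60 * 60 * 24 * 365))   # about 1.93 years
```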
Nearest Neighbor Algorithm (NNA)
1. Select a starting point.
2. Move to the nearest unvisited vertex (the edge with smallest weight).
3. Repeat until the circuit is complete.
Unfortunately, no one has yet found an efficient and optimal algorithm to solve the TSP, and it is very unlikely anyone ever will. Since it is not practical to use brute force to solve the problem, we turn instead to heuristic algorithms; efficient algorithms that give approximate solutions. In other words, heuristic algorithms are fast, but may or may not produce the optimal circuit.
Consider our earlier graph, shown to the right.
Starting at vertex A, the nearest neighbor is vertex D with a weight of 1.
From D, the nearest neighbor is C, with a weight of 8.
From C, our only option is to move to vertex B, the only unvisited vertex, with a cost of 13.
From B we return to A with a weight of 4.
The resulting circuit is ADCBA with a total weight of [latex]1+8+13+4 = 26[/latex].
Watch the example worked out in the following video.
We ended up finding the worst circuit in the graph! What happened? Unfortunately, while it is very easy to implement, the NNA is a greedy algorithm, meaning it only looks at the immediate decision without considering the consequences in the future. In this case, following the edge AD forced us to use the very expensive edge BC later.
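As a sketch, the NNA takes only a few lines of Python; this version reuses the `w` weight lookup from the brute force example above.

```python
def nearest_neighbor(vertices, start, w):
    """Greedy NNA: hop to the closest unvisited vertex, then return home."""
    circuit, unvisited = [start], set(vertices) - {start}
    while unvisited:
        nxt = min(unvisited, key=lambda v: w(circuit[-1], v))
        circuit.append(nxt)
        unvisited.remove(nxt)
    circuit.append(start)  # close the circuit
    total = sum(w(circuit[i], circuit[i + 1]) for i in range(len(circuit) - 1))
    return circuit, total

print(nearest_neighbor(["A", "B", "C", "D"], "A", w))
# (['A', 'D', 'C', 'B', 'A'], 26) -- the worst circuit, as noted above
```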
Consider again our salesman. Starting in Seattle, the nearest neighbor (cheapest flight) is to LA, at a cost of $70. From there:
LA to Chicago: $100
Chicago to Atlanta: $75
Atlanta to Dallas: $85
Dallas to Seattle: $120
In this case, nearest neighbor did find the optimal circuit.
Watch this example worked out again in this video.
Going back to our first example, how could we improve the outcome? One option would be to redo the nearest neighbor algorithm with a different starting point to see if the result changed. Since nearest neighbor is so fast, doing it several times isn't a big deal.
We will revisit the graph from Example 17.
Starting at vertex A resulted in a circuit with weight 26.
Starting at vertex B, the nearest neighbor circuit is BADCB with a weight of 4+1+8+13 = 26. This is the same circuit we found starting at vertex A. No better.
Starting at vertex C, the nearest neighbor circuit is CADBC with a weight of 2+1+9+13 = 25. Better!
Starting at vertex D, the nearest neighbor circuit is DACBD. Notice that this is actually the same circuit we found starting at C, just written with a different starting vertex.
The RNNA was able to produce a slightly better circuit with a weight of 25, but still not the optimal circuit in this case. Notice that even though we found the circuit by starting at vertex C, we could still write the circuit starting at A: ADBCA or ACBDA.
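The repeated nearest neighbor algorithm is then just a loop over starting vertices, reusing `nearest_neighbor` from the previous sketch:

```python
def repeated_nna(vertices, w):
    """Run the NNA from every starting vertex and keep the best circuit."""
    results = [nearest_neighbor(vertices, s, w) for s in vertices]
    return min(results, key=lambda r: r[1])

print(repeated_nna(["A", "B", "C", "D"], w))
# (['C', 'A', 'D', 'B', 'C'], 25) -- better than 26, still not the optimal 23
```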
The table below shows the time, in milliseconds, it takes to send a packet of data between computers on a network. If data needed to be sent in sequence to each computer, then notification needed to come back to the original computer, we would be solving the TSP. The computers are labeled A-F for convenience.
   A   B   C   D   E   F
A  —   44  34  12  40  41
B  44  —   31  43  24  50
C  34  31  —   20  39  27
D  12  43  20  —   11  17
E  40  24  39  11  —   42
F  41  50  27  17  42  —
a. Find the circuit generated by the NNA starting at vertex B.
b. Find the circuit generated by the RNNA.
While certainly better than the basic NNA, unfortunately, the RNNA is still greedy and will produce very bad results for some graphs. As an alternative, our next approach will step back and look at the "big picture" – it will select first the edges that are shortest, and then fill in the gaps.
Using the four vertex graph from earlier, we can use the Sorted Edges algorithm.
The cheapest edge is AD, with a cost of 1. We highlight that edge to mark it selected.
The next shortest edge is AC, with a weight of 2, so we highlight that edge.
For the third edge, we'd like to add AB, but that would give vertex A degree 3, which is not allowed in a Hamiltonian circuit. The next shortest edge is CD, but that edge would create a circuit ACDA that does not include vertex B, so we reject that edge. The next shortest edge is BD, so we add that edge to the graph.
We then add the last edge to complete the circuit: ACBDA with weight 25.
Notice that the algorithm did not produce the optimal circuit in this case; the optimal circuit is ACDBA with weight 23.
While the Sorted Edge algorithm overcomes some of the shortcomings of NNA, it is still only a heuristic algorithm, and does not guarantee the optimal circuit.
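A sketch of the Sorted Edges (cheapest link) algorithm in Python follows; it uses a union-find structure to detect circuits, and the degree and circuit checks mirror the rejection rules used in the example above.

```python
def sorted_edges(vertices, edges):
    """Cheapest-link heuristic: take edges in weight order, skipping any that
    would give a vertex degree 3 or close a circuit before all vertices are in."""
    n = len(vertices)
    degree = {v: 0 for v in vertices}
    parent = {v: v for v in vertices}   # union-find for circuit detection

    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v

    chosen = []
    for weight, u, v in sorted(edges):
        if degree[u] == 2 or degree[v] == 2:
            continue                    # reject: would create degree 3
        if find(u) == find(v) and len(chosen) < n - 1:
            continue                    # reject: closes a circuit too early
        chosen.append((u, v, weight))
        degree[u] += 1
        degree[v] += 1
        parent[find(u)] = find(v)
        if len(chosen) == n:            # Hamiltonian circuit complete
            break
    return chosen, sum(wt for _, _, wt in chosen)

edges = [(4, "A", "B"), (2, "A", "C"), (1, "A", "D"),
         (13, "B", "C"), (9, "B", "D"), (8, "C", "D")]
print(sorted_edges(["A", "B", "C", "D"], edges))
# Accepts AD, AC, BD, then BC: the circuit ACBDA with weight 25, as above.
```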
Your teacher's band, Derivative Work, is doing a bar tour in Oregon. The driving distances are shown below. Plan an efficient route for your teacher to visit all the cities and return to the starting location. Use NNA starting at Portland, and then use Sorted Edges.
Ashland Astoria Bend Corvallis Crater Lake Eugene Newport Portland Salem Seaside
Ashland – 374 200 223 108 178 252 285 240 356
Astoria 374 – 255 166 433 199 135 95 136 17
Bend 200 255 – 128 277 128 180 160 131 247
Corvallis 223 166 128 – 430 47 52 84 40 155
Crater Lake 108 433 277 430 – 453 478 344 389 423
Eugene 178 199 128 47 453 – 91 110 64 181
Newport 252 135 180 52 478 91 – 114 83 117
Portland 285 95 160 84 344 110 114 – 47 78
Salem 240 136 131 40 389 64 83 47 – 118
Seaside 356 17 247 155 423 181 117 78 118 –
Using NNA with a large number of cities, you might find it helpful to mark off the cities as they're visited to keep from accidentally visiting them again. Looking in the row for Portland, the smallest distance is 47, to Salem. Following that idea, our circuit will be:
Portland to Salem 47
Salem to Corvallis 40
Corvallis to Eugene 47
Eugene to Newport 91
Newport to Seaside 117
Seaside to Astoria 17
Astoria to Bend 255
Bend to Ashland 200
Ashland to Crater Lake 108
Crater Lake to Portland 344
Total trip length: 1266 miles
Using Sorted Edges, you might find it helpful to draw an empty graph, perhaps by drawing vertices in a circular pattern. Adding edges to the graph as you select them will help you visualize any circuits or vertices with degree 3.
We start adding the shortest edges:
Seaside to Astoria 17 miles
Corvallis to Salem 40 miles
Portland to Salem 47 miles
Corvallis to Eugene 47 miles
The graph after adding these edges is shown to the right. The next shortest edge is from Corvallis to Newport at 52 miles, but adding that edge would give Corvallis degree 3.
Continuing on, we can skip over any edge pair that contains Salem or Corvallis, since they both already have degree 2.
Portland to Seaside 78 miles
Eugene to Newport 91 miles
Portland to Astoria (reject – closes circuit)
Ashland to Crater Lk 108 miles
The graph after adding these edges is shown to the right. At this point, we can skip over any edge pair that contains Salem, Seaside, Eugene, Portland, or Corvallis since they already have degree 2.
Newport to Astoria (reject – closes circuit)
Newport to Bend 180 miles
Bend to Ashland 200 miles
At this point the only way to complete the circuit is to add:
Crater Lk to Astoria 433 miles. The final circuit, written to start at Portland, is:
Portland, Salem, Corvallis, Eugene, Newport, Bend, Ashland, Crater Lake, Astoria, Seaside, Portland. Total trip length: 1241 miles.
While better than the NNA route, neither algorithm produced the optimal route. The following route can make the tour in 1069 miles:
Portland, Astoria, Seaside, Newport, Corvallis, Eugene, Ashland, Crater Lake, Bend, Salem, Portland
Watch the example of nearest neighbor algorithm for traveling from city to city using a table worked out in the video below.
In the next video we use the same table, but use sorted edges to plan the trip.
Find the circuit produced by the Sorted Edges algorithm using the graph below.
Spanning Trees
A company requires reliable internet and phone connectivity between their five offices (named A, B, C, D, and E for simplicity) in New York, so they decide to lease dedicated lines from the phone company. The phone company will charge for each link made. The costs, in thousands of dollars per year, are shown in the graph.
In this case, we don't need to find a circuit, or even a specific path; all we need to do is make sure we can make a call from any office to any other. In other words, we need to be sure there is a path from any vertex to any other vertex.
Spanning Tree
A spanning tree is a connected graph using all vertices in which there are no circuits.
In other words, there is a path from any vertex to any other vertex, but no circuits.
Some examples of spanning trees are shown below. Notice there are no circuits in the trees, and it is fine to have vertices with degree higher than two.
Usually we have a starting graph to work from, like in the phone example above. In this case, we form our spanning tree by finding a subgraph – a new graph formed using all the vertices but only some of the edges from the original graph. No edges will be created where they didn't already exist.
Of course, any random spanning tree isn't really what we want. We want the minimum cost spanning tree (MCST).
Minimum Cost Spanning Tree (MCST)
The minimum cost spanning tree is the spanning tree with the smallest total edge weight.
A nearest neighbor style approach doesn't make as much sense here since we don't need a circuit, so instead we will take an approach similar to sorted edges.
Kruskal's Algorithm
1. Select the cheapest unused edge in the graph.
2. Repeat step 1, adding the cheapest unused edge, unless:
adding the edge would create a circuit.
3. Repeat until a spanning tree is formed.
Using our phone line graph from above, begin adding edges:
AB $4 OK
AE $5 OK
BE $6 reject – closes circuit ABEA
DC $7 OK
AC $8 OK
At this point we stop – every vertex is now connected, so we have formed a spanning tree with cost $24 thousand a year.
Remarkably, Kruskal's algorithm is both optimal and efficient; we are guaranteed to always produce the optimal MCST.
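A minimal Python sketch of Kruskal's algorithm is below, run on the five edge costs named in the example; the phone graph presumably has more edges, but any others must cost at least $8 or they would have been considered first.

```python
def kruskal(vertices, edges):
    """Kruskal's algorithm: add cheapest edges, rejecting any that close a
    circuit, until every vertex is connected."""
    parent = {v: v for v in vertices}   # union-find

    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v

    tree, total = [], 0
    for weight, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue                    # reject: closes a circuit
        parent[ru] = rv
        tree.append((u, v, weight))
        total += weight
        if len(tree) == len(vertices) - 1:
            break                       # spanning tree complete
    return tree, total

edges = [(4, "A", "B"), (5, "A", "E"), (6, "B", "E"), (7, "D", "C"), (8, "A", "C")]
print(kruskal(list("ABCDE"), edges))
# Accepts AB, AE, DC, AC and rejects BE: a spanning tree costing $24 thousand.
```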
The power company needs to lay updated distribution lines connecting the ten Oregon cities below to the power grid. How can they minimize the amount of new line to lay?
Using Kruskal's algorithm, we add edges from cheapest to most expensive, rejecting any that close a circuit. We stop when the graph is connected.
Seaside to Astoria 17 miles
Corvallis to Salem 40 miles
Portland to Salem 47 miles
Corvallis to Eugene 47 miles
Corvallis to Newport 52 miles
Salem to Eugene reject – closes circuit
Portland to Seaside 78 miles
The graph up to this point is shown below.
Continuing,
Newport to Salem reject
Corvallis to Portland reject
Eugene to Newport reject
Portland to Astoria reject
Ashland to Crater Lk 108 miles
Eugene to Portland reject
Newport to Portland reject
Newport to Seaside reject
Salem to Seaside reject
Bend to Eugene 128 miles
Bend to Salem reject
Astoria to Newport reject
Salem to Astoria reject
Corvallis to Seaside reject
Portland to Bend reject
Astoria to Corvallis reject
Eugene to Ashland 178 miles
This connects the graph. The total length of cable to lay would be 695 miles.
Watch the example above worked out in the following video, without a table.
Now we present the same example, with a table in the following video.
Find a minimum cost spanning tree on the graph below using Kruskal's algorithm.
[1] There are some theorems that can be used in specific circumstances, such as Dirac's theorem, which says that a Hamiltonian circuit must exist on a graph with n vertices if each vertex has degree n/2 or greater.
Hamiltonian circuits . Authored by: LIPPMAN, DAVID. Located at: https://youtu.be/SjtVuw4-1Qo. License: CC BY: Attribution
TSP by brute force . Authored by: OCLPhase2. Located at: https://youtu.be/wDXQ6tWsJxw. License: CC BY: Attribution
Number of circuits in a complete graph . Authored by: OCLPhase2. Located at: https://youtu.be/DwZw4t0qxuQ. License: CC BY: Attribution
Nearest Neighbor ex2 . Authored by: OCLPhase2. Located at: https://youtu.be/3Eq36iqjGKI. License: CC BY: Attribution
Sorted Edges ex 2 . Authored by: OCLPhase2. Located at: https://youtu.be/QxF23w3DpQc. License: CC BY: Attribution
Kruskal's ex 1 . Authored by: OCLPhase2. Located at: https://youtu.be/gaXM0HNErc4. License: CC BY: Attribution
Kruskal's from a table . Authored by: OCLPhase2. Located at: https://youtu.be/Pu2_2ftkwdo. License: CC BY: Attribution
Realizing an excellent solution for detecting and solving conflicts between viewpoints of designers in self-adaptive systems
The-Can Do
With the rise of demand for intelligent technological systems, increasing attention has been given to enhancing the adaptation capacities of self-adaptive systems (SAS), which adapt their behavior to changes in context or system resources and thus help tame the complexity of today's software-intensive systems. However, developing adaptations on the SAS raises a significant number of challenges, such as simplifying the designer's task, improving responsiveness, and reducing the conflicts between its adaptations. This study focuses on simplifying the designers' tasks by utilizing independent designers' viewpoints in context modeling. We propose several solutions for solving the conflicts among different viewpoints of designers in the layer of context-aware management (CAM). The validation results are promising: our method effectively supports the use of independent viewpoints in the context modeling process and improves the adaptation capacity of the SAS.
With rapid technological development and continuous societal change, demand for information systems and related devices with properties such as flexibility, dependability, customizability, and adaptability has grown steadily [1]. The SAS, the primary component that operates automatically within the framework of the system and interacts with it on behalf of the user, plays an essential role in maintenance and configuration and is integrated into ubiquitous devices more than ever before [2, 3]. The SAS therefore provides several services within the interactions between ubiquitous devices and the application layer. Furthermore, it allows the system's services or applications to use context information to change their behavior according to the situation of the device and the user.
The contextual information around users, such as connections and devices, changes as they move between locations. Hence, user mobility should be considered in the design of the SAS: its context model must cover all user scenarios, and its applications must auto-adapt intelligently to numerous environment changes [4]. However, modeling context in the SAS by predicting every user situation and application scenario inflates the number of concepts that must be described during the context modeling process. Moreover, the designer has to combine knowledge from different expert domains to design a single big context model, which makes the designer's task highly complicated. It would be better to separate the single large context model into multiple independent context models, in which each designer uses their own knowledge to present the user and application scenarios from their perspective. This makes each designer's task simpler. In addition, the use of independent models also makes it easier to develop or reconfigure the context model system in the SAS.
Moreover, the data obtained from modeling processes based on the perspectives of various experts have different structures and types, because the experts use dissimilar modeling techniques or tools. The data therefore need to be standardized into a common format for all views (e.g., the context-intermediate model (CIM)) [5]. The standardized data are then fed to context-aware management (CAM) to help the SAS process context [6]. The CAM consists of two phases: design time, when the designer uses specific models to describe the views, and runtime, when all the specific views are independently managed and changes in context are detected [7]. In each observation cycle, each viewpoint stores one state of adaptation, defined by one predicate, and each state of adaptation has its own objective. The objective of the state (OS) defines the implementation steps necessary to fulfill the identified goals. In particular, each OS, with its defined action, is specific and measurable. If, at some moment, the OS of one viewpoint contradicts or is incompatible with the OS of another in a SAS, then there is a conflict between the viewpoints; such conflicts are not easily determined since they can appear in many aspects of the response. Wang et al. stated that "conflict is a natural disagreement between different attitudes, beliefs, values or needs" [5]. A conflict between viewpoints is also a conflict between their OSs, and a conflict between OSs occurs when there is an incompatibility in desired end states or preferred outcomes. Hence, it is worth designing solutions for addressing these conflicts.
The works [6, 8,9,10] and [11] indicated that present SASs are capable of detecting and resolving conflicts between adaptation rules at runtime. However, further enhancing a SAS's adaptive capacity requires a significant increase in the number of adaptation rules, so determining solutions for the conflicts becomes more complex and time-consuming. Moreover, since not all user scenarios associated with context can be predicted, resolving the conflicts between viewpoints at design time is impossible. In addition, adding new perspectives depends mainly on the user's choice, which makes using all of the viewpoints together in one system challenging. For example, suppose the CAM manages two viewpoints whose objectives of state conflict in the same period and place, such as viewpoint 1 increasing the light intensity while viewpoint 2 decreases it. This problem is not easy to solve with a conventional system. Thus, a mechanism is needed to find and solve the conflicts between the semantics of viewpoints on the CAM before providing adaptation rules to the deployment process of the SAS [6].
Numerous studies on the SAS have used the "Who, What, When, Where, Why, and How" (5W1H) questions to handle the problem of capturing adaptation requirements. Salehie et al. use the 5W1H questions to elicit adaptation information: Where must the change be implemented? When does an adaptation need to be implemented? What needs to be adapted in each situation? Why is the adaptation necessary (i.e., what are the goals of the adaptation)? Who must execute the adaptation? And how are the adaptations implemented? [12]. Krupitzer et al. provided a taxonomy to answer the questions of adaptation, using the 5W1H questions to describe it [13]. Answering each question yields an element of adaptation: "When?" gives the time of adaptation, "Why?" the reason for adapting, "Where?" the location of adaptation, "What?" the technique of adaptation, "Who?" the degree of automation of the adaptation, and "How?" the adaptation control solution.
Many works, such as [14, 15], have used the 5W1H approach, or a part of it, to model contexts. Kim et al. use the 5W1H questions to define a model for contextual information named the ontological context-aware model based on 5W1H (CA5W1HOnto) [20]. They use three elements, "concept, instance and context", to define each object of the contextual information. Moreover, the authors of the CA5W1HOnto model separate ontologies and context information into two independent modules. This approach can therefore provide a high level of reusability and expandability of the context model, and gives some formalism to contextual information through the web ontology language-description logic (OWL-DL). They use six elements, "goal, role, location, action, status, time", corresponding to the six context modeling ontology elements (why, who, how, where, what, when), to define the context. Rathi et al. introduce a framework named the "event and implied situation ontology" (ESO-5W1H) for adjudging the machine and human roles in their corresponding interaction [16]. The ESO-5W1H uses six classes, "Who", "When", "Where", "Why", "What" and "How", to build the basic decision process for the system's adaptation. The element "Who" presents the role (device, signboard, etc.); "When" defines causal chains; "Where" describes the location; "Why" describes the goal; "What" describes the status; and "How" describes the action.
From the works mentioned above, we can see that the 5W1H approach can describe adaptation information. In addition, the ontologies based on the 5W1H questions in [15] and [16] show that the 5W1H framework can also be used to build a domain context ontology for the context model: it provides the ability to analyze and extract concepts and relations within the discourse domain. In our approach, we propose using the OS to outline the "what, where, and how" of each state's implementation steps to achieve the primary goal of the viewpoint's state. We therefore also use the 5W1H approach, or a part of it, to build the contextual ontology used to describe the OS. For this, we need to provide a standard definition of the OS; the designers then use the elements of the contextual ontology to describe the OS from all viewpoints at design time.
In this study, we propose a solution that adds two new mechanisms to handle the problems of detecting and solving conflicts between viewpoints on the CAM. In our approach, the CAM is independent of the SAS. While the CAM can anticipate the actions of perspectives to dismantle unnecessary adaptation rules, the SAS manages the adaptation rules without regard to their semantics. In particular, the SAS can tackle only conflicts of access to components or devices; it cannot handle conflicts related to the semantics of each state in the viewpoints. The details of the contextual ontology, the objective of state, and the conflict detection methodology are analyzed in the next section. Finally, a solution for solving the conflicts between designers' viewpoints is also proposed.
Contextual ontology and objective of state
In this section, we construct a contextual ontology database (COD) that stores the essentials for describing the OS, based on the 5W1H conceptual modeling framework. In our previous research [17], we used four elements, "what, where, how and priority", to describe the OS. However, if two OSs have the same composite time, the CAM cannot choose a suitable adaptation for the state; in this case, the CAM needs more information related to the atomic time of the OS. We therefore propose an approach that requires five elements of the 5W1H: "what, where, when, who and how" [18,19,20]. In our methodology, "why" is not used to describe the OS: the designers explain their choice in an official or human language (for better understanding by the user and the system), which means that the reason for adaptation corresponds to the designer's choice.
Figure 1 presents the domain concept in our contextual ontology (CO), which is composed of the union of several sets, one for each of the five aspects of the 4W1H ontology (what, where, when, who and how). The combination of the 4W1H is given by:
Contextual ontology 4W1H
$$CO = {C}_{what} \cup {C}_{where} \cup {C}_{when} \cup {C}_{who} \cup {C}_{how}$$
$$\text{with } C_{n} \cap C_{m} = \varnothing \quad \left(n \neq m \ \wedge \ C_{n}, C_{m} \in \left\{C_{\mathrm{what}}, C_{\mathrm{where}}, C_{\mathrm{when}}, C_{\mathrm{who}}, C_{\mathrm{how}}\right\}\right)$$
where Cwhat represents the class of concepts related to objects impacted during the adaptation process of the SAS (humidity, temperature, etc.), and Cwhere gives the location of those objects (parking, office, etc.). Cwhen is used to describe the time at which adaptation is needed. Cwho is used as an entity to indicate the priority level of the adaptation [15]. Chow denotes the set of concepts describing the action that must be performed to reach the adaptation's goal (activate, deactivate, etc.).
Additionally, the designer can utilize the information obtained from the 4W1H database to describe the OS at design time. With {O, L, T, P, A} being a set of concept names, the term O can be defined as O[o, l, t, p, a], where:
o ∈ O, with O as a class of concepts related to objects in the "what" element (e.g., temperature, light intensity).
l ∈ L, with L as a class of concepts related to places or locations in the "where" element (e.g., parking, office).
t ∈ T, with T as a class of concepts related to times in the "when" element (e.g., daytime, nighttime).
p ∈ P, with P as a class of concepts related to the position of the user in the "who" element (e.g., manager, security).
a ∈ A, with A as a class of concepts related to actions in the "how" element (e.g., activate, turn off).
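As a minimal illustration (the class and field names here are our own, not part of any published implementation), an OS can be encoded as a record of the five concepts:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ObjectiveOfState:
    """Hypothetical encoding of O[o, l, t, p, a]; the concept values would
    come from the 4W1H contextual ontology database."""
    what: str    # o -- object impacted (e.g., "light_intensity")
    where: str   # l -- location (e.g., "office")
    when: str    # t -- time (e.g., "daytime")
    who: str     # p -- priority level (e.g., "manager")
    how: str     # a -- action (e.g., "increase")
```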
Detecting and solving methods
The CAM can handle the states of multiple viewpoints at once, so a conflict may arise between the objectives of state of any two viewpoints. Thus, detecting and solving conflicts between OSs on the CAM plays a leading role in improving the responsiveness of the SAS. This section demonstrates how conflicts between the OSs of different viewpoints are detected, and presents a solution to solve these conflicts.
Conflict detection
Figure 2 illustrates the structure of the SAS in our approach [17]. In the SAS shown in Fig. 2, if n viewpoints are used on one CAM system, then at any moment the CAM manages n states corresponding to the n viewpoints, and thus n OSs at once. An OS merging process is required to find and solve the conflicts between them; it also reveals incompatibility situations between conflicting OSs.
Self-adaptive system (SAS) diagram
In our approach, the five elements "what, where, when, who and how" of the states' objectives are employed to analyze conflict situations between viewpoints, where {o, l, t, p, a} are the five concepts corresponding to the information used to define two OSs, O1(e) and O2(e). Figure 3 presents the association between each element of the two OSs (O1 and O2).
The relationship between two elements O1(e) and O2(e) in different cases: (a) the same object, (b) different objects, (c) an intersection relation, and (d) one object a subspace of the other
In terms of the element "what", with Oi(o) = oi (i = 1, 2, oi ∈ {Cwhat}), there are four relation situations of O1(o) and O2(o), as shown in Fig. 3. As illustrated in Fig. 3(a), O1(o) and O2(o) may have the same object to be adjusted (e.g., O1(o) = O2(o) = light intensity); Fig. 3(b) shows totally different objects in O1 and O2 (e.g., O1(o) = humidity, O2(o) = devices). Meanwhile, in Fig. 3(c), an intersection relation exists between O1(o) and O2(o) (e.g., O1(o) = (200°C to 350°C) and O2(o) = (250°C to 450°C)), and in Fig. 3(d), O2(o) is a subspace of O1(o), or O1(o) is a subspace of O2(o) (e.g., O1(o) = (150°C to 450°C) and O2(o) = (250°C to 350°C)).
Similarly, for the elements "where" and "how", there are four relation situations of O(l) and O(a) corresponding to these cases. (O1(l) = O2(l) = car parking) or (O1(a) = O2(a) = deactivate) are examples of situations in which O1 and O2 have the same location or the same action, respectively. In other cases, O1 and O2 differ completely in both location and action (e.g., O1(l) = department and O2(l) = office, or O1(a) = deactivate and O2(a) = turn on). An intersection relation between O1 and O2 may even exist for the elements "where" and "how" (for instance, O1(l) = second floor of the department and O2(l) = corridor of the department).
If two OSs have the same action, they will never conflict. In contrast, if two OSs have different actions, the CAM must check the relation between these actions to determine whether they conflict. In the 4W1H contextual ontology database, the relationship (R) between the "O(a)" sections of two OSs can easily be constructed, as presented in Fig. 4. In the "How class" of the contextual ontology, the relation between two actions Y and Z takes one of two values, a or \(\overline{a}\): R(Y, Z) = a if the action Y of OS1 does not involve the action Z of OS2 (e.g., decrease and deactivate), and R(Y, Z) = \(\overline{a}\) if the action Y of OS1 is opposite to or incompatible with the action Z of OS2 (e.g., increase and decrease). The relationship between all elements of the "How class" (including their subclasses) is based on the rules:
The relation properties between elements in the "How class" of the 4W1H ontology, with R(x,y) = a when action x does not affect action y and R(x,y) = \(\overline{a}\) when action x is incompatible with or opposite to action y [17]
If action Y conflicts with action Z:
$$R(Y,Z) = \overline{a} \ \to \ R(y,z) = \overline{a} \quad \forall y \in Y,\ \forall z \in Z$$
If action Y does not involve action Z:
$$R(Y,Z) = a \ \to \ R(y,z) = a \quad \forall y \in Y,\ \forall z \in Z$$
Conflict case definition: objective O1 conflicts with objective O2 (Table 1) if:
the action of O1 is incompatible with or opposite to the action of O2, and
these actions impact the same "object" in the same "place" at the same "time".
Table 1 Objective of states conflict analysis results
Example 1: As in the conflict case mentioned in the introduction, suppose we have two application scenarios:
Viewpoint 1: In the office: if it is daytime and "light intensity < M = yes", then increase the light intensity.
Viewpoint 2: In the office: if it is daytime and "experiment completed? = yes", then decrease the light intensity.
If, at some moment during the daytime, both "experiment completed? = yes" and "light intensity < M = yes" hold, we have:
From the analysis result in Table 2, we can conclude that O1 conflicts with O2.
Table 2 Conflict situation analysis from example 1
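A small Python sketch of this conflict test, reusing the `ObjectiveOfState` record above, encodes Example 1; the incompatibility pairs and the priority value "employee" are illustrative assumptions, since the full "how" relation lives in the 4W1H database.

```python
# Hypothetical fragment of the "How class" relation R: action pairs that are
# opposite or incompatible (R = a-bar); all other pairs are taken as neutral.
INCOMPATIBLE = {frozenset({"increase", "decrease"}),
                frozenset({"activate", "deactivate"})}

def in_conflict(o1: ObjectiveOfState, o2: ObjectiveOfState) -> bool:
    """O1 conflicts with O2 when both act on the same object, place and time
    and their actions are incompatible."""
    same_context = (o1.what == o2.what and o1.where == o2.where
                    and o1.when == o2.when)
    return same_context and frozenset({o1.how, o2.how}) in INCOMPATIBLE

# Example 1: the two viewpoints' objectives conflict.
o1 = ObjectiveOfState("light_intensity", "office", "daytime", "employee", "increase")
o2 = ObjectiveOfState("light_intensity", "office", "daytime", "employee", "decrease")
print(in_conflict(o1, o2))  # True
```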
Solving conflicts between designers' viewpoints
In this section, we present the solution for solving conflicts between designers' viewpoints. If at least two viewpoints are applied, the CAM has to supervise at least two objectives of state at every moment. Each viewpoint state has its own OS; hence, the CAM must find conflict situations by checking every pair of OSs. Two OSs are in conflict when the elements of these OSs have the relations shown in Fig. 5.
The diagram of the relationships of three elements of two OSs when there is a conflict situation between two OSs
The SAS performs the action in each viewpoint state following the adaptation rules (AR) built by the experts at design time. However, each expert designer describes his adaptation rules in his own way, following his own concerns, so the CAM cannot work at the rules level [21] to find and solve the conflicts between OSs. Although the CAM cannot adjust the action through the adaptation rules in each viewpoint state, it can choose the adaptation rule best suited to the adaptation needs at a given time. The problem, however, is: "How do we find the best action for adaptation?" In this case, all designers must provide additional information to support the CAM in choosing the adaptation rule of the OS. One of the most used solutions is to add priority levels to all states of each viewpoint. The SAS can then use the priority levels from the viewpoints as importance indexes to select the more valuable adaptation rules when the CAM detects conflicts between OSs.
Many researchers have used "priority" as an important index for evaluating or choosing adaptation rules in domains such as security, health care, and management [22,23,24]. Spicker proposes five priority levels (lexical ordering, special status, precedence, relative value, and importance) for choosing adaptation rules [25], and agrees that the "priority" of something shows that it is more important than others. As a result, many approaches have used priority levels to provide services that adapt well to versatile user situations. In these approaches, the designer or user points out conflict situations, and the CAM then uses the priority of the OS as an importance index to support the SAS in selecting services or applications when conflict situations are found. This motivated us to use a "priority level" to describe each OS's importance in the viewpoints, and we believe it is a well-suited solution to support the SAS in detecting and solving conflicts between OSs at runtime. In our view, a SAS must provide adaptation rules based on four levels of user: manager, security guard, employee, and guest. So, in our approach, we use four elements of the "Who class", "manager, security, employee, and guest", corresponding to four priority levels of adaptation, as shown in Table 3. The priority element of each OS shows the importance of the viewpoint's state, and on this basis the SAS can select the best adaptation rules in conflict cases.
Table 3 Priority levels used on 4W1H ontology database
With the addition of the priority element to the OS description, each OS is established by five elements corresponding to concepts from the five classes (what, where, when, who and how) of the contextual ontology database, as shown in Fig. 6.
The Objective of state description with 4W1H with five elements: what-where-when-who-how
All OSs that conflict with at least one other OS are classified into four subclasses, PL1 to PL4, corresponding to the four importance levels of OSs. Next, the conflict-solving step is performed by the CAM, as shown in Fig. 7.
Schematic of classifying and solving conflict between multiple viewpoints
At time t, for example, the observation predicate cycle of the CAM can detect n OSs (n being the total number of OSs at one time). The CAM classifies them into four levels, PL1 to PL4 (corresponding to the four priority levels), as shown in Algorithm 1:
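Algorithm 1 itself appears as a listing in the original paper; a minimal Python sketch of the classification step, assuming the Table 3 levels map manager through guest onto PL1 through PL4, could look like this:

```python
def classify_by_priority(objectives):
    """Algorithm 1 (sketch): sort detected OSs into the four priority classes
    PL1..PL4 according to their "who" element."""
    priority_of = {"manager": "PL1", "security": "PL2",
                   "employee": "PL3", "guest": "PL4"}   # assumed Table 3 order
    classes = {"PL1": [], "PL2": [], "PL3": [], "PL4": []}
    for os_ in objectives:
        classes[priority_of[os_.who]].append(os_)
    return classes
```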
Furthermore, the CAM compares all OSs at the same priority level (PLi) to find identical OSs at that level (two OSs are identical if their five elements are the same: Oi(o) = Oj(o), Oi(l) = Oj(l), Oi(t) = Oj(t), Oi(p) = Oj(p), Oi(a) = Oj(a)). Suppose the CAM has a class PLi (i = 1 to 4) containing OSs with the same priority level. It can keep one copy of each OS and determine the number of viewpoints with an identical OS in each class PLi, following Algorithm 2:
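Algorithm 2 likewise appears as a listing in the original; a sketch of the merge-and-count step follows. Since the `ObjectiveOfState` record above is frozen (hashable), identical OSs can be merged with a counter, and k gives the number of viewpoints sharing each OS.

```python
from collections import Counter

def merge_identical(classes):
    """Algorithm 2 (sketch): within each priority class, keep one copy of each
    OS and count how many viewpoints share it (all five elements equal)."""
    merged = {}
    for level, objectives in classes.items():
        counts = Counter(objectives)     # frozen dataclass -> hashable
        merged[level] = [(os_, k) for os_, k in counts.items()]  # k viewpoints
    return merged
```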
In the previous section, we proposed using priority levels in the "Who class", including "manager, security, employee, guest and device", to point out the importance level of each OS. Based on the priority level, the solution for solving conflicts between OSs can be expressed logically. This solution can be illustrated in the three following steps, covering several issues that can occur in a system.
Step I: Determine the number of designers' viewpoints that have the same OS at each priority level
- Issue 1: After the OS classification step (using Algorithm 1), suppose we have nine OSs, each of which conflicts with at least one other OS, with O2 = O8; the number of OSs in each priority class is shown in Table 4.
Table 4 The priority of objective of the state
Table 5 presents the result of the calculation process (using Algorithm 2) that determines, at runtime, the number of designers' viewpoints having the same OS at each priority level.
Table 5 The result of merging the same OSs in each priority level
Step II: Find the conflicts between OSs at the same priority level
The SAS examines the conflicts among OSs at the same priority level (element "who"), from PL1 to PL4. Within the same PLi, the CAM compares the number of conflicts of each OS with the others. It then ignores the OS with the highest number of conflicts, updates the OS list in each class, and checks again without the ignored OS. This process repeats until no conflict between viewpoints remains in any class. If two OSs (Oi and Oj) have the same number of conflicts with the others, the CAM checks the number of viewpoints associated with them: it keeps the OS associated with the higher number of viewpoints and ignores the other. If two OSs have equal priority levels and equal numbers of associated viewpoints, one of them is kept at random for responding.
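A sketch of this Step II loop is shown below, reusing `in_conflict` from the earlier sketch; the tie-breaking against lower priority levels and the random choice described next are omitted for brevity, so only the conflict count and the viewpoint count are used.

```python
def resolve_within_level(os_counts):
    """Step II (sketch): inside one priority class, repeatedly discard the OS
    with the most conflicts; ties are broken by keeping the OS shared by more
    viewpoints. `os_counts` is a list of (OS, viewpoint_count) pairs."""
    kept = list(os_counts)
    while True:
        n_conflicts = [sum(in_conflict(os_i, os_j)
                           for os_j, _ in kept if os_j is not os_i)
                       for os_i, _ in kept]
        if max(n_conflicts, default=0) == 0:
            return kept                  # no conflicts remain in this class
        # Most conflicts first; among ties, the OS with fewer viewpoints goes.
        idx = max(range(len(kept)), key=lambda i: (n_conflicts[i], -kept[i][1]))
        del kept[idx]
```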
- Issue 2: Continuing the example from Issue 1 at the PL1 level, suppose O2 > < O3 (we use the symbol "> <" for the conflict relation) and that they do not conflict with any other OS. The result of Step II is shown in Table 6.
Table 6 Conflict and result of conflict-solving process
In the above example, the numbers of conflicts of O2 and O6 with other OSs are equal. However, O2 is used by more than one viewpoint, while O6 is used by only one. Therefore, the CAM keeps O2 (and ignores O6); it then updates the situation of the OSs, as shown in Table 7.
Table 7 The updated table of the OS list
If two OSs have the same number of viewpoints associated with them and there is a conflict between them, the CAM examines the conflicts between these OSs and the OSs at the lower priority levels. Next, the CAM ignores the OS having the most conflicts with the others. In the special case where these two OSs also have the same number of conflicts with OSs at the lower level, the CAM keeps one of them at random for responding.
Assuming the conflict situations between the OSs in the above example are {O4 > < O5; O4 > < O1; O4 > < O7; and O5 > < O1}, the results of checking conflicts in the PL2 class are illustrated in Table 8.
Table 8 The conflict number of OSs in class PL2
In the PL2 class, because the conflict numbers of O4 and O5 are equal (1 each) and they have the same number of viewpoints associated with them (k[4] = k[5] = 1), the CAM has to check the conflict situations between them and the OSs at the lower priority level. The result of checking and solving the conflict is presented in Table 9.
Table 9 The result of solving conflict
Step III: Check the conflicts with OSs at lower priority levels
If the CAM cannot find any conflict between OSs at the same priority level, it checks for conflicts between the OSs at the highest priority level and the OSs at the lower levels:
If a conflict exists between an OS at the highest level and an OS at a lower level, the CAM ignores the OS at the lower PLi.
If the CAM does not detect any conflict, it checks the next lower level of the "Who class".
The CAM continues this checking and conflict-solving process until no conflict is detected; it then provides a new set of OSs without conflicts for the rule deployment process. For the example above, the result of the conflict-solving process can be expressed as O* = O2 + O3 + O5 + O7 + O9. This is the most suitable determination for the system based on the viewpoints.
Based on the issues discussed and the solving process, the proposed method can find and resolve the conflicts between viewpoints at the CAM before the adaptation rules are executed by the SAS. When an application uses multiple viewpoints to model the context, conflicts between the adaptations of the viewpoints can occur at runtime. This happens because the viewpoints are designed by different designers, each caring only about the purpose of their own application scenario; when the CAM merges the viewpoints at runtime, the corresponding adaptation rules of each viewpoint may conflict with one another. These conflicts cannot be resolved at design time because the designer does not have the instance information of the components impacted by the adaptation of the SAS at runtime. The goal of each viewpoint state, expressed by the OS, becomes the key to solving the problem: it can be used to find potential conflicts between designers' viewpoints. The conflicts between viewpoints at a given time are also conflicts between the OSs of the viewpoints; therefore, the resolution introduced here detects and solves conflicts between the OSs of the viewpoints. This solution reduces the number of conflicts between viewpoints that need to be solved in the SAS. However, it also has a disadvantage: it requires the system to spend considerable time checking, selecting, and updating the OS situation during the adaptation process.
When an application uses multiple viewpoints to model the context, conflicts between the adaptations of the viewpoints can occur at runtime. Detecting and resolving conflicts in the CAM plays an important role in improving the adaptive capacity of the SAS. However, detecting conflicts between viewpoints through the adaptation rules is not possible because no device-instance information is available. In the proposed method, the goal of each viewpoint state is described by the objective ontology database and used to find potential conflicts between designers' viewpoints. The contextual ontology database comprises five classes: what, where, when, who and how. All designers can use the contextual ontology database (4W1H) to present the objective of state for each viewpoint at design time. The CAM detects conflicts between viewpoints by comparing the five elements "what, where, when, who and how" of the OSs at runtime.
In addition, we propose adding the "Who class" (in the contextual ontology database) as the priority element of each OS for the conflict-resolution process. The CAM uses the priority of the OS to classify all OSs into four levels, PL1 to PL4. The priority level of an OS is an essential index for choosing an OS in case of a conflict. The result of the conflict detecting and solving process is a list of OSs without conflicts, with the CAM ready to deploy rules to the SAS.
In the future, we want to build an observation mechanism that observes context automatically, without an observation signal triggered from the CAM. It would operate independently of the CAM to provide useful contextual information for the system. This solution would allow the same observation mechanism to be used by many CAMs in the self-adaptive system.
All data generated or analyzed during this study are included in this article.
SAS:
Self-adaptive system
CIM:
Context-intermediate model
CAM:
Context-aware management
OS:
The objective of the state
5W1H:
Questions to elicit adaptation information
This work was supported by The University of Danang, University of Science and Technology, code number of Project: T2021-02-02.
This study is funded by The University of Danang, University of Science and Technology.
Faculty of Mechanical Engineering, The University of Danang, University of Science and Technology, 54 Nguyen Luong Bang Street, Lien Chieu District, Danang City, Vietnam
The-Can Do
The author contributed significantly to this work; read, wrote, and approved the manuscript; and agrees to its submission to the Journal of Engineering and Applied Science.
Correspondence to The-Can Do.
The author declares that he has no competing interests.
Do, TC. Realizing an excellent solution for detecting and solving conflicts between viewpoints of designers in self-adaptive systems. J. Eng. Appl. Sci. 69, 75 (2022). https://doi.org/10.1186/s44147-022-00134-z
Context-awareness
Context management
Context modeling | CommonCrawl |
Journal of Harbin Institute of Technology (New Series) 2019, Vol. 26 Issue (3): 26-34 DOI: 10.11916/j.issn.1005-9113.18025
Yang Liu, Yanjie Ji, Keyu Chen, Xinyi Qi. Support Vector Regression for Bus Travel Time Prediction Using Wavelet Transform[J]. Journal of Harbin Institute of Technology (New Series), 2019, 26(3): 26-34. DOI: 10.11916/j.issn.1005-9113.18025.
Received: 2018-03-17
Support Vector Regression for Bus Travel Time Prediction Using Wavelet Transform
Yang Liu1, Yanjie Ji1 , Keyu Chen2, Xinyi Qi1
1. School of Transportation, Southeast University, Nanjing 210096, China;
2. Guangzhou Urban Planning & Design Survey Research Institute, Guangzhou 510060, China
Fund: Sponsored by the Projects of International Cooperation and Exchange of the National Natural Science Foundation of China (Grant No. 51561135003) and the Scientific Research Foundation of Graduated School of Southeast University(Grant No.YBJJ1842)
Corresponding author: Yanjie Ji, E-mail: [email protected]
Abstract: In order to accurately predict bus travel time, a hybrid model combining the wavelet transform technique with a support vector regression (WT-SVR) model is employed. In this model, wavelet decomposition extracts information from the data at different levels, which enhances the forecasting ability of the model. After the wavelet transform, the different components are forecasted by their corresponding SVR predictors, and the final prediction is the summation of the predicted results for each component. The proposed hybrid model is examined with data from bus route No.550 in Nanjing, China. The performance of the WT-SVR model is evaluated by mean absolute error (MAE), mean absolute percent error (MAPE) and root mean square error (RMSE), and compared with regular SVR and ANN models. The results show that the prediction method based on the wavelet transform and SVR has better tracking ability and dynamic behavior than the regular SVR and ANN models. The forecasting performance is remarkably improved, with MAPE within 6% for testing section Ⅰ and within 8% for testing section Ⅱ, which shows that the suggested approach is feasible and applicable to bus travel time prediction.
Keywords: intelligent transportation; bus travel time prediction; wavelet transform; support vector regression; hybrid model
Bus travel time prediction is a vital component of advanced public transportation systems (APTS) and advanced traveler information systems (ATIS). With the rapid development of communication and network technology, accurate, real-time travel time forecasts are increasingly important. For bus operation management, they can help optimize bus route planning, the selection of stop sites and inter-station distances, and the choice of road sections on which to implement a bus priority strategy, realizing better bus priority under limited traffic supply. On the other hand, real-time, dynamic bus arrival time forecasts released through mobile applications help passengers make more suitable travel plans, which not only reduces long waits but also improves the service level of public transportation and attracts more passengers.
Previously, various methods have been adopted by researchers to forecast bus travel time, including historical average models[1], time series models[2], statistical regression models[3] and Kalman filter algorithms[4]. However, bus travel time prediction is complex and highly nonlinear in nature, as it depends on many factors such as ridership, traffic flow, weather and traffic signals. It is difficult for those methods to consider all of these factors, so their prediction quality in practice is unsatisfactory.
In the recent decade, machine learning models have shown better capability to handle nonlinear mapping problems that are complex in nature, particularly in the field of travel time prediction, where artificial neural networks (ANN) have been widely applied. Park and Rilett analyzed the performance of ANN applications in bus travel time modeling[5]; Chien et al.[6] put forward two ANN models, based on links and bus stations respectively, for bus travel time prediction. It has been shown that the ANN model has good applicability in bus travel time prediction, and much research has demonstrated that ANN models outperform historical average, statistical regression and Kalman filter models[7-8]. However, ANN models follow the principle of empirical risk minimization, which has drawbacks such as local optima, over-fitting or under-fitting, and limited generalization ability[9-10]; these may reduce the effectiveness of artificial neural networks in travel time prediction to a certain degree.
The support vector machine (SVM), based on statistical learning theory, is a relatively new classification and regression technique from the artificial intelligence field. It is good at finding statistical regularities in small samples and has strong learning ability. Moreover, the technique has better generalization performance and makes it easy to balance generalization and fitness. Owing to the structural risk minimization principle, SVM can effectively overcome the defects of ANN, and it has gained attention in the transportation domain. Besides, urban public transport is a non-stationary, time-variant, stochastic system, so using SVM for bus travel time prediction is of particular significance; it has been found to perform well compared with other predictors[11-13].
The wavelet transform (WT), which can decompose original data into components at various frequencies, has been used successfully in fields such as data analysis and signal processing. It provides useful information about the sub-series components of the original data, so the forecasting capability of a model can be improved by extracting information at different levels. In recent years, wavelet transforms have been applied in research fields such as temperature[14], water resources[15-16], wind energy[17] and share price prediction[18], where they are combined with other techniques to form hybrid models. Research findings indicate that such hybrid models can efficiently and effectively improve forecast accuracy, and they have gradually been adopted in the transport domain. In a hybrid prediction model, several techniques are combined to exploit their unique strengths in data analysis and modeling: the wavelet transform has the advantage of frequency decomposition in the time domain, while the support vector machine is good at handling nonlinear optimization problems. It is therefore meaningful to unite these methods in the bus travel time prediction domain to improve the accuracy of prediction results[19-21].
In this study, the wavelet transform is used to capture the detailed information of bus travel time variation and decompose the original data into several components at different frequencies. SVR models are constructed for predicting the components from high frequency to low frequency, and the final prediction is derived from the summation of the model outputs for each component. The main purposes of this study are to analyze the performance of the wavelet transform-support vector regression model in bus travel time prediction and to compare it with widely used models such as SVR and ANN.
2 Theory of the Model
The wavelet transform (WT) has excellent multi-resolution analysis characteristics. On one hand, a signal can be decomposed into different levels, displaying the information features of each level and giving deep insight into the variation of the signal. On the other hand, transient abnormal components entrained in a normal signal can be detected and displayed[22]. Compared with the traditional artificial neural network, the support vector machine replaces empirical risk with structural risk minimization and solves a quadratic optimization problem that, in theory, has a global optimal solution. Therefore, applying the hybrid wavelet transform-support vector regression (WT-SVR) model to bus travel time prediction can capture the regularity of bus operation behind its seemingly random fluctuations and improve prediction accuracy.
2.1 Wavelet Transform
Suppose a function ψ(t)∈L2(R) whose Fourier transform ψ̂(ω) satisfies the admissibility condition (t denotes time and ω angular frequency):
$ \int\limits_{R} \frac{|\hat{\psi}(\omega)|^{2}}{|\omega|} \mathrm{d} \omega<+\infty $ (1)
Then ψ(t) is called a wavelet base or mother wavelet. By dilations and translations of the mother wavelet, a family of wavelet functions is obtained:
$ \psi_{a, b}(t)=\frac{1}{\sqrt{a}} \psi\left(\frac{t-b}{a}\right)(a \neq 0, b \in R) $ (2)
where a is the scale factor and b the translation factor. Letting a = 2^j and b = k·2^j, the discrete wavelet transform (DWT) is obtained as follows:
$ \psi_{j, k}(t)=2^{\frac{-j}{2}} \psi\left(2^{-j} \cdot t-k\right) $ (3)
where k denotes the shift parameter and j the resolution level; the larger the value of j, the lower the frequency of the corresponding wavelet component.
An effective way to apply the wavelet transform is the multi-resolution technique based on the scale function and the wavelet base function, which extract the low-frequency and high-frequency components of the series respectively. The process of multi-scale decomposition can be expressed as:
$ \begin{array}{c}{V_{0}=V_{1} \oplus W_{1}=V_{2} \oplus W_{2} \oplus W_{1}=} \\ {V_{3} \oplus W_{3} \oplus W_{2} \oplus W_{1}=\cdots}\end{array} $ (4)
where V0 is the original signal, Vi are the approximation components of the signal, and Wi the detail components, i=1, 2, …, n.
For a given section of a bus route, the bus travel time in this section at time step t can be defined as f(t), t=1, 2, …, n, with f(t)∈L2(R). The bus travel time series f(t) can therefore be treated as a signal input, which can be decomposed into different frequency bands through wavelet decomposition. The reconstruction expression of f(t) is obtained by the Mallat multi-scale analysis algorithm as follows:
$ \begin{aligned} f(t)=& \sum\limits_{k} c_{j, k} \varphi_{j, k}(t)+\sum\limits_{k} \sum\limits_{j} d_{j, k} \psi_{j, k}(t)=\\ & A_{j}(t)+\sum\limits_{j} D_{j}(t) \end{aligned} $ (5)
where cj,k are the approximation (scaling) coefficients and dj,k the detail (wavelet) coefficients; φj,k(t) denotes the scaling function and ψj,k(t) the wavelet function; Aj and Dj are the approximation and detail sequences of the original data after reconstruction, respectively. The flow chart of the Mallat wavelet decomposition is shown in Fig. 1.
Fig.1 Mallat wavelet decomposition
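As an illustration of Eqs. (4)-(5), the following sketch performs the decomposition and single-branch reconstruction with the PyWavelets library on a synthetic travel-time series. The wavelet ('db3') and level (3) match the choices reported later in Section 4.2.1; the signal itself is made up.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.arange(256)
f = 300 + 40 * np.sin(2 * np.pi * t / 64) + 10 * rng.standard_normal(256)  # seconds

level = 3
coeffs = pywt.wavedec(f, "db3", level=level)          # [cA3, cD3, cD2, cD1]

def branch(keep):
    """Single-branch reconstruction: zero every coefficient band but one."""
    kept = [c if i == keep else np.zeros_like(c) for i, c in enumerate(coeffs)]
    return pywt.waverec(kept, "db3")[:len(f)]

A3 = branch(0)                                        # approximation A3
D3, D2, D1 = branch(1), branch(2), branch(3)          # details D3, D2, D1

# Eq. (5): the branches sum back to the original series (up to float error).
assert np.allclose(A3 + D3 + D2 + D1, f, atol=1e-8)
```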
2.2 Support Vector Regression
For regression problems, suppose we are given a series of data points {(x1, y1), (x2, y2), …, (xn, yn)}, where xi is the input vector, yi the target value, and n the number of observations. To solve nonlinear regression problems, a non-linear transfer function maps the input space into a high-dimensional feature space, where in theory a simple linear regression can approximate the given sample. According to statistical learning theory[23], the linear estimation function of SVR can be formulated as follows:
$ f(x)=\omega \cdot \phi(x)+b $ (6)
where ϕ(x) denotes the non-linear transfer function into the feature space, ω is the weight vector, and b is a constant. The coefficients ω and b are calculated by minimizing the regularized risk function:
$ R(f)=C \frac{1}{n} \sum\limits_{i=1}^{n} L_{\varepsilon}\left(y_{i}, f\left(x_{i}\right)\right)+\frac{1}{2}\|w\|^{2} $ (7)
$ L_{\varepsilon}(y, f(x))=\left\{\begin{array}{ll}{0,} & {\text { if }|y-f(x)| \leqslant \varepsilon} \\ {|y-f(x)|-\varepsilon,} & {\text { otherwise }}\end{array}\right. $
where Lε(y, f(x)) is called the ε-insensitive loss function, and the constant C>0 specifies the trade-off between the approximation error and the weight vector norm ||w||. ε is called the tube size and is equivalent to the approximation accuracy required on the training data points. Both C and ε must be chosen beforehand by the user. Two non-negative slack variables ξ and ξ*, representing the distance from the actual values to the corresponding boundary values of the ε-tube, are introduced; Eq.(7) is then transformed into the following convex quadratic programming problem:
$ \begin{array}{l}{\min\limits_{w, b, \xi, \xi^{*}} \frac{1}{2}\|w\|^{2}+C \sum\limits_{i=1}^{N}\left(\xi_{i}+\xi_{i}^{*}\right)} \\ {\text { subject to }\left\{\begin{array}{l}{w \cdot \phi\left(x_{i}\right)+b-y_{i} \leqslant \varepsilon+\xi_{i}^{*}} \\ {y_{i}-w \cdot \phi\left(x_{i}\right)-b \leqslant \varepsilon+\xi_{i}} \\ {\xi_{i}, \xi_{i}^{*} \geqslant 0, \quad i=1,2, \cdots, N}\end{array}\right.}\end{array} $ (8)
After optimizing the above problem with a Lagrange function and the corresponding optimality conditions, the non-linear regression function is obtained as:
$ f(x)=\sum\limits_{i=1}^{l}\left(\alpha_{i}-\alpha_{i}^{*}\right) k\left(x_{i}, x\right)+b, \quad k\left(x_{i}, x_{j}\right)=\phi\left(x_{i}\right) \cdot \phi\left(x_{j}\right) $ (9)
where αi and αi* are Lagrange multipliers and k(xi, xj)=ϕ(xi)·ϕ(xj) is a kernel function describing the inner product in the high-dimensional feature space. By using kernel functions, all calculations can be carried out directly in the input space without explicitly mapping into the high-dimensional feature space. The structure of SVR is shown in Fig. 2.
Fig.2 The topology structure of SVR
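The prediction function of Eq. (9) can be evaluated directly once the multipliers are known. The sketch below does this with made-up support vectors, dual coefficients and bias, purely to illustrate the formula; in practice these values come from solving Eq. (8).

```python
import numpy as np

def rbf(xi, x, gamma=0.5):
    """RBF kernel k(xi, x) = exp(-gamma * ||xi - x||^2)."""
    return np.exp(-gamma * np.sum((xi - x) ** 2))

def svr_predict(x, sv, dual_coef, b, gamma=0.5):
    """Eq. (9): f(x) = sum_i (alpha_i - alpha_i*) k(x_i, x) + b."""
    return sum(a * rbf(xi, x, gamma) for a, xi in zip(dual_coef, sv)) + b

sv = np.array([[0.2, 0.5], [0.7, 0.1], [0.4, 0.9]])   # placeholder support vectors
dual_coef = np.array([0.8, -0.3, 0.5])                # placeholder alpha_i - alpha_i*
print(svr_predict(np.array([0.5, 0.5]), sv, dual_coef, b=0.1))
```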
The performance and efficiency of the SVM depend greatly on the kernel function, so it is important to choose the kernel function and its parameters properly for the problem at hand. Common kernel functions are shown in Table 1.
Table 1 Common kernel functions of SVR
Kernel function                Expression
Linear                         k(xi, xj) = xi · xj
Polynomial                     k(xi, xj) = [(xi · xj) + c]^d
Radial basis function (RBF)    k(xi, xj) = exp(−γ||xi − xj||²)
Sigmoid                        k(xi, xj) = tanh[v(xi · xj) + c]
3 Model Development in Bus Travel Time Prediction
In this study, a hybrid WT-SVR model, formed by combining the support vector regression model with the wavelet transform technique, is used to predict bus travel time. The model inputs and the details of the wavelet decomposition are discussed briefly in this section.
Considering the variation of bus operation, four input variables and one output variable are used, as advised by Ji et al.[24]. Firstly, bus travel time is non-stationary and fluctuates during the day; at morning and afternoon peak hours in particular, travel times increase significantly. Secondly, different road segments have different numbers of intersections, segment lengths, traffic conditions, and traffic flow compositions, all of which cause travel times to vary. Thus, the time of day is classified into several periods, and the road segment is also taken as an input factor. Moreover, bus travel time is easily influenced by many random factors such as traffic flow, ridership, weather, stop delays and traffic signal delays, but it is difficult to estimate the traffic condition of road segments by obtaining this information in real time. Based on the research of Yu[25], this paper uses the latest bus travel time on the predicted section and the latest travel time of the current bus on the preceding section to represent the current traffic condition of the predicted section and the running status of the bus, assuming that the latest travel times can be obtained from the bus information system in real time. The four input variables are therefore the time of day (x1), the road segment (x2), the latest bus travel time on the predicted section (x3) and the latest travel time of the current bus on the preceding segment (x4); y denotes the output, the bus travel time from stop i to stop j. When a bus reaches stop i, the latest travel time from stop i−1 to stop i is updated. A sketch of this encoding is given below.
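The following minimal sketch encodes one sample; the function and field names are ours, and the peak-hour rule anticipates the split given in step 1 of the model structure later in this section.

```python
def is_peak(hour):
    """Peak hours as defined later in Section 3: 7:00-9:00 and 17:00-19:00."""
    return 7 <= hour < 9 or 17 <= hour < 19

def build_sample(segment, hour, latest_tt_here, latest_tt_prev, observed_tt):
    x = [
        1 if is_peak(hour) else 0,  # x1: time of day (peak vs off-peak)
        segment,                    # x2: road segment index
        latest_tt_here,             # x3: latest travel time on the predicted section
        latest_tt_prev,             # x4: this bus's latest travel time upstream
    ]
    return x, observed_tt           # y: travel time from stop i to stop j

x, y = build_sample(segment=3, hour=8, latest_tt_here=410, latest_tt_prev=375,
                    observed_tt=402)
```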
For a bus route, the bus travel time series on each segment is first decomposed into sub-series components (approximation components A's and detail components D's) using wavelet multi-scale decomposition. Input data such as the latest bus travel time on the current section and on the preceding section are taken from the corresponding travel time sub-series. The sub-series (A's and D's) components of the future travel time on the predicted section are predicted by separate SVR models, and the final prediction is the aggregation of the model outputs.
With respect to the model parameters, the radial basis function (RBF) is selected as the kernel function; it can fit high-dimensional data with only a few hyperparameters, reducing the complexity of the prediction model. The RBF kernel is defined as:
$ k\left(x_{i}, x\right)=\exp \left(-\gamma| | x-x_{i}| |^{2}\right), \gamma>0 $ (10)
When the RBF kernel is used, three SVR parameters are considered: the penalty parameter C, the kernel parameter γ and the tube size ε. Prediction accuracy depends on a proper setting of these parameters, and the best combination (C, γ, ε) can be determined by methods such as k-fold cross validation (CV), genetic algorithms (GA) or particle swarm optimization (PSO). For simplicity, five-fold cross validation is used to optimize the parameters of all SVR models, e.g., via a grid search as sketched below.
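One possible realization with scikit-learn follows. The candidate grids simply echo the orders of magnitude that appear in Table 2 and are otherwise our assumption; the training data here are random placeholders with the four-input layout described above.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

grid = {                                # candidate values, echoing Table 2's scale
    "C": [1, 2, 32, 256],
    "gamma": [0.0039, 0.0156, 0.0313, 0.0625],
    "epsilon": [0.0003, 0.0078, 0.0625],
}
search = GridSearchCV(SVR(kernel="rbf"), grid, cv=5,
                      scoring="neg_mean_absolute_error")

rng = np.random.default_rng(0)          # placeholder (n, 4) inputs and targets
X_train, y_train = rng.random((60, 4)), 300 + 200 * rng.random(60)
search.fit(X_train, y_train)
print(search.best_params_)
```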
The structure of the prediction model is shown in Fig. 3; its details are as follows.
Fig.3 Diagram of bus travel time prediction model based on wavelet transform and SVR
1) The bus route under study is separated into k segments according to the bus stops. For convenience, the time-of-day variable is classified into peak hours (7:00-9:00 and 17:00-19:00) and off-peak hours.
2) The original bus travel time data is decomposed into a set of subsequences using the wavelet multi-resolution technique and single-branch reconstruction.
3) After the wavelet transform, each sub-series component is learned and trained separately by a support vector regression model. The parameters, including the penalty parameter C, the kernel parameter γ and the tube size ε, are optimized by cross-validation and grid search.
4) The final prediction is obtained by combining the prediction results of all SVR models (see the end-to-end sketch after this list), which can be expressed as
$ \begin{aligned} y_{\text { predict }}=& \sum\limits_{i=1}^{n} D_{i}^{\prime}+A_{n}^{\prime}=\sum\limits_{i=1}^{n} f_{D i}\left(x_{1}, x_{2}, x_{3 D i}, x_{4 D i}\right)+\\ & f_{A}\left(x_{1}, x_{2}, x_{3 A n}, x_{4 A n}\right) \end{aligned} $ (11)
where f(*) denotes the non-linear mapping function trained by SVR; D denotes the detail components and A the approximation component of the predicted value; n is the decomposition level.
5) Performance is measured by comparing the final forecasts with the ANN and SVR prediction results.
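Putting the steps together, the following end-to-end sketch implements Eq. (11) on a synthetic series. It is a simplified stand-in: each component model here uses only the previous value of its own component as input, rather than the full four-variable input described above.

```python
import numpy as np
import pywt
from sklearn.svm import SVR

rng = np.random.default_rng(1)
t = np.arange(300)
y = 300 + 40 * np.sin(2 * np.pi * t / 50) + 8 * rng.standard_normal(300)

coeffs = pywt.wavedec(y, "db3", level=3)

def branch(keep):
    kept = [c if i == keep else np.zeros_like(c) for i, c in enumerate(coeffs)]
    return pywt.waverec(kept, "db3")[:len(y)]

components = [branch(i) for i in range(len(coeffs))]   # A3, D3, D2, D1

split = 250
y_hat = np.zeros(len(y) - split - 1)
for comp in components:
    X = comp[:-1].reshape(-1, 1)          # toy input: the component's last value
    model = SVR(kernel="rbf").fit(X[:split], comp[1:split + 1])
    y_hat += model.predict(X[split:])     # Eq. (11): sum of component forecasts

print(np.mean(np.abs(y_hat - y[split + 1:])))          # rough MAE on the test tail
```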
4 Numerical Test
4.1 Study Area and Data
To evaluate the applicability of the proposed WT-SVR model for bus travel time prediction, a south-eastbound corridor on Daqiao Rd. and a north-westbound corridor from Jianning Rd. to Rehe Rd. on bus route No.550 in Nanjing, China, were selected as experimental route sections. The route of bus No.550 is 10.2 km long and has 27 bus stops in the upstream direction, running from the Taifeng Road terminus to the Mochou Lake Park terminus. The bus headway is about 10 min in peak hours and about 15 min in off-peak hours. The study region starts from the Qiaobei Coach Station and ends at the Agricultural Trade Center stop, divided into two sections as shown in Fig. 4.
Fig.4 Layout of study area of bus No.550
a) Section Ⅰ: from Qiaobei Coach Station to Daqiao Hotel stop.
b) Section Ⅱ: from Daqiao Hotel stop to Agricultural Trade Center stop.
The buses on this route are equipped with GPS and AVL devices that provide real-time travel time information. The bus travel time data were collected on weekdays from November 2, 2015 to November 10, 2015, during bus operation hours (05:10-21:10). After preprocessing, a total of 560 valid data sets remained, each containing the travel time of one bus through one road segment. The data sets were divided into two parts for training and testing: observations from the six weekdays between November 2 and November 9, 2015 form the training set, and the data of November 10, 2015 form the testing set. To avoid numerical difficulties, the samples are normalized before modeling as follows:
$ x_{i}^{\prime}=\frac{x_{i}-\min \left(x_{i}\right)}{\max \left(x_{i}\right)-\min \left(x_{i}\right)} $ (12)
where xi denotes the ith value of the input or output data set X={x1, x2, …, xn}.
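Eq. (12) in code, with the caveat that the minimum and maximum must come from the training set and be reused unchanged on the test set:

```python
import numpy as np

def minmax_fit(x_train):
    """Return a scaler implementing Eq. (12) with the training min/max."""
    lo, hi = x_train.min(axis=0), x_train.max(axis=0)
    return lambda v: (v - lo) / (hi - lo)

scale = minmax_fit(np.array([[480.0], [300.0], [620.0]]))   # travel times (s)
print(scale(np.array([[550.0]])))                           # -> [[0.78125]]
```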
4.2 Model Identification
4.2.1 WT-SVR model
The historical and real-time bus travel time series are decomposed into several components by the wavelet transform at different levels, and each sub-series component is predicted by a different SVR model. The decomposition level is the key parameter of the wavelet transform. If the level is too low, high-frequency noise remains in the low-frequency components and directly harms their prediction accuracy; if it is too high, the complexity and training time of the model increase. In this study, the 'db3' function is selected as the mother wavelet and the decomposition level is three, according to the requirements of multi-scale decomposition and single-branch reconstruction. All components obtained by decomposition are forecasted separately by SVR models, and the future bus travel time equals the summation of the component predictions. The RBF is used as the kernel function of all SVR models; the best parameter combination for each SVR is shown in Table 2.
Table 2 Parameters selection of each SVR model
Parameters    A3        D1        D2        D3
C             32        256       2         1
γ             0.0156    0.0039    0.0625    0.0313
ε             0.0078    0.0003    0.0078    0.0625
4.2.2 SVR model
To investigate the performance of the model, the proposed WT-SVR model is compared with the normal SVR and a back-propagation artificial neural network (BPANN), trained and tested on the same data sets. The normal support vector regression model uses the four model inputs (x1, x2, x3, x4) and one output (y) without wavelet decomposition. Its best parameter combination is C=1, γ=0.0625 and ε=0.003125.
4.2.3 ANN model
The ANN model uses the hyperbolic tangent sigmoid transfer function and consists of an input layer, one hidden layer, and an output layer. Different numbers of hidden neurons were tested in the back-propagation neural network to identify a well-trained configuration; through a trial-and-error process, the optimal number of hidden neurons was determined to be 8. The final ANN architecture uses the same input features as the SVR, and the model parameters are optimized by the back-propagation algorithm.
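Since the paper does not name a specific ANN implementation, the sketch below uses scikit-learn's MLPRegressor as a stand-in: one hidden layer of 8 tanh units trained by stochastic-gradient-descent back-propagation, fitted here on placeholder data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)                     # placeholder training data
X = rng.random((80, 4))
y = 300 + 200 * X[:, 2] + 10 * rng.standard_normal(80)

ann = MLPRegressor(hidden_layer_sizes=(8,),        # 8 neurons in one hidden layer
                   activation="tanh",              # tanh sigmoid transfer function
                   solver="sgd",                   # gradient-descent back-propagation
                   learning_rate_init=0.01, max_iter=2000, random_state=0)
ann.fit(X, y)
```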
4.3 Results and Discussion
The performance of the proposed WT-SVR model is evaluated by the mean absolute percentage error (EMAP), the mean absolute error (EMA) and the root mean square error (ERMS), defined as follows:
$ E_{\mathrm{MA}}=\frac{1}{n} \sum\limits_{i=1}^{n}\left|y_{i}-\hat{y}_{i}\right| $ (13)
$ E_{\mathrm{MAP}}=\frac{1}{n} \sum\limits_{i=1}^{n}\left|\frac{y_{i}-\hat{y}_{i}}{y_{i}}\right| $ (14)
$ E_{\mathrm{RMS}}=\sqrt{\frac{1}{n} \sum\limits_{i=1}^{n}\left(y_{i}-\hat{y}_{i}\right)^{2}} $ (15)
where yi is the observed value and ŷi the predicted value of yi. The smaller the values of EMA, EMAP and ERMS, the better the prediction performance.
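The three measures of Eqs. (13)-(15) translate directly to code:

```python
import numpy as np

def e_ma(y, y_hat):                    # Eq. (13): mean absolute error
    return np.mean(np.abs(y - y_hat))

def e_map(y, y_hat):                   # Eq. (14): mean absolute percentage error
    return np.mean(np.abs((y - y_hat) / y))

def e_rms(y, y_hat):                   # Eq. (15): root mean square error
    return np.sqrt(np.mean((y - y_hat) ** 2))

y, y_hat = np.array([300.0, 320.0]), np.array([310.0, 310.0])
print(e_ma(y, y_hat), e_map(y, y_hat), e_rms(y, y_hat))   # 10.0  ~0.0323  10.0
```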
The future bus travel times forecasted by the proposed WT-SVR model for the two testing sections of bus route No.550 are shown in Fig. 5. The proposed hybrid model captures the underlying dynamics of bus travel time variation and achieves a good fit in both sections, with regression coefficients (R²) of 0.7547 and 0.6306 respectively. Considering the different traffic conditions on the two sections, this difference in R² is easily understood: section Ⅱ contains many signalized intersections and bus stops, which make its travel times more fluctuating and non-stationary than those of section Ⅰ.
Fig.5 Prediction results of WT-SVR in two testing sections
Additionally, traditional BP neural networks and the support vector regression model are tested for comparison. Fig. 6 gives the absolute prediction errors of the ANN, SVR and hybrid WT-SVR models on the testing links of bus route No.550. The maximum prediction errors of ANN, SVR and WT-SVR are 244 s, 223 s and 167 s respectively for section Ⅰ, and 331 s, 256 s and 140 s respectively for section Ⅱ. The hybrid WT-SVR model forecasts accurately and attains a lower prediction error in almost all trips compared with the other models. Moreover, Table 3 compares the EMA, EMAP and ERMS obtained by the WT-SVR, SVR, and ANN models for the two testing sections. Compared with the single SVR model, the proposed hybrid model decreases EMA, EMAP and ERMS by 15 seconds, 2% and 20 seconds respectively for section Ⅰ, and by 18 seconds, 2.5% and 23 seconds for section Ⅱ. Similarly, compared with the BPANN model, the EMA, EMAP and ERMS of WT-SVR are lower by 26 seconds, 4% and 30 seconds respectively for section Ⅰ, and by 31 seconds, 4.5% and 45 seconds for section Ⅱ. According to Lewis[26], an EMAP value of less than 10% can be considered highly accurate. As shown in Table 3, the EMAP values of the two reference models are close to or even greater than 10%, placing their performance between "highly accurate" and merely "good", whereas the EMAP values of WT-SVR indicate highly accurate predictive performance. This shows that the prediction results of the proposed model are more accurate and reliable, and that the model is feasible and effective for bus travel time prediction. For arrival time forecasts issued to passengers, the value of the information depends heavily on the reliability of the forecast; reducing the prediction error prevents passengers from missing the bus due to wrong information and improves the usefulness of the information.
Fig.6 Prediction error of three models in testing sections
Table 3 Comparison of WT-SVR with ANN and SVR models
              Section Ⅰ                           Section Ⅱ
Model     EMA (s)   EMAP (%)   ERMS (s)      EMA (s)   EMAP (%)   ERMS (s)
ANN       65.61     9.01       82.89         81.48     11.92      107.51
SVR       54.45     7.78       72.05         68.28     10.05      85.16
WT-SVR    39.70     5.59       52.81         50.64     7.42       62.11
5 Conclusions
In this paper, the applicability of a hybrid WT-SVR model for predicting bus travel times on route No.550 in Nanjing, China, has been investigated. The WT-SVR model was developed by integrating the wavelet transform technique with the support vector regression model: the original travel time data were decomposed into approximation and detail components by the wavelet transform, and an SVR model was constructed for each component of the future travel time. The model was tested using four input variables, namely the time of day, the road segment, the latest travel time of the previous section and the latest travel time on the predicted section, and was compared with regular SVR and ANN models on the same dataset. The results show that bus travel time prediction based on the wavelet SVR provides higher accuracy than the regular SVR and ANN models, as the wavelet transform captures travel time variations at different scales and thus enhances the forecasting ability of the SVR model. The proposed model can therefore greatly improve the prediction of bus travel times, contributing to a higher service level and better predictive reliability.
Chen M, Chien S I, Liu X, et al. Application of APC/AVL archived data support system. 82nd Annual Meeting of the Transportation Research Board, 2003.
D'Angelo M P, Al-Deek H M, Wang M C. Travel-time prediction for freeway corridors. Transportation Research Record: Journal of the Transportation Research Board, 1999, 1676: 184-191. DOI:10.3141/1676-23
Patnaik J, Chien S, Bladikas A. Estimation of bus arrival times using APC data. Physical Review Letters, 2004, 7(1): 128001-128100.
Vanajakshi L, Subramanian S C, Sivanandan R. Travel time prediction under heterogeneous traffic conditions using global positioning system data from buses. IET Intelligent Transport Systems, 2009, 3(1): 1-9. DOI:10.1049/iet-its:20080013
Park D, Rilett L R. Forecasting freeway link travel times with a multilayer feedforward neural network. Computer-Aided Civil and Infrastructure Engineering, 1999, 14(5): 357-367. DOI:10.1111/mice.1999.14.issue-5
Chien I J, Ding Y, Wei C. Dynamic bus arrival time prediction with artificial neural networks. Journal of Transportation Engineering, 2002, 128(5): 429-438. DOI:10.1061/(ASCE)0733-947X(2002)128:5(429)
Jeong R, Rilett L R. Bus arrival time prediction using artificial neural network model. International IEEE Conference on Intelligent Transportation Systems, 2004: 988-993. https://www.researchgate.net/publication/224756438_Bus_arrival_time_prediction_using_artificial_neural_network_model
Gurmu Z, Wei F. Artificial neural network travel time prediction model for buses using only GPS data. Journal of Public Transportation, 2014, 17(2): 45-65. DOI:10.5038/2375-0901
Yu Bin, Jiang Y L, Yu Bo, et al. Application of support vector machines in bus travel time prediction. Journal of Dalian Maritime University, 2008, 34(4): 158-160.
Wu C H, Ho J M, Lee D T. Travel-time prediction with support vector regression. IEEE Transactions on Intelligent Transportation Systems, 2005, 5(4): 276-281.
Vanajakshi L, Rilett L R. Support vector machine technique for the short term prediction of travel time. IEEE Intelligent Vehicles Symposium, 2007: 600-605. https://www.researchgate.net/publication/4268836_Support_Vector_Machine_Technique_for_the_Short_Term_Prediction_of_Travel_Time
Wang J, Yu B, Yang Z Z. Bus travel-time prediction based on bus speed. Transport, 2010, 163(1): 3-7.
Liu X, Yuan S, Li L. Prediction of temperature time series based on wavelet transform and support vector machine. Journal of Computers, 2012, 7(8): 32-42.
Suryanarayana C, Sudheer C, Mahammood V, et al. An integrated wavelet-support vector machine for groundwater level prediction in Visakhapatnam, India. Neurocomputing, 2014, 145(18): 324-335.
Kalteh A M. Wavelet genetic algorithm-support vector regression (wavelet GA-SVR) for monthly flow forecasting. Water Resources Management, 2015, 29(4): 1283-1293. DOI:10.1007/s11269-014-0873-y
Liu Y, Shi J, Yang Y, et al. Short-term wind-power prediction based on wavelet transform-support vector machine and statistic-characteristics analysis. IEEE Transactions on Industry Applications, 2011, 48(4): 1136-1141.
Fang X, Bai T. Share price prediction using wavelet transform and ant colony algorithm for parameters optimization in SVM. WRI Global Congress on Intelligent Systems (GCIS '09), 2009: 288-292.
Yu B, Yang Z Z, Chen K, et al. Hybrid model for prediction of bus arrival times at next station. Journal of Advanced Transportation, 2010, 44(3): 193-204.
Ge Y, Wang G. Study of traffic flow short-time prediction based on wavelet neural network. Electrical Engineering and Control, 2011, 98: 509-516. DOI:10.1007/978-3-642-21765-4
Yusuf A, Madisetti V K. Configuration for predicting travel-time using wavelet packets and support vector regression. Journal of Transportation Technologies, 2013, 3(3): 220-231. DOI:10.4236/jtts.2013.33023
Daubechies I. The wavelet transform, time-frequency localization and signal analysis. IEEE Transactions on Information Theory, 1990, 36(5): 961-1005. DOI:10.1109/18.57199
Vapnik V N. The Nature of Statistical Learning Theory. New York: Springer-Verlag, 1999.
Ji Y J, Lu J W, Chen X S, et al. Prediction model of bus arrival time based on particle swarm optimization and wavelet neural network. Journal of Transportation Systems Engineering and Information Technology, 2016, 16(3): 60-66.
Yu B, Yang Z Z, Chen K, et al. Hybrid model for prediction of bus arrival times at next station. Journal of Advanced Transportation, 2010, 44(3): 193-204. DOI:10.1002/atr.136
Lewis C D. Industrial and Business Forecasting Methods: A Practical Guide to Exponential Smoothing and Curve Fitting. London: Butterworth Heinemann, 1982.
Clinical Hemorheology and Microcirculation - Volume 26, issue 3
Clinical Hemorheology and Microcirculation, a peer-reviewed international scientific journal, serves as an aid to understanding the flow properties of blood and the relationship to normal and abnormal physiology. The rapidly expanding science of hemorheology concerns blood, its components and the blood vessels with which blood interacts. It includes perihemorheology, i.e., the rheology of fluid and structures in the perivascular and interstitial spaces as well as the lymphatic system. The clinical aspects include pathogenesis, symptomatology and diagnostic methods, and the fields of prophylaxis and therapy in all branches of medicine and surgery, pharmacology and drug research.
The endeavour of the Editors-in-Chief and publishers of Clinical Hemorheology and Microcirculation is to bring together contributions from those working in various fields related to blood flow all over the world. The editors of Clinical Hemorheology and Microcirculation are from those countries in Europe, Asia, Australia and America where appreciable work in clinical hemorheology and microcirculation is being carried out. Each editor takes responsibility to decide on the acceptance of a manuscript. He is required to have the manuscript appraised by two referees and may be one of them himself. The executive editorial office, to which the manuscripts have been submitted, is responsible for rapid handling of the reviewing process.
Clinical Hemorheology and Microcirculation accepts original papers, brief communications, mini-reports and letters to the Editors-in-Chief. Review articles, providing general views and new insights into related subjects, are regularly invited by the Editors-in-Chief. Proceedings of international and national conferences on clinical hemorheology (in original form or as abstracts) complete the range of editorial features.
The following professionals and institutions will benefit most from subscribing to Clinical Hemorheology and Microcirculation: medical practitioners in all fields including hematology, cardiology, geriatrics, angiology, surgery, obstetrics and gynecology, ophthalmology, otology, and neurology. Pharmacologists, clinical laboratories, blood transfusion centres, manufacturing firms producing diagnostic instruments, and the pharmaceutical industry will also benefit.
Important new topics will increasingly claim more pages of Clinical Hemorheology and Microcirculation: the role of hemorheological and microcirculatory disturbances for epidemiology and prognosis, in particular regarding cardiovascular disorders, as well as its significance in the field of geriatrics. Authors and readers are invited to contact the editors for specific information or to make suggestions.
Comparative analysis of aggregate shapes by digitized microscopic images. Application to hypertension
Authors: Foresto, Patricia | D'Arrigo, Mabel | Racca, Liliana | Filippini, Fernando | Gallo, Roberto | Valverde, Juana | Rasia, Rodolfo J.
Abstract: The main objective of the present work was to study modifications in RBC aggregate morphology by analyzing digitized microscopic images and to compare them between healthy subjects and patients suffering from essential hypertension. Blood samples were obtained from normal subjects (n=30) and patients suffering from essential hypertension (n=20). RBC aggregate morphology was quantified using direct microscopic observation and numerical image analysis. ASP (Aggregation Shape Parameter), defined as the ratio of the area of the projected image to its squared perimeter, was calculated. Other rheological parameters were determined in order to establish the hemorheological profile of the studied hypertension states. ASP was significantly higher (p<0.001) in patients with essential hypertension (0.69±0.11) than in normal control subjects (0.25±0.12). RBC aggregation is known to be responsible for the large increase in apparent blood viscosity at low shear rates. Comparing ASP values with whole blood viscosity at low shear rate (2.30 s−1) showed a high correlation between the two parameters (Spearman coefficient 0.8835, p<0.001). The applied method is simple, direct and quantitative, and provides a useful tool for measuring deviations of RBC aggregate morphology.
Citation: Clinical Hemorheology and Microcirculation, vol. 26, no. 3, pp. 137-144, 2002
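As a side note on the method, the ASP of a segmented aggregate can be computed from a binary mask, for example with scikit-image as sketched below. This is our illustration only: the raw area-to-squared-perimeter ratio of a circle is 1/(4π) ≈ 0.08, so the values reported in the abstract (0.25-0.69) suggest the authors apply an additional normalization that we do not attempt to reproduce.

```python
import numpy as np
from skimage.measure import label, regionprops

mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:44] = True                   # stand-in for one segmented aggregate

region = regionprops(label(mask.astype(int)))[0]
asp = region.area / region.perimeter ** 2   # raw area / perimeter^2 ratio
print(round(asp, 4))
```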
Hemorheology and hemostasis in vascular disease. A pathophysiological review
Authors: Angelkort, Bernhard | Amann, Berthold | Lawall, Holger
Exercise hemorheology as a three acts play with metabolic actors: Is it of clinical relevance?
Authors: Brun, Jean‐Frédéric
Abstract: The hemorheological effects of exercise form a triphasic phenomenon including: (a) short‐term effects (hyperviscosity mostly due to fluid shifts and alterations of erythrocyte rigidity and aggregability); (b) middle‐term effects, i.e., the reversal of the acute effects due to plasma volume expansion (autohemodilution), which lowers both plasma viscosity and hematocrit; (c) long‐term effects that further improve blood fluidity, parallel with the classical training‐induced hormonal and metabolic alterations. Red cell rheology during these three stages is affected by white cells and oxidant stress. On the other hand, most metabolic and hormonal alterations play a role in exercise‐induced hemorheological changes: among them, blood lactate appears to have opposite effects according to training status, since it generally impairs erythrocyte fluidity while improving it in some subgroups of highly trained athletes, a difference that could be related to membrane monocarboxylate transporter status. Body composition (mostly hydration status and the amount of fat mass) as well as its major hormonal regulating system (the growth‐hormone–IGF‐I axis) are both markedly modified by training, and these modifications are correlated with hemorheology. Nutritional disturbances affecting caloric and protein intake, lipids, iron, zinc, etc. also modulate the hemorheologic effects of exercise. The overtraining syndrome represents a situation of imbalance between the body's possibilities, nutrition, and work load, and is associated with metabolic, hormonal, immunologic and hemorheologic disturbances. The clinical relevance of these data is underlined by studies showing that exercise training in patients suffering from metabolic and/or cardiovascular disorders (such as the insulin resistance syndrome) results in a parallel improvement of metabolism, risk factors, blood rheology and fitness. Hemorheological measurements deserve to be studied, at least as sensitive markers of training, and possibly as "true" risk factors highly sensitive to exercise intensification.
Keywords: Blood viscosity, hematocrit, exercise, VO$_{2\max}$, training, overtraining, metabolic fitness, hemorheology, erythrocyte deformability, erythrocyte aggregation, blood lactate
Does haemorheology explain the paradox of hypoxemia during exercise in elite athletes or thoroughbred horses?
Authors: Caillaud, Corinne | Connes, Philippe | Bouix, Didier | Mercier, Jacques
Abstract: Exercise‐induced arterial hypoxemia (EIAH), i.e., a significant drop in arterial O2 partial pressure during sea-level exercise, has been shown in both aerobically trained athletes and athletic animal species. The mechanisms potentially involved include ventilation/perfusion inequality and/or pulmonary diffusing capacity limitation. In thoroughbred horses, EIAH is accompanied by exercise-induced pulmonary haemorrhage (EIPH). Stress failure of pulmonary capillaries leading to diffusion limitation has been proposed: during intense exercise, the increased cardiac output and blood viscosity combine to raise capillary wall stress. Blood rheology may contribute to the increase of $\dot{V}_A/\dot{Q}$ mismatch and capillary wall stress. High levels of hematocrit (Hct) are known to alter blood flow distribution and raise shear stress in pulmonary capillaries, and any change in red blood cell (RBC) deformability may lead to aggregation at low shear rate in post-capillary venules. There are contrasting data regarding the effects of blood rheology on EIPH in horses; however, the large rise of hematocrit during exercise may cause vessel wall stress. In humans, the greatest increases in hematocrit, as well as RBC deformability, may contribute to EIAH. Today there is no consensus, and further study of blood rheology in athletes is a field of interest.
Keywords: Blood viscosity, hematocrit, pulmonary gas exchange, horses
The microrheological behavior of young and old red blood cells in athletes
Authors: Muravyov, A.V. | Draygin, S.V. | Eremin, N.N. | Muravyov, A.A.
Abstract: Previous studies have shown a difference in rheological properties between young and senescent RBCs, and there are data indicating that athletes' blood contains more young RBCs than that of untrained people. Our research was a comparative study of the microrheological properties of young and old RBCs in athletes and in untrained people serving as a control group. In athletes (men, n=24) and controls (men, n=20) the following parameters were measured: RBC aggregation (ARBC; Myrenne aggregometer) and deformability, RBC suspension and plasma viscosity, as well as osmolarity, albumin, globulin and fibrinogen concentrations, and MCHC. Red cells were density (i.e., age) fractionated by the method of Murphy. After centrifugation, the top 10% of the packed cell column (RBCtop, relatively young cells) and the bottom 10% (RBCbot, relatively old cells) were resuspended at 40.0±0.4% (in plasma) for aggregation, deformation and suspension viscosity measurements. A significant difference in aggregation and rigidity of all RBC subpopulations was found between athletes and the control group. The difference in aggregation was associated with reduced fibrinogen and an increased albumin/globulin ratio in athletes. Besides, the correlation of RBCtop and RBCbot aggregation with fibrinogen was decreased in athletes. This was one of the causes of the higher fluidity of the RBCtop and RBCbot suspensions and whole blood in athletes, and of more effective oxygen transport than in untrained people.
Keywords: Red blood cells, cell aging, aggregation, deformation, athletes
Can white blood cell activation be one of the major factors that affect hemorheological parameters during and after exercise?
Authors: Temiz, Aysegul | Yalcin, Ozlem | Resmi, Halil | Baskurt, Oguz K.
Major alterations in body fluid status and blood rheology
Authors: Tikhomirova, I.A. | Muravyov, A.V. | Levin, V.N.
Abstract: Since dehydration causes a loss of body water, we studied the rheological properties of blood in the course of water deprivation. Subjects were 64 white male rats divided into 4 groups: control (n=19) and 3 experimental groups which underwent water deprivation for 3 days (n=15), 6 days (n=15) and 10 days (n=15). The results indicate that under dehydration animals have higher blood and plasma viscosity and a higher erythrocyte aggregation index than the control group. After 3 days of dehydration these changes are due to the loss of intravascular water. Water deprivation for 10 days causes significant disturbances in blood composition as well as changes in red blood cell membrane properties, whereas blood and plasma volume return to control values.
Keywords: Blood viscosity, dehydration, erythrocyte aggregation, body water, blood composition
Fluid shear stress induces the secretion of monocyte chemoattractant protein‐1 in cultured human umbilical vein endothelial cells
Authors: Yu, Hongmei | Zeng, Yanjun | Hu, Jinlin | Li, Caixia
Abstract: In this study we investigated the patterns of fluid shear stress induction of monocyte chemoattractant protein‐1 (MCP‐1) secretion in cultured human umbilical vein endothelial cells (HUVECs). MCP‐1 is a potent and specific chemoattractant, which recruits monocytes into the sub‐endothelium; this process is one of the early events of atherosclerosis. We examined, from the viewpoint of biomechanics, the pattern of fluid shear stress inducing the secretion of MCP‐1 in cultured HUVECs. In our experiments, HUVECs were subjected to controlled levels of shear stress (4, 10, 20 dyn/cm2) in a parallel plate flow chamber. MCP‐1 in HUVECs at different time points was measured by immunohistochemistry and digital image analysis; MCP‐1 in the perfusate was measured by sandwich ELISA. The results demonstrated that the increase of MCP‐1 synthesis and secretion by shear stress was time‐ and force‐dependent. The accumulated level of MCP‐1 in HUVECs under lower shear stress (4 dyn/cm2) for 4-5 hrs was 3‐fold that of static cells. When the shear stress lasted 6 hrs, the secretion of MCP‐1 was reduced to normal levels and could not be increased even when the shear stress lasted 12 hours. A stress of 10 dyn/cm2 had less effect on the secretion of MCP‐1 than 4 dyn/cm2. This research provides data for understanding the mechanism by which hemodynamic forces contribute to atherosclerosis.
Keywords: Fluid shear stress, atherosclerosis, human vein umbilical endothelial cells (HUVECs), monocyte chemoattractant protein‐1 (MCP‐1)
Insulin‐like growth factor‐binding protein 1 and blood rheology in athletes
Authors: Aïssa Benhaddad, A. | Monnier, J.F. | Fédou, C. | Micallef, J.P. | Brun, J.F.
Abstract: The GH–IGF axis has recently been suggested to modulate blood rheology in trained athletes, via GH effects on body water status and a possible action of IGF‐I on erythrocyte deformability and aggregability. Another potential candidate for such a rheologic effect of the GH–IGF axis is insulin‐like growth factor binding protein‐1 (IGF‐BP1), which is increased in trained people and correlated with fitness: IGF‐BP1 is elevated in patients with polycythemia vera and stimulates erythroid burst formation in vitro. We investigated the statistical relationships between IGF‐BP1 and blood rheology in 21 soccer players (age 24.5±1.13 yr; body mass index 23.7±0.38 kg/m2; VO2max 44.8±7 ml.min−1.kg−1). The major statistical determinant of IGF-BP1 (measured at rest after an overnight fast) was age (r=0.752, p=0.00013), which was not correlated with rheological parameters. IGF-BP1 was negatively correlated with blood viscosity η (high shear rate: r=−0.516, p=0.024) and positively correlated with the percentage of extracellular water in total body water (ECW/TBW) (r=0.488, p=0.039). The previously reported correlations between IGF‐I and both η (r=0.637, p=0.003) and red cell rigidity "Tk" (r=0.696, p=0.0137) were observed, but IGF‐I and IGF‐BP1 were not correlated with each other (r=−0.176, ns), and their correlations with η and Tk appeared to be independent in multivariate analysis. Consistent with these correlations, subjects in the upper tertile of IGF‐BP1 (>23.4 ng/ml), compared with those in the lower tertile (<7.5 ng/ml), had a higher ECW/TBW (40.8±0.4 vs 38±0.8%, p=0.033), a lower η (2.7±0.05 vs 2.97±0.06 mPa.s, p=0.016), and a lower Tk (0.54±0.05 vs 0.63±0.01, p=0.027). Thus, besides GH and IGF‐I, IGF‐BP1, which is reported to act on erythroid progenitors, exhibits statistical relationships with blood fluidity and erythrocyte flexibility that may suggest a physiological role in improving blood rheology.
Keywords: Blood viscosity, hemorheology, erythrocyte deformability, erythrocyte aggregability, exercise training, overtraining, insulin‐like growth factor binding protein 1, insulin‐like growth factor binding protein 3, insulin‐like growth factor I, growth hormone, body fluids
Dolphins Hush When Killer Whales Lurk
Saturday, October 23, 2010 Life, Nature
Research has suggested that killer whale predation may affect cetacean vocal behavior; however, few data exist to test this hypothesis. Data collected over 19,609 km of visual and acoustic shipboard surveys in the tropical Pacific Ocean were examined to determine whether changes in dolphin vocal activity could be attributed to the presence of killer whales.
These surveys included 346 detections of three highly vocal dolphin species (genus Stenella), whose whistles can be detected at ranges over 4.6 km. Random forest analysis was used to model vocal behavior based on sea state, visibility, fog/rain, thermocline temperature and depth, mixed layer depth, chlorophyll, distance to shore, species, group size, perpendicular distance, and the presence of killer whales.
The results show that the presence of killer whales significantly inhibited vocal activity in these tropical dolphins (p = 0.02). Killer whales are rare in the tropics, and this disruption in communication may not have a significant impact on interactions necessary for survival. However, in temperate climates, where increased productivity supports a greater abundance of killer whales, this interruption in communication may have a greater impact. The lower incidence of whistling dolphins in temperate waters may be related to the greater abundance of killer whales in these areas.
Nobel Prize and Wonder Material Graphene
Friday, October 15, 2010 Physics, Technology
The Russian-born duo Andre Geim and Konstantin Novoselov shared the 2010 Nobel Prize in Physics for their work on a carbon material called graphene.
Graphene may not be familiar to the public yet, but experts believe that its amazing mechanical and electrical properties will prove as transformative to coming generations as the television, the atomic bomb and the silicon chip did in the decades after the Nobel committee first honored the scientists who made those inventions possible.
Graphene is a single-atom-thick planner sheet of carbon atoms (sp²-bonded) arrayed in a honeycomb pattern. Graphene is the basic structural element for all other graphite materials including graphite, carbon nanotubes and fullerenes. It is the strongest material ever discovered, yet flexible like rubber. It conducts electricity better than silicon, and resists heat better than diamond. And it allows for physics experiments that would otherwise require miles-long particle accelerators to be performed on a desktop.
"It's an amazing material with the incredible electronic properties and mechanical strength," said Paul Sheehan, head of the surface nanoscience and sensors section at the Naval Research Laboratory in Washington, D.C.
As an ultra-light but nearly indestructible material, graphene (and graphene composites) could drastically alter the aerospace and automotive industry, said Rodney Ruoff, a professor of engineering at the University of Texas, Austin.
Research has already accelerated to the point where laboratories can mass-produce the material, Ruoff said. Soon companies will be able to produce sheets of graphene hundreds of feet wide; embed it in other materials as a strengthening composite; or create microscopic flakes of it for use as a conductive ink.
Since electrons behave as waves in graphene, not as rubber balls as they do in silicon and metals, researchers can use graphene as a platform for observing particle behavior previously consigned to the world of theory, said Pablo Jarillo-Herrero, a professor of physics at MIT.
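To put that wave-like behavior in formula form (a standard textbook result, not something quoted from the article): near the so-called Dirac points, the energy of graphene's electrons grows linearly with momentum,

$$E(\mathbf{k}) \approx \pm \hbar v_F |\mathbf{k}|, \qquad v_F \approx 10^6\ \mathrm{m/s},$$

in contrast to the parabolic relation $E = \hbar^2 k^2 / 2m^*$ that describes electrons in silicon and metals. This is why graphene's charge carriers behave like massless relativistic particles.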
"Graphene has enabled us to study in small-scale experiments, cheap enough to do on your kitchen counter," Jarillo-Herrero said. "It created a whole field – condensed matter quantum physics – that wasn't there before."
Carbon is one of the most versatile elements in the periodic table, forming the base for diamonds, pencils and all life on Earth. Given that diversity, it is likely that the most transformative uses for graphene have yet to be discovered, Sheehan of the Office of Naval Research said.
Dr.Andre Geim
Born: 1958, Sochi, Russia
Director of Manchester Centre for Mesoscience and Nanotechnology
Chair of Condensed Matter Physics
Interview of Dr. Andre Geim
Dr. Kostya Novoselov
Born: 1974, Nizhny Tagil, Russia
Interview of Dr. Kostya Novoselov
Nikola's Death Ray Mystery
Sunday, October 03, 2010 curiosity, Physics, Scientist
Thomas Edison gets all the credit as the father of electricity, but the real credit should go to a man named Nikola Tesla. Nikola Tesla (10 July 1856 – 7 January 1943) was born an ethnic Serb in the village of Smiljan, in the Croatian Military Frontier of the Austrian Empire (now Croatia). He was a subject of the Austrian Empire by birth and later became an American citizen. He was an inventor and also one of the most important contributors to the birth of commercial electricity, and is best known for his many revolutionary developments in the field of electromagnetism in the late 19th and early 20th centuries. Aside from his work on electromagnetism and electromechanical engineering, Tesla contributed in varying degrees to the establishment of robotics, remote control, radar and computer science, and to the expansion of ballistics, nuclear physics and theoretical physics.
Most scholars acknowledge that Tesla's obscurity is partially due to his eccentric ways and fantastic claims during the waning years of his life, of communicating with other planets and death rays. Many of these fantastic inventions of Tesla are scientifically accurate and workable. It has simply taken mankind this long to catch up to the astonishing ideas of a man who died in 1943. It is now known that various governments were extremely interested in Tesla's ideas for weapons and limitless energy. So much so that after his death, the U.S. military confiscated boxes full of Tesla's research and writings. Much of this material has never been revealed to the public. What is not so widely known is that Tesla often suffered from financial difficulties, forcing him to move from hotel to hotel as his debt increased. Many times Tesla had to move on, leaving crates of his belongings behind.
Tesla made statements during his lifetime that he had invented a Death Ray, which would be of benefit to warfare. According to Tesla, the ray was capable of destroying up to 10,000 enemy aircraft at a distance of 250 miles! Tesla's Death Ray was featured in the July 23, 1934 issue of Time magazine, which stated that Nikola Tesla had announced a combination of four weapons that would make war 'unthinkable'. The article went on to describe how the weapons would work: "the nucleus of the idea is a death beam of submicroscopic particles flying at velocities approaching that of light".
This may sound like a fantasy, but it may surprise the reader to learn that we use Tesla's particle beam every day in the modern world. Particle beams are simply light beams, constructed of a special combination of electromagnetic waves. Unlike naturally occurring light, the waves in a particle beam are very special, because they all end at the same point, creating a sort of imaginary 'knife edge' of light waves. Particle beams are utilized in hospitals in delicate micro-laser surgeries, such as brain surgery or cauterization within deep tissue, and to determine distance, cut diamonds or guide missiles. So the question arises: what became of Nikola's Death Ray invention – the source of the mystery.
After the death of Nikola Tesla, when the room in which he passed was searched, the papers had disappeared. All traces of the papers he claimed to have written on the subject vanished. In 1947 the military intelligence service identified the papers as extremely important, but no one has claimed possession of them or knowledge of their whereabouts. A number of people suggest that the documents remain unfound because they were never lost in the first place. Yet no one has been able to complete the work Tesla began. Another reasonable theory would be that someone close to Tesla might have taken them to prevent the creation of such a weapon of mass destruction.
Whatever became of Tesla's brilliant invention, were it to surface now in any form it would likely be used to devastating effect, as the invention of nuclear weapons has already shown. If it is truly lost, then perhaps we are better off without it.
A Place Where Things Seems To Roll Uphill
Friday, September 24, 2010 curiosity, Physics
Friends, you may have found, or heard of, a mysterious place where objects can apparently roll uphill. Actually this is a common illusion which is found in numerous locations around the world. These spots where the illusion is especially powerful often become tourist attractions. Tour guides may like to claim that the effect is a mystery, or that it is due to magnetic or gravitational anomalies, or even that it is a paranormal phenomenon which science cannot explain.
But, friends, this is not true of course. Natural anomalies can only be detected with sensitive equipment and cannot account for these places; science can easily explain them as optical illusions. If you observe such an 'uphill', it is usually a stretch of road in a hilly area where the level horizon is obscured. Objects such as trees and walls, which normally provide visual clues to the true vertical, may be leaning slightly. This creates an optical illusion making a slight downhill look like an uphill slope. Objects may appear to roll uphill. Sometimes rivers even seem to flow against gravity.
There are several things which enable us to sense which way is up. The balance mechanism in our inner ears is one system we have, but visual clues are also important and can be overriding. If the horizon cannot be seen or is not level, then we may be fooled by objects which we expect to be vertical but which aren't really. False perspective may also play a role. If a line of trees gets larger or smaller with distance, our sense of perspective is thrown off. Objects far away may seem smaller or larger than they really are. People often overestimate the angle of a slope. If you are standing on a slope of 1 degree it will seem like a slope of 5 degrees, and if you stand on a slope of 5 degrees it may seem like you are on a slope of 30 degrees. Because of this effect the anti-gravity illusion can seem strong even when you know the cause.
Interestingly, even when the true cause is understood it can be difficult to believe. In some cases the sea horizon is partly visible and it seems incredible that the effect can be an illusion. If you think there is a magnetic anomaly, just try using two plumb lines, one made of iron and one of stone. They would hang at different angles if a strong magnetic field were acting horizontally. In fact magnetic anomalies are never that strong, and are never the cause, as is easily shown.
However, friends, it is not always easy to demonstrate that a slope which appears to go uphill is really going downhill. Plumb lines and spirit levels cannot be relied on if you think there is a gravitational anomaly. If the slope runs parallel to a sea view, it would be possible to compare a plumb line with the horizon. Otherwise the only reliable way of determining the true horizontal is by careful surveying. Gravitational anomalies are always very small. In any case, if there were a gravitational anomaly, you should wonder how you would notice it. There would be an equal effect on your sense of balance as there is on any object. The anomaly would not be apparent unless there was a clear view of the sea behind the slope, which there never is.
Mystery Spot Road, off Branciforte Dr. Santa Cruz, CA, USA.
Mystery Spot, Putney Road, Benzie County, Michigan, USA.
Gravity Hill, Northwest Baltimore County, USA.
Gravity Hill, Mooresville, Southwest Indianapolis, USA.
Gravity Road, Ewing Road exit ramp off Route 208, Franklin Lakes, USA.
Mystery Hill, Blowing Rock, hwy 321, Carolina, USA.
Confusion Hill, Idelwild Park, Ligonier, Pennsylvania, USA.
Gravity Hill, off of State Route 96 just south of New Paris, Bedford County, Pennsylvania, USA.
Oregon Vortex, near Gold-Hill, Grants Pass, Oregon, USA.
Spook Hill, North Wales Drive, North Avenue, Lake Wales, Florida, USA.
Magnetic Hill, Near Neepawa in Manitoba, Canada.
Gravity Hill, on McKee Rd. Abbotsford, British Columbia, Canada.
Electric Brae, on the A719, Near Croy Bay, South of Ayr, Ayrshire, Scotland.
Anti-Gravity Hill, Straws Lane Road, Wood-End, near Hanging Rock, Victoria, Australia
Morgan Lewis Hill, St Andrew, Barbados.
Hill South of Rome, in Colli Albani, near Frascati, Italy.
Malveira da Serra, on N247 coast road West of Lisbon, Portugal
Mount Penteli, on a road to Mount Penteli, Athens, Greece
Mount Halla, on the 1.100 highway a few miles south of the airport, near Mount Halla, on the island of Cheju Do, South Korea
Top Ten cars could help to save the planet
Friday, September 17, 2010 Technology
25th anniversary of Bucky Ball
Sunday, September 05, 2010 Chemistry, Technology
Yesterday was the 25th anniversary of the discovery of the Bucky Ball – known as the fullerene. Fullerenes are a new class of carbon allotropes. They are spheroidal in shape and contain an even number of carbon atoms, ranging from 60 to 350 or above. The C60 fullerene is the most stable and was the first to be identified. It contains 60 atoms which are arranged in the shape of a football or a soccer ball; therefore, it is called the bucky ball.
It contains 20 six-membered rings and 12 five-membered rings, but five-membered rings are fused only to six-membered rings. In other words, no two five-membered rings are fused together. Further, because these allotropes look like the geodesic domes designed by the US architect Buckminster Fuller, they are called buckminsterfullerenes or fullerenes.
The bucky ball is a dark solid at room temperature. Unlike diamond and graphite, which are giant molecules containing thousands and thousands of carbon atoms, the C60 fullerene is a very small molecule containing only 60 carbon atoms.
Bucky balls, or fullerenes, were discovered by H.W. Kroto, R.F. Curl and R.E. Smalley. The 1996 Nobel Prize in Chemistry was awarded to these scientists for the discovery of fullerenes.
The search engine giant Google released a doodle on the 25th anniversary of the discovery of the bucky ball on Saturday, September 4.
The Symmetries of Things
Thursday, July 29, 2010 Books, Mathematics
Humans have used symmetrical patterns for thousands of years in both functional and decorative ways. Now, a new book by three mathematicians offers both math experts and enthusiasts a new way to understand symmetry and a fresh way to see the world. In The Symmetries of Things, eminent Princeton mathematician John H. Conway teams up with Chaim Goodman-Strauss of the University of Arkansas and Heidi Burgiel of Bridgewater State College to present a comprehensive mathematical theory of symmetry in a richly illustrated volume. The book is designed to speak to those with an interest in math, to artists, to working mathematicians and to researchers.
"Symmetry and pattern are fundamentally human preoccupations in the same way that language and rhythm are. Any culture that is making anything has ornament and is preoccupied with this visual rhythm," Goodman-Strauss said. "There are actually Neolithic examples of many of these patterns. The fish-scale pattern, for example, is 22,000 years old and shows up all over the world in all kinds of contexts." Symmetrical objects and patterns are everywhere. In nature, there are flowers composed of repeating shapes that rotate around a central point. Architects trim buildings with friezes that repeat design elements over and over. Mathematicians, according to Goodman-Strauss, are latecomers to the human fascination with pattern. While mathematicians bring their own particular concerns, "we're also able to say things that other people might not be able to say." The Symmetries of Things contributes a new system of notation, or descriptive categories, for symmetrical patterns, and a host of new proofs. The first section of the book is written to be accessible to a general reader with an interest in the subject. Sections two and three are aimed at mathematicians and experts in the field. The entire book, Goodman-Strauss said, "is meant to be engaging and reveal itself visually as well."
Book Information:
Authors: John Horton Conway, Heidi Burgiel, Chaim Goodman-Strauss
Publisher: A K Peters Ltd
Keywords: things, symmetries
Some Excellent Snaps ...
Thursday, July 22, 2010 Photos
The photographs were collected by NAINA KAUR and posted by me. I just captured the snaps on behalf of NAINA KAUR for her excellent collection.
How Big is Infinity?
Saturday, July 17, 2010 Mathematics
Most of us are familiar with the infinity symbol – the one that looks like the number eight tipped over on its side. The infinite sometimes crops up in everyday speech as a superlative form of the word many. But how many is infinitely many? How far away is "from here to infinity"? How big is infinity?
You can't count to infinity. Yet we are comfortable with the idea that there are infinitely many numbers to count with: no matter how big a number you might come up with, someone else can come up with a bigger one: that number plus one – or plus two, or times two. Or times itself. There simply is no biggest number. Is there?
Is infinity a number? Is there anything bigger than infinity? How about infinity plus one? What's infinity plus infinity? What about infinity times infinity? Children, to whom the concept of infinity is brand new, pose questions like these. The questions don't seem to have very much bearing on daily life, so their unsatisfactory answers don't seem to be a matter of concern.
At the turn of the twentieth century, in Germany, the Russian-born mathematician Georg Cantor applied the tools of mathematical rigor and logical deduction to questions about infinity in search of satisfactory answers. His conclusions are paradoxical to our everyday experience, yet they are mathematically sound. The world of our everyday experience is finite. We can't exactly say where the boundary line is, but beyond the finite, in the realm of the transfinite, things are different.
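A taste of Cantor's answer, in modern notation (a standard statement of his result, added here for illustration): the counting numbers and the real numbers are both infinite, yet not equally so,

$$|\mathbb{N}| = \aleph_0 < 2^{\aleph_0} = |\mathbb{R}|,$$

because Cantor's diagonal argument shows that no list indexed by the counting numbers can exhaust the reals. So there is not one infinity but an endless hierarchy of ever-larger transfinite numbers.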
Mathematics of DNA
Saturday, July 10, 2010 Life, Mathematics
Why is DNA packed into twisted, knotted shapes? What does this knotted structure have to do with how DNA functions? How does DNA 'undo' these complicated knots to transform itself into different structures? The mathematical theory of knots, links and tangles is helping to find answers.
In order to perform such functions as replication and information transmission, DNA must transform itself from one form of knotting or coiling into another. The agents for these transformations are enzymes. Enzymes maintain the proper geometry and topology during the transformation and also 'cut' the DNA strands and recombine the loose ends. Mathematics can be used to model these complicated processes.
The description and quantification of the three-dimensional structure of DNA, and of the changes in DNA structure due to the action of these enzymes, have required the serious use of geometry and topology. This use of mathematics as an analytical tool is especially important because there is no experimental way to observe the dynamics of enzymatic action directly.
A key mathematical challenge is to deduce the enzyme mechanism from observing the changes the enzymes bring about in the geometry and topology of the DNA. This requires the construction of mathematical models for enzyme action and the use of these models to analyze the results of topological enzymology experiments. The entangled form of the product DNA knots and links contains information about the enzymes that made them.
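One classical identity from this field, standard in DNA topology though not quoted in the post, is White's formula for a closed double-stranded loop:

$$Lk = Tw + Wr,$$

where the linking number Lk (how many times the two strands wind about each other) is a topological invariant, while the twist Tw and the writhe Wr can trade off against each other as the molecule deforms. Enzymes that cut and reseal strands are the only way Lk itself can change, which is why their action leaves a readable topological signature.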
Martian moon mystery
Friday, July 02, 2010 Space Science
The Martian moon Phobos is cratered, lumpy and about 16.8 miles long. According to a study, the moon is also unusually light. Planetary scientists found that Phobos is probably not a solid object, and that as much as 30 percent of the moon's interior may be empty space.
That doesn't mean that Phobos is an empty shell where we could, say, set up a rest stop for spaceships on their way to the outer planets. But the new finding probably does mean that Phobos was not an asteroid that got caught in Mars' gravity as it floated by the planet.
Phobos is the larger of Mars' two moons, and astronomers have had many ideas about where it came from. Previous studies have suggested that Phobos was an asteroid. Other studies suggest the moon formed from bits of Martian rock that were sent into space after a giant object, like an asteroid, crashed into Mars. The new study suggests that neither of these ideas is completely correct. The truth might be some combination of the two.
Scientists may never know how Phobos came to be a Martian satellite, but the new study may help eliminate some possibilities. (A planetary geophysicist is a scientist who studies physical properties, such as rocks and appearance, to understand more about celestial bodies such as planets and moons.)
The measurements came from Mars Express, a spacecraft that orbits Mars. It left Earth in 2003 and is a project of the European Space Agency (ESA). In March, Mars Express flew closer to Phobos than any spacecraft ever had before, ESA reports.
The scientists wanted to learn the density of Phobos. Density measures how close together, on average, the atoms in an object are. If two objects are the same size but have different densities, the denser object will have more mass — which means it will feel heavier when you're holding it on Earth. Density is found by dividing mass by volume. Since the scientists already had a good idea of the volume of Phobos, they just had to find its mass in order to figure out its density.
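For a rough check with published approximate values (not given in the article itself), Phobos has a mass of about $1.06\times10^{16}$ kg and a volume of about $5.7\times10^{3}$ km³:

$$\rho = \frac{m}{V} \approx \frac{1.06\times10^{16}\ \mathrm{kg}}{5.7\times10^{12}\ \mathrm{m^3}} \approx 1.9\times10^{3}\ \mathrm{kg/m^3} \approx 1.9\ \mathrm{g/cm^3},$$

in line with the figure reported below.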
They made their mass measurements by studying the gravitational force of Phobos. Gravity is an attractive force, which means anything with mass attracts anything else with mass. The more mass an object has, the stronger its gravitational force. Since a large body like the Earth has a lot of mass, it has a strong gravitational force.
When Mars Express flew close to Phobos, the small moon's gravity attracted the spacecraft. By studying changes in the motion of Mars Express, the scientists were able to estimate the gravitational tug of Phobos. Once they knew the strength of its gravity, they could find its mass.
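The underlying relation is Newton's law of gravitation (standard orbital mechanics, not spelled out in the article):

$$F = \frac{G M m}{r^2},$$

so a spacecraft at a known distance r that measures its own acceleration $a = GM/r^2$ toward the moon obtains the product GM directly, and hence the moon's mass M.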
They found that Phobos has a density of about 1.87 grams per cubic centimeter. The rocks in the crust of Mars, for comparison, are much denser: about 3 grams per cubic centimeter. This difference suggests that Phobos is not made of rocks from the surface of Mars.
Some asteroids have densities of about 1.87 grams per cubic centimeter, but those asteroids would be broken apart by Mars' gravity — a fact that probably rules out the possibility that Phobos was once a free-floating asteroid.
Some scientists don't mind giving up the idea that Phobos was once an asteroid. Finally, we're drifting away from the idea that the Martian moons are captured asteroids, and we are happy to see that Phobos and Deimos [Mars' other moon] are getting a lot of attention these days.
Mathematical Proof of God's Existence
Thursday, June 24, 2010 Mathematics
Catherine the Great (Catherine II) was a woman of culture who reigned for 34 years, from 1762 to 1796. This story dates from the time when Empress Catherine II invited Denis Diderot, the distinguished French philosopher, and appointed him first librarian of the St. Petersburg Academy. At that time Leonhard Euler, the famous mathematician, had returned to St. Petersburg from Berlin in Prussia at the request of Catherine II, and held the chair of mathematics at the Academy of St. Petersburg.
Empress Catherine II was alarmed when Diderot's arguments for atheism were influencing members of her court. So Euler was asked to confront Diderot. Diderot was informed that a learned mathematician had produced a proof of the existence of God. He agreed to view the proof as it was presented in court.
Euler appeared, advanced toward Diderot, and in a tone of perfect conviction announced, "Sir, $\frac{a+b^n}{n}=x$, hence God exists—reply!". Diderot, to whom all mathematics was gibberish, stood dumbstruck. Peals of laughter erupted from the court. Embarrassed, Diderot asked permission to leave Russia, which was graciously granted by the Empress.
If You Can …
Wednesday, June 23, 2010 Z - talk
Hlelo if you can raed tihs tehn taht maens that your barin is mroe poerwufl tehn others cool huh? Yuor barin olny raeds the frist and lsat letetr of each wrod. If it tkaes you mroe tehn 15 scenods to raed tihs taht maens you can tehn maens you are a fckuning rtaerd if you can tehn tuhmbs tihs up.
Origin of Universe
Tuesday, June 08, 2010 Physics
One of the most persistently asked questions has been: How was the Universe created? Many people believed that the Universe had no beginning or end and was truly static, unchanging and infinite.
Save Your Globe!
Saturday, June 05, 2010 Nature
The entire species of Aldabra banded snail died out after warmer weather cut off the rainfall in its habitat!
Wonder Fish
Friday, May 28, 2010 Life, Nature
Alien of the Deep:
Looking like a creature from the Alien movies, this nightmarish "longhead dreamer" anglerfish (Chaenophryne longiceps) was until recently an alien species to Greenland waters.
Synthetic Genome
Saturday, May 22, 2010 Life, Nature
Craig Venter and colleagues have achieved a remarkable milestone: they designed a genome, and brought it to life. More specifically, they've synthesized a chromosome consisting of over a million DNA base pairs, and implanted it in a bacterial cell to replace the cell's original genome. That cell then reproduced, giving birth to offspring that only had the synthetic genome.
Primitive Birds Lack Of Flying!
Saturday, May 15, 2010 Life
The wings were willing, but the feathers were weak. Delicate, thin-shafted plumage would have made flapping difficult if not impossible for two prehistoric birds, a new analysis of fossil feathers suggests.
Green Exercise Boost Mental Health
Wednesday, May 05, 2010 Health
Researchers have reported rapid improvements in mood and self-esteem after just five minutes of exercise in a green space. The study, in the journal Environmental Science and Technology, suggested the impact was strongest on young people. Outdoor activities like walking, gardening, cycling, fishing, boating, horse-riding and farming in a green environment containing water – such as a lake or river – boost well-being.
Results for 'Jiayi Pan'
Regulation Retrieval Using Industry Specific Taxonomies. Chin Pang Cheng, Gloria T. Lau, Kincho H. Law, Jiayi Pan & Albert Jones - 2008 - Artificial Intelligence and Law 16 (3):277-303.
Increasingly, taxonomies are being developed and used by industry practitioners to facilitate information interoperability and retrieval. Within a single industrial domain, there exist many taxonomies that are intended for different applications. Industry specific taxonomies often represent the vocabularies that are commonly used by the practitioners. Their jobs are multi-faceted, which include checking for code and regulatory compliance. As such, it will be very desirable if industry practitioners are able to easily locate and browse regulations of interest. In practice, multiple sources of government regulations exist and they are often organized and classified by the needs of the issuing agencies that enforce them rather than the needs of the communities that use them. One way to bridge these two distinct needs is to develop methods and tools that enable practitioners to browse and retrieve government regulations using their own terms and vocabularies, for example, via existing industry taxonomies. The mapping from a single taxonomy to a single regulation is a trivial keyword matching task. We examine a relatedness analysis approach for mapping a single taxonomy to multiple regulations. We then present an approach for mapping multiple taxonomies to a single regulation by measuring the relatedness of concepts. Cosine similarity, Jaccard coefficient and market basket analysis are used to measure the semantic relatedness between concepts from two different taxonomies. Preliminary evaluations of the three relatedness analysis measures are performed using examples from the civil engineering and building industry. These examples illustrate the potential benefits of regulatory usage from the mapping between various taxonomies and regulations.
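The three relatedness measures named in the abstract are standard; for orientation, here are their usual definitions (a generic statement, not quoted from the paper): with $\mathbf{a}$ and $\mathbf{b}$ the term-frequency vectors of two concepts and $A$, $B$ their term sets,

$$\cos(\mathbf{a},\mathbf{b}) = \frac{\mathbf{a}\cdot\mathbf{b}}{\lVert\mathbf{a}\rVert\,\lVert\mathbf{b}\rVert}, \qquad J(A,B) = \frac{|A\cap B|}{|A\cup B|},$$

while market basket analysis instead scores how often two concepts co-occur, for example via the confidence $P(B\mid A)$ of the association rule $A \Rightarrow B$.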
RT₂² Does Not Imply WKL₀. Jiayi Liu - 2012 - Journal of Symbolic Logic 77 (2):609-620.
We prove that $\mathsf{RCA}_0 + \mathsf{RT}^2_2 \nvdash \mathsf{WKL}_0$ by showing that for any set C not of PA-degree and any set A, there exists an infinite subset G of A or of its complement Ā, such that G ⊕ C is also not of PA-degree.
New Official Documents of China Addressing Academic Misconduct. Jiayi Zhu - 2020 - Science and Engineering Ethics 26 (3):1881-1882.
Technology Ethics in Applied Ethics
Follow the Heart or the Head? The Interactive Influence Model of Emotion and Cognition. Jiayi Luo & Rongjun Yu - 2015 - Frontiers in Psychology 6.
Pan Shu Quan Ji [Complete Works of Pan Shu]. Shu Pan - 2007 - Ren Min Jiao Yu Chu Ban She [People's Education Press].
Psychotherapy and Psychoanalysis in Philosophy of Cognitive Science
Pan Yuting Xian Sheng Tan Hua Lu [A Record of Conversations with Mr. Pan Yuting]. Yuting Pan - 2012 - Fu Dan Da Xue Chu Ban She [Fudan University Press].
Chinese Philosophy in Asian Philosophy
A CEEMDAN and XGBOOST-Based Approach to Forecast Crude Oil Prices. Yingrui Zhou, Taiyong Li, Jiayi Shi & Zijie Qian - 2019 - Complexity 2019:1-15.
Crude oil is one of the most important types of energy for the global economy, and hence it is very attractive to understand the movement of crude oil prices. However, the sequences of crude oil prices usually show some characteristics of nonstationarity and nonlinearity, making accurate forecasting of crude oil prices very challenging. To cope with this issue, in this paper, we propose a novel approach that integrates complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and extreme gradient boosting (XGBOOST), so-called CEEMDAN-XGBOOST, for forecasting crude oil prices. Firstly, we use CEEMDAN to decompose the nonstationary and nonlinear sequences of crude oil prices into several intrinsic mode functions (IMFs) and one residue. Secondly, XGBOOST is used to predict each IMF and the residue individually. Finally, the corresponding prediction results of each IMF and the residue are aggregated as the final forecasting results. To demonstrate the performance of the proposed approach, we conduct extensive experiments on the West Texas Intermediate (WTI) crude oil prices. The experimental results show that the proposed CEEMDAN-XGBOOST outperforms some state-of-the-art models in terms of several evaluation metrics.
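For readers curious how such a decompose-predict-aggregate pipeline fits together, here is a minimal sketch under stated assumptions: it presumes the third-party Python packages PyEMD (distributed as EMD-signal) and xgboost, plus a hypothetical price file, and it is not the authors' implementation.

import numpy as np
from PyEMD import CEEMDAN
from xgboost import XGBRegressor

prices = np.loadtxt("wti_prices.txt")        # hypothetical WTI price series
imfs = CEEMDAN()(prices)                     # decompose into intrinsic mode functions
residue = prices - imfs.sum(axis=0)          # what remains after the IMFs
components = list(imfs) + [residue]

N_LAGS = 4

def lagged(series):
    """Build (features, target) pairs from a sliding window of past values."""
    X = np.column_stack([series[i:len(series) - N_LAGS + i] for i in range(N_LAGS)])
    return X, series[N_LAGS:]

forecast = 0.0
for comp in components:
    X, y = lagged(comp)
    model = XGBRegressor(n_estimators=200).fit(X, y)   # one model per component
    next_x = comp[-N_LAGS:].reshape(1, -1)             # most recent window
    forecast += float(model.predict(next_x)[0])        # aggregate component forecasts

print("one-step-ahead price forecast:", forecast)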
CSR as Gendered Neocoloniality in the Global South. Banu Ozkazanc-Pan - 2018 - Journal of Business Ethics 160 (4):851-864.
Corporate social responsibility (CSR) has generally been recognized as corporate pro-social behavior aimed at remediating social issues external to organizations, while political CSR has acknowledged the political nature of such activity beyond social aims. Despite the growth of this literature, there is still little attention given to gender as the starting point for a conversation on CSR, ethics, and the Global South. Deploying critical insights from feminist work in postcolonial traditions, I outline how MNCs replicate gendered neocolonialist discourses and perpetuate exploitative material dependences between the Global North and South through CSR activities. Specifically, I address issues of neocolonial relations, subaltern agency, and ethics in the context of the gendered global division of labor through the exemplar of Rana Plaza and its aftermath. In all, I offer new directions for CSR scholarship by attending to the intersections of gender, ethics, and responsibility as they relate to corporate actions in the Global South.
Qin Jiayi Zi Xuan Ji [Self-Selected Works of Qin Jiayi (Julia Ching)]. Julia Ching - 2005 - Shandong Jiao Yu Chu Ban She [Shandong Education Press].
Generalized Bayesian Inference Nets Model and Diagnosis of Cardiovascular Diseases. Jiayi Dou, Mingchui Dong & Booma Devi Sekar - 2011 - Journal of Intelligent Systems 20 (3):209-225.
A generalized Bayesian inference nets model (GBINM) is proposed to aid researchers in constructing Bayesian inference nets for various applications. The benefit of such a model is well demonstrated by applying GBINM in constructing hierarchical Bayesian fuzzy inference nets (HBFIN) to diagnose five important types of cardiovascular diseases. The patients' medical records, with doctors' confirmed diagnostic results, obtained from two hospitals in China are used to design and verify HBFIN. Bayes' theorem is used to calculate the propagation of probability and address the uncertainties involved in each sequential stage of the inference nets to deduce the disease. The validity and effectiveness of the proposed approach are clearly demonstrated by the testing results obtained.
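The propagation rule at the heart of such nets is just Bayes' theorem, stated here generically (the paper's actual network structure is not reproduced): for candidate diseases $D_1,\dots,D_n$ and observed evidence $S$,

$$P(D_i \mid S) = \frac{P(S \mid D_i)\, P(D_i)}{\sum_{j=1}^{n} P(S \mid D_j)\, P(D_j)},$$

so each sequential stage of the inference net updates the probability of a disease as new symptoms or measurements arrive.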
Bayesian Reasoning, Misc in Philosophy of Probability
Diagnosis in Philosophy of Science, Misc
Origin of Tweed in Au–Cu–Al Alloys. Jiayi Liu, Zhiquan Liu & Xuejun Jin - 2014 - Philosophical Magazine 94 (1):56-72.
Corona Pan(Dem)Ic: Gateway to Global Surveillance. Regina Sibylle Surber - forthcoming - Ethics and Information Technology.
The essay reviews the digital emergency measures many governments have adopted in an attempt to curb Covid-19. It argues that those 'virologically legitimized' measures may infringe the human right to privacy and mark the transition into a world of global surveillance. At this possible turning point in human history, panic and latent fear seem to fog much-needed farsightedness. Leaving the current state of emotional paralysis and restarting to critically assess the digital pandemic management can serve as an emergency brake against drifting into a new era of digital monitoring.
Computer Ethics in Applied Ethics
The Art of Lamentation in the Works of Pan Yue: "Mourning the Eternally Departed". C. M. Lai & Pan Yue - 1994 - Journal of the American Oriental Society 114 (3):409-425.
20th Century Continental Philosophy in 20th Century Philosophy
Poststructuralism in Continental Philosophy
Investigating Institutional Practice in News Translation: An Empirical Study of a Chinese Agency Translating Discourse on China. Li Pan - 2014 - Perspectives 22 (4):547-565.
Who Suffers When Supervisors Are Unhappy? The Roles of Leader–Member Exchange and Abusive Supervision. Su-Ying Pan & Katrina Jia Lin - 2018 - Journal of Business Ethics 151 (3):799-811.
Driven by the cognitive-neoassociationistic model of aggression, this study examines how supervisors' negative affect at work influences their interaction with subordinates, which further affects subordinate outcomes. Drawing upon research on power/resource interdependence and victim precipitation theory, we also test whether the positive relationship between supervisors' negative affect and abusive supervision is moderated by leader–member exchange (LMX). Using one hundred and eighty supervisor–subordinate dyads from five hotels, we found that supervisors' negative affect at work was positively related to abusive supervision, that LMX buffered the positive association between supervisors' negative affect and abusive supervision, and that the indirect effects of supervisors' negative affect on subordinate outcomes via abusive supervision were buffered by LMX, such that the indirect effects were found only in dyads with lower LMX, but not in dyads with higher LMX. Theoretical contributions and practical implications for managers and organizations are also discussed.
Argumentative Patterns in Chinese Medical Consultations. Dawei Pan, Yanjin Chen & Shier Ju - 2018 - Argumentation 32 (1):37-52.
Medical argumentation in non-Western societies has attracted little attention. In line with the pragma-dialectical approach to the study of argumentation, this article identifies a prototypical argumentative pattern in Chinese medical consultations. In addition to institutional preconditions, whose relevance to the argumentative pattern has been well cited, a factor that may be equally important has remained unnoticed: the preference for certain drugs, treatments or therapeutic measurements on the basis of folk interpretations of medical phenomena in individual ethnic groups. These preferences may be seen as cultural preferences in the medical domain. In this paper, a prototypical argumentative pattern of Chinese medical consultations is provided. Two levels of the pattern are distinguished and discussed: a basic argumentative pattern as presented in the pragma-dialectical approach and its extensions due to cultural preferences as well as institutional constraints. Illustrated by an exemplary analysis on the basis of empirical data collected from Chinese consulting rooms, the impact of cultural preferences on physicians' strategic maneuvering in argumentation is identified and recognized. It is argued that the existence and impact of cultural preferences require attention in medical argumentation.
In Defence of Pan-Dispositionalism. Simon Bostock - 2008 - Metaphysica 9 (2):139-157.
Pan-Dispositionalism – the view that all properties (and relations) are irreducibly dispositional – currently appears to have no takers amongst major analytic metaphysicians. There are those, such as Mumford, who are open to the idea but remain uncommitted. And there are those, such as Ellis and Molnar, who accept that some properties are irreducibly dispositional but argue that not all are. In this paper, I defend Pan-Dispositionalism against this 'Moderate' Dispositionalism.
Dispositions and Powers in Metaphysics
Dare to Be Different? Investigating the Relationship Between Analyst Categorisation Hierarchies and Corporate Social Responsibility (CSR) Conformity. Xin Pan, Xuanjin Chen, Mengxi Yang & Xin Chen - 2020 - Business Ethics: A European Review 29 (1):56-69.
Business Ethics: A European Review, EarlyView.
Introduction. David Pan - 2019 - Télos 2019 (187):3-7.
Science Education in the People's Republic of China. Wenjin Wang, Jiayi Wang, Guizing Zhang, Yong Lang & Victor J. Mayer - 1996 - Science Education 80 (2):203-222.
Urban Industrial Land Expansion and Its Influencing Factors in Shunde: 1995–2017. Chen Xiong, Jiayi Lu & Fangqu Niu - 2020 - Complexity 2020:1-12.
The change in the industrial land is of great significance to the sustainable development of cities. However, scholars have done relatively little research on this subject, especially on the urban industrial land expansion process and its influencing factors. This article selects Shunde, a typical Chinese industrial city of the Guangdong province, using remote sensing interpretation to analyze the spatiotemporal evolution of industrial land expansion from 1995 to 2017, and applies the multiple regression model to analyze the influencing factors. The main conclusions are as follows: the industrial land in Shunde has experienced the development trend of "slow expansion-rapid expansion-slow expansion," and the "fragmentation" of the industrial land space is still prominent. Decentralization, marketization, capital, and labor force have passed the significance test of the model, which are important factors influencing the expansion of the industrial land in Shunde. Among them, decentralization is the primary factor, while marketization has the greatest impact on industrial land expansion in Shunde. The influence of globalization and technical progress is not significant.
Panpsychism, Pan-Consciousness and the Non-Human Turn: Rethinking Being as Conscious Matter. Cornel Du Toit - 2016 - Hts Theological Studies 72 (4):1-11.
It is not surprising that in a time of intensified ecological awareness a new appreciation of nature and the inanimate world arises. Two examples are panpsychism and deep incarnation. Consciousness studies flourish and are related to nature, the animal world and inorganic nature. A metaphysics of consciousness emerges, of which panpsychism is a good example. Panpsychism or panconsciousness or speculative realism endows all matter with a form of consciousness, energy and experience. The consciousness question is increasingly linked to the quantum world, which offers some option in bridging mind and reality, consciousness and matter. In this regard Kauffman's notion of 'triad' is referred to, as well as the implied idea of cosmic mind. This is related to the notion of 'deep incarnation' as introduced by Gregersen. Some analogical links are made between panpsychism and deep incarnation.
Consciousness and Materialism in Philosophy of Mind
Adaptive Gradient-Based Iterative Algorithm for Multivariable Controlled Autoregressive Moving Average Systems Using the Data Filtering Technique. Jian Pan, Hao Ma, Xiao Jiang, Wenfang Ding & Feng Ding - 2018 - Complexity 2018:1-11.
Emotion Regulation and Complex Brain Networks: Association Between Expressive Suppression and Efficiency in the Fronto-Parietal Network and Default-Mode Network. Junhao Pan, Liying Zhan, ChuanLin Hu, Junkai Yang, Cong Wang, Li Gu, Shengqi Zhong, Yingyu Huang, Qian Wu, Xiaolin Xie, Qijin Chen, Hui Zhou, Miner Huang & Xiang Wu - 2018 - Frontiers in Human Neuroscience 12.
Cultural Differences in On-Line Sensitivity to Emotional Voices: Comparing East and West. Pan Liu, Simon Rigoulot & Marc D. Pell - 2015 - Frontiers in Human Neuroscience 9.
Neuroethics in Applied Ethics
RBF Neural Network Backstepping Sliding Mode Adaptive Control for Dynamic Pressure Cylinder Electrohydraulic Servo Pressure System. Pan Deng, Liangcai Zeng & Yang Liu - 2018 - Complexity 2018:1-16.
Ethical Judgments in Business Ethics Research: Definition, and Research Agenda. John R. Sparks & Yue Pan - 2010 - Journal of Business Ethics 91 (3):405-418.
Decades of empirical and theoretical research has produced an extensive literature on the ethical judgments construct. Given its importance to understanding people's ethical choices, future research should explore the psychological processes that produce ethical judgments. In this paper, the authors discuss two steps needed to advance this effort. First, they note that the business ethics literature lacks a single, generally accepted definition of ethical judgments. After reviewing several extant definitions, the authors offer a definition of the construct and discuss its advantages. Second, future ethical judgment research would benefit from greater integration between theories of ethical decision making and theories of social cognition. Drawing upon the Hunt–Vitell (Journal of Macromarketing 6 (Spring), 5–15, 1986; In: N. C. Smith and J. A. Quelch (eds.), Ethics in Marketing. Irwin, Homewood, IL, pp. 775–784, 1992) model and the heuristic-systematic model (Chaiken, Journal of Personality and Social Psychology 39 (November), 752–766, 1980), the authors present a brief research agenda intended to stimulate research on the psychological processes behind ethical judgments.
Carl Schmitt on Culture and Violence in the Political Decision. David Pan - 2008 - Telos: Critical Theory of the Contemporary 2008 (142):49-72.
Though he has become known to his detractors as a theorist who has replaced rational discourse with pure power in his theory of the decision, Carl Schmitt's notion of politics is, on a fundamental level, culturally and ethically based. This cultural and ethical conception of politics permeates his work, not only in texts about explicitly cultural issues, such as his 1916 study of Theodor Däubler's Expressionist Nordlicht or his meditation on the connection between politics and art in Shakespeare in Hamlet oder Hekuba,1 but also in Political Theology, one of the key texts of his theory of decisionism. While commentators….
Doing Well While Doing Bad? CSR in Controversial Industry Sectors. Ye Cai, Hoje Jo & Carrie Pan - 2012 - Journal of Business Ethics 108 (4):467-480.
In this article, we examine the empirical association between firm value and CSR engagement for firms in sinful industries, such as tobacco, gambling, and alcohol, as well as industries involved with emerging environmental, social, or ethical issues, i.e., weapons, oil, cement, and biotech. We develop and test three hypotheses: the window-dressing hypothesis, the value-enhancement hypothesis, and the value-irrelevance hypothesis. Using an extensive US sample from 1995 to 2009, we find that CSR engagement of firms in controversial industries positively affects firm value after controlling for various firm characteristics. To address the potential endogeneity problem, we further estimate a system of equations and change regression and continue to find a positive relation between CSR engagement and firm value. Our findings support the value-enhancement hypothesis and are consistent with the premise that the top management of US firms in controversial industries, in general, considers social responsibility important even though their products are harmful to human beings, society, or the environment.
Long-Time Predictive Modeling of Nonlinear Dynamical Systems Using Neural Networks. Shaowu Pan & Karthik Duraisamy - 2018 - Complexity 2018:1-26.
Le «Pan-Propositionnalisme» de Jean Wyclif. Laurent Cesalli - 2005 - Vivarium 43 (1):124-155.
This paper shows how Wyclif is able at the same time to claim that whatever is is a proposition and to develop a nontrivial theory of propositional truth and falsity. The study has two parts: 1) Starting from Wyclif's fivefold propositional typology – including a propositio realis and a sic esse sicut propositio significat – we will analyse the three different kinds of real predication, the distinction between primary and secondary signification of propositions, and the status of logical truth as opposed to metaphysical truth. Furthermore, the notion of ens logicum will be compared to Walter Burley's propositio in re, of which it appears to be a close analogon. 2) The second part deals with two semantic and metaphysical implications of the "pan-propositionalism": the extended notion of being called upon to explain the truth of so-called non-standard propositions, and the relation between contents of the divine mind as "arch-truth-makers" and eternal as well as contingent truths.
13th/14th Century Philosophy, Misc in Medieval and Renaissance Philosophy
Medieval Logic in Medieval and Renaissance Philosophy
Medieval Philosophy of Language in Medieval and Renaissance Philosophy
Medieval and Renaissance Philosophy, Misc in Medieval and Renaissance Philosophy
A Multi-Agent Based Framework for the Simulation of Human and Social Behaviors During Emergency Evacuations. Xiaoshan Pan, Charles S. Han, Ken Dauber & Kincho H. Law - 2007 - AI and Society 22 (2):113-132.
Many computational tools for the simulation and design of emergency evacuation and egress are now available. However, due to the scarcity of human and social behavioral data, these computational tools rely on assumptions that have been found inconsistent or unrealistic. This paper presents a multi-agent based framework for simulating human and social behavior during emergency evacuation. A prototype system has been developed, which is able to demonstrate some emergent behaviors, such as competitive, queuing, and herding behaviors. For illustration, an example application of the system for safe egress design is provided.
Philosophy of Artificial Intelligence in Philosophy of Cognitive Science
Using a Two-Tier Test to Examine Taiwanese Graduate Students' Misunderstanding of Responsible Conduct of Research. Sophia Jui-An Pan & Chien Chou - 2015 - Ethics and Behavior 25 (6):500-527.
The present study investigates Taiwanese graduate students' general understanding and misunderstanding of Responsible Conduct of Research (RCR). A total of 580 graduate students responded to the self-developed Responsible Conduct of Research Reasoning Test. The results reveal that, first, students did not have sufficient knowledge to reason why a particular instance of research conduct was doable or not. Second, the statistical results show that female students, students majoring in the humanities or the social sciences, doctoral-level students, and students with RCR-related training outperformed others. In addition, the misbehaviors that students judged relatively uncritically comprise the following nine categories: seeing authorship as a property or power, misinterpreting research coauthors' responsibilities, inaccurately conducting the informed-consent process, fabricating and falsifying research data, misinterpreting the correct citation of research sources, holding vague concepts of self-plagiarism, misinterpreting the Taiwan Copyright Act, accepting duplicate-publication practices, and accepting piecemeal publication practices. The present study discusses participating students' major misunderstandings of actual RCR-related practices. The study also presents further implications and suggestions based on the findings.
Professional Ethics in Applied Ethics
Translating Conjunctive Cohesion in Legal Documents. Hanting Pan - 2014 - Perspectives 22 (1):1-20.
The Newspaper as an Epideictic Meeting Point: On the Epidictic Nature of the Newspaper Argumentation. Fernando López Pan - 2015 - Argumentation 29 (3):285-303.
This article shows how epideictic rhetoric and argumentation may be interrelated in a general-interest newspaper framed as a single discourse produced by a collective author. In more specific terms, the view advanced here is that the newspaper as a whole has an epideictic dimension which, in terms of argumentation, is the fundamental or predominant one. The usefulness of this approach is twofold: in terms of rhetoric, to explore the applicability of epideictic rhetoric to journalistic discourse; and in the field of journalism studies, to draw on the theory of epideictic rhetoric so as to refine the conceptualization of the nature of argumentation in the newspaper as such. Given this twofold perspective, the account of newspaper discursive practices will be general, and the classical and contemporary epideictic theory will be briefly summarized. However, the outcome of this analysis is an example of a fruitful encounter between the two fields.
Informal Logic in Logic and Philosophy of Logic
Vice or Virtue? The Impact of Corporate Social Responsibility on Executive Compensation. Ye Cai, Hoje Jo & Carrie Pan - 2011 - Journal of Business Ethics 104 (2):159-173.
We empirically examine the impact of corporate social responsibility (CSR) on CEO compensation using a large sample of US firms from 1996 to 2010. We develop and test two hypotheses: the overinvestment hypothesis based on agency theory and the conflict-resolution hypothesis based on stakeholder theory. We find that the lag of CSR adversely affects both total compensation and cash compensation, after controlling for various firm and board characteristics. Our estimates show that an interquartile increase in CSR is followed by a 4.35% (2.78%) decrease in total (cash) compensation. We also find an inverse association between lagged employee relations and CEO compensation. Our results are robust to the correction for endogeneity using an instrumental variable approach. Taken together, our results support the conflict-resolution hypothesis, but not the CSR overinvestment argument.
Ethics of Executive Remuneration in Applied Ethics
Automatic Imitation in a Rich Social Context with Virtual Characters. Xueni Pan & Antonia F. De C. Hamilton - 2015 - Frontiers in Psychology 6.
Seismic Sedimentology of Sand-Gravel Bodies on the Steep Slope of Rift Basins — A Case Study of the Shahejie Formation, Dongying Sag, Eastern China. Xiaomin Zhu, Rong Pan, Shunli Li, Hongbao Wang, Xin Zhang, Jiawang Ge & Zhiyong Lu - 2018 - Interpretation: SEG 6 (2):SD13-SD27.
A variety of genetic types of reservoirs with good hydrocarbon accumulation conditions have been developed in petroliferous rift basins. The near-provenance, coarse-grained depositional system on the steep slopes of rift basins has become an important oil and gas exploration area. However, due to the large changes in lithologies and difficulties in its identification and characterization, the challenges in oil/gas exploration are significant. Seismic sedimentology, in this case, provides an effective means of identifying and characterizing the complex, coarse-grained sediments. We use a large number of cores, logs, and seismic data and establish the third- and fourth-order sequence frameworks in the Shahejie Formation on the steep slope of the northern Dongying Sag in eastern China. Three types of lithofacies, including conglomerates, sandstones, and mudstones with 12 subspecies facies types, have been identified, and the relationship between different lithofacies types and depositional systems is determined. The relative changes of the lake level control the distribution of depositional systems in a sequence framework. Lowstand system tracts of SQ3 and SQ4 in the Shahejie Formation mainly developed near-shore subaqueous fans and a small number of slump turbidite fans. Small-scale offshore fans mainly develop in lacustrine transgressive systems tracts, and fan deltas, flood-type sublake fans, slump turbidite fans, and near-shore subaqueous fans mainly developed in highstand systems tracts. The study of seismic sedimentology, based on the theory of seismic lithology and seismic geomorphology, has been carried out. Stratal slices are used to identify and characterize the morphology and temporal-spatial distributions of various types of sand-gravel bodies on the steep slopes of the Dongying Sag based on core calibration, and to establish the model of seismic sedimentology for various types of sand-gravel bodies in different systems tracts.
The Joint Moderating Impact of Moral Intensity and Moral Judgment on Consumer's Use Intention of Pirated Software. Mei-Fang Chen, Ching-Ti Pan & Ming-Chuan Pan - 2009 - Journal of Business Ethics 90 (3):361-373.
Moral issues have been included in the studies of consumer misbehavior research, but little is known about the joint moderating effect of moral intensity and moral judgment on the consumer's use intention of pirated software. This study aims to understand the consumer's use intention of pirated software in Taiwan based on the theory of planned behavior (TPB) proposed by Ajzen (Organizational Behavior and Human Decision Processes, 50, 179, 1991). In addition, moral intensity and moral judgment are adopted as a joint moderator to examine their combined influence on the proposed research framework. The results obtained from this Taiwan case reveal that the antecedent constructs proposed in the TPB model – an individual's attitude and subjective norms toward using pirated software, and perceived behavioral control to use pirated software – indeed have positive impacts on the consumer's use intention of pirated software. In addition, the joint moderating effect of moral intensity and moral judgment is manifested in the consumer's use intention of pirated software. The results of this study not only could substantiate the results of consumer misbehavior research, but also could provide some managerial suggestions for Taiwanese government authorities concerned and the related software industries devoted to fighting pirated software.
Moral Judgment, Misc in Meta-Ethics
Software in Philosophy of Computing and Information
Pan Chao: Foremost Woman Scholar of China. J. K. Shryock & Nancy Lee Swann - 1933 - Journal of the American Oriental Society 53 (1):91.
Cultural System Vs. Pan-Cultural Dimensions: Philosophical Reflection on Approaches for Indigenous Psychology. Kwang-Kuo Hwang - 2015 - Journal for the Theory of Social Behaviour 45 (1):2-25.
The three approaches for conducting psychological research across cultures proposed by Berry, namely the imported etic, emic and derived etic approaches, are critically examined for developing culture-inclusive theories in psychology, in order to deal with the enigma left by Wilhelm Wundt. Those three approaches have been restricted to a certain extent by the pan-cultural dimensional approach, which may result in the Orientalism of psychology in understanding people of non-Western cultures. This article is designed to provide the philosophical ground for an alternative cultural system approach to constructing culture-inclusive theories in psychology. Following the principle of cultural psychology, "one mind, many mentalities", the alternative strategy contains two steps: First, based on Bhaskar's critical realism, all universal mechanisms should seek to represent the operation of the human mind. Second, based on Archer's analytical dualism, the mechanisms of the universal mind may be used as frameworks for analyzing any cultural tradition. The culture-inclusive theories thus obtained represent the synchronic morphostasis of a cultural system, which may be used as theoretical frameworks for conducting either qualitative or quantitative empirical research in studying the diachronic morphogenesis of socio-cultural interaction in a particular culture.
Weibo or WeChat? Assessing Preference for Social Networking Sites and Role of Personality Traits and Psychological Factors. Juan Hou, Yamikani Ndasauka, Xuefei Pan, Shuangyi Chen, Fei Xu & Xiaochu Zhang - 2018 - Frontiers in Psychology 9.
Mapping Discourse Analysis in Translation Studies Via Bibliometrics: A Survey of Journal Publications. Meifang Zhang, Hanting Pan, Xi Chen & Tian Luo - 2015 - Perspectives 23 (2):223-239.
Culture Prefigures Cognition in Pan/Homo Bonobos. Sue Savage-Rumbaugh, William M. Fields & Par Segerdahl - 2005 - Theoria 20 (3):311-328.
This article questions traditional experimental approaches to the study of primate cognition. Because of a widespread assumption that cognition in non-human primates is genetically encoded and "natural," these approaches neglect how profoundly apes' cultural rearing experiences affect test results. We describe how three advanced cognitive abilities - imitation, theory of mind and language - emerged in bonobos maturing in a bi-species Pan/Homo culture, and how individual rearing differences led to individual forms of these abilities. These descriptions are taken from a (...) rich ethnographic material, and we argue for the scientific superiority of participant-based ethnographic studies of primate cognition in shared Pan/Homo cultures.
On Chung-Ying Cheng's onto-hermeneutics. Pan Derong & Katherine R. Xin - 1995 - Journal of Chinese Philosophy 22 (2):215-231.
Assessing Mission Drift at Venture Capital Impact Investors. Dilek Cetindamar & Banu Ozkazanc-Pan - 2017 - Business Ethics: A European Review 26 (3):257-270.
In this article, we consider a recent trend whereby private equity available from venture capital firms is being deployed toward mission-driven initiatives in the form of impact investing. Acting as hybrid organizations, these impact investors aim to achieve financial results while also targeting companies and funds to achieve social impact. However, potential mission drift in these VCs, which we define as a decoupling between the investments made and intended aims, might become detrimental to the simultaneous financial and social goals of (...) such firms. Based on a content analysis of mission statements, we assess mission drift and the hybridization level of VC impact investors by examining their missions and their investment practices through the criteria of social and financial logic. After examining eight impact-oriented VC investors and their investments in 164 companies, we find mission drift manifest as a disparity between the means and ends in half of the VC impact investors in our sample. We discuss these findings and make suggestions for further studies.
Pan: der griechische Bocksgott. Versuch einer Monographie. By R. Herbig. Pp. 99; pl. 40 + 14 text figs. Frankfurt a/Main: V. Klostermann, 1949. DM 14. [REVIEW] H. J. Rose & R. Herbig - 1950 - Journal of Hellenic Studies 70:90-90.
Pan American Health Organization. Gilles Dussault - 1995 - Idee 16 (2).
June 2017, 10(3): 463-473. doi: 10.3934/dcdss.2017022
Almost periodic solution for neutral functional dynamic equations with Stepanov-almost periodic terms on time scales
Yongkun Li and Pan Wang
Department of Mathematics, Yunnan University, Kunming, Yunnan 650091, China
* Corresponding author: Yongkun Li
Received November 2015 Revised December 2016 Published February 2017
Fund Project: The first author is supported by the National Natural Sciences Foundation of China under Grant 11361072.
We first propose a concept of almost periodic functions in the sense of Stepanov on time scales. Then, we consider a class of neutral functional dynamic equations with Stepanov-almost periodic terms on time scales in a Banach space. By means of the contraction mapping principle, we establish some criteria for the existence and uniqueness of almost periodic solutions for this class of dynamic equations on time scales. Finally, we give an example to illustrate the effectiveness of our results.
Keywords: Neutral functional dynamic equation, Stepanov-almost periodic function, the contraction mapping principle, time scales.
Mathematics Subject Classification: Primary: 34K40, 34K14; Secondary: 34N05.
Citation: Yongkun Li, Pan Wang. Almost periodic solution for neutral functional dynamic equations with Stepanov-almost periodic terms on time scales. Discrete & Continuous Dynamical Systems - S, 2017, 10 (3) : 463-473. doi: 10.3934/dcdss.2017022
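The existence and uniqueness argument summarized in the abstract rests on the contraction mapping (Banach fixed-point) principle: a self-map with Lipschitz constant below 1 has a unique fixed point, reached by Picard iteration from any starting point. Here is a minimal numerical sketch in Python, with a toy map T chosen purely for illustration (it is not taken from the paper):

```python
# Picard iteration x_{n+1} = T(x_n) for a contraction T with
# |T(x) - T(y)| <= 0.5 |x - y|; the iterates converge to the
# unique fixed point guaranteed by the Banach fixed-point theorem.

def T(x):
    return 0.5 * x + 1.0  # toy contraction on the real line

x = 0.0  # arbitrary starting point
for _ in range(60):
    x = T(x)

print(x)  # ~2.0, the unique solution of x = T(x)
```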
Sadly, as the summer season begins, news of ongoing fires fills the headlines of many of our local newspapers. The increase in the frequency of wildfires is certainly a symptom of the Earth's global warming, and it is striking that 90% of biomass burning is human instigated.
Fires at different locations and in different ecosystems burn differently, depending on the type and structure of the biomass and its moisture content. Especially at northern latitudes, where the temperature is rising every year, fires in boreal forests burn the hottest and contribute the most pollutants per unit area burned (such as carbon monoxide). This is not the case, for instance, for the small, dry grasses of savannahs, where mainly carbon dioxide is produced in a near-complete combustion process. Thus, in turn, fires may further contribute to global warming and compromise air quality. Not only that, but the socio-economic impacts in terms of terrain rehabilitation, property loss and human casualties are also important factors to consider as a result of fires.
Quick reaction to the effects of fires is of utmost importance to identify, evaluate and quantify fire effects over large burned areas. Early decisions based on the short-term severity of burns allow for a better understanding of long-term effects, so the strategy for biomass recovery can be accurately established. However, burn severity is difficult to estimate, and depends on the scale we are looking at, the particular means available to measure it, and the initial objectives, as we may be interested only in a particular species.
An interesting way to evaluate burn severity as an aggregate effect over large areas is to use Landsat 7 (or Landsat 8 now) 30-meter resolution data, which provides near-global coverage of multispectral data every 16 days, at a spatial resolution suitable for broad-area coverage. Complementing the Landsat images with ground data provides meaningful information to contrast and understand the results obtained.
One of the most popular indexes used in burn severity assessment is the Normalized Burn Ratio $(NBR)$, which is defined as the combination of two Landsat bands:
$$NBR=\frac{R_4-R_7}{R_4+R_7},$$
with $R_4$ and $R_7$ the reflectance values at the sensor for band 4 (near-infrared) and band 7 (short-wave infrared), respectively. Band 4 naturally reacts positively to leaf area and plant productivity, whereas band 7 responds to drying and to some non-vegetated characteristics. These bands are applicable to Landsat 5 TM (decommissioned) and Landsat 7 ETM+ instruments; in Landsat 8 the bands are narrower and the near-infrared band 4 in TM/ETM+ has been renamed to band 5 in OLI (read this post for more information on Landsat sensors). In this text the near-infrared band 4 refers to the Landsat 7 band.
Landsat TM/ETM+ band response to burns (USDA Forest Service)
The reflectance of green vegetation and moist surfaces (including wet soil and snow) is therefore large for band 4, just the opposite of band 7, which gets absorbed and offers low reflectance values. The sensitivity of each band to burning is shown in the figure on the left, showing the positive and negative response of bands 4 and 7, respectively, with the largest variance for the latter.
The difference of the two bands enhances the effects of burns in vegetation, yielding the following cases:
Band 4 $>$ Band 7 $(NBR>0)$ for most vegetated areas that are productive
Band 4 $\approx$ Band 7 $(NBR\approx 0)$ for non-productive vegetation, dry soils, rocks, or clouds
Band 4 $<$ Band 7 $(NBR<0)$ for large water stress and burn traces in vegetation
Since the difference of the two bands in $NBR$ is scaled by their sum, the overall brightness across the bands is implicitly normalized, and topographic effects within every scene are removed. The measure of environmental change between an unburned and a burned area after a fire may be obtained just by subtracting the $NBR$ obtained after burning from the one before burning: $$dNBR=NBR_{prefire}-NBR_{postfire}$$
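As an illustration of how $NBR$ and $dNBR$ are computed in practice, here is a minimal NumPy sketch; the reflectance arrays are toy placeholders standing in for the prefire and postfire band 4 and band 7 rasters (real scenes would be read from file with a raster library):

```python
import numpy as np

def nbr(b4, b7):
    """Normalized Burn Ratio from band 4 (NIR) and band 7 (SWIR) reflectance."""
    b4, b7 = np.asarray(b4, float), np.asarray(b7, float)
    denom = b4 + b7
    with np.errstate(divide="ignore", invalid="ignore"):
        out = (b4 - b7) / denom
    return np.where(denom != 0, out, np.nan)  # mask zero-sum (no-data) pixels

# Toy reflectance values standing in for the two scenes (placeholders).
pre_b4, pre_b7 = np.array([0.45, 0.40]), np.array([0.15, 0.18])
post_b4, post_b7 = np.array([0.20, 0.38]), np.array([0.30, 0.20])

dnbr = nbr(pre_b4, pre_b7) - nbr(post_b4, post_b7)
print(1000 * dnbr)  # scaled by 1000, as used for the severity classes below
```

In this toy example the first pixel yields a $dNBR$ of about 700 (a plausible burned pixel) and the second about 69 (effectively unchanged).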
This assumes unburned terrain is similar between the two dates compared in terms of phenology and moisture. $dNBR$ is near zero for background unburned areas, whereas strong positive or negative values indicate a decay or an enhancement of vegetation productivity, respectively. The former is usual in forests and shrub areas, where fire effects have a long-term impact on biomass productivity. On the contrary, when burn severity is light and affects herbaceous terrains, the release of nutrients after the fire may trigger vegetation vigor.
During the summer of 2012, two fires started at the locations of La Jonquera and Portbou, in the very north-eastern region of Spain near France (BBC News). That summer had been the driest of the last 40 years in Catalonia, and the forests weren't in very good condition due to the heavy snowfalls two years before. The fires started on the 22nd of July and were considered extinguished by the 30th of that month. More than 13,000 hectares of typical Mediterranean vegetation (holm oak and pine forests, herbaceous fields and shrubs) burned easily, and unfortunately four people died during the events.
The fire near La Jonquera, Catalonia, Spain
A first look at the burned area from the La Jonquera wildfires may be carried out by producing a false color image with ETM+ data from the 11th of August 2012, just some days after the fire. It's important that, given the definition of $dNBR$, the prefire and postfire images represent similar surface features. Even if unburned landscapes tend to change dynamically over time, ideally both images will represent moisture content and phenology as similarly as possible. This is usually related to the growing seasons, and may be analyzed in an RGB composite of bands 7, 4 and 2, respectively.
Two dates have been selected as representative for the initial assessment of the burn: the 11th of August 2012 as the postfire image (some days after the fire extinction) and the 9th of August 2011 as the prefire one (1 year before the fire).
Initial assessment false color prefire (top) and postfire (bottom) images
The Scan-Line-Corrector problems are clearly shown as black strips in the images. With this band combination healthy vegetation shows bright green (A), and bare soil areas pink. Oranges and browns represent sparsely vegetated areas, whereas urban areas are shown in magenta tones (C). The burned areas instead are clearly displayed in red in the postfire image due to the large reflectance at band 7. The recent large fire at La Jonquera stands out clearly (B1), while the Portbou fire (B2) appears much smaller. Also interesting are the remains of older fires, like the one in B3, which burned near Sant Climent Sescebes in 2006. Water bodies are dark blue (D), and light grey lines represent roads and paved areas. Apart from the cloudy regions in the prefire scene, both images are comparable in terms of vegetation growth status, and suitable for $NBR$ comparison. Now with the same two scenes the $NBR$ calculation may be performed, obtaining the following images:
Initial assessment prefire (top) and postfire (bottom) $NBR$
The strong negative values in (A) clearly suggest severe water stress and the consequences of burns, whereas large positive bright values like (C) represent vegetated areas that are productive. Parts of the landscape which are less productive (D) display lower positive values instead. All mid-gray values in the picture correspond to $NBR$ near zero, as for non-productive vegetation, drier soils, and clouds (B). The difference between the two images produces the $dNBR$, which is scaled by 1000 and evaluated specifically within the burn perimeter, as shown in the image below:
Initial assessment $dNBR$ severity classification
The severity of the burn has been divided into seven classes ranging from enhanced regrowth (low values of $dNBR$ below -100) to high severity (larger than +660). The specific interval values of each class depend on the seasonality and timing, and a fine tuning may be achieved with ground data and further image processing. Unburned areas are shown in light gray in the image, with values -100 to +100 typically near zero, meaning no change between the prefire and postfire images. The next four classes range from +100 to +1300 and define the severity of the burn from low to high. The area burned in each of the classes within the defined perimeter is:
Enhanced regrowth (high+low): 24 hectares
Unburned: 1,207 hectares
Low severity: 2,103 hectares
Low-moderate severity: 2,242 hectares
Moderate-high severity: 2,796 hectares
High severity: 4,382 hectares
Total: 12,754 hectares (11,523 hectares burned)
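The thresholding that produces the class map and the per-class area tally above can be sketched as follows. Note that the post fixes only the −100, +100, +660 and +1300 breakpoints; the +270 and +440 values used here are the commonly cited Key and Benson (FIREMON) thresholds and are my assumption. The 0.09 ha per pixel follows from the 30 m Landsat pixel size.

```python
import numpy as np

# dNBR scaled by 1000; bins: below -100 = enhanced regrowth, -100..100 =
# unburned, then low / low-moderate / moderate-high / high severity.
# The +270 and +440 breakpoints are assumed (Key & Benson, FIREMON);
# the post itself only fixes -100, +100, +660 and +1300.
edges = [-100, 100, 270, 440, 660, 1300]
labels = ["enhanced regrowth", "unburned", "low severity",
          "low-moderate severity", "moderate-high severity", "high severity"]

dnbr = np.array([-150, 50, 200, 300, 500, 800])  # toy pixel values
classes = np.digitize(dnbr, edges)               # bin index per pixel

ha_per_pixel = 30 * 30 / 10_000  # 0.09 ha for a 30 m Landsat pixel
for k, name in enumerate(labels):
    n_pix = np.sum(classes == k)
    print(f"{name}: {n_pix * ha_per_pixel:.2f} ha")
```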
The total number approximates that of the official news reports, but a final number should be based on further fine processing. It's important to note as well that the use of imagery close to the actual fire, as is the case here, may overestimate the real severity of the burn. A more representative assessment of the actual severity (extended assessment) is usually done using images from the following growing season after the fire, compared to a scene of the prefire growing epoch. By doing so, burned vegetation has had some time to recover and to show a response beyond the initial severity. On the other hand, parts of the vegetation that were green after the fire may have died by the next growing season.
I tried to get some Landsat passes from this year's growing season around the months of May-June, but I feel that given the outstanding amount of rain this year I'll have to wait a bit to be able to compare them to last year's season. A nice starting point, though, for a following post. But all in all I believe that the potential of remote sensing for fire assessment is clearly proven; and that's only an introduction, more is to come!
I would like to acknowledge here the great infrastructure of the US Geological Survey (USGS) in processing the Landsat surface reflectance data and making them freely available to everyone, together with the nice USDA Forest Service reports. That's the way!
Research | Open | Published: 15 September 2016
Distance-based topological polynomials and indices of friendship graphs
Wei Gao1,
Mohammad Reza Farahani2,
Muhammad Imran3,4 &
M. R. Rajesh Kanna5
SpringerPlus volume 5, Article number: 1563 (2016)
Drugs and chemical compounds are often modeled as graphs in which each vertex of the graph represents an atom of the molecule and covalent bonds between atoms are represented by edges between their corresponding vertices. Topological indices defined over this molecular graph have been shown to be strongly correlated with various chemical properties of the compounds. In this article, by means of graph structure analysis, we determine several distance-based topological indices of the friendship graph \( F_{3}^{(n)} \), which appears widely in various classes of new nanomaterials, drugs and chemical compounds.
Recent years have witnessed rapid development of nanomaterials and drugs, keeping pace with the expansion of the pharmacopeia. The new issues raised by this development create a need to test the physical, chemical and biological properties of these new chemical compounds, nanomaterials and drugs, which greatly increases researchers' workload. Moreover, to guarantee reliable results, adequate equipment, reagents and human resources are needed to test the performance and the side effects of the proposed chemical compounds, nanomaterials and drugs. Nevertheless, research budgets in developing countries (like some countries in Southeast Asia, South America and Africa) often cannot cover the relevant equipment and reagents. Fortunately, previous research has shown that the chemical characteristics of chemical compounds, nanomaterials and drugs are closely related to their molecular structures. Simply speaking, learning the indicators based on topological indices would benefit medical and pharmaceutical scientists by supporting their understanding of the properties of these chemical compounds, nanomaterials and drugs. This also helps to compensate for shortages of experimental facilities. In this way, it can be predicted that techniques for topological index computation will be welcomed in developing countries, as they provide medical and biological information on new chemical compounds, nanomaterials and drugs without requiring chemical experiment conditions.
In the graph computation model, the structure of chemical compounds, nanomaterials and drugs is described as a graph. Each atom is represented by an individual vertex, and the chemical bonds among atoms are represented by edges. Let G be a graph which corresponds to a chemical structure with atom (vertex) set \( V(G) \) and chemical bond (edge) set \( E(G) \). The distance between vertices u and v, \( d_{G} (u,v) \) or \( d(u,v) \), in a graph is the number of edges in a shortest path connecting them, and the diameter of a graph G, \( D(G) \), is the longest topological distance in G. The degree \( d_{v} (G) \) or \( d_{v} \) of a vertex \( v \in V(G) \) is the number of vertices of G adjacent to v. A vertex \( v \in V(G) \) is said to be isolated, pendent, or fully connected if \( d_{v} = 0 \), \( d_{v} = 1 \), or \( d_{v} = n - 1 \), respectively.
A topological index can be described as a real-valued map \( f:G \to R^{ + } \) which maps each chemical structure to a real number. Over the decades, indices such as the PI index, Wiener index, Harmonic index and Zagreb index have been proposed to test the features of chemical molecules. Meanwhile, a number of papers have been devoted to computing these topological indices for special molecular graphs in chemical and pharmaceutical engineering.
The Wiener index of G was introduced by chemist Harold Wiener in 1947 to demonstrate correlations between physicochemical properties of organic compounds and the index of their molecular graphs and is defined as follows (Wiener 1947):
$$ W(G) = \frac{1}{2}\sum\limits_{v \in V (G )} {\sum\limits_{u \in V (G )} {d(u,v)} } $$
The Hyper-Wiener index is one of the distance-based graph invariants (structure descriptors) used for predicting physico-chemical properties of organic compounds. It was introduced by M. Randić in 1993. The Hyper-Wiener index of G is defined as follows (Wiener 1948; Randić 1993; Randić et al. 1994):
$$ WW(G) = \frac{1}{2}\sum\limits_{v \in V(G)} {\sum\limits_{u \in V(G)} {(d(u,v) + d(u,v)^{2} )} } $$
The Hosoya polynomial was first introduced by H. Hosoya, in 1988 (Hosoya 1989) and define as follows:
$$ H\left( {G,x} \right) = \frac{1}{2}\sum\limits_{v \in V(G)} {\sum\limits_{u \in V(G)} {x^{{d\left( {u,v} \right)}} } } $$
In references (Polansky and Bonchev 1986; Sridhara et al. 2015; Gao et al. 2016a, b; Gao and Farahani 2016; Schultz 1989; Muller et al. 1990; Gutman and Polansky 1986; Trinajstic 1993; Klavžar and Gutman 1996; Gutman and Klavžar 1997; Hua 2009; Deng 2007; Chen et al. 2008; Zhou 2006), some properties and more historical details of the Wiener and Hyper-Wiener indices and the Hosoya polynomial of molecular graphs are studied.
For more details on the applications and mathematical properties of these distance-based structure descriptors (the Wiener and Hyper-Wiener indices and the Hosoya polynomial), see the paper series (Wiener 1948; Randić 1993; Randić et al. 1994; Hosoya 1989; Polansky and Bonchev 1986; Sridhara et al. 2015; Gao et al. 2016a, b; Gao and Farahani 2016).
In 1989, H.P. Schultz introduced a graph theoretical descriptor for characterizing alkanes by an integer number. The "Schultz molecular topological index" (MTI) of the graph G is defined as follows (Schultz 1989; Muller et al. 1990)
$$ MTI\left( G \right) = \sum\limits_{i = 1}^{N} {[{\mathbf{d}}({\mathbf{A}} + {\mathbf{D}})]_{i} } $$
where the two N × N matrices A and D are the adjacency and distance matrices of G (Gutman and Polansky 1986; Trinajstic 1993). \( {\mathbf{d}} = (d_{1} ,d_{2} , \cdots ,d_{N} ) \) is the \( 1 \times N \) vector of the degrees of the vertices of G. The (i, j)-th entry of the distance matrix D, denoted by \( D_{ij} \), is just the distance between the vertices i and j, namely the length of a shortest path connecting i and j (Gutman and Polansky 1986; Trinajstic 1993). Recall that the degree \( d_{i} \) of the vertex \( p_{i} \) is the number of first neighbors of this vertex or, what is the same, the sum of the entries of the i-th column of A. Note that in the mathematical literature, instead of "degree" the name "valency" is sometimes used, which, of course, should be distinguished from valency in chemistry (Klavžar and Gutman 1996; Gutman and Klavžar 1997).
The Wiener index (or Wiener number) of a connected graph G is equal to the sum of distances between all pairs of vertices of G:
$$ W\left( G \right) = \frac{1}{2}\sum\limits_{i = 1}^{N} {\sum\limits_{j = 1}^{N} {D_{ij} } } $$
For recent results on the Schultz molecular topological index see (Klavžar and Gutman 1996; Gutman and Klavžar 1997; Hua 2009; Deng 2007; Chen et al. 2008; Zhou 2006). The degree distance of G is defined as
$$ DD(G) = \frac{1}{2}\sum\limits_{{\left\{ {u,v} \right\} \subset V(G)}} {\left( {d_{u} + d_{v} } \right)d\left( {u,v} \right)} $$
where \( d_{u} \) and \( d_{v} \) are the degrees of vertices u and v of G. The degree distance seems to have been considered first in connection with certain chemical applications by Dobrynin and Kochetova (1994) and at the same time by Gutman (1994), who named this degree distance index the Schultz index. This name was eventually accepted by most other authors [see, e.g., (Zhou 2006; Ilic et al. 2010; Dobrynin 1999; Schultz and Schultz 2000)], and in what follows we denote the Schultz index of G by Sc(G).
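These matrix-based definitions are straightforward to evaluate numerically. Below is a small NumPy sketch for the triangle K3, a toy graph chosen only for illustration, where A = D since all distances equal 1; the degree distance here sums each unordered pair once, which is equivalent to the one-half double sum over ordered pairs written above:

```python
import numpy as np

# Toy example: the triangle K3, for which A = D (all distances are 1).
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
D = A.copy()                 # distance matrix of K3
d = A.sum(axis=0)            # degree (valency) vector, here (2, 2, 2)

MTI = (d @ (A + D)).sum()    # Schultz molecular topological index
W = D.sum() / 2              # Wiener index (half the ordered double sum)
# Degree distance: sum over unordered pairs of (d_u + d_v) * d(u, v).
DD = sum((d[u] + d[v]) * D[u, v]
         for u in range(3) for v in range(u + 1, 3))

print(MTI, W, DD)  # 24.0, 3.0, 12.0 for K3
```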
Later in 1997, Klavžar and Gutman defined other basic structure descriptors. The modified Schultz index of G is defined as:
$$ Sc*(G) = \frac{1}{2}\sum\limits_{{\left\{ {u,v} \right\} \subset V(G)}} {\left( {d_{u} \times d_{v} } \right)d\left( {u,v} \right)} $$
Now, there are two topological polynomials of a graph G (Gutman 1994) as follows:
$$ Sc(G,x)\, = \,\frac{1}{2}\sum\limits_{{\left\{ {u,v} \right\} \subset V(G)}} {\left( {d_{u} + d_{v} } \right)x^{d(u,v)} } $$
$$ Sc*(G,x)\, = \,\frac{1}{2}\sum\limits_{{\left\{ {u,v} \right\} \subset V(G)}} {\left( {d_{u} \times d_{v} } \right)x^{d(u,v)} } . $$
Obviously,
$$ Sc(G) = \left. \frac{\partial Sc(G,x)}{\partial x} \right|_{x = 1} $$
$$ Sc^{*}(G) = \left. \frac{\partial Sc^{*}(G,x)}{\partial x} \right|_{x = 1} $$
Several contributions on these and related indices can be found in (Iranmanesh and Alizadeh 2009a, b; Alizadeh et al. 2009; Halakoo et al. 2009; Heydari 2010; Hedyari 2011; Farahani and Vlad 2012; Farahani 2013a, b, c; Farahani 2014; Farahani et al. 2015, 2016; Farahani and Gao 2015; Gao and Farahani 2016; Bokhary et al. 2016; Imran et al. 2016).
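To make the polynomial-to-index relation above concrete, here is a small Python sketch that builds the coefficient maps of Sc(G,x) and Sc*(G,x) for a toy graph (the path on three vertices) and recovers the indices as the derivative at x = 1; each unordered pair is summed once, equivalent to the one-half double sum over ordered pairs:

```python
from collections import Counter

def schultz_polynomials(edges, dist):
    """Coefficients {distance: coeff} of Sc(G,x) and Sc*(G,x), summing each
    unordered pair once (equivalent to the 1/2 double sum over ordered pairs)."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    sc, sc_star = Counter(), Counter()
    for (u, v), d in dist.items():
        sc[d] += deg[u] + deg[v]
        sc_star[d] += deg[u] * deg[v]
    return sc, sc_star

# Toy example: the path 0-1-2, with its unordered-pair distances.
edges = [(0, 1), (1, 2)]
dist = {(0, 1): 1, (1, 2): 1, (0, 2): 2}
sc, sc_star = schultz_polynomials(edges, dist)

# The index is the derivative of the polynomial at x = 1,
# i.e. the sum over distances d of coefficient(d) * d.
Sc = sum(c * d for d, c in sc.items())            # 6*1 + 2*2 = 10
Sc_star = sum(c * d for d, c in sc_star.items())  # 4*1 + 1*2 = 6
print(dict(sc), dict(sc_star), Sc, Sc_star)
```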
The friendship graph is the graph obtained by taking n copies of the cycle graph \( C_{3} \) with a vertex in common. It is denoted by \( F_{3}^{(n)} \) (Kanna et al. 2016). Friendship graph \( F_{3}^{(n)} \) contains \( 2n + 1 \) vertices and \( 3n \) edges as shown in the figures.
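As a small pure-Python sketch, the following builds \( F_{3}^{(n)} \) as an edge list, with vertex 0 playing the role of the common center (the labeling is my own choice), and checks the vertex and edge counts:

```python
def friendship_graph(n):
    """Edge list of F_3^(n): n triangles sharing the common vertex 0."""
    edges = []
    for i in range(n):
        a, b = 2 * i + 1, 2 * i + 2   # the two outer vertices of triangle i
        edges += [(0, a), (0, b), (a, b)]
    return edges

n = 4
edges = friendship_graph(n)
vertices = {v for e in edges for v in e}
assert len(vertices) == 2 * n + 1     # 2n + 1 vertices
assert len(edges) == 3 * n            # 3n edges
```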
As we mentioned, new nanomaterials and drugs are particularly useful in developing regions, and topological indices are helpful for testing their chemical properties. In this paper, we present the distance-based indices of the friendship graph \( F_{3}^{(n)} \). The results obtained here show promising prospects for application in materials and chemical engineering.
Theorem 1
Let \( F_{3}^{(n)} \) be the friendship graph \( \forall n \in {\mathbb{N}} - \left\{ 1 \right\} \) . Then the Hosoya polynomial and the Wiener index of \( F_{3}^{(n)} \) are equal to:
$$ H(F_{3}^{(n)} ,x)\, = \,3nx^{1} + 2n\left( {n - 1} \right)x^{2} $$
$$ W(F_{3}^{(n)} )\, = \,4n^{2} - n. $$
Proof of Theorem 1 Consider the friendship graph \( F_{3}^{(n)} \) depicted in Fig. 1 and defined as above, with \( 2n + 1 \) vertices and \( 3n \) edges.
Some examples of friendship graph (in order \( F_{3}^{(4)} \), \( F_{3}^{(8)} \), \( F_{3}^{(n)} \), respectively)
According to Fig. 1 and the definition of the friendship graph \( F_{3}^{(n)} \), we know that the center vertex of \( F_{3}^{(n)} \) has degree \( 2n \) and the other \( 2n \) vertices have degree 2. And obviously,
$$ \left| E(F_{3}^{(n)}) \right| = \frac{1 \times 2n + 2 \times 2n}{2} = n \times \left| E(C_{3}) \right| = 3n $$
Also, from Fig. 1 and the edge set of the friendship graph \( F_{3}^{(n)} \), one can see that there are \( 2n \) 1-edge-paths between the center vertex and the vertices of degree 2, and \( n \) 1-edge-paths between pairs of vertices \( v,u \in V(F_{3}^{(n)}) \) of degree 2. Thus the coefficient of the first term of the Hosoya polynomial of the friendship graph \( F_{3}^{(n)} \) is equal to the number of its edges.
For the second term of the Hosoya polynomial of \( F_{3}^{(n)} \), we see that there are \( \frac{(2n)(2n - 2)}{2} \) 2-edge-paths between pairs of vertices \( v,u \in V(F_{3}^{(n)} ) \) with degree 2. So, the coefficient of the second term of the Hosoya polynomial is equal to \( 2n^{2} - 2n \).
Here, by what has been mentioned above, we have the following computations for the Hosoya polynomial of the friendship graph \( F_{3}^{(n)} \) and, consequently, the Wiener index of \( F_{3}^{(n)} \).
$$ H(F_{3}^{(n)} ,x)\, = \,\frac{1}{2}\sum\limits_{{u \in V\left( {F_{3}^{(n)} } \right)}} {\sum\limits_{{v \in V\left( {F_{3}^{(n)} } \right)}} {x^{d(u,v)} } } \, = \,3nx^{1} + 2n\left( {n - 1} \right)x^{2} $$
$$ W(F_{3}^{(n)} ) = \left. \frac{\partial }{\partial x}H\left( {F_{3}^{(n)} ,x} \right) \right|_{x = 1} = 3n\left( 1 \right) + 2n\left( {n - 1} \right)\left( 2 \right) = 4n^{2} - n. $$
By definition of the Hosoya polynomial of an arbitrary graph G with \( \left| {V\left( G \right)} \right| \) vertices, it is easy to see that
$$ H(G,1) = \binom{\left| V(G) \right|}{2} = \frac{\left| V(G) \right|\left( \left| V(G) \right| - 1 \right)}{2}. $$
In particular, for \( G = \, F_{3}^{(n)} \), it is easy to see that
$$ H(F_{3}^{(n)} ,1) = 3n + 2n(n - 1) = 2n^{2} + n = \binom{2n + 1}{2} = \frac{(2n + 1)(2n)}{2} = n(2n + 1) $$
This completes the proof of Theorem 1.\(\square \)
Theorem 2

The Hyper-Wiener index of the friendship graph \( F_{3}^{(n)} (\forall n \in {\mathbb{N}} - \left\{ 1 \right\}) \) is equal to:
$$ WW(F_{3}^{(n)} ) = 12n^{2} - 6n $$
Proof of Theorem 2 Consider the friendship graph \( F_{3}^{(n)} \) depicted in Fig. 1. Using the above proof and the Wiener index of the friendship graph \( F_{3}^{(n)} \), we see that
$$ \begin{aligned} WW\left( F_{3}^{(n)} \right) &= \frac{1}{2}\sum\limits_{v \in V\left( F_{3}^{(n)} \right)} {\sum\limits_{u \in V\left( F_{3}^{(n)} \right)} {\left( d\left( {v,u} \right) + d\left( {v,u} \right)^{2} \right)} } \\ &= W\left( F_{3}^{(n)} \right) + \frac{1}{2}\sum\limits_{u,v \in V\left( F_{3}^{(n)} \right)} {d\left( {v,u} \right)^{2} } \\ &= 4n^{2} - n + 8n^{2} - 5n \\ &= 12n^{2} - 6n. \\ \end{aligned} $$
\(\square \)
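The distance counts used in the two proofs are easy to check numerically. The sketch below rebuilds the edge list from the earlier snippet, computes all pairwise distances by breadth-first search, and verifies the Hosoya coefficients and the formulas of Theorems 1 and 2 (the hyper-Wiener value follows the convention used in this paper, i.e., W plus the sum of squared distances over unordered pairs):

```python
from collections import deque

def friendship_graph(n):
    edges = []
    for i in range(n):
        a, b = 2 * i + 1, 2 * i + 2
        edges += [(0, a), (0, b), (a, b)]
    return edges

def all_pairs_distances(n_vertices, edges):
    """Distances for all unordered vertex pairs, via BFS from each vertex."""
    adj = {v: [] for v in range(n_vertices)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    dist = {}
    for s in range(n_vertices):
        d = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in d:
                    d[w] = d[u] + 1
                    q.append(w)
        for t, dt in d.items():
            if s < t:
                dist[(s, t)] = dt
    return dist

n = 5
dist = all_pairs_distances(2 * n + 1, friendship_graph(n))
ones = sum(1 for d in dist.values() if d == 1)
twos = sum(1 for d in dist.values() if d == 2)
W = sum(dist.values())
WW = W + sum(d * d for d in dist.values())

assert ones == 3 * n and twos == 2 * n * (n - 1)   # Hosoya coefficients
assert W == 4 * n**2 - n                           # Theorem 1
assert WW == 12 * n**2 - 6 * n                     # Theorem 2
```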
Theorem 3

Let \( F_{3}^{(n)} \) be the friendship graph \( (\forall n \ge 2) \). Then,
The Schultz polynomial of \( F_{3}^{(n)} \) is equal to
$$ Sc(F_{3}^{(n)} ,x) = 2n(n + 4)x + 8n(n - 1)x^{2} $$
The modified Schultz polynomial of \( F_{3}^{(n)} \) is equal to
$$ Sc*(F_{3}^{(n)} ,x) = 4n(n + 1)x + 8n(n - 1)x^{2} $$
Proof of Theorem 3 Consider the graph of \( F_{3}^{(n)} \) depicted in Fig. 1. Using the definition of \( F_{3}^{(n)} \) and the results from the proof of Theorem 1, the number of all distinct types of 1- and 2-edge-paths is given in Table 1. On the other hand, from the definitions of the Schultz and modified Schultz polynomials of a graph \( G \), we can obtain \( Sc\left( {G,x} \right) \) and \( Sc*\left( {G,x} \right) \) by inserting the coefficients \( d_{u} + d_{v} \) and \( d_{u} \times d_{v} \) into the Hosoya polynomial.
Table 1 The number of all distinct types of 1 and 2-edge-paths
Here, we have the following computations for the Schultz and modified Schultz polynomials of the friendship graph \( F_{3}^{(n)} \)
$$ \begin{aligned} Sc\left( {F_{3}^{(n)} ,x} \right) &= \frac{1}{2}\sum\limits_{{u,v \in V(F_{3}^{(n)} )}} {\left( {d_{u} + d_{v} } \right)x^{d(u,v)} } \\ &= 2n\left( {n + 2} \right)x + 4nx + 0x^{2} + 8n\left( {n - 1} \right)x^{2} \\ &= 2n\left( {n + 4} \right)x + 8n\left( {n - 1} \right)x^{2} \\ \end{aligned} $$
$$ \begin{aligned} Sc*\left( {F_{3}^{(n)} ,x} \right) &= \frac{1}{2}\sum\limits_{{u,v \in V(F_{3}^{(n)} )}} {\left( {d_{u} \times d_{v} } \right)x^{d(u,v)} } \\ &= 4n^{2} x + 4nx + 0x^{2} + 8n\left( {n - 1} \right)x^{2} \\ &= 4n\left( {n + 1} \right)x + 8n\left( {n - 1} \right)x^{2} \\ \end{aligned} $$
Now, the proof of the theorem is complete.\(\square \)
Theorem 4

Let \( F_{3}^{(n)} \) be the friendship graph \( (\forall n \ge 2) \), then
The Schultz index of the friendship graph \( F_{3}^{(n)} (\forall n \ge 2) \) is equal to
$$ Sc(F_{3}^{(n)} ) = 2n(9n - 7) $$
The modified Schultz index of the friendship graph \( F_{3}^{(n)} (\forall n \ge 2) \) is equal to
$$ Sc*\left( {F_{3}^{(n)} } \right) = \, 4n\left( {5n - 3} \right). $$
Proof of Theorem 4 By definitions of the Schultz and modified Schultz indices, we know that
$$ \begin{aligned} Sc\left( {F_{3}^{(n)} } \right) &= \left. {\frac{{\partial Sc(F_{3}^{(n)} ,x)}}{\partial x}} \right|_{x = 1} \\ &= \frac{\partial }{\partial x}\left( {2n\left( {n + 4} \right)x + 8n\left( {n - 1} \right)x^{2} } \right)_{x = 1} \\ &= 2n\left( {9n - 7} \right). \\ \end{aligned} $$
And also modified Schultz index
$$ \begin{aligned} Sc*\left( {F_{3}^{(n)} } \right) &= \left. {\frac{{\partial Sc*(F_{3}^{(n)} ,x)}}{\partial x}} \right|_{x = 1} \\ &= \frac{\partial }{\partial x}\left( {4n\left( {n + 1} \right)x + 8n\left( {n - 1} \right)x^{2} } \right)_{x = 1} \\ &= 4n\left( {5n - 3} \right). \\ \end{aligned} $$
Here, we complete the proof of Theorem 4.\(\square \)
In this article, by means of graph structure analysis, we have determined several distance-based topological indices of the friendship graph \( F_{3}^{(n)} \), which appears widely in various classes of new nanomaterials, drugs and chemical compounds. These results will be helpful for understanding the underlying molecular topologies of these graphs.
Alizadeh Y, Iranmanesh A, Mirzaie S (2009) Computing Schultz polynomial, Schultz index of C 60 fullerene by gap program. Digest J Nanomater Bios 4(1):7–10
Bokhary SA, Imran M, Manzoor S (2016) On molecular topological properties of dendrimers. Can J Chem 94(2):120–125
Chen S, Jang Q, Hou Y (2008) The Wiener and Schultz index of nanotubes covered by C4. MATCH Commun Math Comput Chem 59:429–435
Deng H (2007) The Schultz molecular topological index of polyhex nanotubes. MATCH Commun Math Comput Chem 57:677–684
Dobrynin AA (1999) Explicit relation between the Wiener index and the Schultz index of cata-condensed benzenoid graphs. Croat Chem Acta 72:869–874
Dobrynin AA, Kochetova AA (1994) Degree distance of a graph: a degree analogue of the Wiener index. J Chem Inform Comput Sci 34:1082–1086
Farahani MR (2013a) Hosoya, Schultz, modified Schultz polynomials and their topological indices of benzene molecules: first members of polycyclic aromatic hydrocarbons (PAHs). Int J Theor Chem 1(2):09–16
Farahani MR (2013b) On the Schultz polynomial, modified Schultz polynomial, Hosoya polynomial and Wiener index of circumcoronene series of benzenoid. J Appl Math Inform 31(5–6):595–608
Farahani MR (2013c) On the Schultz and modified Schultz polynomials of some harary graphs. Int J Appl Discrete Math 1(1):1–8
Farahani MR (2014) Schultz indices and Schultz polynomials of harary graph. Pac J Appl Math 6(3):77–84
Farahani MR, Gao W (2015) The Schultz index and Schultz polynomial of the Jahangir Graphs J5, m. Appl Math 6:2319–2325
Farahani MR, Vlad MP (2012) On the Schultz, modified Schultz and Hosoya polynomials and derived indices of capra-designed planar benzenoid. Studia UBB Chemia 57(4):55–63
Farahani MR, Rajesh Kanna MR, Gao W (2015) The Schultz, modified Schultz indices and their polynomials of the Jahangir graphs Jn, m for integer numbers n = 3, m > 3. Asian J Appl Sci 3(6):823–827
Farahani MR, Rajesh Kanna MR, Gao W (2016) Schultz polynomial of harary graph H2r+1,2m+1. J Chem Biol Phys Sci 6(1):294–301
Gao W, Farahani MR (2016a) Computing the reverse eccentric connectivity index for certain family of nanocones and fullerene structures. J Nanotechnol 2016:30. doi:10.1155/2016/3129561
Gao W, Farahani MR (2016b) Degree-based indices computation for special chemical molecular structures using edge dividing method. Appl Math Nonlinear Sci 1:94–117
Gao W, Wang WF, Farahani MR (2016a) Topological indices study of molecular structure in anticancer drugs. J Chem 8:116. doi:10.1155/2016/3216327
Gao W, Farahani MR, Shi L (2016b) Forgotten topological index of some drug structures. Acta Medica Mediterranea 32:579–585
Gutman I (1994) Selected properties of the Schultz molecular topological index. J Chem Inform Comput Sci 34:1087–1089
Gutman I, Klavžar S (1997) Bounds for the Schultz molecular topological index of benzenoid systems in terms of the Wiener index. J Chem Inform Comput Sci 37:741–744
Gutman I, Polansky OE (1986) Mathematical concepts in organic chemistry. Springer-Verlag, Berlin
Halakoo O, Khormali O, Mahmiani A (2009) Bounds for Schultz index of pentachains. Digest J Nanomater Bios 4(4):687–691
Hedyari A (2011) Wiener and Schultz indices of V-naphtalenic nanotori. Optoelectron Adv Mater Rapid Commun 5(7):786–789
Heydari A (2010) On the modified Schultz index of C 4 C 8(s) nanotubes and nanotorus. Digest J Nanomater Bios 5(1):51–56
Hosoya H (1989) On some counting polynomials in chemistry. Discrete Appl Math 19:239–257
Hua H (2009) Wiener and Schultz molecular topological indices of graphs with specified cut edges. MATCH Commun Math Comput Chem 61:643–651
Ilic A, Klavžar S, Stevanovic D (2010) Calculating the degree distance of partial hamming graphs. MATCH Commun Math Comput Chem 63:411–424
Imran M, Baig AQ, Ali H (2016) On topological properties of dominating David derived networks. Can J Chem 94(2):137–148
Iranmanesh A, Alizadeh Y (2009a) Computing Szeged and Schultz indices of HAC 5 C 7 C 9[p,q] nanotube by gap program. Digest J Nanomater Bios 4(1):67–72
Iranmanesh A, Alizadeh Y (2009b) Computing Hyper-Wiener and Schultz indices of TUZC 6[p,q] nanotube by gap program. Digest J Nanomater Bios 4(1):607–611
Kanna MR, Kumar RK, Farahani MR (2016) Specific energies of friendship graph. Asian Acad Res J Multidiscip 3(1):189–196
Klavžar S, Gutman I (1996) A comparison of the Schultz molecular topological index with the Wiener index. J Chem Inform Comput Sci 36:1001–1003
Muller WR, Szymanski K, Knop JV, Trinajstic N (1990) Molecular topological index. J Chem Inform Comput Sci 30:160–163
Polansky OE, Bonchev D (1986) The Wiener number of graphs. MATCH Commun Math Chem 21:153–186
Randić M (1993) Novel molecular descriptor for structure-property studies. Chem Phys Lett 211:478–483
Randić M, Gou X, Oxley T, Krishnapriyan H, Naylor L (1994) Wiener matrix invariants. J Chem Inform Comput Sci 34:361
Schultz HP (1989) Topological organic chemistry 1. Graph theory and topological indices of alkanes. J Chem Inform Comput Sci 29:227–228
Schultz HP, Schultz TP (2000) Topological organic chemistry. 12. Whole-molecule Schultz topo- logical indices of alkanes. J Chem Inform Comput Sci 40:107–112
Sridhara G, Rajesh Kanna MR, Indumathi RS (2015) Computation of topological indices of graphene. J Nanomater 16:292. doi:10.1155/2015/969348
Trinajstic N (1993) Chemical graph theory. CRC Press, Boca Raton
Wiener H (1947) Structural determination of paraffin boiling points. J Am Chem Soc 69:17–20
Wiener H (1948) Relations of the physical properties of the isomeric alkanes to molecular structure: surface tension, specific dispersion, and critical solution temperature in aniline. J Phys Chem 52:1082–1089
Zhou B (2006) Bounds for the Schultz molecular topological index. MATCH Commun Math Comput Chem 56:189–194
WG and MRF proposed the idea for computing the distance-based topological indices of the friendship graph, which was implemented and the computations performed by MI and MRRK and verified by all the authors. The final draft was prepared by WG and MRF. All authors read and approved the final manuscript.
This research work is supported by NSFC (Nos. 11401519 and 61262070).
School of Information Science and Technology, Yunnan Normal University, Kunming, 650500, China
Wei Gao
Department of Applied Mathematics, Iran University of Science and Technology, Narmak, Tehran, 16844, Iran
Mohammad Reza Farahani
School of Natural Sciences, National University of Sciences and Technology, Sector H-12, Islamabad, P.O. 44000, Pakistan
Muhammad Imran
Department of Mathematical Sciences, United Arab Emirates University, P.O. Box 15551, Al Ain, United Arab Emirates
Post Graduate Department of Mathematics, Maharani's Science College for Women, Mysore, 570005, India
M. R. Rajesh Kanna
Correspondence to Muhammad Imran.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Hosoya polynomial
Wiener index
Hyper-Wiener index
Schultz index
Schultz polynomial
Friendship graph
Mathematics Subject Classification
Reliability of pedigree-based and genomic evaluations in selected populations
Gregor Gorjanc1,
Piter Bijma2 &
Reliability is an important parameter in breeding. It measures the precision of estimated breeding values (EBV) and, thus, potential response to selection on those EBV. The precision of EBV is commonly measured by relating the prediction error variance (PEV) of EBV to the base population additive genetic variance (base PEV reliability), while the potential for response to selection is commonly measured by the squared correlation between the EBV and breeding values (BV) on selection candidates (reliability of selection). While these two measures are equivalent for unselected populations, they are not equivalent for selected populations. The aim of this study was to quantify the effect of selection on these two measures of reliability and to show how this affects comparison of breeding programs using pedigree-based or genomic evaluations.
Two scenarios with random and best linear unbiased prediction (BLUP) selection were simulated, where the EBV of selection candidates were estimated using only pedigree, pedigree and phenotype, genome-wide marker genotypes and phenotype, or only genome-wide marker genotypes. The base PEV reliabilities of these EBV were compared to the corresponding reliabilities of selection. Realized genetic selection intensity was evaluated to quantify the potential of selection on the different types of EBV and, thus, to validate differences in reliabilities. Finally, the contribution of different underlying processes to changes in additive genetic variance and reliabilities was quantified.
The simulations showed that, for selected populations, the base PEV reliability substantially overestimates the reliability of selection of EBV that are mainly based on old information from the parental generation, as is the case with pedigree-based prediction. Selection on such EBV gave very low realized genetic selection intensities, confirming the overestimation and importance of genotyping both male and female selection candidates. The two measures of reliability matched when the reductions in additive genetic variance due to the Bulmer effect, selection, and inbreeding were taken into account.
For populations under selection, EBV based on genome-wide information are more valuable than suggested by the comparison of the base PEV reliabilities between the different types of EBV. This implies that genome-wide marker information is undervalued for selected populations and that genotyping un-phenotyped female selection candidates should be reconsidered.
Selection in livestock breeding programs is commonly based on estimated breeding values (EBV) of selection candidates. In addition to EBV, the variance of prediction errors of EBV (PEV) is also routinely calculated based on the statistical model that is used for genetic evaluation in order to provide a measure of the precision with which the EBV are estimated [1, 2]. PEV for genetic evaluations are routinely produced, either by computationally intensive direct inversion of the left hand side of the mixed model equations or, where this is not possible, by approximations [3–7] or selection index theory [8, 9]. To make interpretation of the precision of published EBV easier for the end user and because of the relationship between reliability and response to selection [10], many breeding programs report the reliability of EBV derived from PEV instead of directly reporting PEV, calculated as 1 minus the ratio between PEV and additive genetic variance. Typically, additive genetic variance in the base population is available and used in calculations, what we will call the base PEV reliability, which quantifies the magnitude of PEV in relation to the base additive genetic variance. This measure of reliability is commonly used to reflect the extent to which EBV may change when more information becomes available, which is particularly relevant in breeding programs with overlapping generations, e.g., in dairy cattle breeding, but much less so in, e.g., pig and poultry breeding programs.
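As a toy worked example of the base PEV reliability just described (all numbers invented for illustration):

```python
# Base PEV reliability: 1 minus the ratio of the prediction error
# variance (PEV) to the base population additive genetic variance.
pev = 0.25        # prediction error variance of an EBV (toy value)
sigma2_a = 1.00   # base population additive genetic variance (toy value)

base_pev_reliability = 1 - pev / sigma2_a
print(base_pev_reliability)  # 0.75
```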
Another measure of the reliability of EBV is the squared correlation between breeding values (BV) and EBV of selection candidates. This measure will hereafter be called the reliability of selection because it measures the response to selection that can be obtained when individuals are selected on those EBV, since response to selection is proportional to the accuracy of the EBV, i.e., to the square root of the reliability [10]. The base PEV reliability and the reliability of selection are equivalent for unselected populations (See Appendix) but not for selected populations, because selection reduces additive genetic variance and therefore also the reliability of selection [9, 11–15]. A recent study [16] showed that base PEV reliability may substantially overestimate the reliability of selection for selected populations, and that the equilibrium value of the latter, i.e., the equilibrium reliability, can be predicted from the parameters of unselected populations. The theoretical basis of this overestimation is demonstrated in Additional file 1 [See Additional file 1]. In summary, this overestimation is due to the reduced additive genetic variance among selection candidates in populations under selection and the magnitude of the overestimation varies depending on the information that contributes to the EBV. The overestimation is larger when the EBV depend more on old information from the parental generation than on new information from the current generation. The old information has lower predictive ability for selected populations than for unselected populations, because that information was already used to perform selection of parents and the base PEV reliability does not consider this selection. More specifically, the EBV of selected parents have a reduced variance and a low correlation with the true BV of progeny, which vary between progeny due to recombination and segregation of parental genomes. An example of an extreme case of overestimation of reliability of selection by the base PEV reliability is when the EBV of selection candidates are based on a pedigree prediction, which uses only the old information to estimate the parent average (PA) component of the EBV. A counter example, for which the overestimation is very small is when the EBV are based on a large progeny test, which provides new information to precisely estimate both the PA and the Mendelian sampling (MS) components of the EBV.
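Here is a minimal simulation sketch of the parent average (PA) and Mendelian sampling (MS) decomposition described above; it selects parents on their true breeding values for simplicity and shows how truncation selection shrinks the variance of the PA term and, with it, the reliability of selecting candidates on PA alone. All parameter values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2_a = 1.0
n_parents, n_offspring = 10_000, 10_000

# Breeding values of parental candidates, then truncation selection.
bv = rng.normal(0.0, np.sqrt(sigma2_a), n_parents)
selected = np.sort(bv)[-int(0.1 * n_parents):]   # top 10 % become parents

# Offspring BV = parent average + Mendelian sampling term,
# with var(MS) = 0.5 * sigma2_a in the absence of inbreeding.
sires = rng.choice(selected, n_offspring)
dams = rng.choice(selected, n_offspring)
pa = 0.5 * (sires + dams)
ms = rng.normal(0.0, np.sqrt(0.5 * sigma2_a), n_offspring)
bv_off = pa + ms

print(np.var(pa))                        # far below 0.5 * sigma2_a
print(np.corrcoef(pa, bv_off)[0, 1]**2)  # reliability of selecting on PA alone
```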
Since the base PEV reliability is a measure of the precision of EBV, it is often used as a measure of efficiency when comparing alternative breeding programs, i.e., as a measure of the reliability of selection. If comparisons between the alternative breeding programs that undergo selection are based on the base PEV reliabilities, then the contribution of old information to response to selection will be overestimated and the contribution of new information will be underestimated. With the introduction of genomics, such comparisons have become very common, e.g., comparing the reliability of progeny-tested males and genomically-tested young males [17]. In addition, these comparisons often involve different types of reliabilities: the base PEV reliability for progeny-tested males and the reliability of selection for genomically-tested young males via either forward validation or cross-validation. These two types of reliabilities are not always comparable because the base PEV reliabilities are the expected theoretical values under the assumption of no selection, while validation measures reliability of selection for the analyzed case.
While traditional pedigree-based evaluations are reasonably accurate at estimating the PA component of breeding values, they often provide limited information to estimate the MS component accurately, particularly for young selection candidates. Genomic data provides new information to estimate both the PA and MS components with moderate reliability, which accounts for its usefulness in breeding programs [17–19]. If the benefit of this new (genome-wide marker) information is evaluated using the base PEV reliability, its usefulness in a breeding program may be undervalued, particularly when compared to the value of old information from the parental generation, i.e., the EBV of selected parents [16]. There are potentially many scenarios that need to take the predictive value of old and new information into account when evaluating the usefulness of genome-wide marker information in breeding programs undergoing selection, as for example, the value of collecting genome-wide marker information on un-phenotyped female selection candidates. To date, most breeding programs have predominantly used genome-wide marker information to select un-phenotyped males, but not females. One of the reasons for this is that the perceived improvement in response to selection when selecting un-phenotyped females using genome-wide marker information is limited, e.g., [20–22].
The aim of this research was to quantify the effect of selection on the two measures of reliability for pedigree-based and genomic evaluation of selection candidates, with the following working objectives: (i) to complement the study of Bijma [16] by comparing the base PEV reliability and reliability of selection for pedigree-based and genomic evaluations in populations under selection; (ii) to evaluate the benefit of having genome-wide marker information in such breeding programs; and (iii) to quantify the effect of selection on additive genetic variance and the reliability of selection and compare these obtained values with theoretical equilibrium reliabilities of [16].
The effect of selection on the two measures of reliability was quantified using simulated data by comparing a scenario with random selection to a scenario with selection on best linear unbiased prediction (BLUP) EBV. The simulation procedure involved generating genome, pedigree, and phenotype data, which were in turn used in genetic evaluation of selection candidates with the different types of information. The effect of selection on the two measures of reliability was evaluated by: (i) comparing the base PEV reliabilities and reliabilities of selection, (ii) quantifying the realized genetic selection intensities, and (iii) evaluating the reduction of the reliability of selection due to reduction in additive genetic variance and comparing it to the theoretical equilibrium reliabilities. Ten replicates were simulated and all the calculated statistics were summarized with their average and standard deviation or 95 % confidence interval. All calculations were done in R [23] unless otherwise stated.
Sequence data were generated for 4000 base haplotypes for each of 30 chromosomes of the genome using the Markovian Coalescent Simulator (MaCS) [24]. The chromosomes were each 100 cM long, comprised 1.0 × 108 base pairs and were simulated using a per site mutation rate of 2.5 × 10−8, a per site recombination rate of 1.0 × 10−8, and an effective population size (N e ) that varied over time, reflecting the estimates for the Holstein cattle population [25]; in the base generation, N e was equal to 100 and was increased linearly to 1256 at 1000 years ago, 4350 at 10 000 years ago, and 43 500 at 100 000 years ago. A set of 9000 segregating sites was selected at random from the simulated base haplotypes to represent causative loci affecting a complex trait, with a restriction that 300 were sampled from each chromosome. The allele substitution effect at each causative locus (α i ) was sampled from a normal distribution with a mean of 0 and a standard deviation of 1 divided by the square root of the number of causative loci, i.e., $1/\sqrt{9000}$. A second sample of 60 000 segregating sites was selected at random as genome-wide markers on a single nucleotide polymorphism (SNP) array, with a restriction that 2000 SNPs were sampled from each chromosome. There was no restriction on the frequency of causative loci or SNPs.
Pedigree and phenotypes
The base haplotypes were dropped through a simulated pedigree of 25 generations using the AlphaDrop program [26]. Each generation was generated by factorial mating of 20 males and 500 females, with four half-sib progenies per female. Altogether, there were 20 × 25 × 4 = 2000 individuals per generation, of which half were males and half females.
The true BV of an individual was obtained as the sum of all allele substitution effects of the causative loci, accounting for the individual's genotype at these loci. The base additive genetic variance was equal to $\sigma_{A,0}^2=\mathbf{a}^{\mathrm{T}}\mathbf{a}/(n-1)$, where $\mathbf{a}$ is a zero-mean vector of the BV of the n base individuals. Phenotypes were obtained by adding a residual term to the BV. The residual variance was scaled according to the base additive genetic variance to give a heritability that was set to a high value (0.75). Phenotypes were assigned only to males, which resulted in a breeding scheme in which males had a performance record of their own and records on 51 male half sibs, whereas females had records on 52 male half sibs. This setup was used to mimic the level of reliabilities that are commonly achieved in dairy cattle breeding programs with progeny testing, but keeping the size of the simulated population small. The base PEV reliabilities of the different types of EBV from these data matched closely the level of reported reliabilities from real dairy cattle breeding programs, e.g., [17, 20, 22].
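A minimal R sketch of this construction is given below (our illustration, not the authors' code; the binomial genotype dosages in W and the reduced dimensions are made-up stand-ins for the MaCS-derived haplotypes):

```r
# Hypothetical sketch: true BV as the sum of allele substitution effects,
# with the residual variance scaled to give a heritability of 0.75.
n_ind  <- 200           # illustrative; the study used 2000 per generation
n_loci <- 1000          # illustrative; the study used 9000 causative loci
alpha  <- rnorm(n_loci, mean = 0, sd = 1 / sqrt(n_loci))
W  <- matrix(rbinom(n_ind * n_loci, size = 2, prob = 0.5), nrow = n_ind)
bv <- as.vector(W %*% alpha)
bv <- bv - mean(bv)                    # zero-mean vector of BV
var_a0 <- sum(bv^2) / (n_ind - 1)      # base additive genetic variance
h2     <- 0.75
var_e  <- var_a0 * (1 - h2) / h2       # residual variance scaled to h2 = 0.75
y <- bv + rnorm(n_ind, sd = sqrt(var_e))
```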
In the random selection scenario (Table 1), each of the 25 generations was simulated by mating 20 males and 500 females that were each selected at random from a set of 1000 selection candidates of each sex. In the BLUP selection scenario, the simulation involved two stages to generate genomes influenced by selection. In the first stage, 10 generations were generated as in the random selection scenario to reach equilibrium in the pedigree information, so that subsequent selection on this information would induce a reduction in additive genetic variance, i.e., the Bulmer effect [11, 15, 16]. In the second stage, each of the 15 generations was simulated by mating 20 males and 500 females that were each selected from a set of 1000 selection candidates of each sex based on BLUP evaluation using pedigree and phenotype information from the current and all previous generations. This procedure provided data to analyze the effect of selection on the two measures of reliability when the Bulmer effect had reached equilibrium, which was conservatively assumed to be reached after five generations of selection. The results confirmed this assumption. Therefore, the data from generations 16 to 25 were in equilibrium and used to analyze the effect of selection on reliabilities, as described in the following.
Table 1 Simulation design and data available for analysis
Genetic evaluation
The simulated data were subject to retrospective genetic evaluation of selection candidates in each generation using different combinations of the following information (Table 1): pedigree for 25 generations, 60 000 genome-wide marker genotypes for 5000 males from generations 16 to 20 and for 2500 males and females from generations 21 to 25 (i.e., a random sample of 500 individuals from each generation), and 5000 phenotypes for males from generations 16 to 20. Individuals in generations 21 through 25 had no phenotypes and served as a validation set to show the reduction in reliabilities with each successive generation of prediction. To limit the amount of computing, a random sample of 500 validation individuals per generation was taken to represent the whole generation and evaluated using the different types of information.
Genetic evaluation was based on the following standard mixed model [2]:
$$ \mathbf{y}=\mathbf{X}\mathbf{b}+\mathbf{Z}\mathbf{a}+\mathbf{e}, $$
where y is a vector of phenotype records, b is a vector of fixed effects (only intercept was used), a ~ N(0, V a) is a vector of BV with an additive genetic covariance matrix V a , e ~ N(0, V e) is a vector of residuals with a residual covariance matrix of V e = I σ 2 E , and X and Z are incidence matrices that link phenotype records to b and a, respectively. Pedigree and genomic evaluations differed in the specification of the covariance structure for a; V a = A σ 2 A,0 for the pedigree model and V a = G σ 2 A,0 for the genomic model, where A and G are the respective relationship matrices based on pedigree [2] and genome-wide marker genotypes [27]. A complete pedigree with all 25 generations was used when setting up the A matrix. All analyses were performed with the assumed known intercept (b) and variances (σ 2 A,0 and σ 2 E ) to facilitate comparison of reliabilities and to avoid variation in the results due to the estimation of parameters that were not of interest in this study. For this reason, the intercept value was first estimated with model (1) and then reused as a known parameter when estimating a.
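A minimal R sketch of this evaluation step, assuming known variances and a pre-estimated intercept b, is the following (illustrative names such as solve_mme and K_inv are ours, not the authors' implementation; K_inv stands for the inverse of either A or G):

```r
# Sketch: BLUP of a from model (1) with a known intercept and variances.
solve_mme <- function(Z, y, K_inv, var_a, var_e, b) {
  lambda <- var_e / var_a
  C      <- crossprod(Z) + K_inv * lambda   # coefficient matrix of the MME
  a_hat  <- solve(C, crossprod(Z, y - b))   # EBV given known intercept b
  pev    <- diag(solve(C)) * var_e          # prediction error variances
  list(ebv = as.vector(a_hat), pev = pev)
}
```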
Using the available data (Table 1), four types of EBV were computed for the selection candidates: (i) EBVP was estimated from pedigree information only, using the pedigree model for all individuals in generations 20 to 25 that were free of phenotypic information from their own performance, collateral relatives, or descendants; (ii) EBVP &Y was estimated from pedigree and phenotype information, using the pedigree model for males and females in generation 20, in which the males had own performance phenotype records and records on male half-sibs, while the females only had records on male half-sibs; (iii) EBVM &Y was estimated from genome-wide marker and phenotype information, using the genomic model for males in generation 20 that had an own performance phenotype record; (iv) EBVM was estimated from genome-wide marker information only, using the genomic model for a random sample of validation individuals from generations 21 to 25 that had no phenotype information.
The reliability of selection was calculated as the squared correlation between the EBV and BV for selection candidates. The PEV reliability of an EBV was computed as:
$$ \begin{array}{rr}\hfill {R}^2\left({\widehat{a}}_i\right)& \hfill =1-\frac{Var\left({a}_i-{\widehat{a}}_i\right)}{Var\left({a}_i\right)}\end{array}, $$
where Var(a i − â i ) is the variance of prediction errors of the EBV of animal i (PEV), which was obtained by inverting the coefficient matrix corresponding to the model used (1), and Var(a i ) is a measure of additive genetic variance σ 2 A (See Appendix). The base PEV reliability was calculated using equation (2), with Var(a i ) set to the base additive genetic variance σ 2 A,0 corrected for inbreeding. This correction was applied due to substantial reduction in σ 2 A,0 caused by the deep pedigree and limited number of parents used in the simulation. In addition to this, the PEV reliability was calculated using equation (2) with Var(a i ) set to different values of additive genetic variance σ 2 A (See subsection "Variances" for details).
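In R, the two measures for a set of candidates could be sketched as follows (our illustration; bv, ebv and pev are assumed vectors for the candidates and var_a0 the base additive genetic variance):

```r
rel_selection <- cor(ebv, bv)^2      # squared correlation between EBV and BV
rel_pev_base  <- 1 - pev / var_a0    # eq. (2) with Var(a_i) = base variance
rel_pev_cand  <- 1 - pev / var(bv)   # eq. (2) with the candidates' variance
```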
Realized genetic selection intensity
The realized genetic selection intensity was defined as the selection differential of BV realized by retrospectively selecting the candidates on a particular type of EBV, standardized by σ A,0. This metric was chosen to show the potential for generating response to selection based on the different types of EBV in order to confirm the effect of selection on the reliability of selection. Otherwise, this metric does not provide any additional information beyond the reliability of selection and can be computed only when simulated data are available.
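A sketch of this computation in R (our illustration; bv and ebv are vectors for the candidates, p the proportion selected, and sd_a0 the base additive genetic standard deviation):

```r
realized_intensity <- function(bv, ebv, p, sd_a0) {
  keep <- ebv >= quantile(ebv, probs = 1 - p)  # truncation selection on EBV
  (mean(bv[keep]) - mean(bv)) / sd_a0          # standardized selection differential of BV
}
```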
Variances
To quantify the effect of changes in genetic variance on the reliability of selection, the following variances were computed for each generation: (i) the observed additive genic variance; (ii) the expected additive genic variance; and (iii) the additive genetic variance. Here, the additive genetic variance (σ 2 A ) refers to the variance of true breeding values and the additive genic variance (σ 2 α ) refers to the additive genetic variance under the assumption of linkage equilibrium between the causative loci, e.g., [10, 28]. The observed additive genic variance in generation t (including the base generation) was computed as:
$$ {\sigma}_{\alpha, t}^2={\displaystyle \sum }2{p}_{i,t}{q}_{i,t}{\alpha}_i^2, $$
where p i,t and q i,t are the allele frequencies in generation t and α i is the allele substitution effect of the i-th causative locus. Inbreeding changes the additive genic variance; its expectation in generation t of a randomly mated finite population was computed as:
$$ {\sigma}_{\alpha, t,inb}^2={\sigma}_{\alpha, 0}^2{\left(1-\frac{1}{2{N}_e}\right)}^t={\sigma}_{\alpha, 0}^2\left(1-{\overline{F}}_t\right), $$
where N e is the effective size of the population and \( {\overline{F}}_t \) is the mean inbreeding coefficient in generation t [29]. Equation (4) was also used to correct for the effect of inbreeding on the additive genetic variance when calculating the base PEV reliability using equation (2). Note that σ 2 α,0 ≈ σ 2 A,0 because the base generation was in linkage equilibrium. The difference between the observed additive genic variance in the base generation (3) and the expected additive genic variance in generation t (4) was used to estimate the cumulative change in additive genic variance due to inbreeding up to generation t:
$$ \varDelta {\sigma}_{\alpha, t,inb}^2={\sigma}_{\alpha, 0}^2-{\sigma}_{\alpha, t,inb}^2, $$
while the difference between the expected and observed additive genic variance in generation t was used to estimate the cumulative change in additive genic variance due to selection up to generation t:
$$ \varDelta {\sigma}_{\alpha, t,sel}^2={\sigma}_{\alpha, t,inb}^2-{\sigma}_{\alpha, t}^2. $$
The total change in the additive genic variance up to generation t was therefore equal to:
$$ \varDelta {\sigma}_{\alpha, t}^2={\sigma}_{\alpha, 0}^2-{\sigma}_{\alpha, t}^2=\varDelta {\sigma}_{\alpha, t,sel}^2+\varDelta {\sigma}_{\alpha, t,inb}^2. $$
The additive genetic variance in generation t (σ 2 A,t ) was computed as the variance of BV in generation t prior to any selection within that generation. The difference between the additive genic and the additive genetic variances in the BLUP selection scenario was used to estimate the gametic phase disequilibrium covariance due to the Bulmer effect [11]:
$$ {d}_t={\sigma}_{A,t}^2-{\sigma}_{\alpha, t}^2. $$
These variances (3) to (8) were used to gradually correct (reduce) the base additive genetic variance and calculate the PEV reliability based on these corrected values to analyze the effect of the different underlying processes on the reduction of the reliability of selection in comparison to the base PEV reliability. In addition, the theoretical expectations of reliability in selected populations, referred to as equilibrium reliabilities, were calculated from the base PEV reliabilities corrected for inbreeding (see above) and the proportions of selected individuals, i.e., 2 % selected males and 50 % selected females [16].
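The decomposition in equations (3) to (8) can be sketched in R as follows (hypothetical helper names of ours; p_t and alpha are the causative-locus allele frequencies and effects):

```r
genic_var <- function(p_t, alpha) sum(2 * p_t * (1 - p_t) * alpha^2)  # eq. (3)
genic_inb <- function(var_a0, F_t) var_a0 * (1 - F_t)                 # eq. (4)
d_inb     <- function(var_a0, var_inb) var_a0 - var_inb               # eq. (5)
d_sel     <- function(var_inb, var_t)  var_inb - var_t                # eq. (6)
d_bulmer  <- function(var_A_t, var_genic_t) var_A_t - var_genic_t     # eq. (8)
```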
The focal generations for comparison of the base PEV reliabilities and reliabilities of selection and realized genetic selection intensities were generation 20 based on phenotyped males and un-phenotyped females and generations 21 to 25 based on un-phenotyped individuals of both sexes. Changes in the variances were evaluated across all generations. The effect of changes in variances on the PEV reliability and the reliability of selection was analyzed in detail in generations 20 and 21 and compared to the equilibrium reliabilities.
In the random selection scenario, the base PEV reliabilities and reliabilities of selection were equal, within the bounds of sampling, for both the pedigree model and the genomic model (Table 2) and, therefore, only base PEV reliabilities will be described. In general, reliabilities increased with more information on the MS component of BV. The base PEV reliability of EBVP was equal to 27 % in generations 20 and 21 and decreased each generation to 0 % in generation 25. The base PEV reliability of EBVP &Y in generation 20 was higher than that of EBVP due to the availability of phenotypic information (35 % for females and 76 % for males). The base PEV reliability of EBVM &Y was even higher due to the availability of genome-wide marker and phenotype information (84 % in generation 20). The base PEV reliability of EBVM decreased at a slower rate over generations than that of EBVP, i.e., it was equal to 67 % in generation 21 and decreased to 53 % in generation 25.
Table 2 Prediction error variance (PEV) reliability and reliability of selection (%)a of different types of estimates of breeding values (EBV)b by scenario and generation
In the BLUP selection scenario, the base PEV reliabilities followed the same pattern as in the random selection scenario. However, the reliabilities of selection were consistently lower than the base PEV reliabilities, especially for EBV with a large dependency on PA information (Table 2), which shows that the base PEV reliabilities overestimated the reliabilities of selection in this scenario. The ratio of the reliability of selection to the base PEV reliability in generation 20 was equal to 0.11 for EBVP, 0.37 for EBVP &Y for females, 0.89 for EBVP &Y for males, and 0.94 for EBVM &Y. In generation 21, the ratio of reliabilities for EBVP was equal to 0.11 and 0.00 in the following generations, while for EBVM the ratio was equal to 0.90, 0.91, 0.87, 0.86, and 0.88 in generations 21 to 25, respectively (Table 2).
Comparison of reliabilities of the different types of EBV obtained with the BLUP selection scenario showed that genomic prediction had a greater advantage over pedigree prediction when based on the reliability of selection than when based on base PEV reliability. For example, the difference between the reliability of genomic and pedigree predictions in generation 21 was 17 % larger when based on reliability of selection than when based on base PEV reliability, which indicates that genotyping un-phenotyped females might be more valuable than previously suggested, e.g., [20–22].
To confirm differences between base PEV reliabilities and reliabilities of selection, selection on the different types of EBV was compared in terms of realized genetic selection intensities of BV that could have been achieved if candidates were selected on those EBV. In general, realized genetic selection intensities reflected the reliabilities of selection for both the random selection scenario and the BLUP selection scenario and confirmed that base PEV reliabilities overestimate reliabilities of selection in the BLUP selection scenario. Differences between the realized genetic selection intensities were smaller than between the two measures of reliability, because realized genetic selection intensities are proportional to the accuracy of selection, i.e., to the square root of reliability of selection.
In the random selection scenario, selecting candidates directly on true BV gave realized genetic selection intensities that ranged from 0.73 to 0.76 with 50 % selected and from 2.16 to 2.24 with 2 % selected (Table 3). Selection on EBVP gave the lowest realized genetic selection intensities, which ranged from 0.19 to 0.22 with 50 % selected and from 0.55 to 0.60 with 2 % selected in generations 20 and 21. These values roughly halved with each successive generation due to the low predictive ability of EBVP. Selection on EBVP &Y gave higher realized genetic selection intensities than selection on EBVP due to the higher reliabilities of EBV when based on phenotype information on male half-sibs for females, as well as own performance records for males. Realized genetic selection intensities with EBVP &Y were equal to 0.24 for females and 0.55 for males with 50 % selected, and to 0.75 and 1.67, respectively, with 2 % selected. Selection on EBVM &Y gave the highest realized genetic selection intensity due to the use of genome-wide marker and phenotype information. In generation 20, the realized genetic selection intensities for EBVP &Y and EBVM &Y were equal to 0.55 and 0.62 with 50 % selected and to 1.67 and 1.90 with 2 % selected, respectively. In the later generations, selecting on EBVM gave more than half of the realized genetic selection intensity compared to selecting directly on true BV.
Table 3 Realized genetic selection intensitya when selecting on true breeding value (BV) or different types of estimates of breeding values (EBV)b by proportion selected, scenario, and generation
In the BLUP selection scenario, selection on true BV gave realized genetic selection intensities that ranged from 0.58 to 0.62 with 50 % selected and from 1.74 to 1.87 with 2 % selected and remained constant (within the bounds of sampling) over all generations (Table 3). These results in the BLUP selection scenario are between 16 and 22 % lower than for the random selection scenario, with an increasing trend over time. Selection on EBVP gave a realized genetic selection intensity of only 0.02 with 50 % selected and between 0.09 and 0.10 with 2 % selected in generation 20, and dropped to 0 in the later generations much more quickly than with the random selection scenario. These realized intensities with EBVP were more than 80 % lower than with the random selection scenario. With EBVP &Y, the reduction of realized genetic selection intensity in comparison to the random selection scenario was 66 % for females and 25 % for males. With EBVM &Y and EBVM, the reduction of realized genetic selection intensity was between 12 and 30 %, with the largest difference observed in generation 21, which was the first generation of prediction without phenotype information.
Changes in variances and effect on reliability
Additive genic variance decreased with each generation in both the random and BLUP selection scenarios, although the reduction was larger with the BLUP selection scenario (Fig. 1). Additive genic variance in the base generation was equal to 0.28 with both scenarios and by generation 20 it was reduced to 0.25 with the random selection scenario and to 0.22 with the BLUP selection scenario. These reductions were mainly caused by inbreeding and were quantified by subtracting the expected additive genic variance under the finite population model from the base generation value (5). The reduction caused by inbreeding up to generation 20 was equal to 0.03 with the random selection scenario and 0.045 with the BLUP selection scenario. The remaining loss of 0.015 in genic variance with the BLUP selection scenario was attributed to the effect of selection.
Fig. 1 Additive genic variance (σ 2 α ) and changes due to inbreeding and selection by scenario and generation. Average values with 95 % confidence intervals are presented
For both scenarios, the additive genetic variance also decreased with each generation, but with a significant change in generation 10 when selection on EBV was introduced in the BLUP selection scenario (Fig. 2). Additive genetic variance was equal to 0.28 in the base generation with both scenarios and by generation 10, it decreased to 0.26 for both scenarios because of inbreeding. Introduction of selection in generation 10 reduced the additive genetic variance to 0.21 in generation 11, while the additive genic variance was equal to 0.26. The difference between these two variances gave an estimate of −0.05 for the gametic phase disequilibrium covariance. By generation 20, the additive genetic variance was further reduced to 0.16. The overall reduction of the base additive genetic variance (0.28) was due half to the Bulmer effect (0.06) and half to loss in additive genic variance caused by inbreeding (0.045) and selection (0.015). In the random selection scenario, the additive genetic variance in generation 20 was equal to 0.24, which was equal to additive genic variance within the bounds of sampling.
Fig. 2 Additive genetic variance (σ 2 A ) and Bulmer effect (σ 2 α − σ 2 A ) by scenario and generation. Average values with 95 % confidence intervals are presented
The effect of changes in variances (Figs. 1 and 2) on the reliability of selection was quantified in detail in generations 20 and 21 by calculating the PEV reliability with different values of additive genetic variance (Table 4). In the random selection scenario, the reliability of selection tended to be lower than the base PEV reliabilities. Taking into account the reduction in variance due to inbreeding, or using the additive genetic variance from generation 20 or 21, gave a PEV reliability that matched the reliability of selection within the bounds of sampling. In the BLUP selection scenario, the base PEV reliabilities considerably overestimated the reliability of selection, as previously noted (Table 2). This overestimation was due to the reduction in additive genetic variance, which was caused by several underlying processes (Table 4). Inbreeding, and to a small extent selection, reduced the additive genic variance and therefore also the additive genetic variance by changing the allele frequencies of causative loci. More importantly, the additive genetic variance was also reduced by the generation of gametic phase disequilibrium between the causative loci by selection, i.e., the Bulmer effect. These reductions in additive genetic variance due to inbreeding, selection, and the Bulmer effect were used to gradually reduce the base additive genetic variance to the additive genetic variance in generation 20 or 21 and to recalculate the PEV reliabilities for each reduction. The resulting PEV reliabilities matched the reliability of selection within the bounds of sampling. These results not only show which processes contribute to the reduction of the reliability of selection in selected populations but also that the base PEV reliabilities overestimate the reliability of selection in such populations by using the base additive genetic variance instead of the actual additive genetic variance of selection candidates. Finally, the equilibrium reliabilities matched the reliability of selection for EBVP and EBVP &Y (Table 4 and Figs. 3 and 4), while there were minor discrepancies for EBVM &Y and EBVM. Figures 3 and 4 show contours of equilibrium reliabilities for different proportions of selected males and females and a dot for the reliability of selection obtained in this study (Table 4). The discrepancies for EBVM &Y and EBVM arose because, in this study, selection was on EBVP &Y; calculating the equilibrium reliabilities with the higher EBVM &Y or EBVM base PEV reliabilities, as if selection was on the EBVM &Y or EBVM, leads to underestimation of the equilibrium reliabilities. Changing the proportion of selected males and females when calculating the equilibrium reliability for EBVP &Y and EBVM &Y in generation 20 (Fig. 3) and for EBVP and EBVM in generation 21 (Fig. 4) showed that the observed base PEV reliabilities were recovered when selection was absent, i.e., the equilibrium reliabilities from the bottom-left corners of Figs. 3 and 4 matched the base PEV reliabilities corrected for inbreeding in Table 4.
Table 4 Prediction error variance (PEV) reliabilitiesa based on different measures of additive genetic varianceb (V A ), reliability of selectiona, and equilibrium reliabilitiesa (%) of different types of estimates of breeding values (EBV)c by scenario in generations 20 and 21
Fig. 3 Equilibrium reliability and reliability of selection of different types of estimated breeding values in generation 20. Breeding values estimated using (a) pedigree and phenotype information in males (EBVP &Y,m), (b) marker and phenotype information in males (EBVM &Y,m), and (c) pedigree and phenotype information in females (EBVP &Y,f). Equilibrium reliabilities are shown with contours, as a function of the proportions of males and females selected, while reliability of selection is shown as a point at the proportions selected used in this study
Fig. 4 Equilibrium reliability and reliability of selection of different types of estimated breeding values in generation 21. Breeding values estimated (predicted) using (a) pedigree information (EBVP) and (b) marker information (EBVM). Equilibrium reliabilities are shown with contours as a function of the proportions of males and females selected, while reliability of selection is shown as a point at the proportions selected used in this study
Reliability is important in breeding because it measures the potential for response to selection in a breeding program. The results of this study show that, in populations under selection, reliability computed from PEV and the base additive genetic variance (base PEV reliability) is not equal to the squared correlation between the EBV and BV in selection candidates (reliability of selection), with which potential for response to selection is measured. The difference between these two measures of reliability arises from their different scopes of interpretation. The base PEV reliability overestimates the reliability of selection in selected populations because it is computed from PEV and the base additive genetic variance. The latter describes genetic variation in the base population and not in the selection candidates. As shown in this study, this overestimation can be mitigated either by calculating the PEV reliability based on the reduced additive genetic variance of the selection candidates or by using theoretical equilibrium reliabilities. It was also shown that the degree of overestimation differs between types of EBV and that this has important consequences when breeding schemes and genotyping strategies are compared based on the base PEV reliability; in particular when the base PEV reliability of pedigree prediction is compared to that of other types of EBV.
Reliability in selected populations
The following example illustrates that selection reduces the reliability of selection and that this effect differs between types of EBV. Selecting the parents of the next generation on any type of EBV reduces the variance of these EBV, which are in turn used to obtain pedigree predictions (EBVP) of the progeny. In the extreme case, selecting and mating only two parents (with EBV1 and EBV2 and corresponding base PEV reliabilities \( {R}_{EB{V}_1}^2 \) and \( {R}_{EB{V}_2}^2 \)) would create a new generation for which all individuals have the same pedigree prediction, \( EB{V}_P=\frac{1}{2}\left( EB{V}_1+ EB{V}_2\right), \) although there would be variation in their BV due to the Mendelian sampling of parental genomes. In such a situation, the EBVP has no predictive ability to differentiate between individuals, although the base PEV reliability of EBVP would be greater than 0, i.e., \( {R}_{EB{V}_P}^2\ge \frac{1}{4}\left({R}_{EB{V}_1}^2+{R}_{EB{V}_2}^2\right) \) [See Additional file 1]. Consequently, these EBVP have no potential to generate response to selection if selection is carried out among progeny. In contrast, genomic predictions (EBVM) for these individuals would have some predictive ability and potential to generate response to selection, because genome-wide markers provide new information to estimate both the PA and MS components of the EBV of each individual [17–19], which can then be differentiated. However, in selected populations, the predictive ability of EBVM is also overestimated by the base PEV reliability, albeit less so than for EBVP.
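A toy numerical illustration of this extreme case in R (ours, not from the paper; the parental EBV and the Mendelian sampling variance are arbitrary):

```r
set.seed(1)
ebv_parents <- c(1.2, 0.8)
n  <- 1000
pa <- rep(mean(ebv_parents), n)   # identical EBV_P for every progeny
ms <- rnorm(n, sd = sqrt(0.5))    # Mendelian sampling deviations
bv <- mean(ebv_parents) + ms      # progeny BV still vary
var(pa)                           # 0: EBV_P cannot rank the progeny
cov(pa, bv)                       # 0: no predictive ability among progeny
```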
A detailed illustration on how selection reduces the reliability of selection and how this effect differs between types of EBV is in Additional file 1 [See Additional file 1]. In summary, selection of parents reduces the variance of BV (i.e., additive genetic variance) in progeny but in particular the variance of EBVP in progeny. The reduction of additive genetic variance in progeny reduces the reliability of selection because the unchanged precision of EBV coupled with a smaller variation in BV make it more difficult to differentiate between individuals. The reduction in variance of EBVP in progeny reduces the reliability of selection because EBVP only predicts the PA component of BV and with increasing selection in parents, the predictive ability of EBVP decreases, as illustrated previously. The reduced additive genetic variance in progeny has the same effect on the reliability of selection for any type of EBV. In contrast, the reduced variance of EBVP in progeny has a different effect on the reliability of selection for different types of EBV and is larger for EBV that are primarily based on the PA component and smaller for EBV that are primarily based on the MS component.
These illustrations indicate that the base PEV reliability overestimates the reliability of selection because it does not take into account the effect of selection on variances. The expression for the base PEV reliability involves PEV and the base additive genetic variance. Selection does not affect the PEV [1, 13] but it does affect the additive genetic variance. It causes a reduction in the additive genetic variance that should be taken into account if the PEV reliability is to be used as a measure of the reliability of selection. The rationale behind the expression for the base PEV reliability derives from the PEV being the (posterior) variance of BV conditional on the observed phenotypic information and the base additive genetic variance being the (prior) unconditional variance of BV in the base population. Relating this posterior to the prior quantifies the amount by which the uncertainty in BV is reduced after phenotypic information has been collected [2]. While the base additive genetic variance must be used when calculating EBV and PEV [2], which unconditional variance of BV should be used when calculating the PEV reliability depends on the scope of interpretation. If the aim is to measure the reliability of selection among parents and progeny, then the PEV reliability should be calculated based on the additive genetic variance in parents. However, if the aim is to measure the reliability of selection among progeny, as in the present study, then the PEV reliability should be calculated based on the additive genetic variance in progeny. When the scope of interpretation is not taken into account, the PEV reliability can overestimate the reliability of selection. The amount of overestimation depends on the type of EBV, its base PEV reliability, and the intensity of selection, which determines how much additive genetic variance has been lost over the generations of selection [16].
Therefore, if the PEV reliability is used as a measure of the reliability of selection, it should be computed based on the additive genetic variance of selection candidates. However, this is often not possible because the additive genetic variance for sets of individuals is usually unknown in real populations and its estimation is computationally demanding [13]. In addition, there is usually no clear definition of the generation or groups of individuals of interest in livestock populations, which complicates estimation even more. In such situations, the base additive genetic variance may be the only estimate available and therefore the base PEV reliabilities can only be used as a measure of precision of EBV in relation to the base population variation and not as a measure of the reliability of selection to compare breeding schemes. However, the difference between these two measures of reliability can be predicted using the equilibrium reliabilities calculated from the base PEV reliabilities and the proportions selected among males and females [16]. As shown in this study, the equilibrium reliabilities matched the reliability of selection for any type of EBV, which confirms the utility of theoretical expressions to calculate the equilibrium reliability [16].
In this study, the reduction of additive genetic variance across generations was caused by three processes: the initial cycles of selection caused changes in gametic phase disequilibrium (i.e., the Bulmer effect), and inbreeding and selection caused changes in allele frequencies. The Bulmer effect was responsible for 50 % of the loss of variance, while changes in allele frequencies due to inbreeding and selection were responsible for 37.5 and 12.5 % of the loss of variance, respectively. The theoretical expressions for the equilibrium reliability derived in [16] only account for the reduction in variance due to the Bulmer effect, and not for reductions due to changes in allele frequencies resulting from inbreeding and selection. However, our study demonstrates that the Bulmer effect is the largest source of reduction in variance. In addition, the expected loss of additive genic variance in finite populations [29] can be used to account for the effect of inbreeding on variance. The effect of inbreeding was substantial in this study, because of the deep pedigree and a small number of parents. In more typical scenarios, the pedigree is not as deep, which suggests that the impact of reduction in additive genetic variance due to selection changing allele frequency would also be smaller than in this study.
Implications for comparison of breeding programs
The difference between the base PEV reliabilities and the reliability of selection has important consequences for the design of breeding programs using genome-wide marker information. Genome-wide marker information is often considered to be of much lower value for un-phenotyped females than for males. This perception is in part due to the smaller impact that females have on the next generation, but also due to the relatively small difference between the base PEV reliability of EBVP or EBVP &Y and the base PEV or validation reliabilities of EBVM. For example, in the BLUP selection scenario used in this study, the base PEV reliabilities of EBVP and EBVM in generation 21 were equal to 27 and 70 %, respectively, with an absolute difference of 43 %. Several studies have derived the value of genotyping un-phenotyped females on the basis of gains in reliability, while accounting for cost of genotyping and raising replacement females, e.g., [20–22]. However, our results show that the gain in reliability of selection is much higher than expected from comparison of the base PEV reliabilities; in generation 21, reliability of selection was 3 % for EBVP and 63 % for EBVM, with an absolute difference of 60 %. This large difference demonstrates that there is more value in genotyping un-phenotyped females in selected populations than previously reported. This was further demonstrated by measuring the realized genetic selection intensity for the different types of EBV; in generation 21 of the BLUP selection scenario, selecting 50 % of selection candidates gave a realized genetic selection intensity of 0.02 when selecting on EBVP and of 0.38 when selecting on EBVM. These results clearly show the benefit of investing in genotyping un-phenotyped females. With increased selection intensity, the effect of selection on realized genetic selection intensity was even more pronounced due to further reductions of the base PEV reliability of EBVP. Comparing the predictive abilities of EBVP and EBVM is, in some sense, a comparison of extremes. Smaller but still significant differences can be expected when the EBV of selection candidates have a large dependency on information from the parental generation. Failing to take the effect of selection on additive genetic variance into account can overstate the reliability of selection on such EBV in comparison with EBVM [15, 16]. This is not an issue when the comparisons of predictive abilities of EBVP or EBVP &Y and EBVM are all based on validation correlations among selection candidates.
Selection reduces genetic variance and the reliability of selection, which is usually not accounted for when the base additive genetic variance is used to calculate base PEV reliabilities. This reduction in reliability of selection is more pronounced for EBV that are based mainly on information from the parental generation. An extreme example of this is when EBV are based solely on parent average. This implies that the genome-wide marker information has been undervalued in populations that are under selection, and that genotyping un-phenotyped females must be reconsidered.
Henderson CR. Best linear unbiased estimation and prediction under a selection model. Biometrics. 1975;31:423–47.
Henderson CR. Applications of linear models in animal breeding. Schaeffer LR, editor. 3rd ed. Guelph: University of Guelph; 1984. http://cgil.uoguelph.ca/pub/Henderson.html.
Misztal I, Wiggans GR. Approximation of prediction error variance in large-scale animal models. J Dairy Sci. 1988;71:27–32.
Meyer K. Approximate accuracy of genetic evaluation under an animal model. Livest Prod Sci. 1989;21:87–100.
Jamrozik J, Schaeffer LR, Jansen GB. Approximate accuracies of prediction from random regression models. Livest Prod Sci. 2000;66:85–92.
Tier B, Meyer K. Approximating prediction error covariances among additive genetic effects within animals in multiple-trait and random regression models. J Anim Breed Genet. 2004;121:77–89.
Hickey JM, Veerkamp RF, Calus MPL, Mulder HA, Thompson R. Estimation of prediction error variances via Monte Carlo sampling methods using different formulations of the prediction error variance. Genet Sel Evol. 2009;41:23.
Harris B, Johnson D. Approximate reliability of genetic evaluations under an animal model. J Dairy Sci. 1998;81:2723–8.
VanRaden PM, Wiggans GR. Derivation, calculation, and use of national animal model information. J Dairy Sci. 1991;74:2737–46.
Falconer DS, Mackay TFC. Introduction to quantitative genetics. Harlow: Pearson Education Limited; 1996.
Bulmer MG. The effect of selection on genetic variability. Am Nat. 1971;105:201–11.
Fimland E. The effect of selection on additive genetic parameters. J Anim Breed Genet. 1979;96:120–34.
Henderson CR. Best linear unbiased prediction in populations that have undergone selection. In: Barton RA, Smith WC, editors. Proceedings of the world congress on sheep and beef cattle breeding. 1982. p. 191–200.
Wray NR, Hill WG. Asymptotic rates of response from index selection. Anim Prod. 1989;49:217–27.
Dekkers JCM. Asymptotic response to selection on best linear unbiased predictors of breeding values. Anim Prod. 1992;54:351–60.
Bijma P. Accuracies of estimated breeding values from ordinary genetic evaluations do not reflect the correlation between true and estimated breeding values in selected populations. J Anim Breed Genet. 2012;129:345–58.
Schaeffer LR. Strategy for applying genome-wide selection in dairy cattle. J Anim Breed Genet. 2006;123:218–23.
Van Grevenhof EM, Van Arendonk JA, Bijma P. Response to genomic selection: the Bulmer effect and the potential of genomic selection when the number of phenotypic records is limiting. Genet Sel Evol. 2012;44:26.
De Roos APW. Recent trends in genomic selection in dairy cattle. In: Proceedings of the 62nd Annual Meeting of the European Federation of Animal Science: 29 August-2 September 2011; Stavanger. 2011. p. Contribution 01–7.
Simianer H, Chen J, Erbe M. Animal breeding in the genomics era: challenges and opportunities for the maintenance of genetic diversity. In: Proceedings of the 62nd Annual Meeting of the European Federation of Animal Science: 29 August-2 September 2011; Stavanger. 2011. p. Contribution 11–3.
Strandberg E. Opportunities to optimize the role of functional traits in dairy breeding goals using genomics information. In: Proceedings of the 62nd Annual Meeting of the European Federation of Animal Science: 29 August-2 September 2011; Stavanger. 2011. p. Contribution 11–2.
R Development Core Team. R: A Language and environment for statistical computing. Vienna: R Foundation for Statistical Computing; 2014.
Villa-Angulo R, Matukumalli LK, Gill CA, Choi J, Van Tassell CP, Grefenstette JJ. High-resolution haplotype block structure in the cattle genome. BMC Genet. 2009;10:19.
Hickey JM, Gorjanc G. Simulated data for genomic selection and genome-wide association studies using a combination of coalescent and gene drop methods. G3 (Bethesda). 2012;2:425–7.
VanRaden PM. Efficient methods to compute genomic predictions. J Dairy Sci. 2008;91:4414–23.
Gianola D, de los Campos G, Hill WG, Manfredi E, Fernando R. Additive genetic variability and the Bayesian alphabet. Genetics. 2009;183:347–63.
Wright S. The genetical structure of populations. Ann Eugen. 1949;15:323–54.
Rendel JM, Robertson A. Estimation of genetic gain in milk yield by selection in a closed herd of dairy cattle. J Genet. 1950;50:1–8.
Cochran WG. Improvement by means of selection. In: Neyman J, editor. Proceedings of the 2nd Berkeley Symposium on Mathematical Statistics and Probability. Berkeley: University of California Press; 1951. p. 449–70.
We acknowledge input from the three reviewers and the associate editors JCM Dekkers and H Hayes whose comments improved the manuscript. GG and JH acknowledge support from the BBSRC ISP grant to The Roslin Institute.
The Roslin Institute and Royal (Dick) School of Veterinary Studies, The University of Edinburgh, Easter Bush, Midlothian, Scotland, UK
Gregor Gorjanc & John M. Hickey
Wageningen University, Animal Breeding and Genomics Centre, Wageningen, The Netherlands
Piter Bijma
Correspondence to Gregor Gorjanc.
All authors participated in the design of the study. GG performed analyses and wrote the manuscript. PB and JMH assisted in the interpretation of the results and in writing the manuscript. All authors read and approved the final manuscript.
Theoretical basis of the effect of selection on reliability. Detailed illustration on how selection reduces the reliability of selection and how this effect differs between types of estimated breeding values [31]. (PDF 7873 kb)
Equivalence between base PEV reliability and reliability of selection in unselected populations
The purpose of this appendix is to show that the two measures of reliability, the base PEV reliability and the reliability of selection, are equivalent for unselected populations. This is first shown by defining the reliability of selection as the squared correlation between the true breeding values (BV) and estimated breeding values (EBV) in an unselected population [10], and then by demonstrating that this is equivalent to the commonly used expression to compute the base PEV reliability [1, 2, 13].
The reliability of selection is defined as the squared correlation between the BV (a) and EBV (â) for selection candidates. This value is commonly presented as a single value for a group of individuals [10], which is likely based on the intuition of obtaining the sample correlation between a vector of BV and a vector of EBV. However, the expression for the correlation between two vectors results in a matrix of correlations C a,â and squaring its elements via the Hadamard product (∘) gives a matrix of squared correlations:
$$ \begin{array}{rr}\hfill Corr\left(\mathbf{a},{\widehat{\mathbf{a}}}^{\mathrm{T}}\right)={\mathbf{C}}_{\mathrm{a},\widehat{\mathrm{a}}}& \hfill = diag{\left(Var\left(\mathbf{a}\right)\right)}^{-\frac{1}{2}}Cov\left(\mathbf{a},{\widehat{\mathbf{a}}}^{\mathrm{T}}\right) diag{\left(Var\left({\widehat{\mathbf{a}}}^T\right)\right)}^{-\frac{1}{2}},\\ {}\hfill & = diag{\left({\mathbf{V}}_{\mathrm{a}}\right)}^{-\frac{1}{2}}{\mathbf{V}}_{\mathrm{a},\widehat{\mathrm{a}}} diag{\left({\mathbf{V}}_{\widehat{a}}\right)}^{-\frac{1}{2}},\hfill \\ {}\hfill {R}^2\left(\widehat{\mathbf{a}}\right)& \begin{array}{ll}=\hfill & \hfill {\mathbf{C}}_{\mathrm{a},\widehat{\mathrm{a}}}\circ {\mathbf{C}}_{\mathrm{a},\widehat{\mathrm{a}}},\end{array}\hfill \end{array} $$
where the diagonal elements are the squared correlation between BV and EBV for each individual, i.e., the reliability for each individual, while the off-diagonal elements are the squared correlation between BV of one individual and EBV of another individual, i.e., the "co-reliability" for each pair of individuals. In the expression (A1), the notation diag(X) indicates a diagonal matrix with the diagonal equal to the diagonal of matrix X. If the evaluated individuals were unrelated and all of them had a single own phenotype pre-corrected for any other effect without error, then the reliability of this evaluation would be the same for all individuals, R 2, and (A1) could be written as R 2(â) = I R 2, which is in line with the common usage [10]. However, in real applications, evaluated individuals are related, they might have different amounts of information, and phenotypes are not pre-corrected, which leads to different reliabilities for different individuals and to non-zero "co-reliabilities". In such cases, the mean of these reliabilities can be used to obtain a single measure of reliability to predict response to selection. Alternatively, the individual specific reliability could be used along with the individual specific selection intensity and generation interval as is done when response to selection is predicted for breeding programs with different "paths" of selection [17, 30].
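A short R sketch of expression (A1), assuming the covariance matrices V_a = Var(a), V_ahat = Var(â) and V_a_ahat = Cov(a, âᵀ) are available (our illustration):

```r
D_a      <- diag(1 / sqrt(diag(V_a)))
D_ahat   <- diag(1 / sqrt(diag(V_ahat)))
C_a_ahat <- D_a %*% V_a_ahat %*% D_ahat  # matrix of correlations
R2       <- C_a_ahat * C_a_ahat          # Hadamard square; diagonal = reliabilities
```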
The additive genetic covariance matrix Var(a) = V a = A σ 2 A in (A1) holds covariances between BV of selection candidates, where A is the relationship matrix between the selection candidates and σ 2 A is the additive genetic variance for the selection candidates. If the selection candidates either represent the whole unselected population or are a random sample from such a population, then σ 2 A is equal to the base additive genetic variance σ 2 A,0 in such a population. The other two components in (A1), Cov(a, â T) and Var(â), depend on σ 2 A as well as on the properties of the estimator of BV and will be worked out in the following.
Using the standard linear mixed model (1) and assuming known fixed effects (E(y) = Xb) and variance components (σ 2 A,0 and σ 2 E ), the EBV can be obtained by regressing BV on the observed phenotypes pre-corrected for fixed effects. The conditional expectation and variance of this regression can be expressed in two equivalent ways [1, 2, 13]:
$$ \begin{array}{rr}\hfill \widehat{\mathbf{a}}=E\left(\mathbf{a}\Big|\mathbf{y}\right)& \hfill =E\left(\mathbf{a}\right)+Cov\left(\mathbf{a},{\mathbf{y}}^{\mathrm{T}}\right)Var{\left(\mathbf{y}\right)}^{-1}\left(\mathbf{y}-\mathbf{X}\mathbf{b}\right),\\ {}\hfill & \hfill ={\mathbf{V}}_{\mathrm{a}}{\mathbf{Z}}^{\mathrm{T}}{\mathbf{V}}_{\mathrm{y}}^{-1}\left(\mathbf{y}-\mathbf{X}\mathbf{b}\right),\\ {}\hfill & \hfill ={\left({\mathbf{V}}_{\mathrm{a}}^{-1}+{\mathbf{Z}}^{\mathrm{T}}{\mathbf{V}}_{\mathrm{e}}^{-1}\mathbf{Z}\right)}^{-1}\mathbf{Z}{\mathbf{V}}_{\mathrm{e}}^{-1}\left(\mathbf{y}-\mathbf{X}\mathbf{b}\right),\end{array} $$
$$ \begin{array}{rr}\hfill \mathrm{V}\mathrm{a}\mathrm{r}\left(\mathbf{a}\Big|\mathbf{y}\right)& \hfill =\mathrm{V}\mathrm{a}\mathrm{r}\left(\mathbf{a}\right)-\mathrm{C}\mathrm{o}\mathrm{v}\left(\mathbf{a},{\mathbf{y}}^{\mathrm{T}}\right)\mathrm{V}\mathrm{a}\mathrm{r}{\left(\mathbf{y}\right)}^{-1}\mathrm{C}\mathrm{o}\mathrm{v}\left(\mathbf{y},{\mathbf{a}}^{\mathrm{T}}\right),\\ {}\hfill & \hfill ={\mathbf{V}}_{\mathrm{a}}-{\mathbf{V}}_{\mathrm{a}}{\mathbf{Z}}^{\mathrm{T}}{\mathbf{V}}_{\mathrm{y}}^{-1}\mathbf{Z}{\mathbf{V}}_{\mathrm{a}},\\ {}\hfill & \hfill ={\left({\mathbf{V}}_{\mathrm{a}}^{-1}+{\mathbf{Z}}^{\mathrm{T}}{\mathbf{V}}_{\mathrm{e}}^{-1}\mathbf{Z}\right)}^{-1}.\end{array} $$
where Var(a) = V a = A σ 2 A,0 is the additive genetic covariance matrix with respect to the base population, Var(e) = V e = I σ 2 E is the residual covariance matrix, and Var(y) = V y = ZV a Z T + V e is the phenotypic covariance matrix. The two equivalent expressions are shown to point out that the conditional variance of BV given the phenotypes is the variance of prediction errors (PEV) of EBV, i.e., Var(a|y) = Var(a − â), which is commonly obtained by inverting the coefficient matrix (V − 1 a + Z T V − 1 e Z).
Using (A2) it can be shown that the components of (A1), Cov(a, â T) and Var(â), are equal to [1, 2, 13]:
$$ \begin{array}{rr}\hfill Cov\left(\mathbf{a},{\widehat{\mathbf{a}}}^T\right)& \hfill ={\mathbf{V}}_{a,\widehat{a}}={\mathbf{V}}_a{\mathbf{Z}}^T{\mathbf{V}}_y^{-1}\mathbf{Z}{\mathbf{V}}_a\end{array}. $$
$$ \begin{array}{rr}\hfill Var\left(\widehat{\mathbf{a}}\right)& \hfill ={\mathbf{V}}_{\widehat{a}}={\mathbf{V}}_a{\mathbf{Z}}^T{\mathbf{V}}_y^{-1}\mathbf{Z}{\mathbf{V}}_a\end{array}. $$
which along with Var(a) gives all the required components for computing the individual specific reliabilities in (A1). Since Cov(a, â T) = Var(â) and Var(a − â) = Var(a) − Var(â) the individual reliabilities in (A1) can be equivalently expressed by contrasting the conditional variance of BV to the variance of BV, i.e., additive genetic variance [1, 2, 13]:
$$ \begin{array}{rr}\hfill {R}^2\left({\widehat{a}}_i\right)& \hfill =\frac{Var\left({\widehat{a}}_i\right)}{Var\left({a}_i\right)}=\frac{Var\left({a}_i\right)-Var\left({a}_i-{\widehat{a}}_i\right)}{Var\left({a}_i\right)}=1-\frac{Var\left({a}_i-{\widehat{a}}_i\right)}{Var\left({a}_i\right)}\end{array}. $$
Since the conditional variance of BV, Var(a i − â i ), is not affected by selection [1], the expressions (A1) and (A6) give the same reliability when they refer to the same group of individuals, i.e., when the same additive genetic variance is used in both expressions. However, the two expressions do not give the same reliability, if they refer to two different additive genetic variances. Commonly, the base additive genetic variance (σ 2 A,0 ) is used to compute the base PEV reliability (A6), while the reliability of selection is computed as squared correlation between the EBV and BV in a selected group of individuals (A1) that do not necessarily have the same additive genetic variance (σ 2 A ) as the base population.
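The equivalence can also be checked numerically with a deliberately simple R sketch (our illustration, not part of the appendix): unrelated individuals with one pre-corrected record each, so that A = I, Z = I and b = 0:

```r
set.seed(1)
n <- 10000; var_a <- 1; var_e <- 1
a <- rnorm(n, sd = sqrt(var_a))
y <- a + rnorm(n, sd = sqrt(var_e))
lambda <- var_e / var_a
a_hat  <- y / (1 + lambda)      # BLUP of a when A = I and Z = I
pev    <- var_e / (1 + lambda)  # Var(a_i - a_hat_i)
cor(a_hat, a)^2                 # (A1): approximately 0.5
1 - pev / var_a                 # (A6): exactly 0.5
```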
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Gorjanc, G., Bijma, P. & Hickey, J.M. Reliability of pedigree-based and genomic evaluations in selected populations. Genet Sel Evol 47, 65 (2015). https://doi.org/10.1186/s12711-015-0145-1
[Submitted on 11 Nov 2013 (v1), last revised 8 Apr 2014 (this version, v2)]
Title:A Notable Relation between $N$-Qubit and $2^{N-1}$-Qubit Pauli Groups via Binary ${\rm LGr}(N,2N)$
Authors:Frédéric Holweck, Metod Saniga, Péter Lévay
Abstract: Employing the fact that the geometry of the $N$-qubit ($N \geq 2$) Pauli group is embodied in the structure of the symplectic polar space $\mathcal{W}(2N-1,2)$ and using properties of the Lagrangian Grassmannian ${\rm LGr}(N,2N)$ defined over the smallest Galois field, it is demonstrated that there exists a bijection between the set of maximum sets of mutually commuting elements of the $N$-qubit Pauli group and a certain subset of elements of the $2^{N-1}$-qubit Pauli group. In order to reveal finer traits of this correspondence, the cases $N=3$ (also addressed recently by Lévay, Planat and Saniga [J. High Energy Phys. 2013 (2013), no. 9, 037, 35 pages, arXiv:1305.5689]) and $N=4$ are discussed in detail. As an apt application of our findings, we use the stratification of the ambient projective space ${\rm PG}(2^N-1,2)$ of the $2^{N-1}$-qubit Pauli group in terms of $G$-orbits, where $G \equiv {\rm SL}(2,2)\times {\rm SL}(2,2)\times\cdots\times {\rm SL}(2,2)\rtimes S_N$, to decompose $\underline{\pi}({\rm LGr}(N,2N))$ into non-equivalent orbits. This leads to a partition of ${\rm LGr}(N,2N)$ into distinguished classes that can be labeled by elements of the above-mentioned Pauli groups.
Subjects: Mathematical Physics (math-ph); Combinatorics (math.CO); Quantum Physics (quant-ph)
Journal reference: SIGMA 10 (2014), 041, 16 pages
Related DOI: https://doi.org/10.3842/SIGMA.2014.041
From: Metod Saniga [via SIGMA proxy]
[v1] Mon, 11 Nov 2013 10:27:11 UTC (17 KB)
[v2] Tue, 8 Apr 2014 05:03:59 UTC (23 KB)
Solution to Linear Algebra Hoffman & Kunze Chapter 1.4
Exercise 1.4.1
The coefficient matrix is
$$\left[\begin{array}{ccc}
\frac13 & 2 & -6\\
-4& 0& 5\\
-3&6&-13\\
-\frac73&2&-\frac83
\end{array}\right]
$$This reduces as follows:
$$\rightarrow\left[\begin{array}{ccc}
1 & 6 & -18\\
-4 & 0 & 5\\
-3 & 6 & -13\\
-7&6&-8
\end{array}\right]
\rightarrow
\left[\begin{array}{ccc}
1 & 6 & -18\\
0&24& -67\\
0&24&-67\\
0&48&-134
\end{array}\right]
\rightarrow
\left[\begin{array}{ccc}
1 & 6 & -18\\
0&24&-67\\
0&0&0\\
0&0&0
\end{array}\right]
$$$$\rightarrow
\left[\begin{array}{ccc}
1 & 6 & -18\\
0&1& -67/24\\
0&0&0\\
0&0&0
\end{array}\right]
\rightarrow
\left[\begin{array}{ccc}
1 & 0 & -5/4\\
0&1&-67/24\\
0&0&0\\
0&0&0
\end{array}\right]
$$Thus
$$x-\frac54z=0$$$$y-\frac{67}{24}z=0$$Thus the general solution is $(\frac54z, \frac{67}{24}z,z)$ for arbitrary $z\in F$.
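The reduction can be checked mechanically with SymPy (an optional verification sketch, not part of the original solution):

```python
from sympy import Matrix, Rational

# Coefficient matrix of Exercise 1.4.1
A = Matrix([
    [Rational(1, 3), 2, -6],
    [-4, 0, 5],
    [-3, 6, -13],
    [Rational(-7, 3), 2, Rational(-8, 3)],
])

R, pivots = A.rref()
print(R)       # Matrix([[1, 0, -5/4], [0, 1, -67/24], [0, 0, 0], [0, 0, 0]])
print(pivots)  # (0, 1)
```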
Exercise 1.4.2
$A$ row-reduces as follows:
$$\rightarrow
\left[\begin{array}{cc}
1 & -i \\
2 & 2 \\
i & 1+i
\end{array}\right]
\rightarrow
\left[\begin{array}{cc}
1 & -i \\
0 & 1+i \\
0 & i
\end{array}\right]
\rightarrow
\left[\begin{array}{cc}
1 & -i \\
0 & 1 \\
0 & 0
\end{array}\right]
\rightarrow
\left[\begin{array}{cc}
1 & 0 \\
0 & 1 \\
0 & 0
\end{array}\right]
$$Thus the only solution to $AX=0$ is $(0,0)$.
Exercise 1.4.3
The possible $2\times2$ row-reduced echelon matrices are
$$\left[\begin{array}{cc}
1 & 0\\
0 & 1
\end{array}\right],\quad
\left[\begin{array}{cc}
1 & x\\
0 & 0
\end{array}\right],\quad
\left[\begin{array}{cc}
0 & 1\\
0 & 0
\end{array}\right],\quad
\left[\begin{array}{cc}
0 & 0\\
0 & 0
\end{array}\right]
$$where $x$ can be any scalar.
Exercise 1.4.4
The augmented coefficient matrix is
$$\left[\begin{array}{ccc|c}
1 & -1 & 2 & 1\\
2 & 0 & 2 & 1\\
1 & -3 & 4 & 2
\end{array}\right]
$$We row reduce it as follows:
$$\rightarrow
\left[\begin{array}{ccc|c}
1 & -1 & 2 & 1\\
0 & 2 & -2 & -1\\
0 & -2 & 2 & 1
\end{array}\right]
\rightarrow
\left[\begin{array}{ccc|c}
1 & -1 & 2 & 1\\
0 & 1 & -1 & -1/2\\
0 & 0 & 0 & 0
\end{array}\right]
\rightarrow
\left[\begin{array}{ccc|c}
1 & 0 & 1 & 1/2\\
0 & 1 & -1 & -1/2\\
0 & 0 & 0 & 0
\end{array}\right]
$$Thus the system is equivalent to\begin{alignat*}{1}
x_1+x_3 &= 1/2\\
x_2-x_3 &= -1/2
\end{alignat*}Thus the solutions are parameterized by $x_3$. Setting $x_3=c$ gives $x_1=1/2-c$, $x_2=c-1/2$. Thus the general solution is
$$\left(\textstyle\frac12-c,\ \ c-\frac12,\ \ c\right)$$for $c\in\mathbb R$.
Exercise 1.4.5
An example is
$$x+y=0$$$$x+y=1$$
since no pair $(x,y)$ can satisfy both equations.
Exercise 1.4.6
The augmented coefficient matrix is as follows
$$\left[\begin{array}{cccc|c}
1 & -2 & 1 & 2 & 1\\
1 & 1 & -1 & 1& 2\\
1 & 7 & -5 & -1 & 3
\end{array}\right]
$$This row reduces as follows:
$$\rightarrow
\left[\begin{array}{cccc|c}
1 & -2 & 1 & 2 & 1\\
0 & 3 & -2 & -1 & 1\\
0 & 9 & -6 & -3 & 2
\end{array}\right]
\rightarrow
\left[\begin{array}{cccc|c}
1 & -2 & 1 & 2 & 1\\
0 & 3 & -2 & -1 & 1\\
0 & 0 & 0 & 0 & -1
\end{array}\right]
$$At this point there's no need to continue because the last row says $0x_1 + 0x_2+0x_3+0x_4=-1$. But the left hand side of this equation is zero so this is impossible.
Exercise 1.4.7
The augmented coefficient matrix is
$$\left[\begin{array}{ccccc|c}
2 & -3 & -7 & 5 & 2 & -2\\
1 & -2 & -4 & 3 & 1 & -2\\
2 & 0 & -4 & 2 & 1 & 3\\
1 & -5 & -7 & 6 & 2 & -7
\end{array}\right]
$$We row-reduce it as follows
$$\rightarrow\left[\begin{array}{ccccc|c}
1 & -2 & -4 & 3 & 1 & -2\\
0 & 1 & 1 & -1 & 0 & 2\\
0 & 4 & 4 & -4 & -1 & 7\\
0 & -3 & -3 & 3 & 1 & -5
\end{array}\right]
\rightarrow\left[\begin{array}{ccccc|c}
1 & -2 & -4 & 3 & 1 & -2\\
0 & 1 & 1 & -1 & 0 & 2\\
0 & 0 & 0 & 0 & -1 & -1\\
0 & 0 & 0 & 0 & 1 & 1
\end{array}\right]
\rightarrow\left[\begin{array}{ccccc|c}
1 & 0 & -2 & 1 & 0 & 1\\
0 & 1 & 1 & -1 & 0 & 2\\
0 & 0 & 0 & 0 & 1 & 1\\
0 & 0 & 0 & 0 & 0 & 0
\end{array}\right]
$$Thus the system is equivalent to
\begin{alignat*}{1}
x_1-2x_3+x_4 &= 1\\
x_2+x_3-x_4 &=2\\
x_5 &= 1
\end{alignat*}Thus the general solution is given by $(1+2x_3-x_4, 2+x_4-x_3, x_3, x_4, 1)$ for arbitrary $x_3,x_4\in F$.
Exercise 1.4.8
The matrix $A$ is row-reduced as follows:
$$\rightarrow\left[\begin{array}{ccc}
1 & -3 & 0\\
0 & 7 & 1\\
0 & 8 & 2
\end{array}\right]
\rightarrow
\left[\begin{array}{ccc}
1 & -3 & 0\\
0 & 7 & 1\\
0 & 0 & -6
\end{array}\right]
$$Thus for every $(y_1,y_2,y_3)$ there is a (unique) solution.
Exercise 1.4.9
We row reduce as follows
$$\left[\begin{array}{cccc|c}
3 & -6 & 2 & -1 & y_1\\
-2 & 4 & 1 & 3 & y_2\\
0 & 0 & 1 & 1 & y_3\\
1 & -2 & 1 & 0 & y_4
\end{array}\right]
\rightarrow
\left[\begin{array}{cccc|c}
1 & -2 & 1 & 0 & y_4\\
0 & 0 & -1 & -1 & y_1-3y_4\\
0 & 0 & 3 & 3 & y_2+2y_4\\
0 & 0 & 1 & 1 & y_3
\end{array}\right]
\rightarrow
\left[\begin{array}{cccc|c}
1 & -2 & 1 & 0 & y_4\\
0 & 0 & 1 & 1 & y_3\\
0 & 0 & 0 & 0 & y_1+y_3-3y_4\\
0 & 0 & 0 & 0 & y_2-3y_3+2y_4
\end{array}\right]
$$Thus $(y_1,y_2,y_3,y_4)$ must satisfy
\begin{alignat*}{1}
y_1+y_3-3y_4 &= 0\\
y_2-3y_3+2y_4 &= 0
\end{alignat*}The matrix for this system is
$$\left[\begin{array}{cccc}
1 & 0 & 1 & -3\\
0 & 1 & -3 & 2
\end{array}\right]
$$of which the general solution is
$(-y_3+3y_4,\ 3y_3-2y_4,\ y_3,\ y_4)$ for arbitrary $y_3,y_4\in F$. These are the only $(y_1,y_2,y_3,y_4)$ for which the system $AX=Y$ has a solution.
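The consistency conditions can be double-checked computationally: $AX=Y$ is solvable exactly when $Y$ is orthogonal to the left null space of $A$. A SymPy sketch (an optional verification, not part of the original solution; the basis vectors SymPy returns may be scaled differently):

```python
from sympy import Matrix

A = Matrix([[3, -6, 2, -1],
            [-2, 4, 1, 3],
            [0, 0, 1, 1],
            [1, -2, 1, 0]])

# Each basis vector v of the left null space gives a condition v . Y = 0.
for v in A.T.nullspace():
    print(v.T)  # expect multiples of (1, 0, 1, -3) and (0, 1, -3, 2)
```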
Exercise 1.4.10
There are seven possible $2\times3$ row-reduced echelon matrices:
\begin{equation}
R_1=\left[\begin{array}{ccc}
0 & 0 & 0\\
0 & 0 & 0
\end{array}\right],\quad
R_2=\left[\begin{array}{ccc}
1 & 0 & a\\
0 & 1 & b
\end{array}\right],\quad
R_3=\left[\begin{array}{ccc}
1 & a & 0\\
0 & 0 & 1
\end{array}\right],\quad
R_4=\left[\begin{array}{ccc}
1 & a & b\\
0 & 0 & 0
\end{array}\right],
\label{m1}
\end{equation}
\begin{equation}
R_5=\left[\begin{array}{ccc}
0 & 1 & a\\
0 & 0 & 0
\end{array}\right],\quad
R_6=\left[\begin{array}{ccc}
0 & 1 & 0\\
0 & 0 & 1
\end{array}\right],\quad
R_7=\left[\begin{array}{ccc}
0 & 0 & 1\\
0 & 0 & 0
\end{array}\right].
\end{equation}We must show that no two of these have exactly the same solutions. For the first one $R_1$, any $(x,y,z)$ is a solution and that's not the case for any of the other $R_i$'s. Consider next $R_7$. In this case $z=0$ and $x$ and $y$ can be anything. We can have $z\not=0$ for $R_2$, $R_4$ and $R_5$. So the only ones $R_7$ could share solutions with are $R_3$ or $R_6$. But both of those have restrictions on $x$ and/or $y$ so the solutions cannot be the same. Also $R_3$ and $R_6$ cannot have the same solutions since $R_6$ forces $y=0$ while $R_3$ does not.
Thus we have shown that if two $R_i$'s share the same solutions then they must be among $R_2$, $R_4$, and $R_5$.
The solutions for $R_2$ are $(-az, -bz, z)$, for $z$ arbitrary. The solutions for $R_4$ are $(-a'y-b'z,y,z)$ for $y,z$ arbitrary. Thus $(-b',0,1)$ is a solution for $R_4$. Suppose this is also a solution for $R_2$. Then $z=1$ so it is of the form $(-a,-b,1)$ and it must be that $(-b',0,1)=(-a,-b,1)$. Comparing the second component implies $b=0$. But if $b=0$ then $R_2$ implies $y=0$. But $R_4$ allows for arbitrary $y$. Thus $R_2$ and $R_4$ cannot share the same solutions.
The solutions for $R_2$ are $(-az, -bz, z)$, for $z$ arbitrary. The solutions for $R_5$ are $(x,-a'z,z)$ for $x,z$ arbitrary. Thus $(0,-a',1)$ is a solution for $R_5$. As before if this is a solution of $R_2$ then $a=0$. But if $a=0$ then $R_2$ forces $x=0$ while in $R_5$ $x$ can be arbitrary. Thus $R_2$ and $R_5$ cannot share the same solutions.
The solutions for $R_4$ are $(-ay-bz,y,z)$ for $y,z$ arbitrary. The solutions for $R_5$ are $(x,-a'z,z)$ for $x,z$ arbitrary. Thus setting $x=1$, $z=0$ gives $(1,0,0)$ is a solution for $R_5$. But this cannot be a solution for $R_4$ since if $y=z=0$ then first component must also be zero.
Thus we have shown that no two $R_i$ and $R_j$ have the same solutions unless $i=j$.
From http://greggrant.org
Cost of electricity by source
(Redirected from Levelized cost of electricity)
For the price of electricity, see Electricity pricing.
Different methods of electricity generation can incur significantly different costs. Calculations of these costs can be made at the point of connection to a load or to the electricity grid. The cost is typically given per kilowatt-hour or megawatt-hour. It includes the initial capital, discount rate, as well as the costs of continuous operation, fuel, and maintenance. This type of calculation assists policymakers, researchers and others to guide discussions and decision making.
The levelized cost of energy (LCOE) is a measure of a power source that allows comparison of different methods of electricity generation on a consistent basis. It is an economic assessment of the average total cost to build and operate a power-generating asset over its lifetime divided by the total energy output of the asset over that lifetime. The LCOE can also be regarded as the average minimum price at which electricity must be sold in order to break-even over the lifetime of the project.
1 Cost factors
1.1 Capital costs
1.2 Levelized cost of electricity
1.3 Avoided cost
1.4 Marginal cost of electricity
1.5 External costs of energy sources
1.6 Additional cost factors
2 Current global studies
2.1 Lazard (2018)
2.2 Bloomberg (2018)
2.3 IRENA (2018)
2.4 Banks (2018)
3 Regional and historical studies
3.6.1 Energy Information Administration
3.6.2 NREL OpenEI (2015)
3.6.3 California Energy Commission (2014)
3.6.4 Lazard (2015)
3.7 Global
3.7.1 IEA and NEA (2015)
3.8 Other studies and analysis
3.8.1 Buffett Contract (2015)
3.8.2 Sheikh Mohammed Bin Rashid solar farm (2016)
3.8.3 Brookings Institution (2014)
3.8.4 Brazilian electricity mix: the Renewable and Non-renewable Exergetic Cost (2014)
4 Renewables
4.1 Photovoltaics
4.2 Solar thermal
4.3 Wind power
Cost factors
When calculating costs, several internal cost factors have to be considered.[1] Note that these are "costs", not the actual selling price, since the price can be affected by a variety of factors such as subsidies and taxes:
Capital costs (including waste disposal and decommissioning costs for nuclear energy) – tend to be low for fossil fuel power stations; high for wind turbines, solar PV (photovoltaics); very high for waste to energy, wave and tidal, solar thermal, and nuclear.
Fuel costs – high for fossil fuel and biomass sources, low for nuclear, and zero for many renewables. Fuel costs can vary somewhat unpredictably over the life of the generating equipment, due to political and other factors.
Factors such as the costs of waste (and associated issues) and different insurance costs are not included in the following. Works power, own use or parasitic load – that is, the portion of generated power actually used to run the station's pumps and fans – has to be allowed for.
To evaluate the total cost of production of electricity, the streams of costs are converted to a net present value using the time value of money. These costs are all brought together using discounted cash flow.[2][3]
Capital costs
For power generation capacity, capital costs are often expressed as overnight cost per watt. Estimated costs are listed below (a sketch converting these figures into a levelized capital charge follows the list):
gas/oil combined cycle power plant - $1000/kW [4]
wind - $1600/kW[4]
offshore wind - $6500/kW[4]
solar PV (fixed) - $1800/kW[4]
solar PV (tracking)- $2000/kW[4]
battery storage - $2000/kW[4]
geothermal - $2800/kW[4]
coal (with SO2 and NOx controls)- $3500-3800/kW[5]
advanced nuclear - $6000/kW[4]
fuel cells - $7200/kW[4]
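To relate such overnight figures to the per-MWh numbers used elsewhere in this article, they can be annualized with a capital recovery factor and divided by annual generation. A minimal sketch; the discount rate, lifetime and capacity factors below are illustrative assumptions, not values from the cited outlook:

```python
# Convert overnight capital cost ($/kW) into a levelized capital charge ($/MWh).
def capital_charge_per_mwh(overnight_usd_per_kw, rate=0.07, years=30,
                           capacity_factor=0.40):
    # Capital recovery factor: constant annual payment per $1 of capital.
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    annual_cost_per_kw = overnight_usd_per_kw * crf   # $ per kW-year
    mwh_per_kw_year = 8760 * capacity_factor / 1000   # MWh generated per kW-year
    return annual_cost_per_kw / mwh_per_kw_year

print(capital_charge_per_mwh(1600))                        # wind, ~37 $/MWh
print(capital_charge_per_mwh(6000, capacity_factor=0.90))  # nuclear, ~61 $/MWh
```

Note this covers the capital component only; fuel and O&M must be added to reach a full LCOE.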
Levelized cost of electricity
The levelized cost of electricity (LCOE), also known as Levelized Energy Cost (LEC), is the net present value of the unit-cost of electrical energy over the lifetime of a generating asset. It is often taken as a proxy for the average price that the generating asset must receive in a market to break even over its lifetime. It is a first-order economic assessment of the cost competitiveness of an electricity-generating system that incorporates all costs over its lifetime: initial investment, operations and maintenance, cost of fuel, cost of capital.
The levelized cost is that value for which an equal-valued fixed revenue delivered over the life of the asset's generating profile would cause the project to break even. This can be roughly calculated as the net present value of all costs over the lifetime of the asset divided by the total electrical energy output of the asset.[6]
The levelized cost of electricity (LCOE) is given by the following formula (a computational sketch follows the definitions below):
L C O E = sum of costs over lifetime sum of electrical energy produced over lifetime = ∑ t = 1 n I t + M t + F t ( 1 + r ) t ∑ t = 1 n E t ( 1 + r ) t {\displaystyle \mathrm {LCOE} ={\frac {\text{sum of costs over lifetime}}{\text{sum of electrical energy produced over lifetime}}}={\frac {\sum _{t=1}^{n}{\frac {I_{t}+M_{t}+F_{t}}{\left({1+r}\right)^{t}}}}{\sum _{t=1}^{n}{\frac {E_{t}}{\left({1+r}\right)^{t}}}}}}
It : investment expenditures in the year t
Mt : operations and maintenance expenditures in the year t
Ft : fuel expenditures in the year t
Et : electrical energy generated in the year t
r : discount rate
n : expected lifetime of system or power station
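The formula can be evaluated directly from per-year cash flows. A minimal Python sketch, assuming lists of I_t, M_t, F_t and E_t for t = 1..n (the function name and the example plant figures are hypothetical):

```python
def lcoe(investment, operations, fuel, energy, rate):
    """Levelized cost: discounted lifetime costs over discounted lifetime output.

    investment, operations, fuel, energy are per-year lists of I_t, M_t, F_t,
    E_t for t = 1..n; rate is the discount rate r.
    """
    disc = [(1 + rate) ** t for t in range(1, len(energy) + 1)]
    costs = sum((i + m + f) / d
                for i, m, f, d in zip(investment, operations, fuel, disc))
    output = sum(e / d for e, d in zip(energy, disc))
    return costs / output

# Hypothetical 25-year plant: $1M invested in year 1, $20k/yr O&M, no fuel,
# 2,000 MWh generated per year, discounted at 5%; roughly 44 $/MWh.
n = 25
print(lcoe([1_000_000] + [0] * (n - 1), [20_000] * n, [0] * n, [2_000] * n, 0.05))
```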
Note: Some caution must be taken when using formulas for the levelized cost, as they often embody unseen assumptions, neglect effects like taxes, and may be specified in real or nominal levelized cost. For example, other versions of the above formula do not discount the electricity stream.[citation needed]
Typically the LCOE is calculated over the design lifetime of a plant, which is usually 20 to 40 years, and given in the units of currency per kilowatt-hour (for example AUD/kWh or EUR/kWh) or per megawatt-hour (for example AUD/MWh, as tabulated below).[7] However, care should be taken in comparing different LCOE studies and the sources of the information as the LCOE for a given energy source is highly dependent on the assumptions, financing terms and technological deployment analyzed.[8] In particular, the assumption of capacity factor has a significant impact on the calculation of LCOE. Thus, a key requirement for the analysis is a clear statement of the applicability of the analysis based on justified assumptions.[8]
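The capacity-factor sensitivity can be made concrete by reusing the lcoe() sketch above on the same hypothetical plant (all figures remain illustrative, and the function from the previous sketch is assumed to be in scope):

```python
# The same hypothetical 1 MW plant at two capacity factors: higher utilization
# spreads the same capital and O&M over more MWh, lowering the LCOE.
n = 25
for cf in (0.25, 0.45):
    energy = [1 * 8760 * cf] * n  # MWh per year for 1 MW of capacity
    cost = lcoe([1_000_000] + [0] * (n - 1), [20_000] * n, [0] * n, energy, 0.05)
    print(f"capacity factor {cf:.0%}: {cost:.0f} $/MWh")  # ~40 vs ~22 $/MWh
```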
Many scholars,[specify] such as Paul Joskow, have described limits to the "levelized cost of electricity" metric for comparing new generating sources. In particular, LCOE ignores time effects associated with matching production to demand. This happens at two levels:
Dispatchability, the ability of a generating system to come online, go offline, or ramp up or down, quickly as demand swings.
The extent to which the availability profile matches or conflicts with the market demand profile.
Thermally lethargic technologies like coal and nuclear are physically incapable of fast ramping. Capital intensive technologies such as wind, solar, and nuclear are economically disadvantaged unless generating at maximum availability since the LCOE is nearly all sunk-cost capital investment. Intermittent power sources, such as wind and solar, may incur extra costs associated with needing to have storage or backup generation available.[9] At the same time, intermittent sources can be competitive if they are available to produce when demand and prices are highest, such as solar during summertime mid-day peaks seen in hot countries where air conditioning is a major consumer.[8] Despite these time limitations, leveling costs is often a necessary prerequisite for making comparisons on an equal footing before demand profiles are considered, and the levelized-cost metric is widely used for comparing technologies at the margin, where grid implications of new generation can be neglected.
Another limitation of the LCOE metric is the influence of energy efficiency and conservation (EEC).[10] EEC has caused the electricity demand of many countries to remain flat or decline. Considering only the LCOE for utility scale plants tends to maximise generation and risks overestimating the required generation due to efficiency gains, thus "lowballing" their LCOE. For solar systems installed at the point of end use, it is more economical to invest in EEC first, then solar (resulting in a smaller required solar system than what would be needed without the EEC measures). However, designing a solar system on the basis of LCOE would cause the smaller system LCOE to increase (as the energy generation [measured in kWh] drops faster than the system cost [$]). The whole of system life cycle cost should be considered, not just the LCOE of the energy source.[10] LCOE is also less relevant to end-users than other financial considerations such as income, cashflow, mortgage, leases, rent, and electricity bills.[10] Comparing solar investments in relation to these can make it easier for end-users to make a decision, or using cost-benefit calculations "and/or an asset's capacity value or contribution to peak on a system or circuit level".[10]
Avoided cost
The US Energy Information Administration has recommended that levelized costs of non-dispatchable sources such as wind or solar may be better compared to the avoided energy cost rather than to the LCOE of dispatchable sources such as fossil fuels or geothermal. This is because introduction of fluctuating power sources may or may not avoid capital and maintenance costs of backup dispatchable sources. Levelized Avoided Cost of Energy (LACE) is the avoided cost from other sources divided by the annual output of the non-dispatchable source. However, the avoided cost is much harder to calculate accurately.[11][12]
Marginal cost of electricity
A more accurate economic assessment might be the marginal cost of electricity. This value works by comparing the added system cost of increasing electricity generation from one source versus that from other sources of electricity generation (see Merit Order).[citation needed] [13]
External costs of energy sources
See also: Environmental impact of the energy industry and Economics of new nuclear power plants
Typically pricing of electricity from various energy sources may not include all external costs – that is, the costs indirectly borne by society as a whole as a consequence of using that energy source.[14] These may include enabling costs, environmental impacts, usage lifespans, energy storage, recycling costs, or beyond-insurance accident effects.
The US Energy Information Administration predicts that coal and gas are set to be continually used to deliver the majority of the world's electricity.[15] The resulting greenhouse gas emissions are expected to contribute to rising sea levels; this is expected to result in the evacuation of millions of homes in low-lying areas, and an annual cost of hundreds of billions of dollars' worth of property damage.[16][17][18][19][20][21][22]
Furthermore, with a number of island nations becoming slowly submerged underwater due to rising sea levels,[23] massive international climate litigation lawsuits against fossil fuel users are currently[when?] beginning in the International Court of Justice.[24][25]
An EU funded research study known as ExternE, or Externalities of Energy, undertaken over the period of 1995 to 2005 found that the cost of producing electricity from coal or oil would double over its present value, and the cost of electricity production from gas would increase by 30% if external costs such as damage to the environment and to human health, from the particulate matter, nitrogen oxides, chromium VI, river water alkalinity, mercury poisoning and arsenic emissions produced by these sources, were taken into account. It was estimated in the study that these external, downstream, fossil fuel costs amount up to 1%–2% of the EU's entire Gross Domestic Product (GDP), and this was before the external cost of global warming from these sources was even included.[26][27] Coal has the highest external cost in the EU, and global warming is the largest part of that cost.[14]
A means to address a part of the external costs of fossil fuel generation is carbon pricing — the method most favored by economics for reducing global-warming emissions. Carbon pricing charges those who emit carbon dioxide (CO2) for their emissions. That charge, called a 'carbon price', is the amount that must be paid for the right to emit one tonne of CO2 into the atmosphere.[28] Carbon pricing usually takes the form of a carbon tax or a requirement to purchase permits to emit (also called "allowances").
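The arithmetic of a carbon price is straightforward to sketch; the emission intensities below are rough illustrative values rather than figures from any cited study:

```python
# How a carbon price feeds into generation cost, per MWh.
carbon_price = 50.0                        # $ per tonne of CO2
intensity = {"coal": 1.0, "gas_cc": 0.4}   # tonnes CO2 per MWh, approximate

for source, t_per_mwh in intensity.items():
    adder = carbon_price * t_per_mwh       # $/MWh added to generation cost
    print(f"{source}: +{adder:.0f} $/MWh ({adder / 10:.1f} c/kWh)")
```

At $50/tonne this adds roughly 5 c/kWh to coal-fired power, consistent with the figure quoted later in this article.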
Depending on the assumptions of possible accidents and their probabilites external costs for nuclear power vary significantly and can reach between 0.2 and 200 ct/kWh.[29] Furthermore, nuclear power is working under an insurance framework that limits or structures accident liabilities in accordance with the Paris convention on nuclear third-party liability, the Brussels supplementary convention, and the Vienna convention on civil liability for nuclear damage[30] and in the U.S. the Price-Anderson Act. It is often argued that this potential shortfall in liability represents an external cost not included in the cost of nuclear electricity; but the cost is small, amounting to about 0.1% of the levelized cost of electricity, according to a CBO study.[31]
These beyond-insurance costs for worst-case scenarios are not unique to nuclear power, as hydroelectric power plants are similarly not fully insured against a catastrophic event such as the Banqiao Dam disaster, where 11 million people lost their homes and from 30,000 to 200,000 people died, or large dam failures in general. As private insurers base dam insurance premiums on limited scenarios, major disaster insurance in this sector is likewise provided by the state.[32]
Because externalities are diffuse in their effect, external costs cannot be measured directly, but must be estimated. One approach to estimating the external costs of the environmental impact of electricity is the Methodological Convention of the Federal Environment Agency of Germany. That method arrives at external costs of electricity from lignite at 10.75 Eurocent/kWh, from hard coal 8.94 Eurocent/kWh, from natural gas 4.91 Eurocent/kWh, from photovoltaic 1.18 Eurocent/kWh, from wind 0.26 Eurocent/kWh and from hydro 0.18 Eurocent/kWh.[33] For nuclear the Federal Environment Agency indicates no value, as different studies have results that vary by a factor of 1,000. Given the huge uncertainty, it recommends valuing nuclear at the external cost of the next-worst energy source.[34] Based on this recommendation of the Federal Environment Agency, and using its own method, the Forum Ecological-social Market Economy arrives at external environmental costs of nuclear energy of 10.7 to 34 ct/kWh.[35]
Additional cost factors
Calculations often do not include wider system costs associated with each type of plant, such as long distance transmission connections to grids, or balancing and reserve costs. Calculations do not include externalities such as health damage by coal plants, nor the effect of CO2 emissions on climate change, ocean acidification and eutrophication, or ocean current shifts. Decommissioning costs of power plants are usually not included (nuclear power plants in the United States are an exception, because the cost of decommissioning is included in the price of electricity per the Nuclear Waste Policy Act), so the calculation is not full cost accounting. These types of items can be explicitly added as necessary depending on the purpose of the calculation. It has little relation to the actual price of power, but assists policy makers and others to guide discussions and decision making.[citation needed]
These are not minor factors but very significantly affect all responsible power decisions:
Comparisons of life-cycle greenhouse gas emissions show coal, for instance, to be radically higher in terms of GHGs than any alternative. Accordingly, in the analysis below, carbon captured coal is generally treated as a separate source rather than being averaged in with other coal.
Other environmental concerns with electricity generation include acid rain, ocean acidification and effect of coal extraction on watersheds.
Various human health concerns with electricity generation, including asthma and smog, now dominate decisions in developed nations that incur health care costs publicly. A Harvard University Medical School study estimates the US health costs of coal alone at between 300 and 500 billion US dollars annually.[36]
While cost per kWh of transmission varies drastically with distance, the long complex projects required to clear or even upgrade transmission routes make even attractive new supplies often uncompetitive with conservation measures (see below), because the timing of payoff must take the transmission upgrade into account.
Current global studies
Lazard (2018)
In November, 2018, Lazard found that not only are utility-scale solar and wind cheaper than fossil fuels, "[i]n some scenarios, alternative energy costs have decreased to the point that they are now at or below the marginal cost of conventional generation." Overall, Lazard found "The low end levelized cost of onshore wind-generated energy is $29/MWh, compared to an average illustrative marginal cost of $36/MWh for coal. The levelized cost of utility-scale solar is nearly identical to the illustrative marginal cost of coal, at $36/MWh. This comparison is accentuated when subsidizing onshore wind and solar, which results in levelized costs of energy of $14/MWh and $32/MWh, respectively. ... The mean levelized cost of energy of utility-scale PV technologies is down approximately 13% from last year and the mean levelized cost of energy of onshore wind has declined almost 7%."[37]
Bloomberg (2018)
Bloomberg New Energy Finance estimates a "global LCOE for onshore wind [of] $55 per megawatt-hour, down 18% from the first six months of [2017], while the equivalent for solar PV without tracking systems is $70 per MWh, also down 18%." Bloomberg does not provide its global public LCOEs for fossil fuels, but it notes in India they are significantly more expensive: "BNEF is now showing benchmark LCOEs for onshore wind of just $39 per MWh, down 46% on a year ago, and for solar PV at $41, down 45%. By comparison, coal comes in at $68 per MWh, and combined-cycle gas at $93." [38][39]
IRENA (2018)
The International Renewable Energy Agency (IRENA) released a study based on comprehensive international datasets in January 2018 which projects the fall by 2020 of the kilowatt cost of electricity from utility scale renewable projects such as onshore wind farms to a point equal or below that of electricity from conventional sources.[40]
Banks (2018)
The European Bank for Reconstruction and Development (EBRD) says that "renewables are now cheapest energy source", elaborating: "the Bank believes that renewable energy markets in many of the countries where it invests have reached a stage where the introduction of competitive auctions will lead both to a steep drop in electricity prices and an increase in investment." [41] The World Bank President Jim Yong Kim agreed on 10 October 2018: "We are required by our by-laws to go with the lowest cost option, and renewables have now come below the cost of [fossil fuels]." [42]
Regional and historical studies
LCOE in AUD per MWh for some coal and wind technologies (2012) from the Australian Technology Assessment (2012), Table 5.2.1.[43]
Technology | Cost with CO2 price | Cost without CO2 price
Supercritical brown coal $162 $95
Supercritical brown coal with CCS $205 $192
Supercritical black coal $135 – $145 $84 – $94
Supercritical black coal with CCS $162 – $205 $153 – $196
Wind $111 – $122 $111 – $122
LCOEs by source in Australia in 2012.
According to various studies, the cost for wind and solar has dramatically reduced since 2006. For example, the Australian Climate Council states that over the 5 years between 2009–2014 solar costs fell by 75% making them comparable to coal, and are expected to continue dropping over the next 5 years by another 45% from 2014 prices.[44] They also found that wind has been cheaper than coal since 2013, and that coal and gas will become less viable as subsidies are withdrawn and there is the expectation that they will eventually have to pay the costs of pollution.[44]
A CO2CRC report published on 27 November 2015, titled "Wind, solar, coal and gas to reach similar costs by 2030", provides the following updated situation in Australia. "The updated LCOE analysis finds that in 2015 natural gas combined cycle and supercritical pulverised coal (both black and brown) plants have the lowest LCOEs of the technologies covered in the study. Wind is the lowest cost large-scale renewable energy source, while rooftop solar panels are competitive with retail electricity prices. By 2030 the LCOE ranges of both conventional coal and gas technologies as well as wind and large-scale solar converge to a common range of A$50 to A$100 per megawatt hour."
An updated report, posted on 27 September 2017 and titled "Renewables will be cheaper than coal in the future. Here are the numbers", indicated that a 100% renewables system is competitive with new-build supercritical (ultrasupercritical) coal, which, according to the Jacobs calculations in the report link above, would come in at around A$75(80) per MWh between 2020 and 2050. This projection for supercritical coal is consistent with other studies by the CO2CRC in 2015 (A$80 per MWh) and used by CSIRO in 2017 (A$65–80 per MWh).
The International Energy Agency and EDF have estimated for 2011 the following costs.[citation needed] For nuclear power, they include the costs due to new safety investments to upgrade the French nuclear plant after the Fukushima Daiichi nuclear disaster; the cost for those investments is estimated at 4 €/MWh. Concerning solar power, the estimate of 293 €/MWh is for a large plant capable of producing in the range of 50–100 GWh/year located in a favorable location (such as in Southern Europe). For a small household plant that can produce around 3 MWh/year, the cost is between 400 and 700 €/MWh, depending on location. Solar power was by far the most expensive renewable source of electricity among the technologies studied, although increasing efficiency and longer lifespan of photovoltaic panels together with reduced production costs have made this source of energy more competitive since 2011. By 2017, the cost of photovoltaic solar power had decreased to less than 50 €/MWh.
French LCOE in €/MWh (2011)
Technology | Cost in 2011 | Updated cost (later estimate, where given)
Hydro power 20
Nuclear (with State-covered insurance costs) 50 50
Nuclear EPR 100[45]
Natural gas turbines without CO2 capture 61
Onshore wind 69 60[45]
Solar farms 293 43.24[46]
Comparison of the levelized cost of electricity for some newly built renewable and fossil-fuel based power stations in EuroCent per kWh (Germany, 2018)[47]
Note: employed technologies and LCOE differ by country and change over time.
In November 2013, the Fraunhofer Institute for Solar Energy Systems ISE assessed the levelised generation costs for newly built power plants in the German electricity sector.[48] PV systems reached LCOE between 0.078 and 0.142 Euro/kWh in the third quarter of 2013, depending on the type of power plant (ground-mounted utility-scale or small rooftop solar PV) and average German insolation of 1000 to 1200 kWh/m² per year (GHI). There are no LCOE-figures available for electricity generated by recently built German nuclear power plants as none have been constructed since the late 1980s. An update of the ISE study was published in March 2018.[47]
German LCOE in €/MWh
Technology | ISE (2013) low | ISE (2013) high | ISE (2018) low | ISE (2018) high
Coal-fired power plants brown coal 38 53 46 80
hard coal 63 80 63 99
CCGT power plants 75 98 78 100
Wind Power Onshore wind farms 45 107 40 82
Offshore wind farms 119 194 75 138
Solar PV systems 78 142 37 115
Biogas power plant 135 250 101 147
Source: Fraunhofer ISE (2013) – Levelized cost of electricity renewable energy technologies[48]
Source: Fraunhofer ISE (2018) – Stromgestehungskosten erneuerbare Energien[47]
A 2010 study by the Japanese government (pre-Fukushima disaster), called the Energy White Paper,[citation needed] concluded the cost per kilowatt-hour was ¥49 for solar, ¥10 to ¥14 for wind, and ¥5 or ¥6 for nuclear power.
Masayoshi Son, an advocate for renewable energy, however, has pointed out that the government estimates for nuclear power did not include the costs for reprocessing the fuel or disaster insurance liability. Son estimated that if these costs were included, the cost of nuclear power was about the same as wind power.[49][50][51]
The Institution of Engineers and Shipbuilders in Scotland commissioned a former Director of Operations of the British National Grid, Colin Gibson, to produce a report on generation levelised costs that for the first time would include some of the transmission costs as well as the generation costs. This was published in December 2011.[52] The institution seeks to encourage debate of the issue, and has taken the unusual step among compilers of such studies of publishing a spreadsheet.[53]
On 27 February 2015 Vattenfall Vindkraft AS agreed to build the Horns Rev 3 offshore wind farm at a price of 10.31 Eurocent per kWh. This has been quoted as below £100 per MWh.
In 2013 in the United Kingdom for a new-to-build nuclear power plant (Hinkley Point C: completion 2023), a feed-in tariff of £92.50/MWh (around 142 USD/MWh) plus compensation for inflation with a running time of 35 years was agreed.[54][55]
The Department for Business, Energy and Industrial Strategy (BEIS) publishes regular estimates of the costs of different electricity generation sources, following on the estimates of the merged Department of Energy and Climate Change (DECC). Levelised cost estimates for new generation projects begun in 2015 are listed in the table below.[56]
Estimated UK LCOE for projects starting in 2015, £/MWh
Power generating technology | Low | Central | High
Wind Onshore 47 62 76
Offshore 90 102 115
Solar Large-scale PV (Photovoltaic) 71 80 94
Nuclear PWR (Pressurized Water Reactor)(a) 82 93 121
Biomass 85 87 88
Natural Gas Combined Cycle Gas Turbine 65 66 68
CCGT with CCS (Carbon capture and storage) 102 110 123
Open-Cycle Gas Turbine 157 162 170
Coal Advanced Supercritical Coal with Oxy-comb. CCS 124 134 153
IGCC (Integrated Gasification Combined Cycle) with CCS 137 148 171
(a) new nuclear power: guaranteed strike price of £92.50/MWh for Hinkley Point C in 2023[57][58])
Projected LCOE in the U.S. by 2020 (as of 2015) in dollars per MWh[59]
The following data are from the Energy Information Administration's (EIA) Annual Energy Outlook released in 2015 (AEO2015). They are in dollars per megawatt-hour (2013 USD/MWh). These figures are estimates for plants going into service in 2020.[12] The LCOE below is calculated based on a 30-year recovery period using a real after-tax weighted average cost of capital (WACC) of 6.1%. For carbon-intensive technologies 3 percentage points are added to the WACC (approximately equivalent to a fee of $15 per metric ton of carbon dioxide, CO2).
Since 2010, the US Energy Information Administration (EIA) has published the Annual Energy Outlook (AEO), with yearly LCOE projections for future utility-scale facilities to be commissioned in about five years' time. In 2015, the EIA was criticized by the Advanced Energy Economy (AEE) Institute, after its release of the AEO 2015 report, for "consistently underestimat[ing] the growth rate of renewable energy, leading to 'misperceptions' about the performance of these resources in the marketplace". AEE points out that the average power purchase agreement (PPA) for wind power was already at $24/MWh in 2013. Likewise, PPAs for utility-scale solar PV are seen at current levels of $50–$75/MWh.[60] These figures contrast strongly with EIA's estimated LCOE of $125/MWh (or $114/MWh including subsidies) for solar PV in 2020.[61]
Projected LCOE in the U.S. by 2022 (as of 2016), $/MWh | Minimum | Capacity-weighted average | Maximum
Wind Onshore 43.4 55.8 75.6
Wind Offshore 136.6 NB 212.9
Solar PV 58.3 73.7 143.0
Geothermal 42.8 44.0 53.4
Hydro 57.4 63.9 69.8
Natural Gas-fired Conventional Combined Cycle 52.4 58.6 83.2
Natural Gas-fired Advanced Combined Cycle 51.6 53.8 81.7
Natural Gas-fired Advanced CC with CCS 63.1 NB 90.4
Natural Gas-fired Conventional Combustion Turbine 98.8 100.7 148.3
Natural Gas-fired Advanced Combustion Turbine 85.9 87.1 129.8
Biomass 84.8 97.7 125.3
Advanced Nuclear 95.9 96.2 104.3
Solar Thermal 176.7 NB 372.8
Coal with 30% carbon sequestration 128.9 NB 196.3
The electricity sources with the greatest decrease in estimated costs over the period 2010 to 2019 were solar photovoltaic (down 88%), onshore wind (down 71%) and advanced natural gas combined cycle (down 49%).
For utility-scale generation put into service in 2040, the EIA estimated in 2015 that there would be further reductions in the constant-dollar cost of concentrated solar power (CSP) (down 18%), solar photovoltaic (down 15%), offshore wind (down 11%), and advanced nuclear (down 7%). The cost of onshore wind was expected to rise slightly (up 2%) by 2040, while natural gas combined cycle electricity was expected to increase 9% to 10% over the period.[61]
Historical summary of EIA's LCOE projections (2010–2019)
Estimate in $/MWh: Release year [ref] | In-service year | Coal convent'l | Nat. gas CC convent'l | Nat. gas CC advanced | Advanced nuclear | Wind onshore | Wind offshore | Solar PV | Solar thermal
2010 [62] 2016 100.4 83.1 79.3 119.0 149.3 191.1 396.1 256.6
2011 [63] 2016 95.1 65.1 62.2 114.0 96.1 243.7 211.0 312.2
2012 [64] 2017 97.7 66.1 63.1 111.4 96.0 N/A 152.4 242.0
2013 [65] 2018 100.1 67.1 65.6 108.4 86.6 221.5 144.3 261.5
2014 [66] 2019 95.6 66.3 64.4 96.1 80.3 204.1 130.0 243.1
2016 [67] 2022 NB 58.1 57.2 102.8 64.5 158.1 84.7 235.9
2017 [68] 2022 NB 58.6 53.8 96.2 55.8 NB 73.7 NB
2018 [69] 2022 NB 48.3 48.1 90.1 48.0 124.6 59.1 NB
2019 [69] 2023 NB 40.8 40.2 NB 42.8 117.9 48.8 NB
Nominal change 2010–2019 | n/a | −51% | −49% | n/a | −71% | −38% | −88% | n/a
Note: Projected LCOEs are adjusted for inflation and calculated in constant dollars based on two years prior to the release year of the estimate (a small deflation sketch follows these notes).
Estimates are given without any subsidies. Transmission costs for non-dispatchable sources are on average much higher.
NB = "Not built" (No capacity additions are expected.)
NREL OpenEI (2015)
OpenEI, sponsored jointly by the US DOE and the National Renewable Energy Laboratory (NREL), has compiled a historical cost-of-generation database[70] covering a wide variety of generation sources. Because the data is open source it may be subject to frequent revision.
LCOE from OpenEI DB as of June, 2015
Plant type | Min | Median | Max (USD/MWh) | Data source year
Distributed Generation 10 70 130 2014
Hydropower Conventional 30 70 100 2011
Small Hydropower 140 2011
Wind Onshore (land based) 40 80 2014
Offshore 100 200 2014
Natural Gas Combined Cycle 50 80 2014
Combustion Turbine 140 200 2014
Coal Pulverized, scrubbed 60 150 2014
Pulverized, unscrubbed 40 2008
IGCC, gasified 100 170 2014
Solar Photovoltaic 60 110 250 2014
CSP 100 220 2014
Geothermal Hydrothermal 50 100 2011
Blind 100 2011
Enhanced 80 130 2014
Biopower 90 110 2014
Fuel Cell 100 160 2014
Nuclear 90 130 2014
Ocean 230 240 250 2011
Only Median value = only one data point.
Only Max + Min value = Only two data points
California Energy Commission (2014)
LCOE data from the California Energy Commission report titled "Estimated Cost of New Renewable and Fossil Generation in California".[71] The model data was calculated for all three classes of developers: merchant, investor-owned utility (IOU), and publicly owned utility (POU).
Technology and size | Year 2013 (nominal $/MWh): merchant / IOU / POU | Year 2024 (nominal $/MWh): merchant / IOU / POU
Generation Turbine 49.9 MW 662.81 2215.54 311.27 884.24 2895.90 428.20
Generation Turbine 100 MW 660.52 2202.75 309.78 881.62 2880.53 426.48
Generation Turbine – Advanced 200 MW 403.83 1266.91 215.53 533.17 1615.68 299.06
Combined Cycle 2CTs No Duct Firing 500 MW 116.51 104.54 102.32 167.46 151.88 150.07
Combined Cycle 2CTs With Duct Firing 500 MW 115.81 104.05 102.04 166.97 151.54 149.88
Biomass Fluidized Bed Boiler 50 MW 122.04 141.53 123.51 153.89 178.06 156.23
Geothermal Binary 30 MW 90.63 120.21 84.98 109.68 145.31 103.00
Geothermal Flash 30 MW 112.48 146.72 109.47 144.03 185.85 142.43
Solar Parabolic Trough W/O Storage 250 MW 168.18 228.73 167.93 156.10 209.72 156.69
Solar Parabolic Trough With Storage 250 MW 127.40 189.12 134.81 116.90 171.34 123.92
Solar Power Tower W/O Storage 100 MW 152.58 210.04 151.53 133.63 184.24 132.69
Solar Power Tower With Storage 100 MW 6HR 145.52 217.79 153.81 132.78 196.47 140.58
Solar Power Tower With Storage 100 MW 11HR 114.06 171.72 120.45 103.56 154.26 109.55
Solar Photovoltaic (Thin Film) 100 MW 111.07 170.00 121.30 81.07 119.10 88.91
Solar Photovoltaic (Single-Axis) 100 MW 109.00 165.22 116.57 98.49 146.20 105.56
Solar Photovoltaic (Thin Film) 20 MW 121.31 186.51 132.42 93.11 138.54 101.99
Solar Photovoltaic (Single-Axis) 20 MW 117.74 179.16 125.86 108.81 162.68 116.56
Wind Class 3 100 MW 85.12 104.74 75.8 75.01 91.90 68.17
Wind Class 4 100 MW 84.31 103.99 75.29 75.77 92.88 68.83
In November 2015, the investment bank Lazard, headquartered in New York, published its ninth annual study on the current electricity production costs of photovoltaics in the US compared to conventional power generators. The best large-scale photovoltaic power plants can produce electricity at 50 USD per MWh, with an upper limit of 60 USD per MWh. In comparison, coal-fired plants are between 65 USD and 150 USD per MWh, nuclear power at 97 USD per MWh. Small photovoltaic power plants on roofs of houses are still at 184–300 USD per MWh, but can do without electricity transport costs. Onshore wind turbines are 32–77 USD per MWh. One drawback is the intermittency of solar and wind power. The study suggests batteries as a storage solution, but these are so far still expensive.[72][73]
Lazard's long-standing Levelized Cost of Energy (LCOE) report is widely considered an industry benchmark. In 2015 Lazard published its inaugural Levelized Cost of Storage (LCOS) report, which was developed by the investment bank Lazard in collaboration with the energy consulting firm Enovation.[74]
Below is the complete list of LCOEs by source from the investment bank Lazard.[72]
Plant type | Low (USD/MWh) | High (USD/MWh)
Energy Efficiency 0 50
Wind 32 77
Solar PV-Thin Film Utility Scale 50 60
Solar PV-Crystalline Utility Scale 58 70
Solar PV-Rooftop Residential 184 300
Solar PV-Rooftop C&I 109 193
Solar Thermal with Storage 119 181
Microturbine 79 89
Geothermal 82 117
Biomass Direct 82 110
Fuel Cell 106 167
Natural Gas Reciprocating Engine 68 101
Gas Combined Cycle 52 78
Gas Peaking 165 218
IGCC 96 183
Nuclear 97 136
Coal 65 150
Battery Storage ** **
Diesel Reciprocating Engine 212 281
NOTE: ** Battery storage is no longer included in this report (2015). It has been rolled into its own separate report, LCOS 1.0, developed in consultation with Enovation Partners (see charts below).
Below are the LCOSs for different battery technologies. This category has traditionally been filled by Diesel Engines. These are "Behind the meter" applications.[75]
Battery application and technology | Low ($/MWh) | High ($/MWh)
MicroGrid Flow Battery 429 1046
MicroGrid Lead-Acid 433 946
MicroGrid Lithium-Ion 369 562
MicroGrid Sodium 411 835
MicroGrid Zinc 319 416
Island Flow Battery 593 1231
Island Lead-Acid 700 1533
Island Lithium-Ion 581 870
Island Sodium 663 1259
Island Zinc 523 677
Commercial and Industrial Flow Battery 349 1083
Commercial and Industrial Lead-Acid 529 1511
Commercial and Industrial Lithium-Ion 351 838
Commercial and Industrial Sodium 444 1092
Commercial and Industrial Zinc 310 452
Commercial Appliance Flow Battery 974 1504
Commercial Appliance Lead-Acid 928 2291
Commercial Appliance Lithium-Ion 784 1363
Commercial Appliance Zinc 661 833
Residential Flow Battery 721 1657
Residential Lead-Acid 1101 2238
Residential Lithium-Ion 1034 1596
Below are the LCOSs for different battery technologies. This category has traditionally been filled by Natural Gas Engines. These are "In front of the meter" applications.[75]
Transmission System Compressed Air 192 192
Transmission System Flow Battery 290 892
Transmission System Lead-Acid 461 1429
Transmission System Lithium-Ion 347 739
Transmission System Pumped Hydro 188 274
Transmission System Sodium 396 1079
Transmission System Zinc 230 376
Peaker Replacement Flow Battery 248 927
Peaker Replacement Lead-Acid 419 1247
Peaker Replacement Lithium-Ion 321 658
Peaker Replacement Sodium 365 948
Peaker Replacement Zinc 221 347
Frequency Regulation Flywheel 276 989
Frequency Regulation Lithium-Ion 211 275
Distribution Services Flow Battery 288 923
Distribution Services Lead-Acid 516 1692
Distribution Services Lithium-Ion 400 789
Distribution Services Sodium 426 1129
Distribution Services Zinc 285 426
PV Integration Flow Battery 373 950
PV Integration Lead-Acid 402 1068
PV Integration Lithium-Ion 355 686
PV Integration Sodium 379 957
PV Integration Zinc 245 345
Gas Peaker 165 218
On December 15, 2016 Lazard released version 10[76] of their LCOE report and version 2[77] of their LCOS report.
Solar PV-Community 78 135
Solar PV-Rooftop C&I 88 193
Solar Thermal Tower with Storage 119 182
On November 2, 2017 the investment bank Lazard released version 11[78] of their LCOE report and version 3[79] of their LCOS report.[80]
Generation Type
Solar PV - Crystalline Utility Scale 46 53
Solar PV - Thin Film Utility Scale 43 48
Solar PV - Community 76 150
Solar PV - Rooftop Residential 187 319
Solar PV - Rooftop C&I 85 194
Solar Thermal Tower with Storage 98 181
Nuclear 112 183
Below are the unsubsidized LCOSs for different battery technologies for "Behind the Meter" (BTM) applications.[79]
Commercial Lithium-Ion 891 985
Commercial Lead-Acid 1057 1154
Commercial Advanced Lead 950 1107
Residential Advanced Lead 1138 1188
Below are the Unsubsidized LCOSs for different battery technologies "Front of the Meter" (FTM) applications.[79]
Peaker Replacement Flow Battery(V) 209 413
Peaker Replacement Flow Battery(Zn) 286 315
Distribution Flow Battery(V) 184 338
Distribution Lithium-Ion 272 338
Microgrid Flow Battery(V) 273 406
Note: Flow battery value ranges are estimates.
IEA and NEA (2015)
The International Energy Agency and the Nuclear Energy Agency published a joint study in 2015 on LCOE data internationally.[81][82]
Other studies and analysis
Buffett Contract (2015)
Under a power purchase agreement signed in the United States in July 2015, solar power will be sold for 3.87 US cents per kilowatt-hour (38.7 USD/MWh) over a period of 20 years. The solar system that produces this power is located in Nevada (USA) and has 100 MW capacity.[83]
Sheikh Mohammed Bin Rashid solar farm (2016)
In the spring of 2016 a winning bid of 2.99 US cents per kilowatt-hour of photovoltaic solar energy was achieved for the next (800 MW capacity) phase of the Sheikh Mohammed Bin Rashid solar farm in Dubai.[84]
Brookings Institution (2014)
In 2014, the Brookings Institution published The Net Benefits of Low and No-Carbon Electricity Technologies which states, after performing an energy and emissions cost analysis, that "The net benefits of new nuclear, hydro, and natural gas combined cycle plants far outweigh the net benefits of new wind or solar plants", with the most cost effective low carbon power technology being determined to be nuclear power.[85][86]
Brazilian electricity mix: the Renewable and Non-renewable Exergetic Cost (2014)
Exergy costs of Integrated Brazilian Electricity Mix
Since exergy stands for the useful energy required for an economic activity to be accomplished, it is reasonable to evaluate the cost of energy on the basis of its exergy content. Besides, as exergy can be considered a measure of the departure from environmental conditions, it also serves as an indicator of environmental impact, taking into account both the efficiency of the supply chain (from primary exergy inputs) and the efficiency of the production processes. In this way, exergoeconomy can be used to rationally distribute the exergy costs and CO2 emission costs among the products and by-products of a highly integrated Brazilian electricity mix. Based on thermoeconomy methodologies, some authors[87] have shown that exergoeconomy provides an opportunity to quantify the renewable and non-renewable specific exergy consumption; to properly allocate the associated CO2 emissions among the streams of a given production route; as well as to determine the overall exergy conversion efficiency of the production processes. Accordingly, the non-renewable unit exergy cost (cNR) [kJ/kJ] is defined as the rate of non-renewable exergy necessary to produce one unit of exergy rate/flow rate of a substance, fuel, electricity, work or heat flow, whereas the total unit exergy cost (cT) includes the renewable (cR) and non-renewable unit exergy costs. Analogously, the CO2 emission cost (cCO2) [gCO2/kJ] is defined as the rate of CO2 emitted to obtain one unit of exergy rate/flow rate.[87]
European PV LCOE range projection 2010–2020 (in €-cts/kWh)[88]
Price history of silicon PV cells since 1977
Photovoltaic prices have fallen from $76.67 per watt in 1977 to nearly $0.13 per watt for crystalline silicon solar cells in May 2019, with module prices at about $0.23 per watt.[89][90] This is seen as evidence supporting Swanson's law, which states that solar cell prices fall 20% for every doubling of cumulative shipments. (By comparison, Moore's law calls for a doubling of transistor count every two years.)
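Swanson's law is a simple learning curve: the price falls by a fixed fraction with each doubling of cumulative shipments. A sketch (the function name and example figures are illustrative, not data from the cited sources):

```python
import math

def swanson_price(initial_price, initial_cum, cum, learning=0.20):
    """Price after cumulative shipments grow from initial_cum to cum,
    falling by the learning fraction per doubling."""
    doublings = math.log2(cum / initial_cum)
    return initial_price * (1 - learning) ** doublings

# Three doublings of cumulative shipments roughly halve the price: 0.8^3 = 0.512
print(round(swanson_price(1.00, 10, 80), 2))
```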
By 2011, the price of PV modules per MW had fallen by 60% since 2008, according to Bloomberg New Energy Finance estimates, putting solar power for the first time on a competitive footing with the retail price of electricity in some sunny countries; an alternative and consistent price decline figure of 75% from 2007 to 2012 has also been published,[91] though it is unclear whether these figures are specific to the United States or generally global. The levelised cost of electricity (LCOE) from PV is competitive with conventional electricity sources in an expanding list of geographic regions,[8] particularly when the time of generation is included, as electricity is worth more during the day than at night.[92] There has been fierce competition in the supply chain, and further improvements in the levelised cost of energy for solar lie ahead, posing a growing threat to the dominance of fossil fuel generation sources in the next few years.[93] As time progresses, renewable energy technologies generally get cheaper,[94][95] while fossil fuels generally get more expensive:
The less solar power costs, the more favorably it compares to conventional power, and the more attractive it becomes to utilities and energy users around the globe. Utility-scale solar power [could in 2011] be delivered in California at prices well below $100/MWh ($0.10/kWh) less than most other peak generators, even those running on low-cost natural gas. Lower solar module costs also stimulate demand from consumer markets where the cost of solar compares very favourably to retail electric rates.[96]
In the year 2015, First Solar agreed to supply solar power at 3.87 cents/kWh levelised price from its 100 MW Playa Solar 2 project which is far cheaper than the electricity sale price from conventional electricity generation plants.[97] From January 2015 through May 2016, records have continued to fall quickly, and solar electricity prices, which have reached levels below 3 cents/kWh, continue to fall.[98] In August 2016, Chile announced a new record low contract price to provide solar power for $29.10 per megawatt-hour (MWh).[99] In September 2016, Abu Dhabi announced a new record breaking bid price, promising to provide solar power for $24.2 per MWh[100] In October 2017, Saudi Arabia announced a further low contract price to provide solar power for $17.90 per MWh.[101]
With a carbon price of $50/ton (which would raise the price of coal-fired power by 5c/kWh), solar PV is cost-competitive in most locations. The declining price of PV has been reflected in rapidly growing installations, totaling a worldwide cumulative capacity of 297 GW by end 2016. According to some estimates total investment in renewables for 2011 exceeded investment in carbon-based electricity generation.[102]
In the case of self consumption, payback time is calculated based on how much electricity is not bought from the grid. Additionally, using PV solar power to charge DC batteries, as used in plug-in hybrid electric vehicles and electric vehicles, leads to greater efficiencies, but higher costs. Traditionally, DC generated electricity from solar PV must be converted to AC for buildings, at an average 10% loss during the conversion. Inverter technology is rapidly improving and current equipment has reached 99% efficiency for small scale residential,[103] while commercial scale three-phase equipment can reach well above 98% efficiency. However, an additional efficiency loss occurs in the transition back to DC for battery driven devices and vehicles; using various interest rates and energy price changes, present values were calculated that range from $2,057.13 to $8,213.64 (analysis from 2009).[104]
It is also possible to combine solar PV with other technologies to make hybrid systems, which enable more stand-alone systems. The calculation of LCOEs becomes more complex, but can be done by aggregating the costs and the energy produced by each component. For example, PV combined with cogeneration and batteries[105] can reduce energy- and electricity-related greenhouse gas emissions as compared to conventional sources.[106]
The LCOE of solar thermal power with energy storage, which can operate round the clock on demand, fell to AU$78/MWh (US$61/MWh) in August 2017.[107] Though solar thermal plants with energy storage can work as stand-alone systems, combination with solar PV power can deliver even cheaper power.[108] Cheaper and dispatchable solar thermal storage power need not depend on costly or polluting coal/gas/oil/nuclear based power generation for ensuring stable grid operation.[109][110]
When a solar thermal storage plant is forced to idle due to lack of local sunlight on cloudy days, it is possible to consume cheap excess infirm power from solar PV, wind and hydro power plants (similar to a less efficient, huge-capacity and low-cost battery storage system) by heating the molten salt to a higher temperature, and then converting the stored thermal energy into electricity during the peak demand hours when the electricity sale price is profitable.[111][112]
NREL projection: the LCOE of U.S. wind power will decline by 25% from 2012 to 2030.[113]
Estimated cost per MWh for wind power in Denmark as of 2012
Current land-based wind
In the windy great plains expanse of the central United States new-construction wind power costs in 2017 are compellingly below costs of continued use of existing coal burning plants. Wind power can be contracted via a power purchase agreement at two cents per kilowatt hour while the operating costs for power generation in existing coal-burning plants remain above three cents.[114]
Current offshore wind
In 2016 the Norwegian Wind Energy Association (NORWEA) estimated the LCoE of a typical Norwegian wind farm at 44 €/MWh, assuming a weighted average cost of capital of 8% and an annual 3,500 full load hours, i.e. a capacity factor of 40%. NORWEA went on to estimate the LCoE of the 1 GW Fosen Vind onshore wind farm which is expected to be operational by 2020 to be as low as 35 €/MWh to 40 €/MWh.[115] In November 2016, Vattenfall won a tender to develop the Kriegers Flak windpark in the Baltic Sea for 49.9 €/MWh,[116] and similar levels were agreed for the Borssele offshore wind farms. As of 2016, this is the lowest projected price for electricity produced using offshore wind.
Historic levels
In 2004, wind energy cost a fifth of what it did in the 1980s, and some expected that downward trend to continue as larger multi-megawatt turbines were mass-produced.[117] As of 2012[update] capital costs for wind turbines are substantially lower than 2008–2010 but are still above 2002 levels.[118] A 2011 report from the American Wind Energy Association stated, "Wind's costs have dropped over the past two years, in the range of 5 to 6 cents per kilowatt-hour recently.... about 2 cents cheaper than coal-fired electricity, and more projects were financed through debt arrangements than tax equity structures last year.... winning more mainstream acceptance from Wall Street's banks.... Equipment makers can also deliver products in the same year that they are ordered instead of waiting up to three years as was the case in previous cycles.... 5,600 MW of new installed capacity is under construction in the United States, more than double the number at this point in 2010. 35% of all new power generation built in the United States since 2005 has come from wind, more than new gas and coal plants combined, as power providers are increasingly enticed to wind as a convenient hedge against unpredictable commodity price moves."[119]
This cost has additionally reduced as wind turbine technology has improved. There are now longer and lighter wind turbine blades, improvements in turbine performance and increased power generation efficiency. Also, wind project capital and maintenance costs have continued to decline.[120] For example, the wind industry in the USA in 2014 was able to produce more power at lower cost by using taller wind turbines with longer blades, capturing the faster winds at higher elevations. This opened up new opportunities in Indiana, Michigan, and Ohio. The price of power from wind turbines built 300 to 400 ft (91 to 122 m) above the ground can now compete with conventional fossil fuels like coal. Prices have fallen to about 4 cents per kilowatt-hour in some cases and utilities have been increasing the amount of wind energy in their portfolio, saying it is their cheapest option.[121]
Energy portal
Renewable energy portal
Comparisons of life-cycle greenhouse gas emissions
Economics of new nuclear power plants
Intermittent energy source
National Grid Reserve Service
Nuclear power in France
List of thermal power station failures
Calculating the cost of the UK Transmission network: Estimating cost per kWh of transmission
List of countries by electricity production from renewable sources
List of U.S. states by electricity production from renewable sources
Environmental concerns with electricity generation
Grid parity
Economic Value of U.S. Fossil Fuel Electricity Health Impacts. United States Environmental Protection Agency.
The Hidden Costs of Electricity: Comparing the Hidden Costs of Power Generation Fuels. Civil Society Institute.
Lazard's Levelized Cost of Energy Analysis – Version 11.0 (Nov. 2017)
References
^ A Review of Electricity Unit Cost Estimates, Working Paper, December 2006 – Updated May 2007. "Archived copy" (PDF). Archived from the original (PDF) on January 8, 2010. Retrieved October 6, 2009.
^ "Cost of wind, nuclear and gas powered generation in the UK". Claverton-energy.com. Retrieved 2012-09-04.
^ "David Millborrows paper on wind costs". Claverton-energy.com. Retrieved 2012-09-04.
^ a b c d e f g h i "Cost and Performance Characteristics of New Generating Technologies, Annual Energy Outlook 2019" (PDF). U.S. Energy Information Administration. 2019. Retrieved 2019-05-10.
^ [1] 2017 Annual Technology Baseline: Coal NREL
^ Nuclear Energy Agency/International Energy Agency/Organization for Economic Cooperation and Development Projected Costs of Generating Electricity (2005 Update) Archived 2016-09-12 at the Wayback Machine
^ K. Branker, M. J.M. Pathak, J. M. Pearce, doi:10.1016/j.rser.2011.07.104 A Review of Solar Photovoltaic Levelized Cost of Electricity, Renewable and Sustainable Energy Reviews 15, pp.4470–4482 (2011). Open access
^ a b c d Branker, K.; Pathak, M.J.M.; Pearce, J.M. (2011). "A Review of Solar Photovoltaic Levelized Cost of Electricity". Renewable and Sustainable Energy Reviews. 15 (9): 4470–4482. doi:10.1016/j.rser.2011.07.104. Open access
^ "Comparing the Costs of Intermittent and Dispatchable Electricity-Generating Technologies", by Paul Joskow, Massachusetts Institute of Technology, September 2011". Retrieved 2019-05-10.
^ a b c d Bronski, Peter (29 May 2014). "You Down With LCOE? Maybe You, But Not Me:Leaving behind the limitations of levelized cost of energy for a better energy metric". RMI Outlet. Rocky Mountain Institute (RMI). Archived from the original on 28 October 2016. Retrieved 28 October 2016. Desirable shifts in how we as a nation and as individual consumers—whether a residential home or commercial real estate property—manage, produce, and consume electricity can actually make LCOE numbers look worse, not better. This is particularly true when considering the influence of energy efficiency...If you're planning a new, big central power plant, you want to get the best value (i.e., lowest LCOE) possible. For the cost of any given power-generating asset, that comes through maximizing the number of kWh it cranks out over its economic lifetime, which runs exactly counter to the highly cost-effective energy efficiency that has been a driving force behind the country's flat and even declining electricity demand. On the flip side, planning new big, central power plants without taking continued energy efficiency gains (of which there's no shortage of opportunity—the February 2014 UNEP Finance Initiative report Commercial Real Estate: Unlocking the energy efficiency retrofit investment opportunity identified a $231–$300 billion annual market by 2020) into account risks overestimating the number of kWh we'd need from them and thus lowballing their LCOE... If I'm a homeowner or business considering purchasing rooftop solar outright, do I care more about the per-unit value (LCOE) or my total out of pocket (lifetime system cost)?...The per-unit value is less important than the thing considered as a whole...LCOE, for example, fails to take into account the time of day during which an asset can produce power, where it can be installed on the grid, and its carbon intensity, among many other variables. That's why, in addition to [levelized avoided cost of energy (LACE)], utilities and other electricity system stakeholders...have used benefit/cost calculations and/or an asset's capacity value or contribution to peak on a system or circuit level.
^ US Energy Information Administration, Levelized cost of new generation resources, 28 January 2013.
^ a b "U.S. Energy Information Administration (EIA) – Source". Retrieved 25 November 2016.
^ "Marginal Energy Price Analyses". Energy Efficiency Standards (DOE).
^ a b "Subsidies and costs of EU energy. Project number: DESNL14583" Pages: 52. EcoFys, 10 October 2014. Accessed: 20 October 2014. Size: 70 pages in 2MB.
^ International Energy Outlook: Electricity. "Although coal-fired generation increases by an annual average of only 1.9 percent, it remains the largest source of electricity generation through 2035. In 2008, coal-fired generation accounted for 40 percent of world electricity supply; in 2035, its share decreases to 37 percent, as renewables, natural gas, and nuclear power all are expected to advance strongly during the projection and displace the need for coal-fired-generation in many parts of the world. World net coal-fired generation grows by 67 percent, from 7.7 trillion kilowatthours in 2008 to 12.9 trillion kilowatthours in 2035." Archived from the original on August 22, 2012. Retrieved September 4, 2012.
^ "BBC NEWS – Business – The economic impact of global warming". 2002-10-14. Retrieved 25 November 2016.
^ O'Loughlin, Toni (27 October 2009). "Climate change threatens Australia's coastal lifestyle, report warns". The Guardian. Retrieved 25 November 2016.
^ Tufts Civil Engineer Predicts Boston's Rising Sea Levels Could Cause Billions Of Dollars In Damage
^ "Rising Sea Levels' cost on Boston" (PDF). Retrieved 2019-05-10.
^ "Tufts University slide 28, note projected Bangladesh evacuation". Retrieved 25 November 2016.
^ "The Hidden Costs of Fossil Fuels". Retrieved 25 November 2016.
^ "Climate Change Effects – Rising Sea Level in depth". Archived from the original on 21 September 2011. Retrieved 25 November 2016.
^ "Short Sharp Science: Five nations under threat from climate change". Retrieved 25 November 2016.
^ "BBC News – ASIA-PACIFIC – Tiny Pacific nation takes on Australia". 2002-03-04. Retrieved 25 November 2016.
^ Boom, Keely. "See you in court: the rising tide of international climate litigation". Retrieved 25 November 2016.
^ "New research reveals the real costs of electricity in Europe" (PDF). Retrieved 2019-05-10.
^ ExternE-Pol, External costs of current and advanced electricity systems, associated with emissions from the operation of power plants and with the rest of the energy chain, final technical report. See figure 9, 9b and figure 11
^ IPCC, Glossary A-D: "Climate price", in IPCC AR4 SYR 2007.
^ Viktor Wesselak, Thomas Schabbach, Thomas Link, Joachim Fischer: Regenerative Energietechnik. Springer 2013, ISBN 978-3-642-24165-9, p. 27.
^ Publications: Vienna Convention on Civil Liability for Nuclear Damage. International Atomic Energy Agency.
^ Nuclear Power's Role in Generating Electricity Congressional Budget Office, May 2008.
^ Availability of Dam Insurance Archived 2016-01-08 at the Wayback Machine 1999
^ Methodenkonvention 2.0 zur Schätzung von Umweltkosten, Anhang B: Best-Practice-Kostensätze für Luftschadstoffe, Verkehr, Strom- und Wärmeerzeugung Archived 2016-01-22 at the Wayback Machine (PDF; 886 kB). Study by the Umweltbundesamt (2012). Retrieved 23 October 2013.
^ Ökonomische Bewertung von Umweltschäden: Methodenkonvention 2.0 zur Schätzung von Umweltkosten Archived 2013-10-04 at the Wayback Machine (PDF; 799 kB), pp. 27–29. Study by the Umweltbundesamt (2012). Retrieved 23 October 2013.
^ Externe Kosten der Atomenergie und Reformvorschläge zum Atomhaftungsrecht (PDF; 862 kB), 9/2012. Forum Ökologisch-Soziale Marktwirtschaft e.V., commissioned by Greenpeace Energy eG and Bundesverband Windenergie e.V. Retrieved 23 October 2013.
^ "New Harvard Study Examines Cost of Coal". Environment.harvard.edu. 2011-02-17. Retrieved 2012-09-04.
^ "Levelized Cost of Energy and Levelized Cost of Storage 2018". November 8, 2018. Retrieved November 11, 2018.
^ "Tumbling Costs for Wind, Solar, Batteries Are Squeezing Fossil Fuels". London and New York: Bloomberg New Energy Finance. March 28, 2018. Retrieved July 28, 2018. Latest BNEF study of comparative costs worldwide shows an 18% improvement in the competitiveness of onshore wind and solar in the last year, and new and rapidly developing roles for batteries.
^ "Solar and wind now the cheapest power source says BloombergNEF". London and New York. 1 September 2018. Retrieved 19 November 2018.
^ Renewable Power Generation Costs in 2017. Abu Dhabi: International Renewable Energy Agency (IRENA). January 2018. ISBN 978-92-9260-040-2. Retrieved June 14, 2018. The trend is clear: by 2020, all mainstream renewable power generation technologies can be expected to provide average costs at the lower end of the fossil-fuel cost range. In addition, several solar PV and wind power projects will provide some of the lowest-cost electricity from any source.
^ "EBRD says renewables are now cheapest energy source". October 2018.
^ CIVIL SOCIETY TOWNHALL 2018. October 2018.
^ "The Australian Energy Technology Assessment (AETA) 2012". Office of the Chief Economist. Bureau of Resources and Energy Economics (BREE). Retrieved 28 October 2016.
^ a b The Climate Council The global renewable energy boom: how Australia is missing out, 2014
^ a b "Coûts de production des ENR" (PDF). ADEME. November 22, 2017. Retrieved 2019-05-10.
^ "One simple chart shows why an energy revolution is coming — and who is likely to come out on top". Business Insider France (in French). Retrieved 2018-10-17.
^ a b c "Studie: Stromgestehungskosten erneuerbare Energien - März 2018". Fraunhofer ISE. 2018. Retrieved 2 April 2018.
^ a b "Levelized cost of electricity renewable energy technologies" (PDF). Fraunhofer ISE. 2013. Retrieved 6 May 2014.
^ Johnston, Eric, "Son's quest for sun, wind has nuclear interests wary", Japan Times, 12 July 2011, p. 3.
^ Bird, Winifred, "Powering Japan's future", Japan Times, 24 July 2011, p. 7.
^ Johnston, Eric, "Current nuclear debate to set nation's course for decades", Japan Times, 23 September 2011, p. 1.
^ "Institution of Engineers and Shipbuilders in Scotland report" (PDF). Retrieved 2012-09-04.
^ "Institution of Engineers and Shipbuilders in Scotland data". Iesisenergy.org. Retrieved 2012-09-04.
^ Electricity Market Reform – Delivery Plan Department of Energy and Climate Change, December 2013
^ Carsten Volkery: Kooperation mit China: Großbritannien baut erstes Atomkraftwerk seit Jahrzehnten, In: Spiegel Online vom 21. Oktober 2013.
^ "ELECTRICITY GENERATION COSTS" (PDF). www.gov.uk. BEIS. November 2016. Retrieved 6 December 2016.
^ "UK nuclear power plant gets go-ahead". BBC News. 21 October 2013.
^ Roland Gribben and Denise Roland (21 October 2013). "Hinkley Point nuclear power plant to create 25,000 jobs, says Cameron". London: Daily Telegraph.
^ "U.S. Energy Information Administration (EIA) – Source". www.eia.gov. Retrieved 2015-11-02.
^ "New Report: Renewable Energy and Energy Efficiency Will Grow, Provide Options for Clean Power Plan Compliance Based on Cost Competitiveness—Official Projections Fail to Capture Market Realities, Skewing Policy Considerations". PR newswire. 22 June 2015.
^ a b c US Energy Information Administration, Levelized cost and levelized avoided cost of new generation resources in the Annual Energy Outlook 2015, 14 April 2015
^ US Energy Information Administration, 2016 Levelized cost of new generation resources in the Annual Energy Outlook 2010, 26 April 2010
^ US Energy Information Administration, Levelized cost of new generation resources in the Annual Energy Outlook 2011, 26 April 2011
^ US Energy Information Administration, Levelized cost of new generation resources in the Annual Energy Outlook 2012, 12 July 2012
^ US Energy Information Administration, Levelized cost of new generation resources in the Annual Energy Outlook 2013, 28 January 2013
^ US Energy Information Administration, Levelized cost and levelized avoided cost of new generation resources in the Annual Energy Outlook 2014, 17 April 2014
^ Levelized cost and levelized avoided cost of new generation resources, US Energy Information Administration, Annual Energy Outlook 2016, 5 August 2016.
^ Levelized cost and levelized avoided cost of new generation resources, US Energy Information Administration, Annual Energy Outlook 2017, April 2017.
^ a b Levelized cost and levelized avoided cost of new generation resources, US Energy Information Administration, Annual Energy Outlook 2018, March 2018.
^ OpenEI Transparent Cost Database. Accessed 06/19/2015.
^ "Estimated Cost of New Renewable and Fossil Generation in California" (PDF). C ali fornia Ene rgy C ommissi on. Retrieved 2019-05-10.
^ a b [2] November 2014
^ Solar and Wind Outshine Fossil Fuels November 2014
^ "Lazard Press Release" (PDF). Lazard. 2016-12-16. Retrieved 2017-11-06.
^ a b "Lazard's Levelized Cost of Storage Analysis — Version 1.0" (PDF). Lazard. November 2015. Retrieved 2019-05-10.
^ "Lazard's Levelized Cost of Energy Analysis — Version 10.0" (PDF). Lazard. December 2016. Retrieved 2019-05-10.
^ "Lazard's Levelized Cost of Storage — Version 2.0" (PDF). December 2016. Retrieved 2019-05-10.
^ "Lazard's Levelized Cost of Energy Analysis - Version 11.0" (PDF). Lazard. 2017-11-02. Retrieved 2017-11-04.
^ a b c "Lazard's Levelized Cost of Storage Analysis - Version 3.0" (PDF). Lazard. 2017-11-02. Retrieved 2017-11-04.
^ "Lazard Press Release November 2, 2017" (PDF). Lazard. 2017-11-02. Retrieved 2017-11-04.
^ IEA and NEA (2015). Projected costs of generating electricity: 2015 edition — Executive summary (PDF). Paris, France: International Energy Agency (IEA), Nuclear Energy Agency (NEA), and Organization for Economic Co-operation and Development (OECD). Retrieved 2016-11-08.
^ IEA and NEA (2015). Projected costs of generating electricity: 2015 edition. Paris, France: International Energy Agency (IEA), Nuclear Energy Agency (NEA), and Organization for Economic Co-operation and Development (OECD). ISBN 978-92-64-24443-6.
^ Buffett strikes cheapest electricity price in US with Nevada solar farm July 2015
^ "MESIA und DEWA melden Rekordgebot bei Photovoltaik-Ausschreibung: 0,0299 USD/kWh Solarstrom" (in German). solarserver.de. 2016-05-01. p. 1. Archived from the original on 2016-05-11. Retrieved 2016-05-11.
^ "Sun, wind and drain". The Economist. 26 July 2014. Retrieved 25 November 2016.
^ THE NET BENEFITS OF LOW AND NO-CARBON ELECTRICITY TECHNOLOGIES. MAY 2014, Charles Frank PDF Archived August 14, 2015, at the Wayback Machine
^ a b Flórez-Orrego, Daniel; Silva, Julio A.M.; Oliveira Jr, Silvio de (2014). "Renewable and non-renewable exergy cost and specific CO2 emission of electricity generation: The Brazilian case". Energy Conversion and Management. 85: 619–629. doi:10.1016/j.enconman.2014.04.058.
^ "Solar Photovoltaics Competing in the Energy Sector—On the road to competitiveness" (PDF). European Photovoltaic Industry Association. September 2011. p. 18. Archived from the original (PDF) on February 26, 2013. Retrieved March 11, 2015.
^ "Price Quotes (see 'PV spot price')". Retrieved 23 August 2017.
^ "Sunny Uplands: Alternative energy will no longer be alternative". The Economist. 21 November 2012. Retrieved 2012-12-28.
^ Ken Wells (October 25, 2012), "Solar Energy Is Ready. The U.S. Isn't", Bloomberg Businessweek, businessweek.com, retrieved November 1, 2012
^ "Utilities' Honest Assessment of Solar in the Electricity Supply". Retrieved 25 November 2016.
^ "Renewables Investment Breaks Records". Renewable Energy World. 29 August 2011.
^ Renewable energy costs drop in '09 Reuters, November 23, 2009.
^ Solar Power 50% Cheaper By Year End – Analysis Reuters, November 24, 2009.
^ Arno Harris (31 August 2011). "A Silver Lining in Declining Solar Prices". Renewable Energy World.
^ "NV Energy buys utility-scale solar at record low price under 4 cents/kWh". Retrieved 23 July 2015.
^ New Record Set for World's Cheapest Solar, Now Undercutting Coal (2.99 cents/kWh United Arab Emirates, easily besting coal, which came in at 4.501 cents per kilowatt-hour under a 25-year power purchase agreement, with chart of solar prices in 2015 to May, 2016)
^ EcoWatch (22 August 2016). "Great news!". Retrieved 25 November 2016.
^ "UPDATE – Abu Dhabi confirms USD 24.2/MWh bid in solar tender – SeeNews Renewables". Retrieved 25 November 2016.
^ ""The Birth of a New Era in Solar PV" — Record Low Cost On Saudi Solar Project Bid". Retrieved 7 October 2017.
^ John Quiggin (January 3, 2012). "The End of the Nuclear Renaissance |". National Interest.
^ Osborne, Mark (2016-11-10). "SolarEdge sales slow on US residential market sluggishness". pv-tech.org. Retrieved 2016-12-09.
^ Converting Solar Energy into the PHEV Battery "VerdeL3C.com", May 2009
^ Mundada, Aishwarya; Shah, Kunal; Pearce, Joshua M. (2016). "Levelized cost of electricity for solar photovoltaic, battery and cogen hybrid systems". Renewable and Sustainable Energy Reviews. 57: 692–703. doi:10.1016/j.rser.2015.12.084.
^ Shah, Kunal K.; Mundada, Aishwarya S.; Pearce, Joshua M. (2015). "Performance of U.S. hybrid distributed energy systems: Solar photovoltaic, battery and combined heat and power". Energy Conversion and Management. 105: 71–80. doi:10.1016/j.enconman.2015.07.048.
^ "Solar Reserve awarded AU$78/MWh Concentrated Solar Power contract". Retrieved 23 August 2017.
^ "Aurora: What you should know about Port Augusta's solar power-tower". Retrieved 22 August 2017.
^ "Dispatchable Concentrated Solar Power Broke Price Records in 2017". Retrieved 22 September 2017.
^ "UAE's push on concentrated solar power should open eyes across world". Retrieved 26 September 2017.
^ "Salt, silicon or graphite: energy storage goes beyond lithium ion batteries". Retrieved 1 September 2017.
^ "Commercializing Standalone Thermal Energy Storage". Retrieved 1 September 2017.
^ Lantz, E.; Hand, M. and Wiser, R. (13–17 May 2012) "The Past and Future Cost of Wind Energy," National Renewable Energy Laboratory conference paper no. 6A20-54526, p. 4
^ Moody's: Utilities increasingly adding low cost wind power to rate base, leaving inefficient coal plants at risk - March 15, 2017
^ "Europe's biggest and cheapest onshore wind project". norwea.no. 2016-06-07. Archived from the original on 2016-08-29. Retrieved 2016-08-21.
^ "Vattenfall wins tender to build the largest wind farm in the Nordics". corporate.vattenfall.com. Retrieved 17 November 2016.
^ Helming, Troy (2004) "Uncle Sam's New Year's Resolution" ArizonaEnergy.org
^ "LBNL/NREL Analysis Predicts Record Low LCOE for Wind Energy in 2012–2013". US Department of Energy Wind Program Newsletter. 24 February 2012. Archived from the original on 5 March 2012. Retrieved 10 March 2012.
^ Salerno, E., AWEA Director of Industry and Data Analysis, as quoted in Shahan, Z. (2011) Cost of Wind Power – Kicks Coal's Butt, Better than Natural Gas (& Could Power Your EV for $0.70/gallon)" CleanTechnica.com
^ Danielson, David (14 August 2012). "A Banner Year for the U.S. Wind Industry". Whitehouse Blog.
^ Diane Cardwell (20 March 2014). "Wind Industry's New Technologies Are Helping It Compete on Price". New York Times.
Dielectron production in pp and dp collisions at 1.25 GeV/u with HADES
K. Lapidus, for the HADES Collaboration
Physics, 2009,
Abstract: Inclusive production of e+e- pairs in pp and dp collisions at a kinetic beam energy of 1.25 GeV/u has been studied with the HADES spectrometer. In the latter case, the main goal was to obtain data on pair emission in quasi-free np collisions. To select this particular reaction channel, the HADES experimental setup was extended with a Forward Wall hodoscope, which made it possible to register spectator protons. Here, the measured invariant mass distributions demonstrate a strong enhancement of the pair yield for M > 140 MeV/c2 in comparison to pp data.
PFAPA: a single phenotype with genetic heterogeneity
Lapidus Sivia K, Chitkara Puja, Kim Peter W, Aksentijevich Ivona
Pediatric Rheumatology, 2012, DOI: 10.1186/1546-0096-10-s1-a86
Yo tengo sentido, tengo rima: Cano Estremera and the Art of the Soneo
Benjamin Lapidus
Centro Journal, 2004,
Abstract: Calling himself El dueño del soneo, the boss of vocal improvisation, the Puerto Rican singer Carlos Cano Estremera is at the forefront of many innovations in soneos. For the uninitiated, a soneo is a vocal improvisation sung by a lead singer during the montuno, or call-and-response section, in Afro-Cuban son-based musics, commercially referred to as salsa. As he is always up for a good duel, planned or unforeseen, the results of Cano's duelos have been recorded both legally and illegally and spread throughout the world by salsa fans. Through conversations with Cano and a look at several techniques he uses when improvising, this article shows Cano Estremera's improvisational framework to be a synthesis of previous soneros as well as singers and musicians from beyond the realm of salsa. His style can be summed up as unique and creative while remaining in the tradition.
Transformaciones económicas en los noventa y cambios espaciales en la provincia ciudad de La Habana, Cuba
Batia Lapidus Radlow
Investigaciones geográficas, 2000,
Abstract: The 1990s, the final decade of the century that has just ended, were particularly difficult for Cuba. That situation drove the adoption of a series of economic measures and reforms which have gradually taken on a regional dimension. In the province of Ciudad de La Habana in particular, it is possible to verify the territorial content of the new processes related to the economic readjustment. This is the analytical perspective proposed by this paper. Reference is made to some practical sector-level experiences through which different strands of impact can be recognized. Finally, it suggests reflecting on the capacity of the territory to adapt to these changes, considering people as their main protagonist and recipient.
Towards Quantized Number Theory: Spectral Operators and an Asymmetric Criterion for the Riemann Hypothesis
Michel L. Lapidus
Abstract: This research expository article contains a survey of earlier work (in \S2--\S4) but also contains a main new result (in \S5), which we first describe. Given $c \geq 0$, the spectral operator $\mathfrak{a} = \mathfrak{a}_c$ can be thought of intuitively as the operator which sends the geometry onto the spectrum of a fractal string of dimension not exceeding $c$. Rigorously, it turns out to coincide with a suitable quantization of the Riemann zeta function $\zeta = \zeta(s)$: $\mathfrak{a} = \zeta (\partial)$, where $\partial = \partial_c$ is the infinitesimal shift of the real line acting on the weighted Hilbert space $L^2 (\mathbb{R}, e^{-2ct} dt)$. In this paper, we establish a new asymmetric criterion for the Riemann hypothesis, expressed in terms of the invertibility of the spectral operator for all values of the dimension parameter $c \in (0, 1/2)$ (i.e., for all $c$ in the left half of the critical interval $(0,1)$). This corresponds (conditionally) to a mathematical (and perhaps also, physical) "phase transition" occurring in the midfractal case when $c= 1/2$. Both the universality and the non-universality of $\zeta = \zeta (s)$ in the right (resp., left) critical strip $\{1/2 < \text{Re}(s) < 1 \}$ (resp., $\{0 < \text{Re}(s) < 1/2 \}$) play a key role in this context. These new results are presented in \S5. In \S2, we briefly discuss earlier joint work on the complex dimensions of fractal strings, while in \S3 and \S4, we survey earlier related work of the author with H. Maier and with H. Herichi, respectively, in which were established symmetric criteria for the Riemann hypothesis, expressed respectively in terms of a family of natural inverse spectral problems for fractal strings of Minkowski dimension $D \in (0,1),$ with $D \neq 1/2$, and of the quasi-invertibility of the family of spectral operators $\mathfrak{a}_c$ (with $c \in (0,1), c \neq 1/2$).
The Sound of Fractal Strings and the Riemann Hypothesis
Abstract: We give an overview of the intimate connections between natural direct and inverse spectral problems for fractal strings, on the one hand, and the Riemann zeta function and the Riemann hypothesis, on the other hand (in joint works of the author with Carl Pomerance and Helmut Maier, respectively). We also briefly discuss closely related developments, including the theory of (fractal) complex dimensions (by the author and many of his collaborators, including especially Machiel van Frankenhuijsen), quantized number theory and the spectral operator (jointly with Hafedh Herichi), and some other works of the author (and several of his collaborators).
The vertical hip fracture – a treatment challenge. A cohort study with an up to 9 year follow-up of 137 consecutive hips treated with sliding hip screw and antirotation screw
Enocson Anders, Lapidus Lasse J
BMC Musculoskeletal Disorders, 2012, DOI: 10.1186/1471-2474-13-171
Abstract: Background: Femoral neck fractures with a vertical orientation have been associated with an increased risk of failure, as they are both axially and rotationally unstable and experience increased shear forces compared to the conventional, more horizontally oriented femoral neck fractures. The purpose of this study was to analyse outcome and risk factors for reoperation of these uncommon fractures. Methods: A cohort study of a consecutive series of 137 hips with a vertical hip fracture, all treated with one method: a sliding hip screw with plate and an antirotation screw. Median follow-up time was 4.8 years. Reoperation data were validated against the National Board of Health and Welfare's national registry using the unique Swedish personal identification number. Results: The total reoperation rate was 18%. After multivariable logistic regression analysis adjusting for possible confounding factors, there was an increased risk of reoperation for displaced fractures (22%) compared to undisplaced fractures (3%), and for fractures with poor implant position (38%) compared to fractures with adequate implant position (15%). Conclusions: The reoperation rate was high, and special attention should be given to achieving an appropriate position of the implant.
Localization on Snowflake Domains
Britta Daudert, Michel L. Lapidus
Mathematics, 2006,
Abstract: The geometric features of the square and triadic Koch snowflake drums are compared using a position entropy defined on the grid points of the discretizations (pre-fractals) of the two domains. Weighted graphs using the geometric quantities are created and random walks on the two pre-fractals are performed. The aim is to understand if the existence of narrow channels in the domain may cause the `localization' of eigenfunctions.
Fractal Complex Dimensions, Riemann Hypothesis and Invertibility of the Spectral Operator
Hafedh Herichi, Michel L. Lapidus
Abstract: A spectral reformulation of the Riemann hypothesis was obtained in [LaMa2] by the second author and H. Maier in terms of an inverse spectral problem for fractal strings. This problem is related to the question "Can one hear the shape of a fractal drum?" and was shown in [LaMa2] to have a positive answer for fractal strings whose dimension is $c\in(0,1)\setminus\{1/2\}$ if and only if the Riemann hypothesis is true. Later on, the spectral operator was introduced heuristically by M. L. Lapidus and M. van Frankenhuijsen in their theory of complex fractal dimensions [La-vF2, La-vF3] as a map that sends the geometry of a fractal string onto its spectrum. We focus here on presenting the rigorous results obtained by the authors in [HerLa1] about the invertibility of the spectral operator. We show that given any $c\geq0$, the spectral operator $\mathfrak{a}=\mathfrak{a}_{c}$, now precisely defined as an unbounded normal operator acting in a Hilbert space $\mathbb{H}_{c}$, is `quasi-invertible' (i.e., its truncations are invertible) if and only if the Riemann zeta function $\zeta=\zeta(s)$ does not have any zeroes on the line $Re(s)=c$. It follows that the associated inverse spectral problem has a positive answer for all possible dimensions $c\in (0,1)$, other than the mid-fractal case when $c=1/2$, if and only if the Riemann hypothesis is true.
The Decimation Method for Laplacians on Fractals: Spectra and Complex Dynamics
Nishu Lal, Michel L. Lapidus
Abstract: In this survey article, we investigate the spectral properties of fractal differential operators on self-similar fractals. In particular, we discuss the decimation method, which introduces a renormalization map whose dynamics describes the spectrum of the operator. In the case of the bounded Sierpinski gasket, the renormalization map is a polynomial of one variable on the complex plane. The decimation method has been generalized by C. Sabot to other fractals with blow-ups and the resulting associated renormalization map is then a multi-variable rational function on a complex projective space. Furthermore, the dynamics associated with the iteration of the renormalization map plays a key role in obtaining a suitable factorization of the spectral zeta function of fractal differential operators. In this context, we discuss the works of A. Teplyaev and of the authors regarding the examples of the bounded and unbounded Sierpinski gaskets as well as of fractal Sturm-Liouville differential operators on the half-line.
doi: 10.3934/jimo.2019014
On phaseless compressed sensing with partially known support
Ying Zhang, Ling Ma and Zheng-Hai Huang
School of Mathematics, Tianjin University, Tianjin 300072, China
* Corresponding author: Zheng-Hai Huang
Received: May 2018. Revised: October 2018. Published: March 2019.
Fund Project: This work was supported by the China Scholarship Council (Grant No. 201706255092) and the National Natural Science Foundation of China (Grant Nos. 11201332, 11431002 and 11871051)
We establish a theoretical framework for the problem of phaseless compressed sensing with partially known signal support, which aims at generalizing the Null Space Property and the Strong Restricted Isometry Property from phase retrieval to partially sparse phase retrieval. We first introduce the concepts of the Partial Null Space Property (P-NSP) and the Partial Strong Restricted Isometry Property (P-SRIP); and then show that both the P-NSP and the P-SRIP are exact recovery conditions for the problem of partially sparse phase retrieval. We also prove that a random Gaussian matrix $A \in \mathbb{R}^{m\times n}$ satisfies the P-SRIP with high probability when $m = O(t(k-r)\log(\frac{n-r}{t(k-r)}))$.
Keywords: Phase retrieval, compressed sensing, phaseless compressed sensing, partial null space property, partial strong restricted isometry property.
Mathematics Subject Classification: Primary: 90C90; Secondary: 94A12.
Citation: Ying Zhang, Ling Ma, Zheng-Hai Huang. On phaseless compressed sensing with partially known support. Journal of Industrial & Management Optimization, doi: 10.3934/jimo.2019014
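To get a feel for the measurement bound stated in the abstract, the sketch below evaluates $t(k-r)\log\frac{n-r}{t(k-r)}$ for a few values of $r$ (the number of known support indices). The constant hidden in the $O(\cdot)$ is set to 1 and the values of $n$, $k$, $t$ are arbitrary, so the numbers are purely for intuition: knowing more of the support shrinks the required number of measurements.

```python
import math

# Evaluate the P-SRIP measurement bound with the hidden O(.) constant set
# to 1 (for intuition only). r is the number of known support indices.
def m_partial(n, k, r, t):
    return t * (k - r) * math.log((n - r) / (t * (k - r)))

n, k, t = 10_000, 100, 3   # arbitrary illustrative values
for r in (0, 50, 90):
    print(f"r = {r:2d}  ->  m ~ {m_partial(n, k, r, t):7.0f}")
# r = 0 recovers the usual t*k*log(n/(t*k)) scaling; larger r needs fewer rows.
```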
How long can you survive at the devil's playground?
The devil has trapped you in his playground.
The devil knows that you can't cross over the burning boundary of his circle, so he allows you to choose a position within the circle before he starts to chase you down. You know that
You and the devil move at speeds $V$ and $1$ respectively.
Both move simultaneously and continuously, in any choice of direction.
Radius of the circle $R=1$.
The devil leaves an uncrossable burning track along his trajectory.
You're caught by the devil if the distance between you is $0$. The devil will try to catch you as quickly as possible. You know that an angel is en route to save you, so you move to survive for as long as possible.
Question 1: How long can you manage to survive if $V=1$? How should you move?
Question 2: Suppose now you move twice as fast as the devil, i.e. $V=2$. How long can you manage to survive?
Question 3: As your speed $V$ approaches infinity, how long can you manage to survive?
Notice that you can survive for at least $T=2$ by choosing to stay at the opposite side of the devil. On the other hand, you can't survive indefinitely no matter how fast you move, because the devil can carve the disk into patches of exponentially decreasing areas with you inside, shrinking that area to $0$ in finite time.
game-theory
pursuit-evasion
Eric
But what is the devil's strategy? While he can move in any direction, which would he choose at any time? A deterministic devil strategy seems important for anyone else to develop a counter-strategy.
– bobble
@bobble The devil tries to catch you as quickly as possible. His current optimal velocity depends on your relative positions and your current velocity.
– Eric
Regarding the hint: if I'm understanding correctly, I don't think it's correct. Can the devil really shrink the area to 0? The area can certainly approach 0, but the line of fire has no width.
– Alira
@Alira That is essentially like Zeno's paradox. The fact that each subdivision takes the devil a proportionally smaller amount of time means that this is a supertask. In my opinion this is a supertask that does make sense and can be completed so that a zero area is reached in a finite time, but opinions can differ.
– Jaap Scherphuis
Sep 6, 2021 at 6:45
@Alira, I was confused at first, but check out this answer for a similar dilemma.
– justhalf
You're screwed in constant time, no matter your speed, since the devil has a good strategy.
I cannot claim to have found the devil's optimal strategy, but I do claim that there is an upper bound on the time the devil takes to catch the victim. And that upper bound is quite low.
Assume that the circle is inscribed in an equilateral triangle ABC of base $b = 2\sqrt{3}$, with the devil resting at the midpoint D of its base AB, like so:
In fact, forget about the circle. Also, mark the midpoint of BC as E, and of CA as F:
The devil shall move a distance of $b/2$ from D towards E, dividing it in two areas. Mark the midpoint of DE as G:
Is the victim within the smaller triangle BDE? Good, move $b/4$ towards G, and repeat this division algorithm on the BDE triangle. If not, move $b/2$ from E to F, marking the midpoint of EF as H:
Again: is the victim in the smaller triangle CEF? Good, then move $b/4$ back to H and repeat. If not, divide the remaining rhombus in two by moving $b/2$ from F to D, marking the midpoint of DF as J:
...and since the victim is surely in either DEF or ADF, move $b/4$ from D to J, and repeat the algorithm. The idea is to divide the triangles into smaller and smaller triangles in a methodical way.
Note several important facts about this strategy:
- Every time a triangle is divided in four, its area is divided by four
- Each time a triangle is divided in four, its base is divided by two
- In order to divide a triangle in four, the devil must move at most a distance equal to $7/4$ times its base: $1/2$ each move from midpoint to midpoint, maximum three such moves (D→E→F→D), plus $1/4$ to set up in the midpoint of the appropriate side of a subdivision (E→G, F→H or D→J).
Thus, the first subdivision takes (at most) $\frac{7\sqrt{3}}{2}$; since the triangle's base is halved, the second one takes $\frac{7\sqrt{3}}{4}$, the third one $\frac{7\sqrt{3}}{8}$; and in general the $n$-th one shall take $\frac{7\sqrt{3}}{2^{n}}$.
If the devil could perform these subdivisions an infinite number of times, then the area of the triangle containing the victim would be $\lim_{x \to \infty} \frac{a}{4^x} = 0$. (The initial area doesn't matter so why bother calculating it).
Now, is it possible for the devil to perform all the subdivisions within a finite length of movement? That's the same as asking "Does the following infinite series converge?" $$\sum_{n=1}^{\infty} \frac{7\sqrt{3}}{2^{n}}$$ Since I absolutely suck at doing these calculations (and I have completely forgotten the "infinite series" chapter from my calculus classes), I cheated a bit by using wolfram-alpha. The series does, in fact, converge, to $7\sqrt{3}$ or about 12.124 units of length.
The victim's strategy would be
to lead the devil's movements by an infinitesimal distance, so the devil cannot choose the right subdivision until said subdivision is complete.
The generalized strategy explained above provides an upper bound for the distance the devil must move, but has two characteristics that intuitively look like problems: (a) the devil backtracks, potentially wasting movement and (b) the search space is way bigger than needed.
The backtracking issue can be optimized by
using right-angled isosceles triangles instead of equilateral triangles, and positioning the devil at the right-angle corner. Any such triangle can be halved into two right-angled isosceles triangles, like so:
As before, the devil splits a triangle, checks the subdivision containing the lost soul, and recursively proceeds to split that. The devil will follow a fractal path looking like:
At each subdivision, the area halves; that means the area converges to zero as before since $$\lim_{x \to \infty} \frac{a}{2^x} = 0$$ The height of the triangles (i.e. the length of the devil's path) shrinks by a factor of $\frac{\sqrt{2}}{2} ≃ 0.7071$ on each subdivision; assuming that the length needed to perform the first subdivision is $\frac{\sqrt{2}}{2}$, then the length needed to perform the $n$th subdivision shall be $$\left(\frac{\sqrt{2}}{2}\right)^n$$, and the total length of the devil's fractal path shall be $$\sum_{n=1}^{\infty} \left(\frac{\sqrt{2}}{2}\right)^n$$.
That seems to solve the backtracking issue, but what about the wasted search space? A possible approach would be for the devil to start moving on a path like...
Which means: Starting at A, move to B. Choose the half circle containing the victim (the diagram only shows a solution for the bottom half; the solution for the top half is symmetrical), then proceed to C (ABC). If the victim is within BCD, move to D then start the fractal subdivision of BCD. Else, move to E (ABCE). If the victim is within BCE, start the fractal subdivision of BCE. Else, move to F (ABCEF). Start the fractal subdivision of either EFH or EFG, depending on which of those two triangles contains the victim.
The (worst case) length of the initial path ABCEF is $4 + \sqrt{2}$; and since the distance from F to the midpoint of either EG or EH is $\frac{\sqrt{2}}{2}$, we can use the infinite series described before, so the total length of the devil's path is given by $$4 + \sqrt{2} + \sum_{n=1}^{\infty} \left(\frac{\sqrt{2}}{2}\right)^n$$ and after cheating a bit with wolfram alpha to solve the infinite series, that becomes: $$4 + \sqrt{2} + 1 + \sqrt{2} = 5 + 2\sqrt{2} ≃ 7.82843$$
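For anyone who, like me, distrusts their series algebra, both closed forms are easy to check numerically with partial sums:

```python
from math import sqrt

# Partial-sum check of the two geometric series used above.
equilateral = sum(7 * sqrt(3) / 2 ** n for n in range(1, 60))
print(equilateral, 7 * sqrt(3))              # both ~12.12436

tail = sum((sqrt(2) / 2) ** n for n in range(1, 200))
print(4 + sqrt(2) + tail, 5 + 2 * sqrt(2))   # both ~7.82843
```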
That's significantly better than before (better for the devil, not for the poor soul), but I suspect that it's still not the lowest upper bound possible. The victim's strategy would remain unchanged, and would still depend on knowing the devil's optimal strategy.
IvanSanchez
If the devil always chases straight after me, following my movements tropistically, then
it seems like I should be able to always stay one step ahead of him at any speed, and lead him on a convoluted meditation-labyrinth-style path almost indefinitely.
But if he's smarter than that,
he'll probably ignore my location altogether and just keep hemming me into smaller and smaller patches of the disk, as the hint suggests.
In that case, my instinct would be to cower on the other side of the circle (or circle portion) from him (i.e., first near point 1, then 2, then 3, then 4, etc. -- where he aims each time to cut the portion I'm in again in half), and dodge to one side or the other at the last moment. (I use the word "cower" even though the question says I move continuously -- I'm assuming little jittery back-and-forth movements would amount to the same thing as staying still for all intents and purposes. Also, the hint uses the word "stay".)
But I must be missing something, because variations in my speed wouldn't seem to matter to this solution, as long as I can move at least as fast as him.
C. P. Boyko
Of course you are missing something big. D does not have to follow the halving algorithm, so you cannot assume that in escaping.
Though at first instinct getting as far away as possible from the devil sounds good, I think a good strategy would be to
position yourself close to him, very near to the perimeter of the circle. Then move around the perimeter at the same speed as the devil, slowly spiraling toward the center. The closer to the devil you start, the wider the remaining space will be after circling. Also, the closer you can get to the fiery trail, the bigger you can make the spiral. That will allow a lot of time for the angel to come and zap the devil.
Note that this will only work if the devil just chases mindlessly by the shortest path to you. If the devil is more interested in strategy than chasing, you're doomed. Also, this requires a speed equal to or faster than the devil's.
Hexagonal Impossibilities
This sounds to me like the same thing as the first paragraph of C. P. Boyko's answer from September 2021. Am I missing something?
– Gareth McCaughan ♦
His strategy was a "convoluted meditation-labyrinth-style path". This one is simpler, and does not run into dead ends like a labyrinthine one would.
– Hexagonal Impossibilities
Maybe I'm misunderstanding CPB's idea, but my interpretation of what he meant was exactly the sort of spiral you're proposing. (It is very possible that I'm misunderstanding his idea, which I guess means that stating it more clearly and explicitly is valuable even if I'm right :-).)
This is a strategy that I think the devil should be using, but I don't have a proof that it works. Maybe someone else can supply either a proof or a counterexample.
Consider V=1 first. The devil starts by moving to the center of the circle, completely ignoring you. This takes 1 time unit. At this point the devil is at distance at most 1 from you. Therefore any point you can reach by time T=2 the devil can reach by T=2 as well. That is what the devil will aim for.
At any time t in [1,2] the devil will look at your velocity (as a vector), assume you continue moving at exactly that speed and in that direction, compute where you will be at T=2, and then move at maximum speed towards that point. I'm fairly sure that this guarantees the devil will have caught you by T=2 at the latest, but I don't have a proper proof.
This should also generalize to V>1. Either by looking farther into the future or by first carving the circle into smaller pieces, one should be able to use essentially the same strategy.
quarague
The devil tries to catch you as quickly as possible. I don't think any strategy that ignores your position at any time is a solution, since there's the possibility the devil walks right past you and misses an opportunity to catch you earlier. Large discrete time steps also aren't helpful - with discrete steps, the devil can only close the distance to you from T=1 to T=2, but you may have the option to start walking away from the devil in that period. If you assume constant speed from T=1 to T=2, why not T=0 to T=2?
– Nuclear Hoagie
@NuclearHoagie My claim is that the devil will get you at T=2 the latest no matter what you do. That doesn't mean he can't get you earlier if you are helpful. The catching strategy is meant to apply at all times from T=1 to T=2, so if you change your speed or direction the devil will instantly adjust as well.
– quarague
$\begingroup$ That doesn't work. At T=1, suppose you're on the edge of the circle and the devil is in the center. You start walking directly toward the center, so the devil expects you to arrive there at T=2 and therefore stays put. At T=1.5, you turn around and head directly back to the edge. The devil now heads there, and catches you at T=2.5. You can do the same thing with infinitesimal time steps by walking along the circumference and then back to where you started. If you can get the devil to walk any path other than a straight line across the circle, you can last longer than T=2. $\endgroup$
$\begingroup$ If you move along the edge and change direction, the devil's path from the center isn't straight so it's longer than 1. $\endgroup$
– justforplaylists
FUN-2: Hierarchical approximation
Motivation for using Hierarchical shape functions
Node oriented and Hierarchical shape functions
Element stiffness matrix
Element mass matrix
Entity approximation functions
The present tutorial aims to introduce elementary concepts of the Finite Element Method (FEM) with hierarchical shape functions and their implementation in MoFEM.
The reader is assumed to be familiar with the FEM approach to solving Partial Differential Equations (PDEs). Here, the differences between node oriented shape functions (which most readers are familiar with) and hierarchical shape functions are described. Aspects of the MoFEM implementation of this approach are described so that the user can understand the FEM implementation within the present framework. These concepts are presented as briefly as possible, with the sole aim of clarifying their implementation within MoFEM.
The advantages of hierarchical shape functions are not described in detail in the present tutorial. The key points for choosing a MoFEM implementation of the FEM with hierarchical basis functions instead of the node oriented approach are presented in this section, to clearly set out the motivation behind this particular choice. Choosing hierarchical basis functions over a node oriented basis results in:
Easy implementation of heterogeneous approximation
Use of more efficient solvers
In the FEM, the unknown field \(u({\mathbf {x}}) \) is approximated as
\[ \begin{equation} u({\mathbf {x}}) \approx u^h({\mathbf {x}}) = \sum_{i=1}^{n} N_i u_i \label{eq:Approx} \end{equation} \]
where \({\mathbf{ x}}\) denotes the coordinate vector, \(u\) is the function to be approximated, \(u^h\) is the approximating function and \(N_i\) are the shape functions associated with the \(n\) degrees of freedom \(u_i\) of a particular choice of basis.
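To make the formula concrete, the following is a minimal sketch (with illustrative names, not tied to any particular library) of evaluating \(u^h\) at a single point, given the shape function values and the degrees of freedom there:

```cpp
#include <vector>

// Minimal sketch: evaluate u^h = sum_i N_i * u_i at one point,
// given the shape function values N and the DOF values u there.
double evaluateField(const std::vector<double> &N,
                     const std::vector<double> &u) {
  double uh = 0.0;
  for (std::size_t i = 0; i != N.size(); ++i)
    uh += N[i] * u[i];
  return uh;
}
```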
Node oriented shape functions are always associated with the degrees of freedom corresponding to a node. Node oriented shape functions are presented for a 1D element in Figure 1 a, b and c for first, second and third order, respectively. For simplicity we restrict ourselves to a 1D element for the moment. Each degree of freedom is associated with a node and has a physical meaning. In the case of p- and hp-adaptivity, where the order of approximation is increased in certain regions of the domain, this kind of shape function increases the complexity of implementation, since new meshes with extra nodes must be generated.
Figure 1: Node oriented shape functions of a 1D element: (a) linear, (b) second order and (c) third order. Each shape function is equal to unity at the location of its associated node.
These complexities can be resolved by using hierarchical shape functions. Examples of hierarchical shape functions in a 1D element are presented in Figure 2 a, b and c for first, second and third order, respectively. By inspection of Figure 2 b and c it can be seen that no extra nodes are added for functions of order higher than the first. This feature makes the p- and hp-adaptivity implementation much easier, since no extra nodes need to be introduced into the mesh.
Figure 2: Hierarchical shape functions of a 1D element: (a) linear, (b) second order and (c) third order. Only the first order functions are equal to unity at their associated nodes; the higher order functions vanish at the nodes.
Furthermore, all but the first order functions are unique (i.e. there is only one function of each order) and identically zero at the element nodes. For the first order there are two functions, one for each node. Partition of unity is therefore preserved at the nodes but not necessarily in between. Hence, for a chosen order \(p\) the number of degrees of freedom is \(p + 1\).
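As an illustration, the sketch below computes one common family of hierarchical 1D shape functions on the reference element \([-1,1]\): the two linear vertex functions plus integrated Legendre (Lobatto-type) edge bubbles. This is only one possible hierarchical basis and is not necessarily the one used internally by MoFEM; note that it returns exactly \(p+1\) functions and that all bubbles vanish at both vertices.

```cpp
#include <cmath>
#include <vector>

// Legendre polynomial P_k(x) on [-1,1] via the three-term recurrence.
double legendre(int k, double x) {
  if (k == 0) return 1.0;
  if (k == 1) return x;
  double pkm2 = 1.0, pkm1 = x, pk = x;
  for (int n = 2; n <= k; ++n) {
    pk = ((2.0 * n - 1.0) * x * pkm1 - (n - 1.0) * pkm2) / n;
    pkm2 = pkm1;
    pkm1 = pk;
  }
  return pk;
}

// Hierarchical 1D shape functions of order p: two vertex functions
// followed by the p-1 edge "bubbles"
//   L_k(x) = (P_k(x) - P_{k-2}(x)) / sqrt(2(2k-1)),  k = 2..p,
// which are identically zero at both vertices (P_k(+-1) = (+-1)^k).
std::vector<double> hierarchicalShape1D(int p, double x) {
  std::vector<double> N;
  N.push_back(0.5 * (1.0 - x)); // vertex 0
  N.push_back(0.5 * (1.0 + x)); // vertex 1
  for (int k = 2; k <= p; ++k)
    N.push_back((legendre(k, x) - legendre(k - 2, x)) /
                std::sqrt(2.0 * (2.0 * k - 1.0)));
  return N; // exactly p + 1 functions
}
```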
For the 1D case, only the first two degrees of freedom ( \(u_i\)) presented in \( \eqref {eq:Approx}\) are associated with the nodes. The remaining degrees of freedom are not attached to any node and need not be linked to a physical value. These degrees of freedom are purely mathematical.
From now on, nodes will be referred to as vertices, and the entity generated by the 1D space that connects two vertices will be referred to as an edge.
A practical way of assembling the degrees of freedom associated with a 1D element is to separate the degrees of freedom associated with the vertices (first order) from those associated with the edge (higher orders). Therefore, the elemental degrees of freedom \({\mathbf {u}^e}\) of a 1D element can be presented as
\[ \begin{equation} {\mathbf {u}^e} = \left[ \begin{array}{c c} {\mathbf { n}^e} &{\mathbf { e}^e} \end{array} \right] \label{eq:ElementDOF1D} \end{equation} \]
where \({\mathbf { n}^e}\) and \({\mathbf { e}^e}\) are sub-vectors containing the degrees of freedom associated with the vertices and the edge, respectively. According to \(\eqref {eq:Approx} \), \({\mathbf { n}^e}\) will always consist of two degrees of freedom, as presented below
\[ \begin{equation} {\mathbf { n}^e} = \left[ \begin{array}{cc} u_0 & u_1 \\ \end{array} \right] \label{eq:VertexDOF1} \end{equation} \]
However, the length of \({\mathbf { e}^e}\) depends on the order of approximation chosen. For \(p = 1 \), \({\mathbf { e}^e}\) has no members. For \(p = 3 \) the vector has two members and takes the form
\[ \begin{equation} {\mathbf { e}^e} = \left[ \begin{array}{cc} u_2 & u_3 \\ \end{array} \right] \label{eq:VertexDOF2} \end{equation} \]
A similar separation can be performed with the corresponding shape functions. The usefulness of this separation will become apparent later; it is intrinsic to the implementations presented in this tutorial.
These concepts can be extended to 2D and 3D elements. In 1D the two entities presented were the vertex and the edge. The extra entities that emerge in the 2D and 3D domains are faces and volumes, respectively.
The shape functions currently implemented in MoFEM are associated with vertices, edges, triangular faces and tetrahedral volumes. The shape functions for edges that are part of triangular faces and the shape functions of triangular faces are presented in Figure 4 and Figure 5, respectively. The values of a shape function over the 2D space of a face can be thought of as the varying height or depth of a nonlinear surface relative to the planar triangular face. When the curved surface is below or above the planar face, the shape function takes negative or positive values, respectively. Vertex shape functions are equal to unity at their associated vertices and identically zero along the opposite edge and on the other two vertices. The edge functions are always identically zero at all vertices and along all edges except for the associated edge. The face functions are generally non-zero over the face area but are always zero at vertices and along edges.
The volume shape functions are not presented in this tutorial due to the high complexity of presenting them in a 2D fashion. However, the reader can imagine the volume shape functions as smooth grey colour changes in 3D space: where a location is white or black, the shape function takes its minimum or maximum value, respectively. Furthermore, volume shape functions are identically zero at vertices and along edges and faces of the associated tetrahedron.
Figure 3: Hierarchical shape functions for vertices on a 2D face: (a) \(j_1\), (b) \(j_2\) and (c) \(j_3\). Each shape function is zero on the remaining vertices and on the edge they define.
Figure 4: Hierarchical shape functions for edges on a 2D face: (a) second order, (b) third order and (c) fourth order. Each shape function is generally non-zero on its corresponding edge and zero on all other edges.
Figure 5: Hierarchical shape functions for faces on a 2D face: (a) third order, (b) fourth order, (c) fifth order and (d) sixth order. Each shape function is zero on edges and vertices.
The concept of decomposition of elemental degrees of freedom described by equation \(\eqref {eq:ElementDOF1D}\) can now be taken further, from a 1D element to the 3D tetrahedral element. In fact, it will be evident that this decomposition applies to any type of 3D element under consideration. The vector of the degrees of freedom \({\mathbf {u}^e}\) of one element is
\[ \begin{equation} {\mathbf { u}^e} = \left[ \begin{array}{cccc} {\mathbf { n}^e} & {\mathbf { e}^e} & {\mathbf { f}^e} & {\mathbf { v}^e} \end{array} \right] \label{eq:ElementDOF3D} \end{equation} \]
where \({\mathbf { f}^e}\) and \({\mathbf { v}^e}\) are the sub-vectors containing the degrees of freedom associated with the face and volume shape functions, respectively. Therefore, the number of degrees of freedom for each sub-vector associated with an entity is equal to the number of shape functions associated with that particular entity. For instance, when the shape functions presented in Figure 4 and Figure 5 are used, \({\mathbf { e}^e}\) and \({\mathbf { f}^e}\) contain nine and ten degrees of freedom, respectively.
Similarly, the vector of shape functions of an element can be decomposed into four sub vectors
\[ \begin{equation} {\mathbf {N}^e} = \left[ \begin{array}{cccc} {\mathbf {N}^e_{\textrm {ver}}} & {\mathbf {N}^e_{\textrm {edge}}} & {\mathbf {N}^e_{\textrm {face}}} & {\mathbf {N}^e_{\textrm {vol}}} \\ \end{array} \right] \label{eq:ElementShape3D} \end{equation} \]
where \({\mathbf {N}^e}\) is the vector of the element's shape functions and \({\mathbf {N}^e_{\textrm {ver}}}\), \({\mathbf {N}^e_{\textrm {edge}}}\), \({\mathbf {N}^e_{\textrm {face}}}\) and \({\mathbf {N}^e_{\textrm {vol}}}\) are the sub-vectors of the shape functions associated with vertices, edges, faces and the volume, respectively.
It should be mentioned here that the choice of entities, and of their associated degrees of freedom and shape functions, considered when solving boundary value problems is dictated by the particular choice of shape function space and the order of approximation. The case where the shape functions of all entities are taken into account is the H1 space with \(p\geq 4\). Other spaces that can be used in MoFEM are L2, H-curl and H-div. For the sake of simplicity, this tutorial proceeds assuming an H1 space with \(p\geq 4\), so that the degrees of freedom and shape functions of all entities appear in the examples. Discussion of the particular choice of shape function space will be presented in another tutorial.
One of the most common processes within the Finite Element Method is the evaluation of the stiffness matrix. The evaluation of the element stiffness matrix is presented here making use of the decomposition of degrees of freedom and shape functions given in \(\eqref {eq:ElementDOF3D}\) and \(\eqref {eq:ElementShape3D}\).
The stiffness matrix of a volume element is evaluated as
\[ \begin{equation} {\mathbf {\textrm K}^e} = \int_{\Omega^e}({\mathbf {\nabla^{T}N}^e})^{\textrm T} {\mathbf {\nabla^{\textrm T} N}^e} {\textrm d\Omega^e} \label{eq:StiffnessIntegral} \end{equation} \]
where \({\mathbf {\textrm K}^e}\) is the element stiffness matrix, \(\Omega^e\) is the element domain and \({\mathbf {\rm \nabla^{\textrm T}}}\) is the vector form of the first order gradient operator taken as
\[ \begin{equation} {\mathbf {\rm \nabla}}^{\rm T} = \left[ \begin{array}{c} \dfrac{\partial}{\partial x}\\ \dfrac{\partial}{\partial y}\\ \dfrac{\partial}{\partial z}\\ \end{array} \right] \label{eq:Nabla} \end{equation} \]
hence the product \({\mathbf {\nabla^{\textrm T} N}^e}\) can be evaluated as
\[ \begin{equation} \begin{split} {\mathbf {\nabla^{\textrm T} N}^e} = \left[ \begin{array}{c} \dfrac{\partial}{\partial x}\\ \dfrac{\partial}{\partial y}\\ \dfrac{\partial}{\partial z}\\ \end{array} \right] \left[ \begin{array}{cccc} {\mathbf {N}^e_{\textrm ver}} & {\mathbf {N}^e_{\textrm edge}} & {\mathbf {N}^e_{\textrm face}} & {\mathbf {N}^e_{\textrm vol}} \end{array} \right] = \left[ \begin{array}{cccc} \dfrac{\partial {\mathbf {N}^e_{\textrm ver}} }{\partial x} & \dfrac{\partial {\mathbf {N}^e_{\textrm edge}} }{\partial x} & \dfrac{\partial {\mathbf {N}^e_{\textrm face}} }{\partial x} & \dfrac{\partial {\mathbf {N}^e_{\textrm vol}} }{\partial x} \\ \dfrac{\partial {\mathbf {N}^e_{\textrm ver}} }{\partial y} & \dfrac{\partial {\mathbf {N}^e_{\textrm edge}} }{\partial y} & \dfrac{\partial {\mathbf {N}^e_{\textrm face}} }{\partial y} & \dfrac{\partial {\mathbf {N}^e_{\textrm vol}} }{\partial y}\\ \dfrac{\partial {\mathbf {N}^e_{\textrm ver}} }{\partial z} & \dfrac{\partial {\mathbf {N}^e_{\textrm edge}} }{\partial z} & \dfrac{\partial {\mathbf {N}^e_{\textrm face}} }{\partial z} & \dfrac{\partial {\mathbf {N}^e_{\textrm vol}} }{\partial z} \\ \end{array} \right] = \left[ \begin{array}{cccc} {\mathbf {\nabla^{\textrm T} N}^e_{\textrm ver}} & {\mathbf {\nabla^{\textrm T} N}^e_{\textrm edge}} & {\mathbf {\nabla^{\textrm T} N}^e_{\textrm face}} & {\mathbf {\nabla^{\textrm T} N}^e_{\textrm vol}} \end{array} \right] \end{split} \label{eq:NablaShape1} \end{equation} \]
therefore the product \(({\mathbf {\nabla^{\textrm T} N}^e})^{\textrm T} {\mathbf {\nabla^{\textrm T} N}^e}\) is evaluated as
\[ \begin{equation} \begin{split} ({\mathbf {\nabla^{\textrm T} N}^e})^{\textrm T} {\mathbf {\nabla^{\textrm T} N}^e} = \left[ \begin{array}{c} ({\mathbf {\nabla^{\textrm T} N}^e_{\textrm {ver}}})^{\rm T}\\ ({\mathbf {\nabla^{\textrm T} N}^e_{\textrm {edge}}})^{\rm T}\\ ({\mathbf {\nabla^{\textrm T} N}^e_{\textrm {face}}})^{\rm T}\\ ({\mathbf {\nabla^{\textrm T} N}^e_{\textrm {vol}}})^{\rm T} \end{array} \right] \left[ \begin{array}{cccc} {\mathbf {\nabla^{\textrm T} N}^e_{\textrm {ver}}} & {\mathbf {\nabla^{\textrm T} N}^e_{\textrm {edge}}} & {\mathbf {\nabla^{\textrm T} N}^e_{\textrm {face}}} & {\mathbf {\nabla^{\textrm T} N}^e_{\textrm {vol}}} \end{array} \right] =\\ \\ = \left[ \begin{array}{cccc} ({\mathbf {\nabla^{\textrm T} N}^e_{\textrm {ver}}})^{\rm T} {\mathbf {\nabla^{\textrm T} N}^e_{\textrm {ver}}} & ({\mathbf {\nabla^{\textrm T} N}^e_{\textrm {ver}}})^{\rm T} {\mathbf {\nabla^{\textrm T} N}^e_{\textrm {edge}}} & ({\mathbf {\nabla^{\textrm T} N}^e_{\textrm {ver}}})^{\rm T} {\mathbf {\nabla^{\textrm T} N}^e_{\textrm {face}}} & ({\mathbf {\nabla^{\textrm T} N}^e_{\textrm {ver}}})^{\rm T} {\mathbf {\nabla^{\textrm T} N}^e_{\textrm {vol}}}\\ ({\mathbf {\nabla^{\textrm T} N}^e_{\textrm {edge}}})^{\rm T} {\mathbf {\nabla^{\textrm T} N}^e_{\textrm {ver}}} & ({\mathbf {\nabla^{\textrm T} N}^e_{\textrm {edge}}})^{\rm T} {\mathbf {\nabla^{\textrm T} N}^e_{\textrm {edge}}} & ({\mathbf {\nabla^{\textrm T} N}^e_{\textrm {edge}}})^{\rm T} {\mathbf {\nabla^{\textrm T} N}^e_{\textrm {face}}} & ({\mathbf {\nabla^{\textrm T} N}^e_{\textrm {edge}}})^{\rm T} {\mathbf {\nabla^{\textrm T} N}^e_{\textrm {vol}}} \\ ({\mathbf {\nabla^{\textrm T} N}^e_{\textrm {face}}})^{\rm T} {\mathbf {\nabla^{\textrm T} N}^e_{\textrm {ver}}}& ({\mathbf {\nabla^{\textrm T} N}^e_{\textrm {face}}})^{\rm T} {\mathbf {\nabla^{\textrm T} N}^e_{\textrm {edge}}}& ({\mathbf {\nabla^{\textrm T} N}^e_{\textrm {face}}})^{\rm T} {\mathbf {\nabla^{\textrm T} N}^e_{\textrm {face}}}& ({\mathbf {\nabla^{\textrm T} N}^e_{\textrm {face}}})^{\rm T} {\mathbf {\nabla^{\textrm T} N}^e_{\textrm {vol}}} \\ ({\mathbf {\nabla^{\textrm T} N}^e_{\textrm {vol}}})^{\rm T} {\mathbf {\nabla^{\textrm T} N}^e_{\textrm {ver}}}& ({\mathbf {\nabla^{\textrm T} N}^e_{\textrm {vol}}})^{\rm T} {\mathbf {\nabla^{\textrm T} N}^e_{\textrm {edge}}}& ({\mathbf {\nabla^{\textrm T} N}^e_{\textrm {vol}}})^{\rm T} {\mathbf {\nabla^{\textrm T} N}^e_{\textrm {face}}}& ({\mathbf {\nabla^{\textrm T} N}^e_{\textrm {vol}}})^{\rm T} {\mathbf {\nabla^{\textrm T} N}^e_{\textrm {vol}}} \\ \end{array} \right] \end{split} \label{eq:FullStiffness} \end{equation} \]
In dynamic analyses, the mass matrix is an additional component needed to solve the discrete problem. The mass matrix of a volume element is evaluated as
\[ \begin{equation} {\mathbf {\textrm M}^e} = \int_{\Omega^e}({\mathbf {N}^e})^{\textrm T} {\mathbf {N}^e} {\textrm d\Omega^e} \label{eq:MassMatrixIntegral} \end{equation} \]
where \({\mathbf {\textrm M}^e}\) is the element mass matrix. Similarly to the element stiffness matrix, the matrix quantity in the integral is evaluated analogously to \(\eqref {eq:FullStiffness}\) as
\[ \begin{equation} \begin{split} ({\mathbf {N}^e})^{\textrm T} {\mathbf {N}^e} = \left[ \begin{array}{c} ({\mathbf {N}^e_{\textrm {ver}}})^{\rm T}\\ ({\mathbf {N}^e_{\textrm {edge}}})^{\rm T}\\ ({\mathbf {N}^e_{\textrm {face}}})^{\rm T}\\ ({\mathbf {N}^e_{\textrm {vol}}})^{\rm T} \end{array} \right] \left[ \begin{array}{cccc} {\mathbf {N}^e_{\textrm {ver}}} & {\mathbf {N}^e_{\textrm {edge}}} & {\mathbf { N}^e_{\textrm {face}}} & {\mathbf {N}^e_{\textrm {vol}}} \end{array} \right] = \\ \left[ \begin{array}{cccc} ({\mathbf {N}^e_{\textrm {ver}}})^{\rm T} {\mathbf {N}^e_{\textrm {ver}}} & ({\mathbf {N}^e_{\textrm {ver}}})^{\rm T} {\mathbf {N}^e_{\textrm {edge}}} & ({\mathbf {N}^e_{\textrm {ver}}})^{\rm T} {\mathbf {N}^e_{\textrm {face}}} & ({\mathbf {N}^e_{\textrm {ver}}})^{\rm T} {\mathbf {N}^e_{\textrm {vol}}}\\ ({\mathbf {N}^e_{\textrm {edge}}})^{\rm T} {\mathbf {N}^e_{\textrm {ver}}} & ({\mathbf {N}^e_{\textrm {edge}}})^{\rm T} {\mathbf {N}^e_{\textrm {edge}}} & ({\mathbf {N}^e_{\textrm {edge}}})^{\rm T} {\mathbf {N}^e_{\textrm {face}}} & ({\mathbf {N}^e_{\textrm {edge}}})^{\rm T} {\mathbf {N}^e_{\textrm {vol}}} \\ ({\mathbf {N}^e_{\textrm {face}}})^{\rm T} {\mathbf {N}^e_{\textrm {ver}}}& ({\mathbf {N}^e_{\textrm {face}}})^{\rm T} {\mathbf {N}^e_{\textrm {edge}}}& ({\mathbf {N}^e_{\textrm {face}}})^{\rm T} {\mathbf {N}^e_{\textrm {face}}}& ({\mathbf {N}^e_{\textrm {face}}})^{\rm T} {\mathbf {N}^e_{\textrm {vol}}} \\ ({\mathbf {N}^e_{\textrm {vol}}})^{\rm T} {\mathbf {N}^e_{\textrm {ver}}}& ({\mathbf {N}^e_{\textrm {vol}}})^{\rm T} {\mathbf {N}^e_{\textrm {edge}}}& ({\mathbf {N}^e_{\textrm {vol}}})^{\rm T} {\mathbf {N}^e_{\textrm {face}}}& ({\mathbf {N}^e_{\textrm {vol}}})^{\rm T} {\mathbf {N}^e_{\textrm {vol}}} \\ \end{array} \right] \end{split} \label{eq:FullMass} \end{equation} \]
To assemble the global stiffness matrix and solve the discretised problem, three nested loops are performed. The outermost loop operates over the finite elements, the second loop operates over the element entities and the innermost loop operates over the element Gauss points.
In a typical implementation, the user can access base functions on the finite element for each of its sub-entities. For example, the quadrilateral is made from four nodes, four edges and one quadrilateral face. The tetrahedron is constructed from four nodes, six edges, four faces and one tetrahedral volume. The developer can access base functions on those entities through the structure MoFEM::EntitiesFieldData::EntData. This structure carries basic information on the entity, such as approximation order, orientation (sense), number of DOFs, base functions, etc. It is accessed from the User Data Operator explained in other tutorials.
In particular, base functions on an entity are accessed by MoFEM::EntitiesFieldData::EntData::getN and their derivatives by MoFEM::EntitiesFieldData::EntData::getDiffN. Here we explain the structure of the matrices returned by those two functions. Note that those functions are overloaded and have many variants for developer convenience, including variants for the H-div and H-curl spaces.
When assembling the contribution from a finite element, for each entity on the finite element MoFEM calls the developer-overridden implementation of the User Data Operator, i.e. the method MoFEM::DataOperator::doWork. You can see how such an operator is implemented, for example, in COR-6: Solid elasticity.
The goal of the loop over element entities is to assemble the element stiffness matrix presented in \(\eqref {eq:FullStiffness}\). The information associated with each entity is carried by row_data and col_data, which are instances of MoFEM::EntitiesFieldData::EntData passed as arguments to MoFEM::DataOperator::doWork. In col_data the matrix \({\mathbf {\nabla^{\textrm T} N}^e_{\textrm {ent}}}\) appearing in \(\eqref {eq:FullStiffness}\) is stored, while in row_data the matrices \(({\mathbf {\nabla^{\textrm T} N}^e_{\textrm {ent}}})^{\rm T}\) appearing in \(\eqref {eq:FullStiffness}\) are stored, where \({\mathbf {N}}^e_{\textrm {ent}}\) are the shape functions of a given entity. Each entity (ent) thus has the two matrices \(({\mathbf {\nabla^{\textrm T} N}^e_{\textrm {ent}}})^{\rm T}\) and \({\mathbf {\nabla^{\textrm T} N}^e_{\textrm {ent}}}\), as well as the shape functions corresponding to the entity's degrees of freedom. These matrices are stored row-wise, where each row corresponds to a Gauss point, as presented in \(\eqref {eq:NablaShape1}\)
\[ \newcommand{\myarray}[1]{{\left\downarrow\vphantom{#1}\right.{#1}}} \newcommand{\myarraysecond}[1]{{\overset{\xrightarrow[\hphantom{#1}]{\text{$n$ degrees of freedom of element entity}}}{#1}}} \begin{equation} \text{element $m$ gauss points}\myarray{ \begin{array}{c} \\ {\rm {gg}}_1 \\ \\ {\rm {gg}}_2 \\ \\ {\rm {gg}}_3 \\ \\ \vdots \\ {\rm {gg}}_m \end{array} } \myarraysecond{ \left[ \begin{array}{ccccc} {\overbrace{\begin{array}{c c c} \dfrac{\partial {\mathbf {N}^e_{\textrm 1}} }{\partial x} & \dfrac{\partial {\mathbf {N}^e_{\textrm 1}} }{\partial y} & \dfrac{\partial {\mathbf {N}^e_{\textrm 1}} }{\partial z} \end{array} }^{\textrm{1st Element DoF} } } &\dots & \dfrac{\partial {\mathbf {N}^e_{ n}} }{\partial x} & \dfrac{\partial {\mathbf {N}^e_{ n}} }{\partial y} & \dfrac{\partial {\mathbf {N}^e_{ n}} }{\partial z} \\ {\begin{array}{c c c} \dfrac{\partial {\mathbf {N}^e_{\textrm 1}} }{\partial x} & \dfrac{\partial {\mathbf {N}^e_{\textrm 1}} }{\partial y} & \dfrac{\partial {\mathbf {N}^e_{\textrm 1}} }{\partial z}\end{array}} & \dots & \dfrac{\partial {\mathbf {N}^e_{ n}} }{\partial x} & \dfrac{\partial {\mathbf {N}^e_{ n}} }{\partial y} & \dfrac{\partial {\mathbf {N}^e_{n}} }{\partial z} \\ {\begin{array}{c c c}\dfrac{\partial {\mathbf {N}^e_{\textrm 1}} }{\partial x} & \dfrac{\partial {\mathbf {N}^e_{\textrm 1}} }{\partial y} & \dfrac{\partial {\mathbf {N}^e_{\textrm 1}} }{\partial z}\end{array}} & \dots & \dfrac{\partial {\mathbf {N}^e_{ n}} }{\partial x} & \dfrac{\partial {\mathbf {N}^e_{ n}} }{\partial y} & \dfrac{\partial {\mathbf {N}^e_{ n}} }{\partial z} \\ {\begin{array}{c c c c c}\vdots\,\, & & \vdots & & \,\, \vdots \end{array}} & \ddots & \vdots & \vdots &\vdots \\ {\begin{array}{c c c}\dfrac{\partial {\mathbf {N}^e_{\textrm 1}} }{\partial x} & \dfrac{\partial {\mathbf {N}^e_{\textrm 1}} }{\partial y} & \dfrac{\partial {\mathbf {N}^e_{\textrm 1}} }{\partial z}\end{array}} & \dots & \dfrac{\partial {\mathbf {N}^e_{ n}} }{\partial x} & \dfrac{\partial {\mathbf {N}^e_{ n}} }{\partial y} & \dfrac{\partial {\mathbf {N}^e_{ n}} }{\partial z} \end{array} \right] } \hspace{2.cm} \label{eq:NablaShape2} \end{equation} \]
For each Gauss point \({\rm {gg}}\), the corresponding rows of the matrices \(\eqref {eq:NablaShape2}\) and \(\eqref {eq:ShapeGG}\) are retrieved, and a loop is performed to evaluate the products for all combinations of degrees of freedom.
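The sketch below illustrates, outside of any particular framework, how one entity-pair block of \(\eqref {eq:FullStiffness}\) can be integrated using the row-per-Gauss-point storage of \(\eqref {eq:NablaShape2}\). The container types and function name are illustrative only, and the Jacobian determinant is assumed to be folded into the Gauss weights:

```cpp
#include <vector>

// Schematic assembly of one (row entity, col entity) block of K^e.
// diffN matrices are stored with one row per Gauss point and three
// gradient components (d/dx, d/dy, d/dz) per degree of freedom.
void assembleStiffnessBlock(
    const std::vector<std::vector<double>> &rowDiffN, // m x (3*nRow)
    const std::vector<std::vector<double>> &colDiffN, // m x (3*nCol)
    const std::vector<double> &weights,               // m Gauss weights
    std::vector<std::vector<double>> &Kblock)         // nRow x nCol
{
  const std::size_t m = weights.size();
  const std::size_t nRow = rowDiffN[0].size() / 3;
  const std::size_t nCol = colDiffN[0].size() / 3;
  Kblock.assign(nRow, std::vector<double>(nCol, 0.0));
  for (std::size_t gg = 0; gg != m; ++gg)       // Gauss points
    for (std::size_t i = 0; i != nRow; ++i)     // row entity DOFs
      for (std::size_t j = 0; j != nCol; ++j) { // column entity DOFs
        double dot = 0.0;
        for (int d = 0; d != 3; ++d)            // grad(N_i) . grad(N_j)
          dot += rowDiffN[gg][3 * i + d] * colDiffN[gg][3 * j + d];
        Kblock[i][j] += weights[gg] * dot;
      }
}
```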
Similarly, the mass matrix presented in \(\eqref {eq:FullMass}\) is evaluated by performing the loop over the element entities and Gauss points, where the shape functions are stored in a matrix data structure as presented below
\[ \newcommand{\myarray}[1]{{\left\downarrow\vphantom{#1}\right.{#1}}} \newcommand{\myarraysecond}[1]{{\overset{\xrightarrow[\hphantom{#1}]{\text{$n$ degrees of freedom of element entity}}}{#1}}} \begin{equation} \text{element $m$ gauss points}\myarray{ \begin{array}{c} {\rm {gg}}_1 \\ {\rm {gg}}_2 \\ {\rm {gg}}_3 \\ \vdots \\ {\rm {gg}}_m \end{array} } \myarraysecond{ \left[ \begin{array}{ccccccc} \mathbf {N}^e_{\textrm 1} & {\mathbf {N}^e_{\textrm 2}} & {\mathbf {N}^e_{\textrm 3}} & \dots & {\mathbf {N}^e_{n-2}} & {\mathbf {N}^e_{ n-1}} & {\mathbf {N}^e_{n}} \\ {\mathbf {N}^e_{\textrm 1}} & {\mathbf {N}^e_{\textrm 2}} & {\mathbf {N}^e_{\textrm 3}} & \dots & {\mathbf {N}^e_{n-2}} & {\mathbf{N}^e_{n-1}} & {\mathbf {N}^e_{n}} \\ {\mathbf {N}^e_{\textrm 1}} & {\mathbf {N}^e_{\textrm 2}} & {\mathbf {N}^e_{\textrm 3}} & \dots & {\mathbf {N}^e_{n-2}} & {\mathbf {N}^e_{n-1}} & {\mathbf {N}^e_{n}}\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots &\vdots \\ {\mathbf {N}^e_{\textrm 1}} & {\mathbf {N}^e_{\textrm 2}} & {\mathbf {N}^e_{\textrm 3}} & \dots & {\mathbf {N}^e_{n-2}} & {\mathbf {N}^e_{n-1}} & {\mathbf {N}^e_{n}} \end{array} \right] } \hspace{2.5cm} \label{eq:ShapeGG} \end{equation} \]
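An analogous sketch for one entity-pair block of the mass matrix \(\eqref {eq:FullMass}\), using the storage of \(\eqref {eq:ShapeGG}\) (again with illustrative names and the Jacobian folded into the weights):

```cpp
#include <vector>

// Schematic assembly of one (row entity, col entity) block of M^e.
// Shape-function matrices hold one row per Gauss point and one
// column per entity degree of freedom, as in Eq. (ShapeGG).
void assembleMassBlock(
    const std::vector<std::vector<double>> &rowN, // m x nRow
    const std::vector<std::vector<double>> &colN, // m x nCol
    const std::vector<double> &weights,           // m Gauss weights
    std::vector<std::vector<double>> &Mblock)     // nRow x nCol
{
  const std::size_t m = weights.size();
  const std::size_t nRow = rowN[0].size();
  const std::size_t nCol = colN[0].size();
  Mblock.assign(nRow, std::vector<double>(nCol, 0.0));
  for (std::size_t gg = 0; gg != m; ++gg)
    for (std::size_t i = 0; i != nRow; ++i)
      for (std::size_t j = 0; j != nCol; ++j)
        Mblock[i][j] += weights[gg] * rowN[gg][i] * colN[gg][j];
}
```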
Note that there is no notion of the shape of the element, since matrices are integrated and assembled entity-by-entity within each finite element. This implementation approach is completely general. To evaluate the element stiffness matrix, one needs only to list the entities and, for each of them, the Gauss points, degrees of freedom, shape functions and their gradients, and perform the aforementioned loops.
Research | Open Access | Published: 11 August 2016
Solutions to some congruence equations via suborbital graphs
Bahadır Özgür Güler1,
Tuncay Kör1 &
Zeynep Şanlı1
SpringerPlus volume 5, Article number: 1327 (2016)
We describe the connection between the lengths of circuits in the suborbital graph for the normalizer of \(\Gamma _0(m)\) in PSL(2,\(\mathbb {R}\)) and the congruence equations arising from the related group action. We give a number theoretic result which says that every prime divisor of \(3u^2\mp 3u+1\), for any integer u, must be congruent to \(1\pmod {3}\).
It is known that the graph of a group provides a method by which a group can be visualized; in many cases it suggests an economical algebraic proof of a result, and it gives the same information in a much more efficient way (Magnus et al. 1966). In this view, the idea of the suborbital graph has been used mainly by finite group theorists.
After it was shown that this idea is also useful in the study of the modular group, which is a finitely generated Fuchsian group (Jones et al. 1991), some other finitely generated groups have been studied via suborbital graphs (see Akbaş and Başkan 1996; Akbaş 2001; Akbaş et al. 2013; Beşenk et al. 2013; Deger et al. 2011; Güler et al. 2011, 2015; Kader et al. 2010; Kader and Güler 2013; Kesicioğlu et al. 2013; Keskin 2006; Kör et al. 2016). Most of these works emphasize the connection between elliptic elements in the group and circuits of the same order in the graph, which is closely related to the signature problem.
On the other hand, interesting number theoretic results arise from suborbital graphs as follows:
A shortest path in subgraphs can be expressed as a continued fraction (Jones et al. 1991);
A shortest path in trees of suborbital graphs is a special case of Pringsheim continued fraction (Deger et al. 2011);
The subgraph \(F_{1,2}\) can be defined as a new kind of continued fraction, and any irrational number has a unique \(F_{1,2}\) expansion (Sarma et al. 2015);
The set of vertices of some suborbital graphs is strongly connected to the Fibonacci sequence (Akbaş et al. 2013).
In this light, we conclude that these graphs may be worth examining from the number theory aspect alone. In fact, it is well known that modular groups have been studied extensively in number theory.
The aim of this paper is to examine the action of the normalizer of \(\Gamma _0(m)\), which produces some congruence equations with solutions. The suborbital graphs of the normalizer have been studied for some special cases (see Güler et al. 2011; Kader et al. 2010; Keskin 2006). Here, we consider a different case, where m is of the form \(3p^2\) with p a prime number greater than or equal to 5.
Let \(PSL(2,\mathbb {R})\) denote the group of all linear fractional transformations
$$\begin{aligned} T:z \rightarrow \frac{{az + b}}{{cz + d}}, \quad \text { where }\ a,b,c\ \text { and }\ d \ \text { are real and }\ ad-bc=1. \end{aligned}$$
In terms of matrix representation, the elements of \(PSL(2,\mathbb {R})\) correspond to the matrices
$$\begin{aligned} \pm \left( \begin{array}{cc} a &\quad b \\ c &\quad d \end{array} \right) ; \quad a,b,c,d \in \mathbb {R}\quad \mathrm {and} \quad ad-bc=1. \end{aligned}$$
This is the automorphism group of the upper half plane \(\mathbb {H}:=\left\{ z\in \mathbb {C}:\mathrm {Im(z)}>0\right\}\). \(\Gamma\), the modular group which is also denoted by \(PSL(2,\mathbb {Z})\), is the subgroup of \(PSL(2,\mathbb {R})\) such that \(a,b,c\ {\text{ and }} \ d\) are integers. It is one of the most well-known and important discrete groups.
Arithmetic subgroups are finite index subgroups of the modular group. An arithmetic subgroup is said to be congruence if it contains the kernel of a modulo m homomorphism from \(PSL(2,\mathbb {Z})\) to \(PSL(2,\mathbb {Z}/m\mathbb {Z})\) for some positive integer m. \(\Gamma _0(m)\) is the congruence subgroup of \(\Gamma\) with m|c.
$$\begin{aligned} \mathrm {\Gamma }_0(m)={\left\{ \left( \begin{array}{cc} a &\quad b \\ c &\quad d \end{array} \right) \equiv \pm \left( \begin{array}{cc} *&\quad *\\ 0 &\quad *\end{array} \right) \pmod {m} \right\} } \end{aligned}$$
We refer the interested reader to a number of sources (Kulkarni 1991; Miyake 1989; Schoeneberg 1974; Shimura 1971) which are also useful to follow the proofs in next section.
By Conway and Norton (1977), the normalizer Nor(m) of \(\Gamma _0(m)\) in \(PSL(2,\mathbb {R})\) consists exactly of matrices
$$\begin{aligned} \left( \begin{array}{cc} {ae} & \quad {b/h} \\ {cm/h} & \quad {de} \end{array}\right) , \end{aligned}$$
where \(e\parallel \frac{m}{h^2}\) and h is the largest divisor of 24 for which \(h^2|m\), with the understandings that the determinant e of the matrix is positive, and that \(r\parallel s\) means that r|s and \((r,s/r)=1\) (r is called an exact divisor of s). Nor(m) is a Fuchsian group whose fundamental domain has finite area, so it has a signature consisting of the geometric invariants
$$\begin{aligned} (g;m_{1},\ldots ,m_{r};s) \end{aligned}$$
where g is the genus of the compactified quotient space, \(m_{1},\ldots ,m_{r}\) are the periods of the elliptic elements and s is the parabolic class number.
The action of Nor(m) on \(\hat{\mathbb {Q}}\)
Every element of the extended set of rationals \(\hat{\mathbb {Q}}=\mathbb {Q}\cup \left\{ \infty \right\}\) can be represented as a reduced fraction \(\frac{x}{y}\), with \(x,y \in \mathbb {Z}\) and \((x,y)=1\); since \(x/y=-x/-y\), this representation is not unique. We represent \(\infty\) as \(\frac{1}{0} = \frac{-1}{0}\). The action of the matrix \(\left( \begin{array}{cc} a &\quad b \\ c &\quad d \end{array} \right)\) on x / y is
$$\begin{aligned} \left( \begin{array}{cc} a &\quad b \\ c &\quad d \end{array} \right) : \frac{x}{y} \rightarrow \frac{ax+by}{cx+dy} . \end{aligned}$$
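As a small illustration (not part of the paper), the following C++ sketch applies such a matrix to a reduced fraction and returns the image in reduced form. For elements of \(\Gamma _0(m)\), which have determinant 1, the image of a reduced fraction is automatically reduced; the gcd step keeps the sketch valid for normalizer elements of determinant \(e>1\) as well.

```cpp
#include <numeric> // std::gcd (C++17)
#include <utility>

// Action of the matrix [[a, b], [c, d]] on a reduced fraction x/y:
// the image (a*x + b*y)/(c*x + d*y), returned in reduced form with
// a non-negative denominator (infinity is represented as 1/0).
std::pair<long, long> act(long a, long b, long c, long d,
                          long x, long y) {
  long num = a * x + b * y;
  long den = c * x + d * y;
  long g = std::gcd(num < 0 ? -num : num, den < 0 ? -den : den);
  if (g != 0) { num /= g; den /= g; }
  if (den < 0) { num = -num; den = -den; } // normalise the sign
  return {num, den};
}
```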
Lemma 1
(Akbaş and Singerman 1992, Corollary 2) Let m have the prime power decomposition \(2^{\alpha _1}\cdot 3^{\alpha _2}\cdot p_3^{\alpha _3}\cdots p_r^{\alpha _r}\) . Then Nor (m) acts transitively on \(\hat{\mathbb {Q}}\) if and only if \(\alpha _1\le 7\) , \(\alpha _2\le 3\) and \(\alpha _i\le 1\) for \(i=3,\dots ,r\).
Corollary 2
The action of the normalizer \(Nor(3p^2)\) is not transitive on \(\hat{\mathbb {Q}}\).
Since m is of the aforementioned form, the result is immediate from Lemma 1.
In this case, we will find a maximal subset of \(\hat{\mathbb {Q}}\) on which the normalizer acts transitively. Since \(\Gamma _0(m)\subset Nor(m)\), we first consider a more special case, to better understand the situation, before stating our desired result. We now give the following Lemma.
Lemma 3
(Akbaş and Başkan 1996, Theorem 4.1) Given an arbitrary rational number k / s with \((k,s)=1\) , there exists an element \(A\in \Gamma _0(m)\) such that \(A(k/s)=(k_1/s_1)\) with \(s_1|m\).
The following known Theorem is also proved in the same paper. We will present a different proof for the sake of completeness.
Theorem 4
(Akbaş and Başkan 1996, Theorem 4.3) Let b|m and let \((a_1,b)=(a_2,b)=1\) . Then \(\left( \begin{array}{c} a_1 \\ b \end{array} \right)\) and \(\left( \begin{array}{c} a_2 \\ b \end{array} \right)\) are conjugate under the action of \(\Gamma _0(m)\) if and only if \(a_1\equiv a_2\pmod {t}\) , where \(t=\left( b,\frac{m}{b}\right)\).
The necessary part is obvious by Lemma 3. We must prove the converse. Suppose that \(a_2=a_1+t(b,m/b)\) for some \(t\in \mathbb {Z}\). We need an element \(T=\left( \begin{array}{cc} k &\quad \ell \\ rm &\quad s \end{array} \right)\) of \(\Gamma _0(m)\) such that \(T\left( \begin{array}{c} a_1 \\ b \end{array} \right) =\left( \begin{array}{c} a_2 \\ b \end{array} \right)\). Performing the multiplication of the matrix T and \(\left( \begin{array}{c} a_1 \\ b \end{array} \right)\), we obtain three equations in the four variables \(k, \ell , r\) and s, as follows.
$$\begin{aligned} \begin{aligned} ka_1+\ell b&=a_1+t(b,m/b)\\ ra_1\frac{m}{b}+s&=1\\ ks-rm\ell&=1. \end{aligned} \end{aligned}$$
Put \(b_0=(b,m/b)\). Since \((a_1,b)=1\), the first equation has solutions \((k,\ell )\). So \(k=\frac{a_1+tb_0-\ell b}{a_1}\), and from the second equation \(s=1-ra_1\frac{m}{b}\). Putting these k and s into the third equation we get \(r(a_1^2+a_1tb_0)\frac{m}{bb_0}+\ell \frac{b}{b_0}=t\). The coefficient of r and \(\frac{b}{b_0}\) are coprime. Therefore the equation has solutions r and \(\ell\). Consequently, we have obtained an element T (in fact, infinitely many) in \(\Gamma _0(m)\) such that \(T\left( \begin{array}{c} a_1 \\ b \end{array} \right) =\left( \begin{array}{c} a_2 \\ b \end{array} \right)\).
Lemma 5
(Güler et al. 2011, Corollary 2.4) Let b|m. Then the orbit \(\left( {\begin{array}{c}a\\ b\end{array}}\right)\) of a/b under the action of \(\Gamma _0(m)\) is the set \(\Bigl \{x/y\in \hat{\mathbb {Q}}:(m,y)=b,\ a\equiv x\frac{y}{b}\pmod {\left( b,\frac{m}{b}\right) } \Bigr \}\) . Furthermore, the number of orbits is \(\varphi \left( b,\frac{m}{b}\right)\), where \(\varphi (n)\) is Euler's totient function, i.e. the number of positive integers less than or equal to n that are coprime to n.
Lemma 3 and Theorem 4 complete the proof.
From the above we come to the following conclusion.
Corollary 6
The orbits of the action of \(\Gamma _0(3p^2)\) on \(\hat{\mathbb {Q}}\) are
$$\begin{aligned}&\left( {\begin{array}{c}1\\ 1\end{array}}\right) ; \left( {\begin{array}{c}1\\ 3\end{array}}\right) ; \left( {\begin{array}{c}1\\ p\end{array}}\right) , \left( {\begin{array}{c}2\\ p\end{array}}\right) , \dots , \left( {\begin{array}{c}p-1\\ p\end{array}}\right) ;\\&\left( {\begin{array}{c}1\\ 3p\end{array}}\right) , \left( {\begin{array}{c}2\\ 3p\end{array}}\right) , \dots \ , \left( {\begin{array}{c}2p-1\\ 3p\end{array}}\right) ; \left( {\begin{array}{c}1\\ p^2\end{array}}\right) ; \left( {\begin{array}{c}1\\ 3p^2\end{array}}\right) . \end{aligned}$$
Let us denote the representatives of the orbits by \(\left( \begin{matrix} a \\ b\end{matrix}\right)\) as above. The possible values of b are \(1, 3, p, 3p, p^{2}, 3p^{2}\) by Lemma 3. Hence, by Euler's totient formula, the number of non-conjugate classes of these orbits is 1 for \(b = 1, 3, p^{2}, 3p^{2}\) and \(p-1\) for \(b = p, 3p\). By Lemma 5, the result is obvious. \(\square\)
Theorem 7
The orbits of the action of \(Nor(3p^2)\) on \(\hat{\mathbb {Q}}\) are as follows. Let \(l\in \{1,2,\dots ,p-1\}\) . Then
(1) (a) If \(3\not \mid l\) and \(l\not \equiv p\pmod {3}\),
$$\begin{aligned} \left( {\begin{array}{c}l\\ p\end{array}}\right) \cup \left( {\begin{array}{c}p-l\\ p\end{array}}\right) \cup \left( {\begin{array}{c}l\\ 3p\end{array}}\right) \cup \left( {\begin{array}{c}p-l\\ 3p\end{array}}\right) \end{aligned}$$
(b) If \(3\not \mid l\) and \(l\equiv p\pmod {3}\),
$$\begin{aligned} \left( {\begin{array}{c}l\\ p\end{array}}\right) \cup \left( {\begin{array}{c}p-l\\ p\end{array}}\right) \cup \left( {\begin{array}{c}l\\ 3p\end{array}}\right) \cup \left( {\begin{array}{c}2p-l\\ 3p\end{array}}\right) \end{aligned}$$
(2) If \(3\mid l\) , then
$$\begin{aligned} \left( {\begin{array}{c}l\\ p\end{array}}\right) \cup \left( {\begin{array}{c}p-l\\ p\end{array}}\right) \cup \left( {\begin{array}{c}p+l\\ 3p\end{array}}\right) \cup \left( {\begin{array}{c}p-l\\ 3p\end{array}}\right) \end{aligned}$$
(3)
$$\begin{aligned} \left( {\begin{array}{c}1\\ 1\end{array}}\right) \cup \left( {\begin{array}{c}1\\ 3\end{array}}\right) \cup \left( {\begin{array}{c}1\\ p^2\end{array}}\right) \cup \left( {\begin{array}{c}1\\ 3p^2\end{array}}\right) \end{aligned}$$
We prove only (1)-(a). The rest are done similarly. If \(T=\left( \begin{matrix} ae & \quad b \\ 3p^2c &\quad de\end{matrix}\right) \in Nor(3p^2)\), then \(e=1,3,p^2\) or \(3p^2\).
If \(e=1\), then \(T\genfrac(){0.0pt}0{l}{p}=\genfrac(){0.0pt}0{l}{p}\).
If \(e=3\), then \(T\genfrac(){0.0pt}0{l}{p}=\left( \begin{matrix} 3a &\quad b \\ 3p^2c &\quad 3d\end{matrix}\right) \genfrac(){0.0pt}0{l}{p}=\frac{3al+bp}{3p^2cl+3dp}\). Since \(det\left( \begin{matrix} 3a &\quad b \\ p^2c &\quad d\end{matrix}\right) =1\), then \((3al+bp,p^2cl+3d)=1\). So \(\frac{3al+bp}{3p(pcl+d)}\in \left( {\begin{array}{c}x\\ 3p\end{array}}\right)\) and \(x\equiv (3al+bp)(pcl+d)\pmod {p}\). As \(detT=3\), then \(x\equiv l\pmod {p}\). Consequently \(\genfrac(){0.0pt}0{l}{p}\cup \genfrac(){0.0pt}0{l}{3p}\).
If \(e=p^2\), then \(T\genfrac(){0.0pt}0{l}{p}=\frac{apl+b}{p(3cl+dp)}\in \left( {\begin{array}{c}x\\ p\end{array}}\right)\). As \(detT=p^2\), then \(x\equiv p-l\pmod {p}\). Consequently \(\genfrac(){0.0pt}0{l}{p}\cup \genfrac(){0.0pt}0{p-l}{p}\).
If \(e=3p^2\), then \(T\genfrac(){0.0pt}0{l}{p}=\frac{3apl+b}{3p(cl+3dp)}\in \left( {\begin{array}{c}x\\ 3p\end{array}}\right)\). So \(x\equiv -l\equiv p-l\pmod {p}\). The four cases \(e=1, 3, p^2, 3p^2\) complete the proof
\(\square\)
\(\hat{\mathbb {Q}}(3p^2)=\left( {\begin{array}{c}1\\ 1\end{array}}\right) \cup \left( {\begin{array}{c}1\\ 3\end{array}}\right) \cup \left( {\begin{array}{c}1\\ p^2\end{array}}\right) \cup \left( {\begin{array}{c}1\\ 3p^2\end{array}}\right)\) is the maximal subset of \(\hat{\mathbb {Q}}\) on which the normalizer \(Nor(3p^2)\) acts transitively.
The stabilizer of a point in \(\hat{\mathbb {Q}}(3p^2)\) is an infinite cyclic group.
Because of the transitive action, the stabilizers of any two points are conjugate, so it is enough to consider just \(\infty =\frac{1}{0}\in \left( {\begin{array}{c}1\\ 3p^2\end{array}}\right)\). As \(T\genfrac(){0.0pt}0{1}{0}=\left( \begin{matrix} ae &\quad b \\ 3p^2c &\quad de\end{matrix}\right) \genfrac(){0.0pt}0{1}{0}=\frac{ae}{3p^2c}=\genfrac(){0.0pt}0{1}{0}\), we get \(c=0\) and \(e=1\). From the determinant equality, \(T=\left( \begin{matrix} 1 &\quad b \\ 0 &\quad 1\end{matrix}\right)\). Consequently \((Nor(3p^2))_\infty =\left\langle \left( \begin{matrix} 1 &\quad 1 \\ 0 &\quad 1\end{matrix}\right) \right\rangle\).
Now we consider the imprimitivity of the action of \(Nor(3p^2)\) on \(\hat{\mathbb {Q}}(3p^2)\), beginning with a general discussion of primitivity of permutation groups.
Let \(\left( G,\Omega \right)\) be a transitive permutation group, consisting of a group G acting on a set \(\Omega\) transitively. An equivalence relation \(\approx\) on \(\Omega\) is called G-invariant if, whenever \(\alpha ,\beta \in \Omega\) satisfy \(\alpha \approx \beta\), then \(g(\alpha )\approx g(\beta )\) for all \(g\in G.\) The equivalence classes are called blocks.
We call \(\left( G,\Omega \right)\) imprimitive if \(\Omega\) admits some G-invariant equivalence relation different from
the identity relation, \(\alpha \approx \beta\) if and only if \(\alpha =\beta\);
the universal relation, \(\alpha \approx \beta\) for all \(\alpha ,\beta \in \Omega\).
Otherwise \(\left( G,\Omega \right)\) is called primitive. These two relations are called the trivial relations.
Lemma 10
(Biggs and White 1979, Theorem 1.6.5) Let \(\left( G,\Omega \right)\) be a transitive permutation group. Then \(\left( G,\Omega \right)\) is primitive if and only if \(G_{\alpha },\) the stabilizer of \(\alpha \in \Omega\) , is a maximal subgroup of G for each \(\alpha \in \Omega\).
From the above Lemma we see that whenever, for some \(\alpha\), \(G_{\alpha }\lneq H\lneq G\), then \(\Omega\) admits some G-invariant equivalence relation other than the trivial ones. Because of the transitivity, every element of \(\Omega\) has the form \(g(\alpha )\) for some \(g\in G\). Thus one of the non-trivial G-invariant equivalence relations on \(\Omega\) is given as follows:
$$\begin{aligned} g(\alpha )\approx g^{\prime }(\alpha ) \hbox { if and only if } g^{\prime }\in gH. \end{aligned}$$
The number of blocks (equivalence classes) is the index \(\left| G:H\right|\), and the block containing \(\alpha\) is just the orbit \(H(\alpha )\).
To apply the above to our case, we take \(Nor(3p^2)\), \({\hat{\mathbb {Q}}}(3p^2)\), \(H_0(3p^2):=\left\langle \Gamma _0(3p^2),\genfrac(){0.0pt}0{3a \quad b}{3p^2c \quad 3d}\right\rangle\) and the stabilizer \((Nor(3p^2))_\infty\) in place of G, \(\Omega\), H and \(G_{\alpha }\), respectively. Clearly
$$\begin{aligned} G_{\infty }<H_0(3p^2)<Nor(3p^2). \end{aligned}$$
Lemma 11
(Akbaş and Singerman 1990, Proposition 2) The index \(|Nor(N):\Gamma _0(N)|=2^{\rho }h^2\tau\) , where \(\rho\) is the number of prime factors of \(N/h^2\) and \(\tau =(\frac{3}{2})^{\varepsilon _1}(\frac{4}{3})^{\varepsilon _2}\), with
$$\begin{aligned} \varepsilon _1= \left\{ \begin{array}{ll} 1 &\quad if \,2^2,2^4,2^6\parallel N \\ 0 &\quad otherwise \end{array} \right. ,\quad {\varepsilon _2= \left\{ \begin{array}{ll} 1 &\quad if \,9\parallel N \\ 0 &\quad otherwise \end{array} \right. } \end{aligned}$$
Theorem 12
The blocks arising from the imprimitive action of the normalizer, via the relation given above, have the form:
$$\begin{aligned}{}[0]:=\left( \begin{matrix} 1\\ 1\end{matrix}\right) \cup \left( \begin{matrix} 1\\ 3\end{matrix}\right) \quad\text {and}\quad [\infty ]:=\left( \begin{matrix} 1\\ p^{2}\end{matrix}\right) \cup \left( \begin{matrix} 1\\ 3p^{2}\end{matrix}\right) . \end{aligned}$$
First, we calculate the index \(|Nor(3p^2):\Gamma _0(3p^2)|\) using Lemma 11. It is clear that \(h=1\) for \(N=3p^2\). Furthermore, we have \(\rho =2\) and \(\varepsilon _1=\varepsilon _2=0\) in this case. Hence, \(|Nor(3p^2):\Gamma _0(3p^2)|=4\). Taking into account the definition of \(H_0(3p^2)\), it is clear that \(H_0(3p^2)=\Gamma _0(3p^2)\cup g\Gamma _0(3p^2)\) for an element g of the form \(\left( \begin{array}{cc} 3a &\quad b \\ 3p^2c &\quad 3d \end{array} \right)\). So we have \(|H_0(3p^2):\Gamma _0(3p^2)|=2\). Using the equation
$$\begin{aligned} |Nor(3p^2):\Gamma _0(3p^2)|= |Nor(3p^2):H_0(3p^2)|.|H_0(3p^2):\Gamma _0(3p^2)|, \end{aligned}$$
we have \(|Nor(3p^2):H_0(3p^2)|=2\). So the number of blocks is 2, by the earlier comments. As observed in Theorem 7, the orbit \(\hat{\mathbb {Q}}(3p^2)\) is divided into two blocks, namely
$$\begin{aligned} \left( \begin{matrix} 1\\ 1\end{matrix}\right) \cup \left( \begin{matrix} 1\\ 3\end{matrix}\right) \quad\text {and}\quad \left( \begin{matrix} 1\\ p^{2}\end{matrix}\right) \cup \left( \begin{matrix} 1\\ 3p^{2}\end{matrix}\right) \end{aligned}$$
taking into account orbit \(\left( \begin{matrix} 1\\ 1\end{matrix}\right)\) under the action of g.
The suborbital graph of \(Nor(3p^2)\) and \(\hat{\mathbb {Q}}(3p^2)\)
Sims (1967) introduced the idea of the suborbital graphs of a permutation group G acting on a set \(\Delta\): these are graphs with vertex-set \(\Delta\), on which G induces automorphisms. We summarize Sims' theory as follows. Let \((G,\Delta )\) be a transitive permutation group. Then G acts on \(\Delta \times \Delta\) by \(g(\alpha ,\beta )=(g(\alpha ),g(\beta ))\) \((g\in G,\ \alpha ,\beta \in \Delta )\). The orbits of this action are called suborbitals of G. The orbit containing \((\alpha ,\beta )\) is denoted by \(O(\alpha ,\beta )\). From \(O(\alpha ,\beta )\) we can form a suborbital graph \(G(\alpha ,\beta )\): its vertices are the elements of \(\Delta\), and there is a directed edge from \(\gamma\) to \(\delta\) if \((\gamma ,\delta )\in O(\alpha ,\beta )\). A directed edge from \(\gamma\) to \(\delta\) is denoted by \((\gamma \rightarrow \delta )\). If \((\gamma ,\delta )\in O(\alpha ,\beta )\), then we say that there exists an edge \((\gamma \rightarrow \delta )\) in \(G(\alpha ,\beta )\).
If \(\alpha =\beta\), the corresponding suborbital graph \(G(\alpha ,\alpha )\), called the trivial suborbital graph, is self-paired: it consists of a loop based at each vertex \(\alpha \in \Delta\). By a circuit of length m (or a closed edge path), we mean a sequence \(\nu _1 \rightarrow \nu _2 \rightarrow \dots \rightarrow \nu _m \rightarrow \nu _1\) such that \(\nu _{i}\ne \nu _{j}\) for \(i\ne j\), where \(m\ge 3\). If \(m=3, 4\) and 6, then the circuit is called a triangle, a quadrilateral and a hexagon, respectively.
We now investigate the suborbital graphs for the action of \(Nor(3p^2)\) on \(\hat{\mathbb {Q}}(3p^2)\). Since the action of \(Nor(3p^2)\) on \(\hat{\mathbb {Q}}(3p^2)\) is transitive, \(Nor(3p^2)\) permutes the blocks transitively, so the subgraphs are all isomorphic; hence it is sufficient to study only one block. On the other hand, it is clear that each non-trivial suborbital graph contains a pair \((\infty ,u/p^2)\) for some \(u/p^2\in \hat{\mathbb {Q}}(3p^2)\). We let \(F(\infty ,u/p^2)\) be the subgraph of \(G(\infty ,u/p^2)\) whose vertices form the block \([\infty ]=\left( \begin{matrix} 1\\ p^{2}\end{matrix}\right) \cup \left( \begin{matrix} 1\\ 3p^{2}\end{matrix}\right)\), so that \(G(\infty ,u/p^2)\) consists of two disjoint copies of \(F(\infty ,u/p^2)\).
Theorem 13
(Edge condition) Let r/s and x/y be in the block \([\infty ]\) . Then there is an edge \(r/s\rightarrow x/y\) in \(F(\infty ,u/p^2)\) if and only if
If \(p^2|s\) but \(3p^2\not \mid s\), then \(x\equiv \mp 3ur\pmod {p^2}\), \(y\equiv \mp 3us\pmod {3p^2}\) and \(ry-sx=\mp p^2\);
If \(3p^2|s\), then \(x\equiv \mp ur\pmod {p^2}\), \(y\equiv \mp us\pmod {p^2}\) and \(ry-sx=\mp p^2\).
Assume first \(\frac{r}{s} \rightarrow \frac{x}{y}\) is an edge in \(F(\infty ,u/p^2)\) and \(p^2|s\) but \(3p^2\not \mid s\). Therefore there exists some T in the normalizer \(Nor(3p^2)\) such that T sends the pair \((\infty , u/p^2)\) to the pair (r / s, x / y), that is \(T(\infty )=r/s\) and \(T(u/p^2)=x/y\). Since \(3p^2\not \mid s\), T must be of the form \(\genfrac(){0.0pt}0{3a \qquad b}{3p^2c \quad 3d}\). \(T(\infty ) = 3a/3p^2c = \genfrac(){0.0pt}0{(-1)^ir}{(-1)^is}\) gives that \(r=(-1)^ia\) and \(s=(-1)^ip^2c\), for \(i=0,1\). \(T(u/p^2) =\left( \begin{matrix} 3a & \quad b \\ 3p^2c &\quad 3d\end{matrix}\right) \genfrac(){0.0pt}0{u}{p^2}=\)
$$\begin{aligned} = \left( \begin{array}{c} 3au+bp^2\\ 3p^2cu+3dp^2 \end{array} \right) = \left( \begin{array}{c} (-1)^jx \\ (-1)^jy \end{array} \right) \quad \mathrm{for} \ j=0,1. \end{aligned}$$
Since the matrix \(\genfrac(){0.0pt}0{3a \quad b}{p^2c \quad d}\) has determinant 1 and \((u,p^2)=1\), then \((3au+bp^2, p^2cu+dp^2)=1\). Therefore \((3au+bp^2, 3p^2cu+3dp^2)=1\). So
$$\begin{aligned} x = (-1)^j(3au+bp^2) , \ y = (-1)^j(3p^2cu+3dp^2) . \end{aligned}$$
That is, \(x\equiv \mp 3ur\pmod {p^2}\) and \(y\equiv \mp 3us\pmod {3p^2}\). Finally, since
$$\begin{aligned} \left( \begin{array}{cc} 3a & \quad b\\ 3p^2c & \quad 3d \end{array} \right) \left( \begin{array}{cc} 1 &\quad u\\ 0 &\quad p^2 \end{array} \right) = \left( \begin{array}{cc} (-1)^i3r &\quad (-1)^jx \\ (-1)^i3s &\quad (-1)^jy \end{array} \right) \quad \mathrm{for} \ i,j=0,1, \end{aligned}$$
we get \(ry-sx=\mp p^2\). This proves (i).
Secondly let \(\frac{r}{s} \rightarrow \frac{x}{y}\) be an edge in \(F(\infty ,u/p^2)\) and \(3p^2|s\). In this case T must be of the form \(\genfrac(){0.0pt}0{a \quad b}{3p^2c \ d}\), \(\det T=1\). Therefore, since \(T(\infty ) = \genfrac(){0.0pt}0{a}{3p^2c} = \genfrac(){0.0pt}0{(-1)^ir}{(-1)^is}\) we get \(a=r\) and \(s=3p^2c\), by taking i to be 0. Likewise, since
$$\begin{aligned} \left( \begin{array}{cc} a &\quad b\\ 3p^2c &\quad d \end{array} \right) \left( \begin{array}{c} u\\ p^2 \end{array} \right) = \left( \begin{array}{c} au+bp^2\\ 3p^2cu+dp^2 \end{array} \right) = \left( \begin{array}{c} (-1)^jx \\ (-1)^jy \end{array} \right) , \end{aligned}$$
we have \(x\equiv ur\pmod {p^2}\) and \(y\equiv us\pmod {p^2}\), and \(ry-sx=p^2\), with \(j=0\). The case where \(i=0\) and \(j=1\) gives the lower signs.
In the opposite direction we do the calculations only for (i) with the upper signs; the other cases are done likewise. So suppose \(x\equiv 3ur\pmod {p^2}\), \(y\equiv 3us\pmod {3p^2}\), \(ry-sx=p^2\), \(p^2|s\) and \(3p^2\not \mid s\). Then there exist b, d in \(\mathbb {Z}\) such that \(x=3ur+p^2b\) and \(y=3su+3p^2d\). Since \(ry-sx=p^2\), we get \(3rd-bs=1\), or \(9rd-3bs=3\). Hence the element \(T:=\genfrac(){0.0pt}0{3r \quad b}{3s \quad 3d}\) is not only in the normalizer \(Nor(3p^2)\), but also in \(H_0(3p^2)\). It is obvious that \(T(\infty )=\genfrac(){0.0pt}0{r}{s}\) and \(T\genfrac(){0.0pt}0{u}{p^2}=\genfrac(){0.0pt}0{x}{y}\). \(\square\)
Farey graph and subgraph \(F(\infty ,u/p^2)\)
Now, let us represent the edges of \(F(\infty ,u/p^2)\) as hyperbolic geodesics in the upper half-plane \(\mathbb {H}\), that is, as Euclidean semi-circles or half-lines perpendicular to the real line, as in Jones and Singerman (1987). To understand the situation better, we first recall the Farey graph and some of its properties.
Definition 14
The Farey graph, denoted by F, is defined as follows: the vertex \(\infty\) is joined to the integers, while two rational numbers r/s and x/y (in reduced form) are adjacent in F if and only if \(ry-sx=\mp 1\), or equivalently if they are consecutive terms in some Farey sequence \(F_{m}\) (consisting of the rationals x/y with \(|y|\le m\), arranged in increasing order). See also Fig. 1.
Lemma 15
(Jones et al. 1991, Corollary 4.2) No edges of F cross in \(\mathbb {H}\).
Farey graph
A similar result can be obtained from Theorem 13 together with the following useful Lemma, as in Jones et al. (1991):
Lemma 16
Let r/s and x/y be rational numbers such that \(ry-sx=-1\) , where \(s\ge 1\), \(y\ge 1\) . Then there exist no integers between r/s and x/y.
Let k be an integer such that \(r/s<k<x/y\). Then \(r<sk\) and \(x>ky\). Thus \(1=sx-ry>sx-sky=s(x-ky)\ge s\), which is a contradiction.
No edges of the subgraph \(F(\infty ,u/p^2)\) of \(Nor(3p^2)\) cross in \(\mathbb {H}\).
Without loss of generality, because of the transitive action, we may take the edges \(\infty \rightarrow \frac{u}{p^2}\) and \(\frac{x_1}{y_1p^2}\rightarrow \frac{x_2}{y_2p^2}\) with \(\frac{x_1}{y_1p^2}<\frac{u}{p^2}< \frac{x_2}{y_2p^2}\), where all letters are positive integers. It is easily seen that \(x_1y_2p^2-x_2y_1p^2=-p^2\) by Theorem 13. Then \(\frac{x_1}{y_1}<u< \frac{x_2}{y_2}\), and Lemma 16 completes the proof.
\(F(\infty ,u/p^2)\) has a self-paired edge iff \(3u^2\equiv -1\pmod {p^2}\).
Because of the transitive action, a self-paired edge can be taken in the form \(1/0\rightarrow u/p^2\rightarrow 1/0\). The condition follows immediately from the second edge by Theorem 13.
\(F(\infty ,u/p^2)\) has no triangle or quadrilateral.
Suppose it has a triangle. Because of the transitive action, it must be of the form \(1/0\rightarrow u/p^2\rightarrow x/3p^2y\rightarrow 1/0\). But this contradicts Theorem 13, by which the denominators of the two vertices of an edge \(\frac{r}{s}\rightarrow \frac{x}{y}\) are not both divisible by 3. Now suppose it has a quadrilateral. It must be of the form \(1/0\rightarrow u/p^2\rightarrow x/3p^2y\rightarrow k/p^2\rightarrow 1/0\), for the same reason. From the second, third and fourth edges, by Theorem 13, we have the equations \(3uy-x=-1\), \(x-3ky=-1\) and \(1\equiv -3uk\pmod {p^2}\). Therefore we obtain the contradiction 3 | 2.
If \(3u^2\mp 3u+1 \equiv 0\pmod {p^2}\), \(F(\infty ,u/p^2)\) has a hexagon.
By Theorem 13, we obtain easily that
$$\begin{aligned} \infty \rightarrow \frac{u}{p^2}\rightarrow \frac{3u\mp 1}{3p^2}\rightarrow \frac{2u\mp 1}{2p^2}\rightarrow \frac{3u\mp 2}{3p^2} \rightarrow \frac{u\mp 1}{p^2}\rightarrow \infty \end{aligned}$$
As an example, we can easily verify that \(\frac{1}{0}\rightarrow \frac{7}{169}\rightarrow \frac{22}{507}\rightarrow \frac{15}{338}\rightarrow \frac{23}{507}\rightarrow \frac{8}{169}\rightarrow \frac{1}{0}\) is a hexagon in \(F(\infty ,7/169)\). See also Fig. 2.
Subgraph \(F(\infty ,7/169)\)
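This example is easy to check numerically (the check is ours, not part of the paper): by Theorem 13, consecutive vertices \(r/s\rightarrow x/y\) of the hexagon must satisfy \(ry-sx=\mp p^2\) with \(p=13\). The following self-contained program verifies all six edges:

```cpp
#include <array>
#include <cstdio>
#include <cstdlib>

// Verify the hexagon 1/0 -> 7/169 -> 22/507 -> 15/338 -> 23/507
// -> 8/169 -> 1/0 in F(infinity, 7/169): for consecutive vertices
// r/s -> x/y we must have |r*y - s*x| = p^2 with p = 13.
int main() {
  const long p2 = 13 * 13; // p^2 = 169
  const std::array<std::array<long, 2>, 7> v = {{
      {1, 0}, {7, 169}, {22, 507}, {15, 338},
      {23, 507}, {8, 169}, {1, 0}}};
  for (std::size_t i = 0; i + 1 < v.size(); ++i) {
    long det = v[i][0] * v[i + 1][1] - v[i][1] * v[i + 1][0];
    std::printf("edge %zu: ry - sx = %ld\n", i + 1, det);
    if (std::labs(det) != p2) return 1; // not an edge of the subgraph
  }
  std::puts("all six edges satisfy |ry - sx| = p^2");
  return 0;
}
```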
\(H_0(3p^2)\) contains an elliptic element \(\varphi\) of order 6 if and only if \(F(\infty ,u/p^2)\) contains a hexagon.
Taking into account the form of the elements of \(H_0(3p^2)\), we suppose that \(\varphi =\left( \begin{array}{cc} {3a} &\quad {b} \\ {3p^2c} &\quad {3d} \end{array}\right)\) is an elliptic element of order 6. It is known that \(a+d=\pm 1\) for orders 3, 4 and 6. Since \(\det \varphi =3\), we have \(3a(\pm 1-a)\equiv 1\pmod {p^2}\), that is, \(3a^2\mp 3a+1\equiv 0\pmod {p^2}\). As \((a,p^2)=1\), \(F(\infty ,u/p^2)\) contains a hexagon by the above Theorem (with \(u=a\)).
Conversely, we suppose that \(F(\infty ,u/p^2)\) contains a hexagon. Because of the transitive action, we may take it to be the hexagon displayed above.
Hence we get the element \(\varphi :=\left( \begin{array}{cc} -3u &\quad (3u^2\mp 3u+1)/p^2 \\ -3p^2 &\quad 3u+3 \end{array} \right)\).
Lemma 22
(Akbaş and Singerman 1990, Theorem 2) The periods of elliptic elements of Nor(m) may be 2, 3, 4, 6. Nor(m) has at most one period of order 6. It has a period of order 6 iff \(3\Vert m/h^2\) and if p is an odd prime divisor of \(m/h^2\) then \(p\equiv 1\pmod {3}\).
Corollary 23
The prime divisors p of \(3u^2\mp 3u+1\) , for any \(u\in \mathbb {Z}\) , are of the form \(p\equiv 1\pmod {3}\).
Let p be a prime divisor of \(3u^2\mp 3u+1\) for some integer u. In this case, it is clear that Nor(3p) contains the elliptic element \(\left( \begin{array}{cc} -3u &\quad (3u^2\mp 3u+1)/p \\ -3p &\quad 3u+3\end{array} \right)\) of order 6, as in \(Nor(3p^2)\). We get \(p\equiv 1\pmod {3}\) by the above Lemma.
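The corollary is also easy to probe numerically. The following self-contained sketch (ours, not from the paper) factors \(3u^2\mp 3u+1\) by trial division for all \(u\le 10000\) and confirms that every prime factor found is congruent to \(1\pmod {3}\):

```cpp
#include <cstdio>

// Empirical check: every prime divisor of 3u^2 - 3u + 1 and of
// 3u^2 + 3u + 1 is congruent to 1 (mod 3).
int main() {
  for (long u = 1; u <= 10000; ++u) {
    for (long n : {3 * u * u - 3 * u + 1, 3 * u * u + 3 * u + 1}) {
      for (long d = 2; d * d <= n; ++d) {
        while (n % d == 0) { // d is prime when first reached here
          if (d % 3 != 1) {
            std::printf("counterexample: u=%ld, prime %ld\n", u, d);
            return 1;
          }
          n /= d;
        }
      }
      if (n > 1 && n % 3 != 1) { // remaining factor is prime
        std::printf("counterexample: u=%ld, prime %ld\n", u, n);
        return 1;
      }
    }
  }
  std::puts("verified for all u <= 10000");
  return 0;
}
```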
Because this work combines different fields of mathematics, such as algebra, geometry, group theory and number theory, it can be seen as an example of a multidisciplinary approach which offers a new understanding of some situations. We show once again that solutions to some number theoretic problems can be produced using finite group theory. Taking into account the conjecture of Güler et al. (2011), which is also confirmed by this paper for the simplest hexagonal case among the non-transitive cases, the normalizer has the potential to suggest solutions for other congruence equations such as \(8u^2\mp 4u+1\equiv 0\pmod {p}\), \(9u^2\mp 3u+1\equiv 0\pmod {p}\), \(27u^2\mp 9u+1\equiv 0\pmod {p}\), etc.
Akbaş M, Singerman D (1990) The normalizer of \(\Gamma _{0}(N)\) in \(PSL(2, R)\). Glasgow Math 32:317–327
Akbaş M, Singerman D (1992) The signature of the normalizer of \(\Gamma _{0}(N)\) in \(PSL(2, R)\). Lond Math Soc 165:77–86
Akbaş M, Başkan T (1996) Suborbital graphs for the normalizer of \(\Gamma _{0}(N)\). Turk J Math 20:379–387
Akbaş M (2001) On suborbital graphs for the modular group. Bull Lond Math Soc 33:647–652
Akbaş M, Kör T, Kesicioğlu Y (2013) Disconnectedness of the subgraph \(F^3\) for the group \(\Gamma ^3\). J Inequal Appl 283:7
Beşenk M et al (2013) Circuit lengths of graphs for the Picard group. J Inequal Appl 106:8
Biggs NL, White AT (1979) Permutation groups and combinatorial structures. London mathematical society lecture note series, 33. Cambridge University Press, Cambridge
Conway JH, Norton SP (1977) Monstrous moonshine. Bull Lond Math Soc 11:308–339
Deger AH, Beşenk M, Güler BO (2011) On suborbital graphs and related continued fractions. Appl Math Comput 218(3):746–750
Güler BO et al (2011) Elliptic elements and circuits in suborbital graphs. Hacet J Math Stat 40(2):203–210
Güler BO et al (2015) Suborbital graphs for the group \(\Gamma ^2\). Hacet J Math Stat 44(5):1033–1044
Jones GA, Singerman D (1987) Complex functions: an algebraic and geometric viewpoint. Cambridge University Press, Cambridge
Jones GA, Singerman D, Wicks K (1991) The modular group and generalized Farey graphs. Lond Math Soc Lect Note Ser 160:316–338
Kader S, Güler BO, Değer AH (2010) Suborbital graphs for a special subgroup of the normalizer. Iran J Sci Technol A 34(A4):305–312
Kader S, Güler BO (2013) On suborbital graphs for the extended modular group \(\hat{\Gamma }\). Gr Comb 29(6):1813–1825
Kesicioğlu Y, Akbaş M, Beşenk M (2013) Connectedness of a suborbital graph for congruence subgroups. J Inequal Appl 117:7
Keskin R (2006) Suborbital graphs for the normalizer of \(\Gamma _{0}(m)\). Eur J Comb 27(2):193–206
Kör T, Güler BO, Şanlı Z (2016) Suborbital graphs for Atkin–Lehner group. Turk J Math. doi:10.3906/mat-1602-10
Kulkarni RS (1991) An arithmetic–geometric method in the study of the subgroups of the modular group. Am J Math 113(6):1053–1133
Magnus W, Karrass A, Solitar D (1966) Combinatorial group theory. Wiley, New York
Miyake T (1989) Modular forms. Springer, Berlin
Sarma R, Kushwaha S, Krishnan R (2015) Continued fractions arising from \(F_{1,2}\). J Number Theory 154:179–200
Schoeneberg B (1974) Elliptic modular functions. Springer, Berlin
Shimura G (1971) Introduction to the arithmetic theory of automorphic functions. Princeton University Press, Princeton
Sims CC (1967) Graphs and finite permutation groups. Math Z 95:76–86
BÖG, TK, ZŞ completed the paper together. All authors read and approved the final manuscript.
We would like to express our sincere gratitude to Professor M. Akbaş for his immense help during the preparation of this paper. The authors are also thankful to the anonymous referees for the valuable suggestions towards the improvement of this manuscript.
Department of Mathematics, Faculty of Science, Karadeniz Technical University, Trabzon, Turkey
Bahadır Özgür Güler, Tuncay Kör & Zeynep Şanlı
Correspondence to Bahadır Özgür Güler.
Imprimitive action
Suborbital graphs
Mathematics Subject Classification
11F06
Mathematics (Theoretical)
Comparison of registered and published intervention fidelity assessment in cluster randomised trials of public health interventions in low- and middle-income countries: systematic review protocol
Myriam Cielo Pérez1,2, Nanor Minoyan1,2, Valéry Ridde2,3, Marie-Pierre Sylvestre1,2 & Mira Johri (ORCID: orcid.org/0000-0001-5642-787X)1,4
Cluster randomised trials (CRTs) are a key instrument to evaluate public health interventions, particularly in low- and middle-income countries (LMICs). Fidelity assessment examines study processes to gauge whether an intervention was delivered as initially planned. Evaluation of implementation fidelity (IF) is required to establish whether the measured effects of a trial are due to the intervention itself and may be particularly important for CRTs of complex interventions. Current CRT reporting guidelines offer no guidance on IF assessment. We will systematically review the scientific literature to study current practices concerning the assessment of IF in CRTs of public health interventions in LMICs.
We will include CRTs of public health interventions in LMICs that planned or assessed IF in either the trial protocol or the main trial report (or an associated document). Search strategies use Medical Subject Headings (MESH) and text words related to CRTs, developing countries, and public health interventions. The electronic database search was developed first for MEDLINE and adapted for the following databases: EMBASE, CINAHL, PubMed, and EBM Reviews, to identify CRT reports in English, Spanish, or French published on or after January 1, 2012. To ensure availability of a study protocol, we will include CRTs reporting a registration number in the abstract. For each included study, we will compare planned versus reported assessment of IF, and consider the dimensions of IF studied and the data collection methods used to evaluate each dimension. Data will be synthesised using quantitative and narrative techniques. Risk of bias for individual studies will be assessed using the Cochrane Collaboration Risk of Bias Tool criteria and additional criteria related to CRT methods. We will investigate possible sources of heterogeneity by performing subgroup analysis. This review was not eligible for inclusion in the PROSPERO registry.
Fidelity assessment may be a key tool for making studies more reliable, internally valid, and externally generalizable. This review will provide a portrait of current practices related to the assessment of intervention fidelity in CRTs and offer suggestions for improvement. Results will be relevant to researchers, those who finance health interventions, and for decision-makers who seek the best evidence on public health interventions.
As evidenced by their growing presence in the scientific literature [1, 2], cluster randomised trials (CRTs) have become a key instrument to evaluate public health interventions [1, 3–7], particularly in low- and middle-income countries (LMICs) [3, 8]. Randomised controlled trials (RCTs) are widely considered to provide the highest quality of evidence on the effectiveness of health interventions [9–12], and CRTs are a form of randomised trial in which clusters of individuals (such as families, villages, hospital services, or schools) rather than independent individuals are randomly allocated to intervention or control groups [2]. Increasingly, public health researchers recognize the importance of developing health interventions that are directed not only to individuals but also to populations, communities, and a wide range of social and environmental factors influencing health [13, 14]. CRTs offer an appropriate design to assess such public health interventions and also to measure the overall effect of an intervention at the population level [3, 5, 8, 13, 15], heterogeneity of impact among population subgroups, and equity [16, 17].
Implementation fidelity in CRTs of public health interventions
Although the scientific debate is ongoing [18], randomised trials are generally viewed as the gold standard for establishing evidence of intervention effectiveness. Despite this, the use of CRTs to evaluate public health interventions raises unique methodological challenges. Recent systematic reviews of CRT methods have found evidence of improvements in the design and analysis of CRTs while noting deficiencies in trial implementation that may compromise their validity [19, 20]. Previous systematic reviews have emphasised the importance of process evaluation to mitigate these methodological problems, which can affect the internal and external validity of trial results [3, 19, 21–23].
"Implementation fidelity" refers to the degree to which an intervention is delivered as initially planned [24]. Fidelity assessment is an aspect of process evaluation that aims to understand and measure to what extent the intervention is being implemented as intended, with a view to clarifying relationships between intervention and its intended outcomes, and learning what specific reasons have caused the success or failure of the intervention [9, 24, 25]. Evaluation of implementation fidelity within trials has multiple benefits, which may include increased confidence in scientific findings, increased power to control for confounding factors and detect intervention effects, and increased ability to evaluate the performance of an intervention based on theory [26]. Several studies have found that interventions implemented with high fidelity achieved better results in comparison with low-fidelity interventions [27–33]. Fidelity assessment can improve the internal and external validity of CRTs [19] by providing evidence that the trial results are due to the intervention itself rather than to confounding variables and facilitating generalization of results to contexts that may differ substantially from the original trial setting [9, 24]. Fidelity assessment may be particularly important for trials of public health interventions, as these interventions tend to be complex and constituted by multiple components [10, 34] that may act independently or interdependently [35], leading to a greater potential for variation during implementation [24].
Framework for the evaluation of implementation fidelity used in this review
Table 1 outlines the conceptual framework for evaluation of implementation fidelity used in this review. The framework is based principally on the work by Carroll et al. [24] and includes elements of implementation fidelity and moderating factors that may affect the delivery process. The framework was further refined by Hasson, who expanded the list of moderating factors considered in the framework [36]. We selected this framework to guide the review because it provides a comprehensive synthesis of previous work on implementation fidelity and has been widely influential.
Table 1 Conceptual framework for implementation fidelity used in this review
Fidelity assessment in CRT reporting guidelines
The Consolidated Standards of Reporting Trials (CONSORT) group was created to provide guidance to improve the quality and transparency of reporting of RCTs [37]. The CONSORT Statement offers a checklist of essential items that should be included in reporting a RCT [37]. Due to the increasing use of CRT designs, the CONSORT group proposed a version of the CONSORT Statement for the reporting of cluster randomised trials in 2004 and updated these guidelines in 2012 [2, 38].
The CONSORT Statement recognises that the trial protocol for a given study may not have been followed fully for some trial participants for a wide variety of reasons, including failure to receive the entire intervention as planned [37]. Cases of protocol nonadherence may influence the interpretation and credibility of the results and thus the validity of the conclusions [19, 26, 39, 40]. To preserve the ability to make strong inferences about the intervention effect, CONSORT offers recommendations on how issues of nonadherence should be handled at the level of analysis. Specifically, it recommends that all participants randomised be retained in the analysis and analysed according to their original assigned groups, an approach known as "intention-to-treat" or "ITT" analysis. This approach ignores noncompliance, protocol deviations, and anything that occurs after randomisation. The rationale for the ITT approach is that random allocation procedures avoid bias when assigning interventions to trial participants and thus facilitate causal inference. Any exclusion of patients from the analysis risks compromising the randomisation and may lead to biased results. This ITT approach can be contrasted with a "per protocol" or "PP" analysis, which restricts the analysis to participants who fulfil the protocol in terms of eligibility, interventions, and outcome assessment [19, 26, 39, 40]. According to CONSORT, although a PP analysis may be appropriate in some instances, due to the exclusion of participants, it should be considered as a non-randomised, observational comparison.
The CONSORT guidance on handling protocol nonadherence has been primarily developed in relation to individually randomised parallel group trials. However, reasons for protocol nonadherence in individually randomised RCTs may differ from those in CRTs. In a clinical trial setting, nonadherence depends largely on the actions of the trial participant (e.g. failure to adhere to therapy) and the treatment provider (e.g. failure to follow treatment protocol), which may in turn be related to issues such as treatment side effects and safety. In CRTs of public health interventions, protocol nonadherence may occur because complex interventions that include multiple components are delivered with poor fidelity. However, despite the scientific importance of protocol nonadherence, the current CONSORT guidelines for individually randomised parallel group trials [37] and CRTs [2, 38] offer no advice on the methods to assess its occurrence during the course of a trial.
Rationale for undertaking this review
LMIC governments and other development partners have strengthened research and intervention efforts to support the UN Millennium Development Goals (MDGs) and Sustainable Development Goals (SDGs) agenda. As the global community intensifies the search for the best evidence on public health interventions to improve health and development outcomes in LMICs, CRTs have become an essential tool. Policymakers are interested in using the best available evidence to make decisions about the effectiveness of specific interventions in LMICs facing considerable budget constraints. Although CRTs have been widely implemented to evaluate public health interventions in both high-income countries and LMICs, country context, interventions, approaches, and outcomes may differ substantially between settings. We therefore limit our focus to LMICs.
As earlier methodologically-oriented systematic reviews have demonstrated, CRTs of complex public health interventions may be particularly at risk of experiencing protocol deviations and nonadherence, and these may compromise the validity of their findings [19, 20]. Although process evaluation techniques such as evaluation of implementation fidelity can help to assess the extent of these problems and mitigate their negative effects, current reporting guidelines for CRTs offer no specific guidance on the assessment of intervention fidelity within CRTs. Wide divergence in current practices is therefore likely. We will undertake a methodologically-oriented systematic review of current practices related to the assessment of intervention fidelity within CRTs of public health interventions in LMICs, with a view to informing the best practices for these CRTs. To our knowledge, no other systematic review has been conducted on this question.
We will conduct a systematic review of the published scientific literature to study current practices concerning the assessment of intervention fidelity in CRTs of public health interventions in LMICs.
This review will address the following research questions:
1. Based on information from the trial registry (and the published study protocol, if applicable): What proportion of recent CRTs of public health interventions in LMICs planned to assess implementation fidelity (IF)?
2. Based on information from the published trial report (or a complementary document such as a published article, a grey literature report, or an online appendix reporting the assessment of IF): What proportion of recent CRTs of public health interventions in LMICs reported assessing IF?
3. For those studies that assessed IF, which fidelity components were examined, and which data collection methods were employed to assess each component?
4. Is there evidence of divergent practices between planned and reported studies, or of outcome reporting bias related to the assessment of IF?
4a. Based on comparison of the results of questions 1 and 2, what is the overall agreement between planned and reported assessment of IF?
4b. Are trial reports with negative findings for the ITT analysis more likely to report a PP analysis?
4c. For the subset of studies that included both ITT and PP analyses, what is the overall agreement between ITT and PP analyses concerning the intervention's effectiveness?
4d. Does the magnitude of the intervention effect differ for PP as compared to ITT analyses?
To answer our research questions, we will first identify all CRTs from 2012 onwards of public health interventions conducted in LMICs with an available study protocol registered in a public trial registry. A given CRT will be included in the review if the protocol, the trial report, or both address IF. For each CRT included in the review, we will compare planned assessment methods for IF as described in the trial registry (and published study protocol, if applicable) with published methods and results from the main trial report (and related documents, if relevant). We will use a variety of measures to summarise the results.
We describe the study methods in seven steps adapted from the 2015 Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)-P reporting guidelines for systematic reviews and meta-analysis protocols [41]. The PRISMA-P checklist is provided as an additional file (see Additional file 1). As this review focuses on methodological issues rather than on health-related outcomes, it was not eligible for inclusion in the PROSPERO registry [42]. In the event of protocol amendments, we will provide the date of each amendment, a description of the change, and the rationale for the change.
Studies will be selected from the peer-reviewed scientific literature according to the following study and report characteristics.
Study designs
We will include all CRTs. For the purposes of this review, a CRT is defined as a trial in which intact social units or clusters of individuals, rather than independent individuals, are randomly allocated to intervention groups [38]. CRTs may include trials employing parallel group, stepped wedge, or factorial designs; cluster randomised adaptive trials; and cluster randomised pragmatic trials, among others. CRTs with an adaptive design allow modifications based on data accumulated following trial start, while preserving the integrity of the trial [37]. Pragmatic trials are designed to evaluate the effectiveness of an intervention in routine clinical practice in order to maximise applicability and generalizability of the results of the study [43, 44].
Study participants will be human adults or children living in LMICs. LMICs will be defined according to the 2016 World Bank country classifications [45].
This review focuses on "public health interventions". We employ the definition of "public health" proposed in the World Health Organization (WHO) health promotion glossary as "The science and art of promoting health, preventing disease, and prolonging life through the organized efforts of society" [46]. Adapting the definition proposed by Rychetnik and colleagues, we define a public health intervention as a disease prevention or health promotion intervention applied to many, most, or all members in a community, which aims to deliver a net benefit to the community or population, as well as benefits to individuals [47, 48]. Public health interventions are distinguished from clinical interventions aimed at preventing or treating diseases at the individual level [47, 48].
In order to operationalise this definition and guide selection of specific studies, we will use the "Intervention Wheel", a graphic model of population-based public health practice illustrated with specific examples, developed by the Minnesota Department of Health [49]. The intervention wheel provides 17 public health interventions, selected to meet five criteria. To be considered a public health intervention, an intervention should: (i) focus on entire populations (or particular subgroups within a population), (ii) be grounded in an assessment of community health, (iii) consider the broad determinants of health, (iv) emphasise health promotion and prevention, and (v) intervene at multiple levels [49]. We used these five criteria to aid in decisions concerning study inclusion.
According to Rychetnik and colleagues, public health interventions are inherently "complex, programmatic, and context dependent" and these characteristics raise challenges for their evaluation [47]. The assessment of intervention fidelity may be especially important for public health interventions, and this consideration underlies our choice to focus on them in this review.
Comparators will be defined as planned per the original CRT. Given the nature of public health interventions and the pragmatic orientation of CRTs in LMICs, we anticipate that a large proportion of studies included in the review will define the comparison group as receiving the "usual care".
The focus of this methodologically-oriented review is on comparisons of planned and reported outcomes related to IF. For studies that assessed IF in either the trial protocol or the main trial report, we will include both the study protocol and the main trial report. Recognising that word limits for scientific journal articles are highly constrained and that the current CONSORT reporting guidelines for CRTs do not require description of elements related to IF, we also decided to include CRTs reporting the assessment of IF in a complementary document such as a published article, an online appendix to the main paper, or a grey literature report, in lieu of reporting the assessment of IF in the main trial report. These elements will be verified by checking the bibliography for the main trial report and additional sources.
For the purposes of study selection, we considered that studies evaluated IF if they either proposed methods to assess or reported results related to the evaluation of at least one of the four key fidelity components: (1) content, (2) coverage, (3) frequency, and (4) duration. For CRTs taking an adaptive approach, we will consider if these trials respect pre-established decision rules regarding changes to their design. In addition, we will include all CRTs that reported a per-protocol analysis.
Report characteristics
Eligible studies will be implemented in LMICs as classified by the World Bank [45].
Availability of the study protocol
To ensure availability of a study protocol, we will include CRTs reporting a registration number in the abstract for any trial registry meeting the WHO criteria [50]. The WHO trial registration data set (TRDS) is an internationally agreed-upon set of items that provide information on the design, conduct, and administration of clinical trials. The WHO International Clinical Trials Registry Platform (ICTRP) facilitates the publication of the TRDS on a publicly accessible website, through a network of partner registries that have agreed to adopt the TRDS as a common standard. The TRDS will be used in this review to evaluate planned assessment of intervention fidelity, either alone, or in conjunction with a published study protocol. TRDS field 20 "Key secondary outcomes" is particularly pertinent for this assessment.
We will include studies for which the main trial report was published on or after January 1, 2012. We chose this date because the last update of the CONSORT Statement to improve reporting of CRTs was published in 2012, and we wanted to analyse current practices. No restriction will be applied to the publication date for the protocol.
We will include studies published in English, Spanish, or French, which are languages known by the research team.
Exclusion criteria
We will exclude studies that (i) are not cluster randomised trials, (ii) do not plan or report the assessment of IF, (iii) are not public health interventions, (iv) were conducted in a high-income country as defined by the World Bank 2016 country classification [45], (v) were published before 2012, (vi) do not have a publicly available protocol for comparison, or (vii) have a published protocol but no published main trial report. Manuscripts will also be excluded if they are (viii) pilot studies, (ix) secondary reports of a main study for which the relevant findings were published prior to 2012, (x) published in a language other than English, Spanish, or French, or (xi) from the grey literature.
Information sources and search strategy
Literature search strategies were developed in collaboration with an academic librarian experienced in conducting systematic review searches. Search strategies use Medical Subject Headings (MESH) and text words related to cluster randomised trials, developing countries, and public health interventions. The electronic database search was developed first for MEDLINE (Ovid) (for the full search strategies, see Additional file 2) and then adapted for the following electronic databases: EMBASE (Ovid), CINAHL (Ovid), PubMed, and EBM Reviews (Ovid). Search terms are a combination of "cluster-randomized", "cluster analysis", "health program", "public health service", "health education", "public health", "health promotion", "health behavior", "health knowledge/attitudes practice", "preventive health services", "health care system", "health education", and "developing countries". The search strategy will span the time period from January 2012 to May 2016 and will be updated towards the end of the review. Searches will be filtered to articles concerning humans and written in English, French, or Spanish. To augment this list, we will add relevant studies suggested by members of the systematic review team. Identified records will be uploaded into the EndNote reference management software (version X7.5.3, Thomson Reuters, 2016), and duplicates will be eliminated.
Study screening and selection
Study screening and selection will be done manually within EndNote based on the inclusion and exclusion criteria for this systematic review. To ensure the availability of study protocols, we will limit the search to CRTs that have the word stem "regist*" in the abstract and use these results to begin the process of screening and selection. We validated this procedure by examining a subset of excluded articles. Screening and selection will be done in two stages by two independent reviewers (MCP and NM). In the first stage, reviewers will independently screen the titles and abstracts of each identified reference against the inclusion criteria to eliminate irrelevant publications. In the second stage, we will screen the full text of all studies that appear to meet the inclusion criteria or for which there is uncertainty as to eligibility. For each study, we will identify additional articles of potential relevance, such as a published protocol or a process evaluation, by reviewing references from the main trial report, consulting the trial registry record, and searching the PubMed database for new publications by the lead trial author. To aid in article screening and selection, the team will develop and test a screening sheet for full-text review. Any disagreement between reviewers will be resolved through discussion and, as necessary, through arbitration by a third author (MJ). The process of study selection will be documented in a flow diagram describing studies identified and excluded at each stage. We will also provide a summary table describing studies excluded at the stage of full-text review, along with reasons for their exclusion.
Outcomes and prioritisation
The search and selection process for this review is designed to identify two quantities required for calculation of outcomes based on proportions: (1) Numerator: studies that meet all the inclusion and exclusion criteria. As for all systematic reviews, these studies are our principal focus and will be included in the review and given detailed analysis. (2) Denominator: the total N for the study, which we defined as all studies that satisfy all the inclusion and exclusion criteria with the exception of the outcome criterion (planned or reported IF assessment); this is essentially the universe of cluster randomised trials of public health interventions in LMICs. Both quantities will be clearly indicated in the study flow diagram.
The primary outcome for this study will be the proportion of overall agreement between the protocol and trial report concerning occurrence of IF assessment. This corresponds to research question 4a.
Data will be summarised in a two-by-two table comparing the assessment of intervention fidelity in the trial report to that in the protocol. N represents the set of recent CRTs of public health interventions in LMICs that have registered the study protocol in a publicly available trial registry. For each CRT in N, we will determine whether IF was assessed in the registered (or published) protocol or in the trial report (or associated documents). Studies judged to have assessed IF will be coded as "1"; others will be coded as "0". Judgements will represent reviewer consensus (MCP and NM, with appeal to MJ in case of divergences). The proportion of overall agreement is defined as the proportion of eligible CRTs for which judgements concerning the occurrence of implementation fidelity assessment agree in the protocol and in the trial report (i.e. both positive or both negative). It will be computed as (a + d)/N.
                 Protocol +   Protocol −   Total
Trial report +   a            b            a + b
Trial report −   c            d            c + d
Total            a + c        b + d        N
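To make the primary outcome computation concrete, here is a minimal sketch in Python; the cell counts a, b, c, and d are invented placeholders, not review results.

# Hypothetical 2x2 cell counts: rows = trial report (+/-), columns = protocol (+/-).
a, b, c, d = 12, 5, 8, 25  # placeholder values only
N = a + b + c + d

overall_agreement = (a + d) / N  # studies judged the same way in protocol and report
print(f"Proportion of overall agreement: {overall_agreement:.2f}")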
Secondary outcomes
To address research questions 1, 2, and 3, we will also calculate the following:
The frequency and proportion of trial protocols reporting the assessment of intervention fidelity, out of N
The frequency and proportion of trial reports reporting the assessment of intervention fidelity, out of N
The proportion of positive agreement among those that agree, computed as a/(a + d)
The frequency counts and percentages summarising fidelity components examined and data collection methods proposed or employed
To address research question 4b, for all studies included in the trial, we will also record the authors' judgments as to whether the intervention was effective. Studies that concluded that the intervention is more effective than the control will be coded as "1"; studies that were unable to reject the null hypothesis that there are no significant differences between groups will be coded as "0". We will calculate as follows:
The conditional probability that a PP analysis is performed given that the ITT analysis shows no difference between groups.
The conditional probability that a PP analysis is performed given that the ITT analysis shows a positive intervention effect.
These measures will be calculated using a standard formula for conditional probabilities:
$$ P\left(B\Big|A\right)=\frac{P\left(A\ \mathrm{and}\ B\right)}{P(A)} $$
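For illustration, with invented counts, the first of these conditional probabilities would be estimated as follows; because P(A and B) and P(A) share the same denominator (the total number of included trial reports), the calculation reduces to a simple ratio of counts.

# Placeholder counts, for illustration only.
n_itt_null = 40   # trial reports whose ITT analysis showed no difference (event A)
n_both = 18       # of these, reports that also performed a PP analysis (A and B)

p_pp_given_null = n_both / n_itt_null  # P(B|A) = P(A and B) / P(A)
print(f"P(PP analysis | ITT shows no difference) = {p_pp_given_null:.2f}")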
To address research questions 4c and 4d, we will examine the subset of trial reports containing both ITT and PP analyses. For studies comparing several interventions (e.g. factorial design), data on each intervention will be extracted separately.
To address research question 4c, we will study the proportion of the overall agreement between the ITT and PP analyses concerning intervention effectiveness.
Data will be summarised in a two-by-two table comparing the assessment of intervention effectiveness in the ITT analysis to that in the PP (intervention fidelity) analysis. T is the total number of included CRTs reporting both an ITT and PP analysis. Studies that concluded in favour of the intervention group will be coded as "1"; those that are unable to reject the null hypothesis that there is no significant difference between groups will be coded as "0". Judgements will represent reviewer consensus (MCP and NM, with appeal to MJ in case of divergences). The proportion of overall agreement is defined as the proportion of trial reports for which judgements concerning intervention effectiveness agree in ITT and PP analyses (i.e. both positive (favour the intervention group) or both negative (unable to reject the null hypothesis of no difference between groups)). It will be computed as (w + z)/T.
                ITT +    ITT −    Total
PP analysis +   w        x        w + x
PP analysis −   y        z        y + z
Total           w + y    x + z    T
We will also calculate
The frequency and proportion of ITT analyses that conclude in favour of the intervention, out of T
The frequency and proportion of PP analyses that conclude in favour of the intervention, out of T
To address research question 4d, we will compare intervention effect sizes reported for ITT and PP analyses. Comparisons will be summarised as the PP effect size expressed as a percentage of the ITT effect size, computed as (effect size for the PP analysis / effect size for the ITT analysis) × 100.
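A sketch of this comparison for a single included trial (the effect sizes are illustrative placeholders only):

# Illustrative effect estimates (e.g., risk ratios) for one hypothetical trial.
effect_itt = 1.15  # placeholder ITT estimate
effect_pp = 1.32   # placeholder per-protocol estimate

pct_of_itt = effect_pp / effect_itt * 100  # PP effect as a percentage of the ITT effect
print(f"PP effect size is {pct_of_itt:.0f}% of the ITT effect size")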
Risk of bias in individual studies
To assess possible risk of bias for included studies, we will use the Cochrane Collaboration tool to assess the risk of bias in randomised trials [51] based on the following factors: random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, incomplete outcome data, selective reporting, and other sources of bias. Because the Cochrane Collaboration tool was developed for individually randomised studies whereas our study focuses on CRTs, we will also include several additional criteria specifically relevant to assessing risk of bias in CRTs, recommended by the Cochrane Collaboration [51] and other key sources [51–53]. These additional criteria will consider issues related to the following: recruitment bias (potential for participant self-selection to occur if individuals are recruited to the trial after the clusters have been randomised); baseline imbalances (because CRTs generally randomise a limited number of clusters, chance imbalances may affect comparability of intervention and control groups); loss of clusters (complete clusters may sometimes be lost from a trial and thus be omitted from the analysis; these missing data may lead to biased outcome assessments); and unit of analysis (failure to properly account for clustering in the analysis) [51]. For each domain or criterion of interest, we will rate the risk as low, high, or uncertain and provide sample text that illustrates the reasons for this judgement. This evaluation will be done independently by two reviewers (MCP and NM). Disagreements between reviewers will be resolved by consensus or, if consensus cannot be achieved, by consulting a third reviewer (MJ). Judgements related to risk of bias will be summarised graphically using RevMan 5.1 [51]. Risk of bias assessments will be used to create categories of high-, uncertain-, and low-risk studies to be used in subgroup analyses.
Systematic reviews of health outcomes often assess the quality of a body of evidence using standardised tools such as the GRADE system [54]. However, as this review focuses on methodological issues rather than on health-related outcomes, we will not use this tool.
Data extraction and data items
Two review authors will extract data independently (MCP and NM). From each study protocol and trial report, reviewers will extract data on (i) the study characteristics (study location, aims, intervention); (ii) all applicable descriptors of the CRT trial design (for example, parallel group, stepped wedge, factorial, adaptive, pragmatic); (iii) concepts related to the assessment of IF (assessment of fidelity reported in protocol and/or main study, fidelity components and moderating factors evaluated, data collection methods, and any dimension used by the authors to evaluate intervention fidelity distinct from those proposed by Carroll and Hasson [24, 32]); (iv) whether events taking place in the control group were monitored, as these can influence the effectiveness of the intervention [27, 55, 56]; and (v) information for assessing the risk of bias of included studies. We will also extract (vi) statistical results concerning the intervention effectiveness and the authors' qualitative conclusions regarding the intervention effect for the primary (generally, ITT) analysis and one or more subgroup analyses relevant for intervention fidelity (generally, the PP analysis). If studies investigate more than one intervention, we will extract data relevant for each comparison. To reduce bias and errors in data extraction, reviewers will use a pre-defined template pilot tested on a subset of studies and a guide for data extraction. To ensure consistency, reviewers will receive training prior to commencing extraction for the review and undertake calibration exercises. Reviewers will resolve disagreements by discussion and by appeal to a third author (MJ) where necessary. All data extraction tools will be available as online supplementary documents.
Results will be presented in accordance with the PRISMA Statement [41]. A narrative synthesis will be provided, with information presented in tables to summarise key data. The narrative synthesis will explore relationships and findings within and between the included studies. It will highlight the four key dimensions of intervention fidelity identified from the literature (content, coverage, frequency, and duration), moderating factors for intervention fidelity (participant responsiveness, comprehensiveness of policy, strategies to facilitate implementation, quality of delivery, recruitment, and context), any new dimensions explored, and data collection method used to evaluate each key dimension.
We will present quantitative data for all primary and secondary outcomes proposed. Where appropriate, data will be presented in tabular form.
We will investigate the possible sources of heterogeneity by performing subgroup analysis. Specifically, we will recompute the main quantitative outcomes for subgroups of studies with high, uncertain, and low risk of bias to better understand potential sources of variation in results. If the data permit, we will conduct a sensitivity analysis to explore whether studies at lower risk of bias undertake more comprehensive assessment of intervention fidelity. Because of the study question and the nature of the outcomes assessed, we do not intend to perform meta-analyses.
Planned assessment of meta-biases
We recognize that data may be biased due to non-study-related processes and plan to assess specific meta-biases. This study compares results for protocols and published trial reports and is thus designed to address potential reporting bias and to investigate potential outcome reporting bias. As our review focuses on methodological issues rather than on outcome assessment, we will not assess potential publication bias.
Development initiatives require high-quality evaluations to determine whether the programmes work or not and to know how to improve them [57, 58]. According to Rychetnik et al. [48], evaluation of public health interventions requires detailed information about the "design and implementation of an intervention; contextual circumstances in which the intervention was implemented; and how the intervention was received".
We will conduct a methodological systematic review to evaluate current practices for evaluating implementation fidelity in CRTs of public health interventions carried out in LMICs. Fidelity assessment may be a key tool for making studies more reliable, internally valid, and externally generalizable [59]. In the absence of fidelity assessment, it may be difficult to determine whether CRT results are due to the intervention design, to its implementation, or to unknown or external factors that may influence results. The rejection of effective interventions or the acceptance of ineffective interventions incurs incalculable costs: it wastes financial and scientific resources and undermines the ability to extrapolate results [26]. Improved assessment and reporting of intervention fidelity may be important for researchers, for those who finance health interventions, and for decision-makers who seek the best evidence on public health interventions to promote health, prevent disease, and reduce health inequalities.
CONSORT: Consolidated Standards of Reporting Trials
CRTs: Cluster randomised trials
ICTRP: International Clinical Trials Registry Platform
IF: Intervention fidelity
ITT: Intention-to-treat
LMICs: Low- and middle-income countries
MESH: Medical Subject Headings
PP: Per protocol
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
RCTs: Randomised controlled trials
TRDS: Trial registration data set
Bland JM. Cluster randomised trials in the medical literature: two bibliometric surveys. BMC Med Res Methodol. 2004;4(1):21.
Campbell MK, Elbourne DR, Altman DG. CONSORT statement: extension to cluster randomised trials. BMJ. 2004;328(7441):702–8.
Isaakidis P, Ioannidis JP. Evaluation of cluster randomized controlled trials in sub-Saharan Africa. Am J Epidemiol. 2003;158(9):921–6.
Campbell MJ, Donner A, Klar N. Developments in cluster randomized trials and statistics in medicine. Stat Med. 2007;26(1):2–19.
Moberg J, Kramer M. A brief history of the cluster randomised trial design. J R Soc Med. 2015;108(5):192–8.
Campbell MK, Mollison J, Steen N, Grimshaw JM, Eccles M. Analysis of cluster randomized trials in primary care: a practical approach. Fam Pract. 2000;17:192–6.
Osrin D, Azad K, Fernandez A, Manandhar DS, Mwansambo CW, Tripathy P, Costello AM. Ethical challenges in cluster randomized controlled trials: experiences from public health interventions in Africa and Asia. Bull World Health Org. 2009;87(10):772–9.
Handlos LN, Chakraborty H, Sen PK. Evaluation of cluster‐randomized trials on maternal and child health research in developing countries. Trop Med Int Health. 2009;14(8):947–56.
Richards DA, Hallberg IR. Complex interventions in health: an overview of research methods. London & New York: Routledge; 2015.
Campbell M, Fitzpatrick R, Haines A, Kinmonth AL, Sandercock P, Spiegelhalter D, Tyrer P. Framework for design and evaluation of complex interventions to improve health. BMJ. 2000;321(7262):694–6.
Craig P, Cooper C, Gunnell D, Haw S, Lawson K, Macintyre S, Thompson S. Using natural experiments to evaluate population health interventions: new Medical Research Council guidance. J Epidemiol Community Health. 2012;66(12):1182–6. jech-2011.
Evidence-Based Medicine Working Group. Evidence-based medicine. A new approach to teaching the practice of medicine. JAMA. 1992;268(17):2420.
Sanson-Fisher RW, D'Este CA, Carey ML, Noble N, Paul CL. Evaluation of systems-oriented public health interventions: alternative research designs. Annu Rev Public Health. 2014;35:9–27.
Upshur RE. Principles for the justification of public health intervention. Can J Public Health. 2002;93(2):101–3.
Hayes RJ, Alexander ND, Bennett S, Cousens SN. Design and analysis issues in cluster-randomized trials of interventions against infectious diseases. Stat Methods Med Res. 2000;9(2):95–116.
Morris SS, Ranson MK, Sinha T, Mills AJ. Measuring improved targeting of health interventions to the poor in the context of a community-randomised trial in rural India. Contemp Clin Trials. 2007;28(4):382–90.
Ranson MK, Sinha T, Morris SS, Mills AJ. CRTs–cluster randomized trials or "courting real troubles": challenges of running a CRT in rural Gujarat, India. Can J Public Health. 2006;97(1):72.
Grossman J, Mackenzie FJ. The randomized controlled trial: gold standard, or merely standard? Perspect Biol Med. 2005;48(4):516–34.
Eldridge S, Ashby D, Bennett C, Wakelin M, Feder G. Internal and external validity of cluster randomised trials: systematic review of recent trials. BMJ. 2008;336(7649):876–80.
Bonell C, Oakley A, Hargreaves J, Strange V, Rees R. Assessment of generalisability in trials of health interventions: suggested framework and systematic review. Br Med J. 2006;333:346-49. doi:10.1136/bmj.333.7563.346.
Oakley A, Strange V, Bonell C, Allen E, Stephenson J. Process evaluation in randomised controlled trials of complex interventions. BMJ. 2006;332(7538):413–6.
Brierley G, Brabyn S, Torgerson D, Watson J. Bias in recruitment to cluster randomized trials: a review of recent publications. J Eval Clin Pract. 2012;18(4):878–86.
Puffer S, Torgerson D, Watson J. Evidence for risk of bias in cluster randomised trials: review of recent trials published in three general medical journals. BMJ. 2003;327(7418):785–9.
Carroll C, Patterson M, Wood S, Booth A, Rick J, Balain S. A conceptual framework for implementation fidelity. Implement Sci. 2007;2(1):40.
Medical Research Council. Developing and evaluating complex interventions: new guidance. London: Medical Research Council; 2008.
Borrelli B. The assessment, monitoring, and enhancement of treatment fidelity in public health clinical trials. J Public Health Dent. 2011;71(s1):S52–63.
Hasson H, Blomberg S, Dunér A. Fidelity and moderating factors in complex interventions: a case study of a continuum of care program for frail elderly people in health and social care. Implement Sci. 2012;7(23):1–11.
Keith RE, Hopp FP, Subramanian U, Wiitala W, Lowery JC. Fidelity of implementation: development and testing of a measure. Implement Sci. 2010;5(1):99.
Spoth RL, Redmond C, Shin C. Randomized trial of brief family interventions for general populations: adolescent substance use outcomes 4 years following baseline. J Consult Clin Psychol. 2001;69(4):627.
Bradley F, Wiles R, Kinmonth A-L, Mant D, Gantley M. Development and evaluation of complex interventions in health services research: case study of the Southampton heart integrated care project (SHIP). BMJ. 1999;318(7185):711–5.
Thom B. Good practice in school based alcohol education programmes. Patient Educ Couns. 2015. doi:10.1016/j.pec.2015.11.020.
Dusenbury L, Brannigan R, Falco M, Hansen WB. A review of research on fidelity of implementation: implications for drug abuse prevention in school settings. Health Educ Res. 2003;18(2):237–56.
Dane AV, Schneider BH. Program integrity in primary and early secondary prevention: are implementation effects out of control? Clin Psychol Rev. 1998;18(1):23–45.
Hawe P. Lessons from complex interventions to improve health. Annu Rev Public Health. 2015;36:307–23.
Moore GF, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, Baird J. Process evaluation of complex interventions: Medical Research Council guidance. BMJ. 2015;350:h1258.
Hasson H. Systematic evaluation of implementation fidelity of complex interventions in health and social care. Implement Sci. 2010;5(1):67.
Moher D, Hopewell S, Schulz KF, Montori V, Gøtzsche PC, Devereaux PJ, Altman DG. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. J Clin Epidemiol. 2010;63(8):e1–e37.
Campbell MK, Piaggio G, Elbourne DR, Altman DG. CONSORT 2010 statement: extension to cluster randomised trials. BMJ. 2012;345:e5661.
Schneeweiss S. Sensitivity analysis and external adjustment for unmeasured confounders in epidemiologic database studies of therapeutics. Pharmacoepidemiol Drug Saf. 2006;15(5):291–303.
Thabane L, Mbuagbaw L, Zhang S, Samaan Z, Marcucci M, Ye C, Debono VB. A tutorial on sensitivity analyses in clinical trials: the what, why, when and how. BMC Med Res Methodol. 2013;13(1):92.
Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, Stewart LA. Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) 2015 statement. Syst Rev. 2015;4(1):1.
PROSPERO International prospective register of systematic reviews. http://www.crd.york.ac.uk/PROSPERO/
Zwarenstein M, Treweek S, Gagnier JJ, Altman DG, Tunis S, Haynes B, Moher D. Improving the reporting of pragmatic trials: an extension of the CONSORT statement. BMJ. 2008;337:a2390.
Schwartz D, Lellouch J. Explanatory and pragmatic attitudes in therapeutic trials. J Chronic Dis. 1967;20(8):637–48.
The World Bank. Country and lending groups. Data & statistics: country classification. Washington, D.C: The World BanK; 2016. Available at: https://datahelpdesk.worldbank.org/knowledgebase/articles/906519 Accessed 3 July,2016.
World Health Organization. Health promotion glossary. 1998.
Rychetnik L, Frommer M, Hawe P, Shiell A. Criteria for evaluating evidence on public health interventions. J Epidemiol Community Health. 2002;56:119–27. doi:10.1136/jech.56.2.119.
Rychetnik L, Hawe P, Waters E, Barratt A, Frommer M. A glossary for evidence based public health. J Epidemiol Community Health. 2004;58(7):538–45.
Keller LO, Strohschein S, Lia‐Hoagberg B, Schaffer MA. Population‐based public health interventions: practice‐based and evidence‐supported. Part I. Public Health Nurs. 2004;21(5):453–68.
World Health Organization. WHO international clinical trials registry platform (ICRP). New standards for registration of human medical research. Trial registration data set (version 1.2.1) Available at http://www.who.int/ictrp/network/trds/en/ Accessed 15 May 2016
Higgins JP, Altman DG, Gøtzsche PC, Jüni P, Moher D, Oxman AD, Sterne JA. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928.
Hayes R, Moulton L. Cluster randomised trials. 2009.
Donner A, Klar N. Design and analysis of cluster randomization trials in health research. 2000. p. 6–10.
Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, Alonso-Coello P, et al. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ. 2008;336:924.
Ibrahim S, Sidani S. Fidelity of intervention implementation: a review of instruments. Health. 2015;7(12):1687.
Durlak JA, DuPre EP. Implementation matters: a review of research on the influence of implementation on program outcomes and the factors affecting implementation. Am J Community Psychol. 2008;41(3-4):327–50.
Duflo E, Glennerster R, Kremer M. Using randomization in development economics research: a toolkit. Handb Dev Econ. 2007;4:3895–962.
Banerjee AV, Duflo E. Poor economics: a radical rethinking of the way to fight global poverty. New York: Public Affairs; 2011. p. 303. ISBN 978-1-58648-798-0.
Bellg AJ, Borrelli B, Resnick B, Hecht J, Minicucci DS, Ory M, Czajkowski S. Enhancing treatment fidelity in health behavior change studies: best practices and recommendations from the NIH Behavior Change Consortium. Health Psychol. 2004;23(5):443.
We would like to acknowledge Daniela Ziegler, librarian at the University of Montreal Hospital (CHUM), for her help with the database search strategy and Professor Christina Zarowsky, University of Montreal, for the helpful comments on an earlier manuscript draft.
IC-IMPACTS (the India-Canada Centre for Innovative Multidisciplinary Partnerships to Accelerate Community Transformation and Sustainability) provided funding for this study in the form of doctoral scholarships for MCP and NM. V. Ridde holds a CIHR-funded Research Chair in Applied Public Health (CPP-137901) and is an IC-IMPACTS co-investigator. The funder had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.
MJ and MCP developed the idea for the systematic review and led the development of the study protocol. MCP and MJ prepared the first draft of the protocol; MPS provided the statistical guidance. NM, VR, and MPS reviewed the protocol for important intellectual content and provided feedback. All authors read and approved the final manuscript for submission. MCP and MJ contributed equally to the protocol; MJ assumes overall responsibility for the scientific integrity of the work.
This research does not involve human subjects. It is exempted from research ethics board review as it relies exclusively on publicly available information for which there is no reasonable expectation of privacy.
Centre de Recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM), Montréal, Québec, Canada
Myriam Cielo Pérez, Nanor Minoyan, Marie-Pierre Sylvestre & Mira Johri
Département de Médicine Sociale et Préventive, École de Santé Publique (ESPUM), Université de Montréal, Montréal, Québec, Canada
Myriam Cielo Pérez, Nanor Minoyan, Valéry Ridde & Marie-Pierre Sylvestre
Institut de Recherche en Santé Publique Université de Montrèal (IRSPUM), Pavillon 7101 Avenue du Parc, Centre-ville Station, P.O. Box 6128, Montreal, Quebec, H3C 3J7, Canada
Valéry Ridde
Département de Gestion, d'évaluation, et de Politique de Santé, École de Santé Publique, Université de Montréal, Montréal, Québec, Canada
Mira Johri
Myriam Cielo Pérez
Nanor Minoyan
Marie-Pierre Sylvestre
Correspondence to Mira Johri.
Comparison of registered and published assessment of intervention fidelity in cluster randomised trials of public health interventions in developing countries: systematic review protocol. Preferred Reporting Items for Systematic review and Meta-Analysis Protocols (PRISMA-P) 2015 checklist: recommended items to address in a systematic review protocol*. (DOCX 134 kb)
The search strategy for MEDLINE. (DOCX 102 kb)
Pérez, M.C., Minoyan, N., Ridde, V. et al. Comparison of registered and published intervention fidelity assessment in cluster randomised trials of public health interventions in low- and middle-income countries: systematic review protocol. Syst Rev 5, 177 (2016). https://doi.org/10.1186/s13643-016-0351-0
Cluster randomised trials
Implementation fidelity
Public health interventions
Process evaluation
Systematic review protocol
Creodocs
Documentation pages for using Creodocs.
What is Creodocs?
Report Issues & Feedback
Deleting Your Account
Billing Groups
What Are Templates?
Template Specifications
Adding a Template
Template Versions
Ownership and Sharing
Deleting a Template
Document Data
What Is Document Data?
Manual Entry
Creodocs is a web-based platform that allows you and your business to create documents from predefined templates. Templates define the structure and layout of the document and specify fields where information can be entered. Unlike most document creation tools, Creodocs does not allow modification of the document outside of these predefined fields, and even restricts the type and length of information that can be entered for each field. This leads to documents that are highly consistent and guaranteed to contain the right type of information at each place where information can be entered. Templates are provided by Creodocs for all users, but you may add your own private templates.
Creodocs enables business use in several ways. Billing groups allow everyone in your organisation to charge usage to a single account responsible for payments and tracking individual use and spending. Custom document templates allow your organisation to personalise documents with your brand and access to private templates can be controlled so only particular people in your organisation can make use of them.
Both business users and individuals can specify document information in one of three ways, each covering a different document creation use case. Information can be manually entered on the website, which is useful for individuals to interactively log in and populate documents. It can also be submitted by uploading a database where each row corresponds to a document. This allows the bulk creation of documents when a large number of the same document are needed as a batch. Finally, information can be submitted by issuing a request to the Creodocs API. This allows applications to create documents automatically without human intervention.
Report Issues & Submit Feedback
Creodocs is a new product created by a single developer. As such, there may be issues and questionable design decisions that a larger team might catch more readily. Your help is greatly appreciated if something on Creodocs doesn't work, contains a vulnerability, or offers a user experience that could be improved.
Simply enter the issue you encountered or your feedback in the form below. Be as descriptive as possible and include any error messages you saw.
Alternatively, submit your feedback to [email protected].
Changing Email Address
Your email address acts as the username for your Creodocs account. It's used for communicating important account information and confirming account-changing actions, so you should ensure it's always up to date. You can change the email address associated with your account on the Account Settings page. As a safety precaution, once you change your email address, you will be immediately logged out and will need to confirm your new email address before you can log in again.
Once your email is changed, other users will only see your new email address in billing groups and shared templates. Your previous address will no longer be visible.
If you accidentally change your email address to one not under your control, please contact support immediately so it can be changed back.
Changing Password
You can change your account password from the Account Settings page.
What Are API Keys?
An Application Programming Interface (API) is used to automatically interact with Creodocs without having to navigate through the website. APIs are primarily used by software developers to integrate a desired functionality from one service, such as creating receipt documents, into a different service or product, such as an online store.
Presently, Creodocs only allows the creation of single documents via its API. If you would like to make use of this functionality, you will need to create an API key. Your API key acts as a token to identify you and authorise your requests. When you request for a document to be created through the API, you must include your key to prove that you have access to the template.
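As an illustration only, a document-creation request might look like the Python sketch below. The endpoint URL, field names, and payload structure are assumptions made for the sake of example and are not taken from the Creodocs API reference; only the general pattern (an authenticated POST that returns a document) is intended.

import requests  # third-party HTTP client (pip install requests)

API_KEY = "your-api-key"  # generated on the Create API Key page

# Hypothetical endpoint and payload; consult the actual API documentation.
response = requests.post(
    "https://api.creodocs.com/v1/documents",  # assumed URL
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "template_id": "invoice-basic",  # assumed template identifier
        "fields": {"customer_name": "Ada Lovelace", "total": "42.00"},
    },
)
response.raise_for_status()

# Save the returned document (assuming the response body is the file itself).
with open("document.pdf", "wb") as f:
    f.write(response.content)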
Creating a Key
You can create an API key from the Create API Key Account Settings page. You will need to specify a name for your key, which could identify how the key will be used or who will use it. Every key must also have an expiry time, specified as a number of weeks. This is a security precaution, as keys allow unlimited access to your private templates and credits if they are made public. Regularly rotating keys limits the potential for their abuse, and an expiry time forces this rotation to happen. If you understand the risks and would prefer no expiry, you can enter a maximum term of 520 weeks, equivalent to 10 years.
Once you create a key, it will be displayed to you in plain text just once and you should immediately copy it to a secure location. Creodocs does not store keys in plain text so there is no way to retrieve a key that you have misplaced; you will need to generate a new one.
Viewing and Deleting Current Keys
The Active Keys Account Settings page shows you a table of your current active API keys, identified by their names, along with the time each key was created and when it will expire. You can also delete keys from this table, which immediately invalidates all API requests that attempt to use them. If you suspect any of your keys have been compromised by a third party, it is highly recommended to delete them and create new ones immediately.
Expired Keys
You can find a list of expired keys on the Expired Keys Account Settings page. This can be used to identify when a key was created or expired, or past products/people that had access to the API through your account.
There is currently no automated way to delete your account on Creodocs. If you simply stop using Creodocs, any documents you have produced will be automatically deleted 1 month after creation and minimal personal information will be retained about you, as per the general privacy-by-default architecture of Creodocs. If you have a legal or policy requirement for your account to be completely erased from Creodocs, please contact support and this can be done manually.
What Are Credits?
Simply put, credits are the currency for creating documents on Creodocs. Credits are purchased for money and spent on creating documents. All credits have a finite lifetime and will expire if they are not used in time. A credit icon is used as shorthand for credits throughout the site.
Purchasing Credits & Expiry
The Pricing page allows you to purchase credits. All credits expire after 2 years to allow ample time to make use of purchases. Discounts apply for large purchases of credits—currently, each block of 1000 credits purchased results in a further 10% discount applied on the base price ($0.05/credit) for further credits purchased. The average cost per credit is indicated below the purchase price on the Pricing page.
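As a worked example, on a compounding reading of this rule, a purchase of 2,500 credits would be priced as: the first 1,000 credits at the base price of $0.05 each ($50.00), the next 1,000 at 10% off ($0.045 each, $45.00) and the final 500 at a further 10% off ($0.0405 each, $20.25), for a total of $115.25 and an average of roughly $0.046 per credit. The figures shown on the Pricing page are authoritative; this is only an illustration of how the discount accumulates.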
You can purchase multiple sets of credits and credits that expire soonest are always used first. This allows you to manage your spend based on available budget and expected usage.
Why Credits? Why Not Dollars or Subscriptions?
Credits may seem like a dark pattern of obfuscating cost and adding unnecessary complication. This is certainly not the intention and this section aims to explain why credits are a good system for you and for Creodocs.
The price to create individual documents on Creodocs is on the order of cents and there is no reliable method for charging such small amounts directly, therefore, either an account crediting system or a subscription model is required.
Crediting accounts with dollars directly would make for unintuitive remaining balances and prices, where a document cost of $0.06 and a remaining balance of $4.78 requires the use of a calculator to determine how many documents you can create. Whole numbers, as with credits, simplify this to 1 credit makes 1 document. Another important consideration is that dollar balances don't allow discounting for bulk buying, at least not without disassociating the amount credited from the price paid, at which point you have an unintuitive credit system where $200 paid in gives you $300 in credit.
Alternatively, a subscription model allows recurring payments for fixed or unlimited numbers of documents to be produced over specific time scales. However, subscriptions are ultimately bad for the consumer and don't adequately support users with low workloads. The expiring credit system already provides the benefits of a subscription by allowing you to maintain a constant pool of credits, purchased on a rolling 2-year schedule any time before the previous set expires. This comes without the fixed-term lock-in of many subscriptions, and if you are unhappy with the product, you can walk away at any time by letting your credits expire, without having to continue paying or go through the hassle of cancelling a subscription.
What Are Billing Groups?
A billing group is a set of one or more users who share the same pool of credits, i.e. all documents created by any user in the group are charged to the group. The purpose of billing groups is to enable users to share credits with other users to reduce costs and consolidate billing.
When you create an account, a billing group is automatically created for you. When you purchase credits, they are added to your billing group and when you create documents, they are subtracted.
Sharing Credits
Each billing group has a single owner but can have any number of members. Owners can invite new members, remove existing members or transfer ownership from the Billing page. If you are the owner of a billing group with at least one other member, you are responsible for purchasing credits since only the owner can make purchases on behalf of the group.
Commercial users may wish to set up a single billing group for the company under a separate account and add all users of Creodocs to this group for consolidated billing and tracking usage. The owner of the billing group can see the full purchase history of the group in the Transactions section of the Billing page, including the breakdown for how each credit was spent by members of the group and on which templates.
The Current Members section of the Billing page lists the current membership status of each member in your currently active billing group where you are the owner. Each member can have one of the following statuses:
active - This is the member's active billing group and any documents they create will use the group's credits.
dormant - The member is part of the group but has a different group as their active billing group. Any documents they create will not use this group's credits.
invited - The member has been invited to the group but has not yet accepted the invitation. They are still active in a different billing group and cannot use this group's credits.
Multiple Billing Groups
When you are invited to a billing group, you will receive an email notification asking you to log in and accept—or decline—the invitation. Once you accept, you are immediately added to the group and it becomes your active group. Any documents you create from then on will be charged to the new group.
You can be a member of multiple groups but only one can be active at a time. Switching between groups is instant and is done in the Change Active Group section of the Billing page. The ability to switch between groups allows you to make use of credits from multiple sources, such as from different departments, companies or from friends. The default group that is created when you create your account allows you to switch between credits purchased for personal use and those purchased by someone else.
You cannot leave a billing group when you are the owner and there are one or more other members. This restriction is in place to force those who are responsible for a billing group with other members to see balance information regularly and purchase credits for them as needed. It is recommended for business users to designate one account as the billing group owner who will be responsible for managing group membership, purchasing and tracking usage.
Templates are the document layouts in your Workshop that you can use to create documents. Each template encompasses the visual design and typography of a document, but also contains a specification for how you are allowed to interact with the template to specify document data. A template can be made for virtually any required document use case, and quality templates strive to be beautiful but also functional by intelligently allowing only essential information to be entered by users. Due to this, templates are at the very core of what Creodocs does.
Behind the scenes, Creodocs uses LaTeX to specify the document design of each template and to produce the resulting PDF after substituting document data from user input.
What Is LaTeX?
LaTeX is a document preparation system originally released in 1983 by Leslie Lamport. It is built on top of the TeX typesetting system, released in 1978 by Donald Knuth. LaTeX is widely used in academia and especially for mathematics, but is not commonly used outside these fields due to the relatively steep learning curve for those unfamiliar with text-based markup languages (such as HTML).
In LaTeX, a document design is defined using commands in plain text, instead of using a point-and-click graphical user interface such as Microsoft Word. This plain text code is then compiled (typeset) to produce the document in PDF format. While this approach is too cumbersome for most people to learn, it has immense benefits for the resulting documents and for those who are technically inclined enough to learn it.
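For instance, here is a complete, minimal LaTeX document (a generic illustration, not a Creodocs template); compiling it with pdflatex produces a one-page PDF:

\documentclass{article}

\begin{document}
Hello, world! This sentence was arranged on the page by \LaTeX.
\end{document}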
Why Does Creodocs Use LaTeX?
Typography: The act of typesetting with LaTeX optimally arranges text on the page using best-practice typographic rules and algorithms. This influences such things as the spacing between words, hyphenation, how fonts are utilised and justification. Practically, resulting documents are more beautiful and feel nicer to read.
Powerful: LaTeX has the ability to implement virtually any design through the use of a wide diversity of extensions (called packages). Combined with the ability to typeset mathematics, symbols and custom fonts with presets for any language, there are few document elements that can't be produced.
Programmatic: LaTeX is programmatic in nature, which means variables, loops, if statements and switches can be readily utilised. This is particularly useful for Creodocs, where small user inputs can be made to have big impacts on the document produced. For example, a single boolean (true/false) switch in the LaTeX code can have conditional impacts in many places in the document, such as whether tax should be calculated in an invoice, and whether the corresponding rows and columns for tax totals should be shown in the invoice table. Since LaTeX documents are plain text, templates are also amenable to version control to easily track changes over time.
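As a minimal sketch of how such a switch might look (illustrative plain LaTeX, not code from an actual Creodocs template):

% Define a boolean flag; a user-supplied true/false value would set it
\newif\ifshowtax
\showtaxtrue % change to \showtaxfalse to hide all tax output

% ...later, anywhere in the document...
\ifshowtax
	Tax (15\%): \$15.00 \\
	Total including tax: \$115.00
\else
	Total: \$100.00
\fi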
Free and Open Source: LaTeX itself, and almost all LaTeX packages used to add functionality to documents, are free and open source, licensed under the LPPL (LaTeX Project Public License). This means there is no subscription to pay and no forced upgrades over time to justify additional payments. Further, there is a large body of code examples, answered questions and community spirit around LaTeX, as opposed to the one-sided nature of a commercial product.
Longevity: Documents produced with LaTeX are usually compilable years, or even decades, after they are written. Yearly releases of LaTeX rarely break backwards compatibility and packages tend to receive infrequent updates that don't often interfere with other packages.
Creating a Creodocs Template
Creodocs offers a paid service called LaTeX Typesetting to create and modify templates for use on Creodocs.
This page contains a guide for how to create a new document template to appear in your Workshop, hereafter referred to as a Creodocs template, or just template. Creating a template for Creodocs requires three distinct steps:
Writing the LaTeX code that creates the document layout
Defining user inputs and groups in the LaTeX code
Specifying how inputs should be populated by users on Creodocs
This guide will walk through the entire process of creating a basic template. The goal is to introduce you to the process and leave you equipped to create complex templates, such as the default documents available to all users of Creodocs.
Step 1. Creating a LaTeX Document
Creodocs uses LaTeX to define the structure and layout of all documents. It is the compilation (or typesetting) of LaTeX code that gives rise to the PDF documents produced by Creodocs. This makes the underlying LaTeX code the most important part of a Creodocs template. This guide assumes you are familiar with LaTeX and comfortable with using it to create documents. If not, there are a multitude of free guides available on how to use LaTeX, or you can use the Creodocs Template Creation Service to commission a LaTeX template to your specifications.
For the purpose of this guide, let's pretend we are a real estate company with multiple property managers who need to send official letters to tenants notifying them that an inspection of their rental property is due. We first need to create a LaTeX document that lays out the letter and includes company information and the letter body.
Create a file called main.tex in a new folder and add the smallest possible amount of LaTeX code required to generate a document:
\documentclass{letter}

\begin{document}

\begin{letter}{}
\opening{}
\end{letter}

\end{document}
Download the code above
When we compile this with pdflatex we produce the following as a PDF document:
All Creodocs templates must contain a file called main.tex. This is the file that will be compiled, but other .tex files can be present, such as when using \input{} to include LaTeX code in the main template file.
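For example, if the letter body lived in a separate file, main.tex could pull it in like this (the file name body.tex is a hypothetical illustration):

% In main.tex: typesets the contents of body.tex at this point
\input{body}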
Let's now create the entire letter with everything we need to communicate to tenants. At this point, it's best to write the LaTeX code as if we are just sending one instance of this document by entering fake information throughout. This will help us in the next step when we need to determine which parts of the document will be populated by users on Creodocs. Our final letter might look something like this:
\documentclass{letter}

\usepackage{palatino}

\signature{Terrence Black \\ \textit{Property Manager} \\ Impact Real Estate}
\address{John Smith \\ Apartment 1315 \\ 126 Albert Street \\ Auckland 1010}

\begin{document}

\begin{letter}{Impact Real Estate \\ 72 Queen Street \\ Auckland 1010}

\opening{{\Large\textbf{Notice of Inspection}} \\\\ Dear John Smith,}

This letter is to notify you that I will be conducting a routine inspection of your apartment on \textbf{March 15th, between 2pm-5pm}. The previous inspection was carried out on September 18th and as per your tenancy contract, biannual inspections must be carried out to continue your tenancy.

I will be checking for the following:

\begin{itemize}
	\item{Damage to the property}
	\item{Functionality of smoke detectors}
	\item{Carpet cleanliness}
	\item{Any other maintenance issues}
\end{itemize}

If you are unavailable on the date listed, please leave a key with the concierge in my name and I will carry out the inspection alone. Please note that I may need to take photographs of any damage found for documentation purposes.

Should you have any questions or concerns, please contact me directly by email at [email protected].

\closing{Yours sincerely,}

\end{letter}

\end{document}
This is what this letter should look like when we compile the code with pdflatex:
Step 2. Defining User Inputs in the LaTeX Document
We made a LaTeX document for the letter we want to send tenants notifying them of upcoming inspections. We have populated it with all the information we expect to send and filled it with dummy data for a fake tenant and property manager. Our next task is to go through the LaTeX code and add special syntax where Creodocs can insert data supplied by users of this template. Each piece of information we want template users to be able to supply is called a variable.
Variables are added inline to our LaTeX code and have the syntax of [[[VARNAME]]]. The text within the square brackets is the variable identifier (ID) and can only contain upper case letters and numbers with a length of 1 to 30 characters. These IDs should be succinct and descriptive of the purpose of each variable. These variable identifiers are not seen by template users.
Let's go through our letter and replace all occurrences of information we want users of the template to enter with Creodocs variable IDs. This requires thinking carefully about all potential users of the template and what their needs might be, now and in the future. The modified lines of the resulting code are shown below:
\signature{[[[SENDERNAME]]] \\ \textit{[[[SENDERJOBTITLE]]]} \\ Impact Real Estate}
\address{[[[TENANTNAME]]] \\ [[[TENANTADDR1]]] \\ [[[TENANTADDR2]]] \\ [[[TENANTADDR3]]]}
\opening{{\Large\textbf{Notice of Inspection}} \\\\ Dear [[[TENANTNAME]]],}
This letter is to notify you that I will be conducting a routine inspection of your apartment on \textbf{[[[INSPECTIONTIME]]]}. The previous inspection was carried out on [[[PREVINSPECTIONTIME]]] and as per your tenancy contract, biannual inspections must be carried out to continue your tenancy.
[[[EXTRANOTES]]] Please note that I may need to take photographs of any damage found for documentation purposes.
Should you have any questions or concerns, please contact me directly [[[SENDERCONTACTINFO]]].
If we try to compile the new code with pdflatex, we find that it no longer compiles without errors due to the square brackets. If you need to see what the document looks like at this stage, find and replace [[[ and ]]] with nothing throughout the template code and it will compile again. This shows where the Creodocs variable IDs have been added:
You'll notice some information from our first draft has been kept while other information has been made into variables. This is because some information in the template will never change, while other information needs to change each time one of these letters is created. For example, the company address of Impact Real Estate won't change very often, so this is safe to hard code, while the tenant's address will change every time. Often there will be grey areas where information will usually not change, but it could, so it might be useful to make it into a variable. An example of this is the EXTRANOTES variable, where we initially mentioned leaving a key with the concierge if the tenant is not available, but what if the tenant's apartment building doesn't have a concierge? Likewise, while the sender of the letter will usually be a property manager, some property managers may have assistants, and the company could have a mixture of managers with Senior Property Manager job titles, so it's best to allow the sender's job title to change.
The same Creodocs variable ID can be used within the LaTeX code multiple times. This is powerful when we need to output the same piece of information in multiple places in the resulting document. An example of this is the TENANTNAME variable which is used at the top of the letter with the tenant's address and for addressing the letter to the tenant with Dear TENANTNAME,.
Looking through our letter again, we can see that the tenant's address is split across 3 variables. What if we need to enter a suburb on a 4th line or we want to write the apartment number inline with the street number so the address only takes up 2 lines? For this, we can use variable multi-groups.
Variable multi-groups allow you to specify that some LaTeX code containing one or more variable IDs can be inserted into the document multiple times. Multi-groups are enclosed in 3 pipe characters (|||) and must contain one or more variable IDs specified as usual within square brackets, e.g. |||[[[TENANTADDR]]]|||. Groups can span multiple lines of LaTeX code and contain any other LaTeX code. If more than one set of data is submitted by a user for a multi-group, all code within the multi-group specification is duplicated one after another prior to document creation.
Let's make it so the users of our template on Creodocs can specify any number of lines for the tenant's address, by replacing the 3 tenant address variables with 1 and putting it into a multi-group:
\address{[[[TENANTNAME]]] |||\\ [[[TENANTADDR]]]|||}
Notice how the multi-group contains \\ before the variable ID. This is because Creodocs duplicates all code inside the group delimiters and in this case we want each line of the address to be output on a new line, so we need to include the code that will do this inside the group. Creating groups can be tricky as it requires thinking about what will happen to the code you enclose in the group once it is duplicated. For example, in this case we could have naively specified the group as \address{[[[TENANTNAME]]] \\ |||[[[TENANTADDR]]] \\|||} and this would compile fine, but there would always be an extra newline at the end of the tenant's address block which is not what we intended with the design.
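To see the difference concretely, here is what the two groupings expand to when a user submits the three address lines from our dummy data:

% Correct grouping: |||\\ [[[TENANTADDR]]]||| is duplicated once per address line
\address{John Smith \\ Apartment 1315 \\ 126 Albert Street \\ Auckland 1010}

% Naive grouping: the trailing \\ inside the group is duplicated too,
% leaving an unwanted newline after the last address line
\address{John Smith \\ Apartment 1315 \\ 126 Albert Street \\ Auckland 1010 \\}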
Step 3. Defining Template User Interaction on Creodocs
We have now created our LaTeX document and decided which parts of it we will expose on Creodocs for users to populate. All that remains is to describe our document to Creodocs by specifying template, variable and group information in a format that it will understand. This will allow it to display the template and variables to users in a clean and helpful way, and restrict the types and amounts of information that can be submitted for each variable. This definition is done in a creodocs.json file that should be in the same directory as the LaTeX code.
All Creodocs templates must contain a creodocs.json file. This file specifies template information, acts as the key for variables in the LaTeX code and groups variables for display to users on Creodocs. The data within the file is in JSON format.
It's a good idea to familiarise yourself with the JSON format if you haven't come across it before. Broadly, it's a hierarchical format of keys and values that supports named and unnamed lists (or arrays) of information. The creodocs.json file is composed of 3 top-level sections. The first contains information about the whole template, the second defines each variable and the third specifies how variables should be grouped for display on Creodocs. Let's take a look at a basic creodocs.json file with 2 variables and groups:
"template": {
"name": "Template Name",
"description": "Template Description",
"engine": "pdflatex",
"contact": "[email protected]"
"VARNAME": {
"name": "Variable Name",
"description": "What does this variable do in the document?",
"type": "string",
"max_length": 50,
"demo_value": "VARNAME",
"default_value": "Testing",
"required": true
"NUMBER": {
"name": "Number Variable",
"type": "integer",
"max_length": 4,
"demo_value": "NUMBER",
"default_value": "100",
"required": false
"groups": {
"Group Name": {
"variables": [ "VARNAME" ],
"multi": true,
"Another Group":
"variables": [ "NUMBER" ],
"multi": false,
Within each of the top-level sections, there are one or more levels of keys and values that define the information required to add the template to Creodocs. The first section, template, contains overall information about the template, such as its name and description which will appear in the Workshop, but also which TeX engine should be used to compile it, the current version of the template and who users should contact for changes and support. The variables section contains a list of variables where the keys are the variable IDs used in the LaTeX code. Within each variable ID is information for how users will interact with the variable: how it should be displayed (name and description), what kind of data it can take (type), how much data it can take (max_length), whether users have to specify it (required) and whether there are default values that should be used in different situations (demo_value and default_value). Finally, the groups section combines variables into logical groups so they can be displayed together, and specifies whether the variables in each group can be added by users once or many times. For a full description of the structure of this file and all the options available, please refer to the Complete Template Specifications section.
We can now follow the format of the example creodocs.json file and refer to the advanced documentation to create a creodocs.json file for our letter template:
"name": "Notice of Inspection Letter",
"description": "For notifying tenants of an upcoming property inspection",
"contact": "[email protected]"
"SENDERNAME": {
"name": "Sender Name",
"description": "The name of the property manager or the person sending the letter on their behalf.",
"max_length": 100,
"demo_value": "SENDERNAME",
"default_value": "",
"SENDERJOBTITLE": {
"name": "Sender Job Title",
"description": "The job title of the person sending the letter.",
"demo_value": "SENDERJOBTITLE",
"default_value": "Property Manager",
"TENANTNAME": {
"name": "Tenant Name",
"demo_value": "TENANTNAME",
"TENANTADDR": {
"name": "Tenant Address",
"description": "The mailing address of the tenant.",
"demo_value": "TENANTADDR",
"INSPECTIONTIME": {
"name": "Inspection Time",
"description": "The day/time when the inspection will take place.",
"demo_value": "INSPECTIONTIME",
"PREVINSPECTIONTIME": {
"name": "Previous Inspection Date",
"description": "The date of the last inspection of the property.",
"demo_value": "PREVINSPECTIONTIME",
"EXTRANOTES": {
"name": "Extra Notes",
"description": "Any other notes or requirements for the inspection, such as where to leave the key if the tenant isn't home.",
"demo_value": "EXTRANOTES",
"default_value": "If you are unavailable on the date listed, please leave a key with the concierge in my name and I will carry out the inspection alone.",
"SENDERCONTACTINFO": {
"name": "Sender Contact Information",
"description": "How the tenant should contact you with questions or concerns. Should start with 'by ...' or 'on ...' as the text in the letter prior to this is 'please contact me directly '.",
"demo_value": "SENDERCONTACTINFO",
"Sender Information": {
"variables": [ "SENDERNAME", "SENDERJOBTITLE", "SENDERCONTACTINFO" ],
"Tenant Name": {
"variables": [ "TENANTNAME" ],
"Tenant Address": {
"variables": [ "TENANTADDR" ],
"Inspection Information": {
"variables": [ "INSPECTIONTIME", "PREVINSPECTIONTIME", "EXTRANOTES" ],
Let's step through the three sections of the completed creodocs.json file as there are multiple nuances to note.
The template section contains the template name and description as they will be shown on the Workshop page. We have written our LaTeX code to be compiled with pdflatex, so this is the specified engine, but if we needed to use a custom font then xelatex would be a better choice. The version of the template can be specified in any format, but it is advised to stick to the semantic X.Y.Z versioning system where X is incremented for major changes, Y for minor changes and Z for bug fixes, e.g. 1.5.12. Finally, we list a fictional Arthur at Impact Real Estate as the contact person if there are any issues with the template.
In the variables section, all of the variable IDs in the LaTeX code must be defined. The order of the variable IDs doesn't matter, that is, it doesn't need to correspond to the order they appear in the LaTeX code. For each variable, we first have a name attribute for the friendly name to be displayed to users on Creodocs. The description provides further explanation of the variable's function to users. The type of content variables can accept from users is specified in the type attribute. In our example, all variables have a value of string, which means they can hold any text content. Alternative options include integer (whole numbers), float (numbers with or without decimals) and boolean (true or false). The max_length attribute specifies the maximum number of individual letters, numbers or symbols a user can enter for the variable. This is a difficult specification to get right, since you need to set it large enough to avoid restricting users unnecessarily, but small enough that the location of the variable in the document can accommodate the maximum number of characters. The demo_value is used for testing that the template works, and these values are also used in the preview image for the template. A few of the variables have a default_value, which is automatically populated in the manual entry form. Users still have the option to modify these, but the idea is that the default value represents the most common value the variable will contain. Finally, whether variables are required is also an important attribute. Optional variables can be left blank by users and the document will be produced without them, but variables marked as required must have content entered. In our case, most of our variables are required because they represent pieces of information that are needed in the letter for it to make sense.
In the groups section, each group name contains a number of key-value pairs, with variable IDs specified as a list under a variables attribute. Each variable ID must be present in exactly one group. The group names will be displayed to users, so they should be clear and concise, and the order in which they appear here is the order in which they will be displayed on Creodocs. The idea is to group variables in a logical fashion so like is with like. For example, in our letter, we have 3 variables related to the person sending the letter: sender name, sender job title and sender contact information. These are logically grouped into the Sender Information group to be displayed together. The multi attribute specifies whether the variables in the group can be added multiple times by template users. If multi is true, all variables in the group must be in the same variable multi-group in the LaTeX code (i.e. enclosed in |||). In our letter, only the TENANTADDR variable ID can be entered multiple times, once for each line of the tenant's address, so it must be in its own group in the groups specification. Finally, the required attribute allows overriding the required attribute of individual variables in the group. If required is false for the group, all variables in the group may be submitted without content by the template user, even if they are required individually. However, if any variable does have content submitted for it, then the normal per-variable required values will apply.
We have finished making our template! All that remains is to package the two files we have created into a zip file so they can be uploaded to Creodocs together. For convenience, you can download this file here:
All Creodocs templates must be uploaded as zip files containing at minimum a main.tex file and a creodocs.json file. Other files can be included if needed, such as additional .tex files, images or fonts.
Hopefully you are now equipped to create your own templates. When doing so, it is recommended that you refer to the Complete Template Specifications section for additional information and options, as this guide intentionally glossed over details in favour of simplicity.
The next step is to add our template to Creodocs which is covered in the Adding a Template section.
Complete Template Specifications
This page contains a complete reference of all requirements and options for creating Creodocs templates. The creation of a template requires you to first implement your document in LaTeX, then to define your document in a creodocs.json file in a way that allows Creodocs to display it and accept user inputs. As such, this page is divided into the following sections:
LaTeX Code Specifications
creodocs.json Specifications
Example Template Downloads
Creodocs uses TeX Live for typesetting templates and the two typesetting engines available are pdflatex and xelatex. There are no restrictions on what you can do in your LaTeX code; in general, if it compiles using TeX Live, it will compile on Creodocs. This means you are free to use packages that are part of TeX Live without including them as files with your template, unless you need a specific version of a package or it is not included with TeX Live. Other inclusions such as images, fonts and additional .tex files are allowed and can be housed in separate directories or alongside the template code.
The template you upload to Creodocs must be packaged as a zip file and must contain a main.tex file at the top (root) level. This is the file that will be compiled by Creodocs but you can include additional LaTeX files from within it using the standard LaTeX \input{} command.
Each location in the LaTeX code where Creodocs users can supply information is called a variable. Variables are added inline to the LaTeX code and have the syntax of [[[VARNAME]]]. The text within the square brackets is the variable identifier (ID) and can only contain upper case letters and numbers with a length of 1 to 30 characters. This is an example of using a variable inside a LaTeX document: Hello world, my name is \textbf{[[[NAME]]]}!. Variables can be used any number of times in the LaTeX code.
By default, variables accept one value from users, but it is sometimes useful to allow one or more variables to accept multiple sets of content; this is called a multi-group. For example, in an invoice document, each invoiced item works well as a multi-group of several variables: one each for the date, description, count and rate. Multi-groups are enclosed in 3 pipe characters (|||) in the LaTeX code and must contain one or more variable IDs specified as usual within square brackets, e.g. |||[[[VARNAME]]]|||. Groups can span multiple lines of LaTeX code and contain any other LaTeX code, but they cannot contain other groups. If more than one set of data is submitted by a user for a multi-group, all code and text within the multi-group specification is duplicated for each set prior to typesetting.
A creodocs.json file must be included with every template at the top (root) level of the packaged template, alongside the LaTeX code. It is formatted using JSON syntax and describes the template to Creodocs in a standard format. This enables Creodocs to display the template and variables to users in a clean and helpful way, and restrict the types and amounts of information that can be submitted for each variable.
The creodocs.json file must contain template, variables and groups top level sections. You will find each of these described under their corresponding titles below. Please take careful note of the attributes required within each section and the allowed values. The order of attributes within each top level section does not matter in your creodocs.json file. Attributes that are listed as not required in the specifications must still be present, but their values can be left blank as empty strings (e.g. "name": "").
template Specification
This top level section contains information that applies to the entire template.
Each attribute is listed below with its purpose, whether it is required and its allowed values.

name - The name of the template as it will be displayed to users. Required: yes. Allowed values: 1-50 letters, numbers, spaces or any of _-:,./'"

engine - The TeX engine to be used for typesetting the template. Required: yes. Allowed values: pdflatex or xelatex

version - The version of the template, used for keeping track of updates to templates; the recommended syntax is semantic versioning (X.Y.Z; e.g. 1.4.12). Required: yes. Allowed values: 1-10 letters, numbers, spaces or any of _-:.,/()

description - A description of the template displayed in the Workshop to provide additional information about the template's purpose to users. Required: no. Allowed values: 0-250 letters, numbers, spaces or any of _-:,./'"

contact - Contact details of the person responsible for the template, typically an email address. Required: no. Allowed values: 0-50 letters, numbers, spaces or any of @_-:.,/()
"name": "Company Design Invoice",
"description": "Use for invoicing clients for design work",
"contact": "[email protected]"
variables Specification
This top level section defines each variable ID used in the LaTeX code for display to users of the template and restricts the type and amount of content each variable can take.
Variable IDs
Under the variables top level section should be a list of all variable IDs used in the LaTeX code. Their order does not need to correspond to the order in which they are used in the code. Variable IDs must be 1-30 characters in length and contain only uppercase letters and numbers.
For Each Variable ID
name - The name of the variable as it will be displayed to users. Required: yes. Allowed values: 1-40 letters, numbers, spaces or any of _-:.,/'

type - The type of content the variable can accept from users. Required: yes. Allowed values: string, integer, float or boolean

required - Whether users must enter content for the variable or if it can be empty. Required: yes. Allowed values: true or false

max_length - Maximum number of characters that can be submitted for the variable. Required: yes. Allowed values: whole number between 1-5000; a value of >300 will show multi-line input fields in the manual data submission form

demo_value - A value to be used for testing that the template works and in the image preview of the template. Required: yes (if the variable is required). Allowed values: must adhere to the type and max_length of the variable; can be an array of values if the variable is in a multi-group (all other variables in the group must have the same number of array elements)

default_value - Pre-filled value for the variable, useful when there is an obvious default value. Required: no. Allowed values: must adhere to the type and max_length of the variable

description - Additional information about the variable or usage instructions, displayed in a tooltip to users. Required: no. Allowed values: 0-50 letters, numbers, spaces or any of _-:.,?/()'";
"INVITEE": {
"name": "Invitee Name",
"demo_value": "John Smith",
"GUESTS": {
"name": "Additional Guests",
"description": "The number of additional guests the invitee can bring.",
"demo_value": "1",
"default_value": "1",
groups Specification
This top level section groups variable IDs in logical categories for display to users. A variable ID cannot be in multiple groups and all variable IDs must be grouped.
Group Names
Under the groups top level section should be a list of all group names to display to users of the template. Their order corresponds to the order in which they will be displayed. Group names must be 1-50 characters in length and contain only letters, numbers, spaces and any of _-:,./'().
For Each Group Name
variables - A list of variable IDs that belong in the group. Required: yes. Allowed values: one or more variable IDs in a list; each variable ID must be identical to one of the variable IDs present in the variables top level specification; variables will appear in the order they are listed

multi - Whether users can submit multiple sets of data for the group's variables. Required: yes. Allowed values: true or false

required - false if all variables in the group can be submitted empty, even if they are required individually. Required: yes. Allowed values: true or false
"Invitees": {
"variables": [ "INVITEENAME", "INVITEEPARTNERNAME" ],
"Invitee Address":
"variables": [ "ADDRESSLINE" ],
Download Example creodocs.json
Download Example LaTeX Code
Download Example Complete Creodocs Template
Adding a Template to Creodocs
This page takes you through the process of adding a template or updating an existing template with a new version. Templates must be submitted in the format detailed on the Template Specifications page. Upon submitting your new template, the following steps must occur before it is available for creating documents:
Validation of template and specifications
Report on critical issues with template specifications
Report on optional improvements to template specifications
Test document creation
Submitting a Template
You can submit a new template from your Workshop or add a new version for an existing template from its Template Settings page. In both locations, you are presented with a simple form to upload a single file containing everything needed to use the template on Creodocs.
Templates must be packaged as a single zip file with a filename no longer than 100 characters, consisting of alphanumeric characters, dashes, underscores or periods. The maximum file size of templates is 20MB to minimise document creation time. The zip file can contain sub-directories for template assets such as images or fonts, but the following two files must be present at the root level: a main.tex template code file and a creodocs.json template and variable specification file.
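For example, a submitted template might be laid out as follows (all names other than main.tex and creodocs.json are hypothetical):

inspection-letter.zip
	main.tex
	creodocs.json
	images/
		logo.png
	fonts/
		CompanyFont.otf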
Step 1. Validation of Template and Specifications
The creodocs.json file contains information about both the template itself and the variables present within it. This information is critical for classifying the template and interacting with its LaTeX code, so this file is the first to be validated for quality. Being a JSON file, it must contain valid JSON syntax; something as small as a missing quotation mark or bracket is enough to break a JSON file, so it is recommended to copy the contents of the file into an online JSON validation tool prior to submitting the template. Template and variable information is then extracted from the creodocs.json file to be validated against the template code.
Template code that is parsed for variables and variable groups includes all TeX .tex, class .cls and style .sty files in the template (including recursively in sub-directories). Variables and groups can be specified anywhere in these files. Variables always occur on a single line but groups can span any number of lines. The same variable or group can occur in multiple places in one file and within multiple files. The only restriction is that all variables in a multi-group must always be present in a group together, but their order within the group can change when the multi-group is used more than once.
At least one variable must be present in at least one TeX file. All variables in the variable specification must be used at least once in the template files and no variables can be used in the template files that aren't defined in the specification.
Step 2. Critical Errors Report
Critical errors are issues that prevent a template from being usable in its submitted state. This could include: missing/invalid required information about the template, incorrect variable usage in the code, or something else of similar importance. Any critical errors found during the validation of a new template will cause the template addition to fail. This means the uploaded template will be deleted and you will be presented with a report of the issues identified so you can correct them and resubmit.
Step 3. Optional Improvements Report
During the validation of the specifications and template code, the submitted template is also scanned for improvements that could be implemented that would make it more user-friendly. If no critical errors are found, a report of these improvements is displayed after the template is saved.
You will be presented with the option to act on the suggested improvements and update the template by submitting a new version to overwrite the original one, or you can disregard the suggestions and continue to the next step.
Step 4. Creating a Test Document
The final step of adding a template is to produce a test document to validate the template code. The demo_value variable values from the specifications will be used as the document data to produce a test document using the typesetting engine from the specifications. The resulting PDF document and typesetting log will be presented to you for inspection.
If any critical errors are issued by the typesetting engine which preclude the creation of a document PDF, you will only receive the typesetting log to diagnose the problem and you will be unable to approve the test document.
You should carefully look over the test document and logs to validate that there are no issues. It may be helpful to use demo variable values that show off the functionality and layout of the template rather than simply using the variable names. This is especially true because the preview image for the template is automatically created from the first page of the test document once you accept it.
Templates can be updated to modify their layout, variables and content. Each update becomes a new version of the template and versions can be managed by users with sufficient access to the template. All version changes are made from the Template Settings page, accessed by clicking for the template of interest in your Workshop. This page describes the creation and management of template versions.
Template Version Types
Templates can have any number of versions and each version has a status of either active, inactive or draft. A template can only have one active version and one draft version at a time. The active version is the current version that users use to create documents. The draft version is a new version of the template that is in the process of being validated for quality but is not yet available for document creation. Inactive versions are past versions of the template that are not currently usable but can be activated at any time by a template owner or administrator.
Creating a New Template Version
To create a new template version, select the Add New Version menu option on the Template Settings page. You will be prompted to upload a zip file containing the new version. This zip file should be composed of the same files as a new template since it entirely replaces the current version. The new version will be validated in the same way as a new template, so refer to the Adding a Template page for a detailed description of the process.
Don't forget to make sure your new version has a different template version number specified in the creodocs.json file from any other existing version of the template.
Once your new version passes initial validation for critical problems with template or variable specifications, it becomes stored as the draft version of the template. The draft must undergo the usual validation for variable specifications and code quality before it can be accepted as ready for use. Each template can only have one draft version at a time, shared across all users who have access to add or modify template versions. This means anyone with sufficient access to the template can continue validating the current draft or they can remove it and create their own.
Template users cannot create documents using the draft version until it has been successfully validated for quality. Once all validation is complete, the draft is immediately promoted to the active version of the template and is available for use.
All locked variable content that the template's users have saved will continue to work for the new version, but only for matching variable IDs. For example, if the current version of the template has a variable with the ID VARNAME and the new version has the same variable ID, then locked content will be restored for this variable in both versions. For multi-groups, all variable IDs must be identical in both versions for the group's locked variable content to be restored. For example, if the current version has a multi-group with variables NAME, AGE, HEIGHT but the new version has added the variable WEIGHT to these, locked variable content will not be restored.
You can rename a template when adding a new version. This should generally be done with caution, since if you later revert to a previous version, the old name will appear for users.
Viewing, Downloading and Deleting Versions
All versions of a template can be found in the Template Versions section of the Template Settings page. A table lists the versions with important information to differentiate them, including their version number, status, template name, typesetting engine and the time they were added. Each version can be downloaded from the table by clicking the download icon, in case you have misplaced the files yourself. Inactive and draft versions can be deleted by clicking the delete icon, but note that any data submissions template users have for the deleted version will immediately disappear. It is a good idea to delete previous versions when a template receives a major update, or if the name is changed, to minimise the risk of reverting to a previous version and confusing users.
Promoting Inactive Versions
The Promote Inactive Version section of the Template Settings page lets you select an inactive version of the template and make it the current active version. Doing so immediately makes the current active version inactive and the promoted inactive version will appear to users as the version of the template available to use. This can be useful if you need to revert to an older version of the template when a problem is found with a new one. Use this feature with caution as the previous template information will also be displayed (name, description, contact, etc).
Template Ownership and Sharing
Creodocs provides a number of global templates that can be used by anyone with an account. These are owned and managed by Creodocs, but any user can submit their own template(s) which are private by default. Private templates can be owned by and shared with other users, such as within a company. This page covers the ways in which a user can have access to a private template, known as access levels, then describes how template sharing is managed.
Access Levels
Any number of users can have access to a private template. When a template is first added, the user who added it becomes the first owner and can then invite other users to have access. There are three access levels a user can have to a private template: owner, administrator or user. These access levels allow fine-grained control over how much power a given user has over the template. In a commercial team-based environment, this allows differentiating between people who simply need to use the template to create documents and those who are also able to make changes to it.
The list below includes all possible template actions, split by the access levels that can perform them. Use this as a reference for deciding the access level to grant additional users of your template.

Create documents - owner, administrator, user

View the access level of everyone with access - owner, administrator, user

Update template with new versions - owner, administrator

Switch to a previous template version - owner, administrator

Delete inactive template version - owner, administrator

Grant and remove user access - owner, administrator

Grant and remove administrator access - owner

Grant and remove owner access - owner

Delete template - owner
Template Sharing
Template sharing can be accessed for any private template in your Workshop by clicking the sharing icon under the template. This takes you to a page listing all users who have access to the template, split by their access level. If you have sufficient access, you can remove access to the template for a specific user by clicking 'Remove access' in their row. You can always remove your own access unless you are the sole owner, but removing another user's access requires a higher access level than theirs. Owners can remove anyone's access (including other owners), administrators can remove users, and users can only remove their own access. Removing access is immediate and will delete any documents the user has recently created using the template.
To share the template with a new user, select the Share Template menu option on the Template Settings page. You will be able to enter an email address corresponding to the Creodocs user you wish to invite and select the access level you want to grant them.
If you are upgrading or downgrading a user who already has access to the template, remove their access first before sharing the template with them at the new access level.
You can permanently delete any template where you have an access level of owner. To do so, navigate to the Template Settings page by clicking on the settings icon in your Workshop for the template you wish to delete. Then, find the Delete Template menu item to delete the template.
Upon deleting the template, the current version and all previous versions of the template will be immediately deleted from Creodocs and there is no way to restore a template that has been deleted. Any users who had access to the template will lose it immediately, and will also lose access to any recently created documents that are still available for them to download.
After a template has been deleted, anything that refers to it by name will say Template deleted, such as in the transaction usages breakdowns on the Billing Group Transactions page. This is in the interests of privacy so as to only retain information that is current.
Most business documents that are produced repeatedly contain large sections of text and visual elements that are identical from document to document. This content is static and typically doesn't change very often, such as company contact information, or is informative and applied to all recipients, such as the terms and conditions in an invoice. The information that does change from document to document is dynamic and typically includes customer information, such as the addressee and item lines in an invoice. To produce each document, dynamic information needs to be placed in specific locations amongst the static content, which is exactly what Creodocs does to produce your documents. Document data is the dynamic information which you need to enter to produce a unique document from an existing template.
A single piece of dynamic information is called a variable. In the simplest case, a document can contain one variable that's used in one place, such as a letter notifying customers of an event that merely needs to use the customer's name in the "Dear [name]," greeting. A variable can be used in one or more places in the document but its content is always the same. This means it only needs to be specified once and can be styled as needed wherever it appears in the document.
Most documents will need more than one variable, and some of these variables may not need to have values. For example, a letter may contain not only the customer's name but also their address and phone number. The name and address are required for correctly addressing the letter, but the phone number is merely added under the address if it's available, and without it the document is still complete. Variables are marked as required if they must have information entered for them, or not required (i.e. optional) if they can be empty. It is the template's job to correctly adjust the document when an optional variable is submitted without content.
In some situations, we may wish to specify the type of a variable to allow only specific kinds of information to be submitted for it. For example, if we have an invoice document where the number of hours and hourly rate are used to calculate the gross and total amounts due, these variables can only contain numbers or the calculations can't happen. In this case we specify the types of these variables as number and a document will not be produced if submitted values contain anything but numbers.
Variables need not strictly store information that is output directly into the document. For example, a value of a variable can be used as an internal switch in the LaTeX code to alter the entire layout of a document without being written to the document.
Documents typically contain many variables and natural groups arise based on the similarity of information stored in two or more variables. For example, a receipt is likely to have several variables for customer information and several others for the goods purchased, which form two natural groups. Variables that are grouped together can be used independently in different places in a document but are presented together for users entering their information. All variables must belong to a group but a group can contain a single variable if there are no other variables that are similar.
Multi-groups
So far we have seen that variables accept a single input value from the user and this value is substituted for the variable in one or more places in the document. However, sometimes it makes sense for a variable (or group of variables) to accept and output multiple sets of values. Consider the case of multiple items in an invoice. Each item has the same information: date, description and cost, but if each variable is only allowed one value, there need to be separate variables for each item, e.g. date1, description1, cost1, date2, description2, cost2, etc. This is cumbersome, repetitive to specify and supports only a fixed number of invoice items.
Multi-groups are a solution to this problem and strongly tie one or more variables together with surrounding static content and/or code. This means the entirety of the multi-group specification in the code is duplicated for each set of content supplied. Using our invoice items example, this means date, description and cost are specified as 3 variables in a multi-group and this allows users to enter multiple sets of content for them. Each set is then output to the document one after another, and any code that is required to create the line is also duplicated to ensure each item is on a separate line and styled correctly.
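As a sketch of how this might look in template code (a hypothetical fragment using a standard LaTeX tabular, not taken from an actual Creodocs template):

\begin{tabular}{l l r}
	Date & Description & Cost \\
	% Everything between the ||| delimiters, including the row-ending \\,
	% is duplicated once for each set of values submitted by the user
	|||[[[DATE]]] & [[[DESCRIPTION]]] & [[[COST]]] \\|||
\end{tabular}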
Submitting Document Data Manually
The easiest way to create documents on Creodocs is by manually entering document data into the website. To do this, click on the template you want to use in your Workshop and you will be presented with a manual entry form containing all the document's variables and fields for entering your content.
Manual Entry Form
The manual entry form contains sections for each variable group in the template selected. Inside each group section, all variables belonging to the group are displayed as rows, with the variable name on the left and the document data input on the right. If the variable name is bold, it must have data entered for it for the document to be produced; if the variable name is bold italic, the variable must have data entered for it only if any other variable in the group has data entered (i.e. all variables in the group can be empty). It is not always obvious what each variable corresponds to in the document, so if there is a small question mark icon to the right of the variable name, mouse over it to see a description of what the variable does and how the value you enter may be automatically transformed.
If the group is a multi-group, the variables will appear with a box around them indicating that multiple sets of variables can be created in the group. Three tabs are present for each set of variables that let you manipulate the set. The green tab below is for creating a new blank set below the current set, the red tab to the right is for deleting the current set and the grey tab on the left is for moving the current set up or down in relation to other sets.
Variables can only accept content up to a maximum defined length of characters. Maximum lengths are necessary to constrain the impact that each variable can have on the document. For example, a main title at a font size of 80pt looks good if it contains a handful of words, but if a whole sentence is entered for the main title variable, it would quickly take up the entire page and have an adverse effect on the document. Each variable in the manual entry form shows the current number of characters entered and the maximum allowed under the input field as "X/Y characters remaining".
Variables can accept different types of data depending on what they are used for in the document. Broadly, variables can accept text, whole numbers, numbers with decimals or true/false values (see the Template Specifications section for more information). If a variable is defined as accepting only whole numbers, you will only be allowed to enter whole numbers in the input field for the variable, and the same goes for the other types listed. You can determine what type of data is allowed by looking at the remaining characters text underneath the input field, which will read "digits remaining" for number inputs or "characters remaining" when the input accepts any text. True/false inputs will appear as 2 radio buttons for true and false.
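As a rough illustration of these rules, here is a minimal Python sketch of the kind of check the form performs; the type names, the helper itself and its limits are hypothetical, not Creodocs internals.

    def validate_input(value: str, var_type: str, max_length: int) -> bool:
        """Illustrative check mirroring the manual entry form's input rules."""
        if len(value) > max_length:
            return False  # the "characters remaining" counter would already read 0
        if var_type == "text":
            return True
        if var_type == "integer":
            return value.isdigit()  # whole numbers only
        if var_type == "decimal":
            try:
                float(value)
                return True
            except ValueError:
                return False
        if var_type == "boolean":
            return value in ("true", "false")  # rendered as two radio buttons in the form
        return False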
Once you submit your variable data for document creation, it will be validated; if there are any errors, you will be returned to the manual entry form with your entries restored so you don't lose your progress.
Locked Variable Content
Beside each variable's data input field you will find a lock icon, which can be toggled on (locked) or off (unlocked). This feature is exclusive to the manual entry form and allows you to save variable data entered into the form for later re-use. To use it, simply enter data for any variable and, once you are done, click the lock button to lock the variable content. When you submit the form to create a document, all variables you have locked will have their content saved, and the next time you return to the manual entry form for the document the values will be restored. This is a very useful feature for creating documents that change over time, such as invoices or curricula vitae, where you can save your personal information once and never have to re-enter it again.
Locking variables in a multi-group will lock them per set, in other words, all variables in the locked set are locked together. The order of locked sets in each multi-group is also saved.
You may notice that some variables have pre-filled values even though they are unlocked; these are known as default values. Default values are specified in the template when there is a single variable value that is used the majority of the time, but it is allowed to be customised if required by the user. For example, an invoice template may have large lettering at the top saying Invoice, but some companies or jurisdictions require invoices to explicitly say Tax Invoice for legal reasons. In this case, the template may expose this variable to the user to allow it to be changed, but specify Invoice as the default value since this will be used the majority of the time. If you would like to override a default value, simply replace it with your own and lock the value so it is saved. Likewise, if you want the value to always be blank rather than use the default, just remove the default value and lock the empty variable.
Submitting Document Data via Database
Submitting document data via database lets you create multiple documents at once using the same template. This is useful if you need a large batch of documents created as a regular operation, such as producing monthly invoices for customers, or you need to produce a one-off communication, such as notifying clients of an upcoming event in a letter.
To submit document data using a database, click on the template you want to use in your Workshop, then select the Upload a Database tab at the top. You will be presented with a form to upload your database for document creation.
Each document produced from a database of document data is charged to the billing group of the account making the request and costs the same number of credits as creating the document manually.
Database Structure Specification
The database you upload must be in a specific format for Creodocs to read it. Specifically, the database must be a tab-delimited *.tsv file where each column is an individual variable and each row contains the document data for a separate document. This structure means that each cell in the database contains the document data for a specific variable within a specific document, with tab characters separating variable content.
The first row (line) of the database must contain variable ID names for all variables in the template selected. Variable IDs are the internal identifiers for each variable and are not the friendly variable names seen in the manual entry form. For multi-groups where multiple sets of the same variables can be submitted per document, you can specify each set by entering the group's variable IDs next to each other multiple times. For example, VAR1 VAR2 VAR3 VAR1 VAR2 VAR3 can make up 6 columns of the database where these three variables are in a multi-group.
The table below is laid out in the format of an example database to illustrate the requirements written above. It uses the variables from the example letter template created in the Creating a Template section.
SENDERNAME | SENDERJOBTITLE | TENANTNAME | TENANTADDR | TENANTADDR | TENANTADDR | INSPECTIONTIME | PREVINSPECTIONTIME | EXTRANOTES | SENDERCONTACTINFO
Cindy Kim | Senior Property Manager | Martin Jones | 388 Pleasant Lane | City, State | | July 21 at 1pm | January 18 | | 021 123 4567
Cindy Kim | Senior Property Manager | Rebecca Smith | Apartment 221 | 5 West Street | City, State | July 22 at 10am | January 11 | | 021 123 4567
Cindy Kim | Senior Property Manager | Hinata Adachi | Apartment 638 | 5 West Street | City, State | July 22 at 10:30am | December 8 | | 021 123 4567
Download the database above
Optional variables can be left blank in the database but must be present as columns. You can see this in the table above for the optional EXTRANOTES variable where all three document rows have no data for this variable.
Multi-group variable sets that are required can be left blank, but only if at least one set has been entered for the row. This is necessary because other rows in the database may have more sets, so additional columns are required to accommodate them. An example of this is the multi-group containing the required TENANTADDR variable in the table above, which is present as 3 sets (columns) because the second and third rows each have 3 sets for this variable. However, the first row only has 2 sets of data for this variable and the third set is blank. The general rule is that all empty sets for multi-groups will be ignored in databases, but otherwise normal rules apply.
Variable data in your database must adhere to the corresponding variable types and lengths. This means data entered for a variable that only accepts boolean data can only have values of 'true' or 'false', a variable that only accepts integer data can only have values of whole numbers and so on. Maximum lengths of variables must also be respected.
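As a sketch of how such a file can be produced programmatically, the following Python example writes the header and first row of the database above; the output file name is hypothetical, and any tool that emits tab-delimited text will do.

    import csv

    # Header row: variable IDs, with the TENANTADDR multi-group repeated once per set.
    header = [
        "SENDERNAME", "SENDERJOBTITLE", "TENANTNAME",
        "TENANTADDR", "TENANTADDR", "TENANTADDR",
        "INSPECTIONTIME", "PREVINSPECTIONTIME", "EXTRANOTES", "SENDERCONTACTINFO",
    ]

    # One row per document; empty strings are valid for the optional EXTRANOTES
    # variable and for unused multi-group sets.
    rows = [
        ["Cindy Kim", "Senior Property Manager", "Martin Jones",
         "388 Pleasant Lane", "City, State", "",
         "July 21 at 1pm", "January 18", "", "021 123 4567"],
    ]

    with open("documents.tsv", "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f, delimiter="\t")
        writer.writerow(header)
        writer.writerows(rows)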
Downloading an Automatically Generated Empty Database
To make the process of creating a database easier, you can download an automatically generated database file for every template. To do this, click on the template you want to use in your Workshop, then select the Upload a Database tab at the top. You will find a link to download a database template on the resulting page which will provide you a TSV file ready for populating with document content.
The first row of the database template file contains all the variable IDs in the document template in the required format. The second row contains an indication of whether each variable is required (must have content entered for it), to aid you in entering your document content on subsequent rows. This second row should be removed before submitting your database.
Multi-grouped variables are output 3 times side by side in the example database template. This is only to indicate how to submit multiple sets; you can increase or decrease the number of sets for each group as required.
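If you script this step, a small sketch like the one below, with hypothetical file names, strips the indicator row from a downloaded template:

    # Drop the second row (the required/optional indicators) from a downloaded
    # database template, keeping the variable ID header and any data rows.
    with open("template.tsv", "r", encoding="utf-8") as f:
        lines = f.readlines()

    cleaned = lines[:1] + lines[2:]

    with open("database.tsv", "w", encoding="utf-8") as f:
        f.writelines(cleaned)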
Downloading Documents Produced Using a Database
Documents produced using a database are downloaded the same way as those produced using the manual entry form. The key difference is that the download itself will be a zip file containing all the documents produced instead of a single PDF file.
Submitting Document Data via API
The Creodocs API allows you to submit document data for a template and returns a URL to the document PDF when it is successful. This means you don't need to interact with the Creodocs website to create documents. This method is primarily useful for software developers who need to automatically create documents as part of another product or service.
All API requests are charged to the billing group of the account making the request and cost the same number of credits as creating the document manually.
API Specifications
The API uses the JSON format to structure requests and responses. To verify your identity when making an API request, you will need to generate an API key as described on the API Keys Support page. Creating a document requires you to send your Creodocs user ID, the template ID of the document you'd like to create, your API key and the document data for each variable. API requests must be made via POST to https://www.creodocs.com/api/create under a data key.
The table below details each of the required components of an API request and provides details on their expected values.
user_id: Your Creodocs user ID number. Must be a number.
template_id: The template ID number you would like to use to create the document. Must be a number.
key: Your secret API key. The key provided to you by Creodocs.
document_data: Document data for all variables in the template. An object of keys and values where keys are template variable IDs and values are arrays of document data. Unless the variable is in a multi-group, the array must contain one element. All variables in each multi-group must contain the same number of elements.
"user_id": 12,
"template_id": 52,
"key": "b70b53f2c9e09fa836410bed18126c38b497b660cd7fcc72a6",
"document_data": {
"John Smith"
"DATE": [
"20/04/2020"
"PARAGRAPHS": [
"First paragraph text.",
"Second paragraph text."
"OPTIONALSIGNATURE": [
You can find an automatically populated template for making API requests for a given template under the Submit via API section of the Add Document Content page for the template.
Additional Document Data Specification Information
All template variables must be included in the API request, even if they are not required and no data is being provided for them. In this case, the variable data should just be an empty string (i.e. "").
Only multi-grouped variables can have more than one set of content supplied. In this case, all other variables in the same group must have the same number of content sets supplied. For example, VAR1 and VAR2 are in a group that allows multiple sets of content but only VAR1 is required. If you specify four sets of content for VAR1, you must also specify four sets for VAR2, even if they are empty strings.
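To make that concrete, a document_data fragment for this hypothetical VAR1/VAR2 group might look like the following Python dict, with empty strings padding the optional VAR2 to the same number of sets:

    document_data = {
        "VAR1": ["First item", "Second item", "Third item", "Fourth item"],
        "VAR2": ["Detail for first item", "", "", "Detail for fourth item"],  # same length as VAR1
    }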
All rules that normally apply to variable content (such as maximum length, type, disallowed characters, whether multiple sets of content are allowed, etc.) also apply to API requests. If document data submitted for any variable does not match the variable's requirements, your request will fail.
API Return Values
The API will return a JSON string containing three keys.
request_outcome — will have a value of either success or failure
document_url — if the request was a success, the value will contain a URL to the created document, otherwise it will be an empty string
request_description — if the request was a failure, the value will contain the reason why, otherwise it will be an empty string
"request_outcome": "failure",
"request_description": "missing_input",
"document_url": ""
Try Submitting to the API
Test API requests can be made using dedicated software for this purpose, but if you would like to make a quick test request you can use the form below. Paste the entire JSON request object and press Submit to send your request.
Privacy Policy
Last updated on February 5, 2021.
This policy applies to creodocs.com, hereby referred to as Creodocs. Creodocs is operated by Creodocs Limited (NZBN 9429046066565), registered in New Zealand and located in Auckland, New Zealand.
Creodocs fundamentally believes in an ethical approach to privacy, where the only information collected is required for providing services, enabling features and complying with regulatory and legal requirements. Creodocs believes that analytics and data mining are distractions from creating a high quality product and refrains from these activities unless absolutely necessary to provide its services. Simply put, Creodocs is only interested in charging for creating documents, not in mining your personal information.
The sections below outline the major categories of information that Creodocs retains. Each details what information is collected, why it is collected, how it is stored and when it is removed, with the aim of being transparent about why the information is necessary.
Purchases and Billing
When you make a purchase of credits on Creodocs, your credit card information is not seen or stored by Creodocs. Payment processing is handled by Stripe and happens entirely on their infrastructure. Please refer to Stripe's Privacy Policy for more information on how your payment information may be stored and managed by Stripe.
A purchase and user membership history is retained for each billing group. For each purchase or grant of credits to a billing group, a usage history is stored summarizing which user used which template how many times at a given price, and when the last usage occurred. This information is stored to: enable a billing history for auditing and legal purposes, allow the billing group owner oversight into usage patterns and attribute responsibility in the case of issues or discrepancies down the line.
Purchase and usage information will be retained for as long as a billing group exists. A billing group owner may request the permanent deletion of the billing group history by contacting [email protected], but note that some record may still need to be retained by Creodocs for financial auditing and tax purposes.
Application Information
Application information refers to information you explicitly provide as you use Creodocs. For example, if you add a private template, Creodocs needs to store the template information, the template code and link your user account to the template as its owner. This category of information is required for Creodocs to provide its services as a web application where your state is saved in a central location. When this information is no longer required by you or Creodocs, it is permanently discarded.
Account Information
Your Creodocs account is primarily identified by the email address used to create it. Your email address is used to communicate with you (see the Communication section) and will be visible to other users of Creodocs when you engage in a sharing activity, such as by being part of a billing group with other users or making use of a shared template. A history of email and password changes is retained for each account, for the purposes of support and identification of malicious activity.
Created Documents
Documents created by Creodocs contain potentially sensitive information submitted for document variables. Due to this, all documents are retained for only 30 days, at which point they are automatically deleted without backups. You may delete documents you have created prior to their automatic deletion from the Data Submissions page for the template used.
Private Templates
If you choose to add a private template to Creodocs, template information and code will be retained indefinitely so you are able to make use of it. You can permanently delete a private template at any time from its Template Settings page.
Document Variable Content
Document variable content is retained as long as the created document exists (up to 30 days). If you choose to make use of the locked variables feature when manually submitting document content, Creodocs will retain your document content indefinitely in order to restore it the next time you use the template.
Application information will be retained while you remain a user of Creodocs, unless otherwise stated. If you no longer wish to be a user, or require specific information about you removed, please contact [email protected].
Passive Information (Logging and Cookies)
Passive information refers to information you implicitly provide when you interact with Creodocs. For example, when you log in, Creodocs needs to know who you are and that you are still logged in as you use the site—this requires your browser to automatically send a token Creodocs provided you when you logged in (known as a cookie). This category of information is required for basic usage statistics, diagnosing issues with the product, detecting and minimising malicious activity and to verify legitimate access to your account.
Access and Error Logs
Creodocs retains webserver access and error logs containing metadata including: IP addresses (which may be used for rough geolocation), the page requested, referrer URL, information about the software making the request, the time of the request and whether the request was successful. These logs are retained for the purpose of basic analytics, diagnosing problems with the service and identifying malicious activity. Logs are retained indefinitely but only up to 6 months of logs are stored on the active Creodocs webserver(s).
Failed Request Logging
If a request, such as attempting to log in, results in a failure, such as due to an incorrect username or password supplied, the event information is logged to a database. This will usually include the IP address of the requester, the time of the request, and may include the content of the request (passwords will not be saved in plain text). This information is used to automatically block malicious requests, such as attempted brute-forcing of a user's password, for identifying issues with the service and for support communication regarding problems encountered. This information will be deleted periodically and will not be stored for longer than 6 months.
Cookies
When you log in to Creodocs, a token known as a cookie is saved on your device, which allows Creodocs to know you are logged in. Every time you interact with the website, the cookie is automatically sent to tell Creodocs who you are and confirm you are still logged in. Creodocs does not store any other cookies on your computer, and the session cookies are removed when you log out or when they expire after several hours.
Communication
Creodocs will use the email address associated with your account to send important information regarding your account. This will typically be for important notices that require your interaction, such as when you are invited to a billing group, or when you explicitly request information, such as having a document you created emailed to you. From time to time, Creodocs may need to send important notices to all users regarding the product, such as breaking changes or major updates to this policy. Emails are sent using Amazon Simple Email Service. Please refer to the Amazon Web Services Privacy Policy for more information on how your email address may be stored and managed.
Support, Feedback or Other Direct Interactions
If you directly contact Creodocs for any purpose, such as to request support or report a problem, your written communication will be retained indefinitely. This will include at least: the communication, your email address and any information you provide (such as diagnostic information or logs). The purpose of this is for Creodocs to maintain a formal record of communication. This record can be useful to give context from previous interactions, enable collation of ongoing issues with the product or serve as a reference point in case of disputes.
You may request for your interactions to be permanently deleted by contacting [email protected].
Backups
Creodocs will retain backups of Application Information and private templates, for the express purpose of restoring the Creodocs platform in case of catastrophic failure. Backups will be retained for up to 6 months and may include information that you have explicitly deleted. For example, if you delete a private template on January 2, it will be immediately deleted, but a backup made on January 1 will contain your template and can be retained for up to 6 months. Backups are made as an insurance policy and will not be accessed except in exceptional circumstances.
Sharing of Collected Information
Creodocs employees and contractors may have access to collected information as required to perform their jobs. This is for purposes such as providing support, diagnosing problems and making improvements. Creodocs does not sell or disclose your personal information to any third party. Exceptions to this include: 1) disclosure as required by law to a law enforcement or government agency; 2) disclosure to the credit card payment processor for accepting payments; and 3) disclosure to the email service provider for sending communications.
Changes to This Policy
This policy will be updated from time to time as Creodocs evolves. The date it was last updated is stated at the top of the policy. It is your responsibility to review the policy periodically to confirm your continued agreement. In the event of substantial changes to the policy, you may receive written communication to the email address associated with your account to alert you of the changes. Minor changes such as rewording or clarification will not trigger such communication.
Communication Regarding This Policy
If you have any questions or concerns regarding this policy, please contact [email protected].
Terms and Conditions
Last updated on February 18, 2021.
AGREEMENT TO TERMS
These Terms and Conditions constitute a legally binding agreement made between you, whether personally or on behalf of an entity ('you') and Creodocs Limited, NZBN 9429046066565, Auckland, New Zealand ('we,' 'us' or 'our'), concerning your access to and use of the creodocs.com website as well as any other media form, media channel, mobile website or mobile application related, linked, or otherwise connected thereto (collectively, the 'Site').
You agree that by accessing the Site, you have read, understood, and agree to be bound by all of these Terms and Conditions. If you do not agree with all of these Terms and Conditions, then you are expressly prohibited from using the Site and you must discontinue use immediately.
Supplemental terms and conditions or documents that may be posted on the Site from time to time are hereby expressly incorporated herein by reference. We reserve the right, in our sole discretion, to make changes or modifications to these Terms and Conditions at any time and for any reason.
We will alert you about any changes by updating the 'Last updated' date of these Terms and Conditions, and you waive any right to receive specific notice of each such change.
It is your responsibility to periodically review these Terms and Conditions to stay informed of updates. You will be subject to, and will be deemed to have been made aware of and to have accepted, the changes in any revised Terms and Conditions by your continued use of the Site after the date such revised Terms and Conditions are posted.
The information provided on the Site is not intended for distribution to or use by any person or entity in any jurisdiction or country where such distribution or use would be contrary to law or regulation or which would subject us to any registration requirement within such jurisdiction or country.
Accordingly, those persons who choose to access the Site from other locations do so on their own initiative and are solely responsible for compliance with local laws, if and to the extent local laws are applicable.
The Site is intended for users who are at least 13 years of age. All users who are minors in the jurisdiction in which they reside (generally under the age of 18) must have the permission of, and be directly supervised by, their parent or guardian to use the Site. If you are a minor, you must have your parent or guardian read and agree to these Terms and Conditions prior to you using the Site.
INTELLECTUAL PROPERTY RIGHTS
Unless otherwise indicated, the Site is our proprietary property and all source code, databases, global document templates, functionality, software, website designs, audio, video, text, photographs, and graphics on the Site (collectively, the 'Content') and the trademarks, service marks, and logos contained therein (the 'Marks') are owned or controlled by us or licensed to us, and are protected by copyright and trademark laws and various other intellectual property rights and unfair competition laws of New Zealand, foreign jurisdictions, and international conventions.
The Content and the Marks are provided on the Site 'AS IS' for your information and personal use only. Except as expressly provided in these Terms and Conditions, no part of the Site and no Content or Marks may be copied, reproduced, aggregated, republished, uploaded, posted, publicly displayed, encoded, translated, transmitted, distributed, sold, licensed, or otherwise exploited for any commercial purpose whatsoever, without our express prior written permission.
Documents produced through use of the Site in adherence with these Terms may be used by you for any purpose and are not subject to the intellectual property rights described here.
USER REPRESENTATIONS
By using the Site, you represent and warrant that:
all registration information you submit will be true, accurate, current, and complete;
you will maintain the accuracy of such information and promptly update such registration information as necessary;
you have the legal capacity and you agree to comply with these Terms and Conditions;
you are not under the age of 13;
you are not a minor in the jurisdiction in which you reside, or if a minor, you have received parental permission to use the Site;
you will not access the Site through automated or non-human means (except for the documented Application Programming Interface (API)), whether through a bot, script, or otherwise;
you will not use the Site for any illegal or unauthorized purpose;
your use of the Site will not violate any applicable law or regulation.
USER REGISTRATION
You may be required to register with the Site. You agree to keep your password confidential and will be responsible for all use of your account and password.
PROHIBITED ACTIVITIES
As a user of the Site, you agree not to:
make any unauthorized use of the Site, including collecting email addresses of users by electronic or other means for the purpose of sending unsolicited email, or creating user accounts by automated means or under false pretenses.
engage in any automated use of the system (excluding the documented Application Programming Interface (API)), such as using scripts to interact with the Site, or using any data mining, robots, or similar data gathering and extraction tools.
attempt to impersonate another user or person or use the account of another user.
sell or otherwise transfer your account.
harass, annoy, intimidate, or threaten any of our employees or agents engaged in providing any portion of the Site to you.
copy or adapt the Site's software, including but not limited to PHP, HTML, JavaScript, LaTeX, or other code.
upload or transmit (or attempt to upload or to transmit) viruses, Trojan horses, or other material, including spamming (continuous posting of repetitive text), that interferes with any party's uninterrupted use and enjoyment of the Site or modifies, impairs, disrupts, alters, or interferes with the use, features, functions, operation, or maintenance of the Site.
upload or transmit (or attempt to upload or to transmit) any material that acts as a passive or active information collection or transmission mechanism, including without limitation, clear graphics interchange formats ('gifs'), 1×1 pixels, web bugs, cookies, or other similar devices (sometimes referred to as 'spyware' or 'passive collection mechanisms' or 'pcms').
except as may be the result of standard search engine or Internet browser usage, use, launch, develop, or distribute any automated system, including without limitation, any spider, robot, cheat utility, scraper, or offline reader that accesses the Site.
USER GENERATED CONTRIBUTIONS
The Site may invite you to chat, contribute to, or participate in blogs, message boards, online forums, and other functionality, and may provide you with the opportunity to create, submit, post, display, transmit, perform, publish, distribute, or broadcast content and materials to us or on the Site, including but not limited to text, writings, video, audio, photographs, graphics, comments, suggestions, or personal information or other material (collectively, "Contributions").
Contributions may be viewable by other users of the Site. As such, any Contributions you transmit may be treated as non-confidential and non-proprietary. When you create or make available any Contributions, you thereby represent and warrant that:
you are the creator and owner of or have the necessary licenses, rights, consents, releases, and permissions to use and to authorize us, the Site, and other users of the Site to use your Contributions in any manner contemplated by the Site and these Terms and Conditions.
you have the written consent, release, and/or permission of each and every identifiable individual person in your Contributions to use the name or likeness of each and every such identifiable individual person to enable inclusion and use of your Contributions in any manner contemplated by the Site and these Terms and Conditions.
your Contributions do not violate any federal or state law concerning child pornography, or any law otherwise intended to protect the health or well-being of minors;
your Contributions do not otherwise violate, or link to material that violates, any provision of these Terms and Conditions, or any applicable law or regulation.
Any use of the Site in violation of the foregoing violates these Terms and Conditions and may result in, among other things, termination or suspension of your rights to use the Site.
SUBMISSIONS
You acknowledge and agree that any questions, comments, suggestions, ideas, feedback, or other information regarding the Site ("Submissions") provided by you to us are non-confidential and shall become our sole property. We shall own exclusive rights, including all intellectual property rights, and shall be entitled to the unrestricted use and dissemination of these Submissions for any lawful purpose, commercial or otherwise, without acknowledgment or compensation to you.
THIRD-PARTY WEBSITES AND CONTENT
The Site may contain (or you may be sent via the Site) links to other websites ("Third-Party Websites") as well as articles, photographs, text, graphics, pictures, designs, music, sound, video, information, applications, software, and other content or items belonging to or originating from third parties ("Third-Party Content").
Such Third-Party Websites and Third-Party Content are not investigated, monitored, or checked for accuracy, appropriateness, or completeness by us, and we are not responsible for any Third-Party Websites accessed through the Site or any Third-Party Content posted on, available through, or installed from the Site, including the content, accuracy, offensiveness, opinions, reliability, privacy practices, or other policies of or contained in the Third-Party Websites or the Third-Party Content.
Inclusion of, linking to, or permitting the use or installation of any Third-Party Websites or any Third-Party Content does not imply approval or endorsement thereof by us. If you decide to leave the Site and access the Third-Party Websites or to use or install any Third-Party Content, you do so at your own risk, and you should be aware these Terms and Conditions no longer govern.
You should review the applicable terms and policies, including privacy and data gathering practices, of any website to which you navigate from the Site or relating to any applications you use or install from the Site. Any purchases you make through Third-Party Websites will be through other websites and from other companies, and we take no responsibility whatsoever in relation to such purchases which are exclusively between you and the applicable third party.
You agree and acknowledge that we do not endorse the products or services offered on Third-Party Websites and you shall hold us harmless from any harm caused by your purchase of such products or services. Additionally, you shall hold us harmless from any losses sustained by you or harm caused to you relating to or resulting in any way from any Third-Party Content or any contact with Third-Party Websites.
SITE MANAGEMENT
We reserve the right, but not the obligation, to:
monitor the Site for violations of these Terms and Conditions;
take appropriate legal action against anyone who, in our sole discretion, violates the law or these Terms and Conditions, including without limitation, reporting such user to law enforcement authorities;
in our sole discretion and without limitation, refuse, restrict access to, limit the availability of, or disable (to the extent technologically feasible) any of your Contributions or any portion thereof;
in our sole discretion and without limitation, notice, or liability, to remove from the Site or otherwise disable all files and content that are excessive in size or are in any way burdensome to our systems;
otherwise manage the Site in a manner designed to protect our rights and property and to facilitate the proper functioning of the Site.
PRIVACY POLICY
We care about data privacy and security. Please review our Privacy Policy posted on the Site. By using the Site, you agree to be bound by our Privacy Policy, which is incorporated into these Terms and Conditions. Please be advised the Site is hosted in Australia and developed in New Zealand.
If you access the Site from the United States, European Union, Asia, or any other region of the world with laws or other requirements governing personal data collection, use, or disclosure that differ from applicable laws in Australia and New Zealand, then through your continued use of the Site, you are transferring your data to Australia, and you expressly consent to have your data transferred to and processed in Australia.
Further, we do not knowingly accept, request, or solicit information from children or knowingly market to children. Therefore, in accordance with the U.S. Children's Online Privacy Protection Act, if we receive actual knowledge that anyone under the age of 13 has provided personal information to us without the requisite and verifiable parental consent, we will delete that information from the Site as quickly as is reasonably practical.
COPYRIGHT INFRINGEMENTS
We respect the intellectual property rights of others. If you believe that any material available on or through the Site infringes upon any copyright you own or control, please immediately notify us using the contact information provided below (a 'Notification'). A copy of your Notification will be sent to the person who posted or stored the material addressed in the Notification.
TERM AND TERMINATION
These Terms and Conditions shall remain in full force and effect while you use the Site. WITHOUT LIMITING ANY OTHER PROVISION OF THESE TERMS AND CONDITIONS, WE RESERVE THE RIGHT TO, IN OUR SOLE DISCRETION AND WITHOUT NOTICE OR LIABILITY, DENY ACCESS TO AND USE OF THE SITE (INCLUDING BLOCKING CERTAIN IP ADDRESSES), TO ANY PERSON FOR ANY REASON OR FOR NO REASON, INCLUDING WITHOUT LIMITATION FOR BREACH OF ANY REPRESENTATION, WARRANTY, OR COVENANT CONTAINED IN THESE TERMS AND CONDITIONS OR OF ANY APPLICABLE LAW OR REGULATION. WE MAY TERMINATE YOUR USE OR PARTICIPATION IN THE SITE OR DELETE YOUR ACCOUNT AND ANY CONTENT OR INFORMATION THAT YOU POSTED AT ANY TIME, WITHOUT WARNING, IN OUR SOLE DISCRETION.
If we terminate or suspend your account for any reason, you are prohibited from registering and creating a new account under your email address, a fake or borrowed email address, or the email address of any third party, even if you may be acting on behalf of the third party.
In addition to terminating or suspending your account, we reserve the right to take appropriate legal action, including without limitation pursuing civil, criminal, and injunctive redress.
MODIFICATIONS AND INTERRUPTIONS
We reserve the right to change, modify, or remove the contents of the Site at any time or for any reason at our sole discretion without notice. However, we have no obligation to update any information on our Site. We also reserve the right to modify or discontinue all or part of the Site without notice at any time.
We will not be liable to you or any third party for any modification, price change, suspension, or discontinuance of the Site.
We cannot guarantee the Site will be available at all times. We may experience hardware, software, or other problems or need to perform maintenance related to the Site, resulting in interruptions, delays, or errors.
We reserve the right to change, revise, update, suspend, discontinue, or otherwise modify the Site at any time or for any reason without notice to you. You agree that we have no liability whatsoever for any loss, damage, or inconvenience caused by your inability to access or use the Site during any downtime or discontinuance of the Site.
Nothing in these Terms and Conditions will be construed to obligate us to maintain and support the Site or to supply any corrections, updates, or releases in connection therewith.
GOVERNING LAW
These Terms and Conditions and your use of the Site are governed by and construed in accordance with the laws of New Zealand applicable to agreements made and to be entirely performed within New Zealand, without regard to its conflict of law principles.
CORRECTIONS
There may be information on the Site that contains typographical errors, inaccuracies, or omissions that may relate to the Site, including descriptions, pricing, availability, and various other information. We reserve the right to correct any errors, inaccuracies, or omissions and to change or update the information on the Site at any time, without prior notice.
DISCLAIMER
THE SITE IS PROVIDED ON AN AS-IS AND AS-AVAILABLE BASIS. YOU AGREE THAT YOUR USE OF THE SITE AND OUR SERVICES WILL BE AT YOUR SOLE RISK. TO THE FULLEST EXTENT PERMITTED BY LAW, WE DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED, IN CONNECTION WITH THE SITE AND YOUR USE THEREOF, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NON-INFRINGEMENT. WE MAKE NO WARRANTIES OR REPRESENTATIONS ABOUT THE ACCURACY OR COMPLETENESS OF THE SITE'S CONTENT OR THE CONTENT OF ANY WEBSITES LINKED TO THE SITE AND WE WILL ASSUME NO LIABILITY OR RESPONSIBILITY FOR ANY (1) ERRORS, MISTAKES, OR INACCURACIES OF CONTENT AND MATERIALS, (2) PERSONAL INJURY OR PROPERTY DAMAGE, OF ANY NATURE WHATSOEVER, RESULTING FROM YOUR ACCESS TO AND USE OF THE SITE, (3) ANY UNAUTHORIZED ACCESS TO OR USE OF OUR SECURE SERVERS AND/OR ANY AND ALL PERSONAL INFORMATION AND/OR FINANCIAL INFORMATION STORED THEREIN, (4) ANY INTERRUPTION OR CESSATION OF TRANSMISSION TO OR FROM THE SITE, (5) ANY BUGS, VIRUSES, TROJAN HORSES, OR THE LIKE WHICH MAY BE TRANSMITTED TO OR THROUGH THE SITE BY ANY THIRD PARTY, AND/OR (6) ANY ERRORS OR OMISSIONS IN ANY CONTENT AND MATERIALS OR FOR ANY LOSS OR DAMAGE OF ANY KIND INCURRED AS A RESULT OF THE USE OF ANY CONTENT POSTED, TRANSMITTED, OR OTHERWISE MADE AVAILABLE VIA THE SITE. WE DO NOT WARRANT, ENDORSE, GUARANTEE, OR ASSUME RESPONSIBILITY FOR ANY PRODUCT OR SERVICE ADVERTISED OR OFFERED BY A THIRD PARTY THROUGH THE SITE, ANY HYPERLINKED WEBSITE, OR ANY WEBSITE OR MOBILE APPLICATION FEATURED IN ANY BANNER OR OTHER ADVERTISING, AND WE WILL NOT BE A PARTY TO OR IN ANY WAY BE RESPONSIBLE FOR MONITORING ANY TRANSACTION BETWEEN YOU AND ANY THIRD-PARTY PROVIDERS OF PRODUCTS OR SERVICES.
AS WITH THE PURCHASE OF A PRODUCT OR SERVICE THROUGH ANY MEDIUM OR IN ANY ENVIRONMENT, YOU SHOULD USE YOUR BEST JUDGMENT AND EXERCISE CAUTION WHERE APPROPRIATE.
LIMITATIONS OF LIABILITY
IN NO EVENT WILL WE OR OUR DIRECTORS, EMPLOYEES, OR AGENTS BE LIABLE TO YOU OR ANY THIRD PARTY FOR ANY DIRECT, INDIRECT, CONSEQUENTIAL, EXEMPLARY, INCIDENTAL, SPECIAL, OR PUNITIVE DAMAGES, INCLUDING LOST PROFIT, LOST REVENUE, LOSS OF DATA, OR OTHER DAMAGES ARISING FROM YOUR USE OF THE SITE, EVEN IF WE HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
NOTWITHSTANDING ANYTHING TO THE CONTRARY CONTAINED HEREIN, OUR LIABILITY TO YOU FOR ANY CAUSE WHATSOEVER AND REGARDLESS OF THE FORM OF THE ACTION, WILL AT ALL TIMES BE LIMITED TO THE AMOUNT PAID, IF ANY, BY YOU TO US DURING THE 3 MONTH PERIOD PRIOR TO ANY CAUSE OF ACTION ARISING.
INDEMNIFICATION
You agree to defend, indemnify, and hold us harmless, including our subsidiaries, affiliates, and all of our respective officers, agents, partners, and employees, from and against any loss, damage, liability, claim, or demand, including reasonable attorneys' fees and expenses, made by any third party due to or arising out of: (1) your Contributions; (2) use of the Site; (3) breach of these Terms and Conditions; (4) any breach of your representations and warranties set forth in these Terms and Conditions; (5) your violation of the rights of a third party, including but not limited to intellectual property rights; or (6) any overt harmful act toward any other user of the Site with whom you connected via the Site.
USER DATA
We will maintain certain data that you transmit to the Site for the purpose of managing the Site, as well as data relating to your use of the Site. Although we perform regular routine backups of data, you are solely responsible for all data that you transmit or that relates to any activity you have undertaken using the Site.
You agree that we shall have no liability to you for any loss or corruption of any such data, and you hereby waive any right of action against us arising from any such loss or corruption of such data.
ELECTRONIC COMMUNICATIONS, TRANSACTIONS, AND SIGNATURES
Visiting the Site, sending us emails, and completing online forms constitute electronic communications. You consent to receive electronic communications, and you agree that all agreements, notices, disclosures, and other communications we provide to you electronically, via email and on the Site, satisfy any legal requirement that such communication be in writing.
YOU HEREBY AGREE TO THE USE OF ELECTRONIC SIGNATURES, CONTRACTS, ORDERS, AND OTHER RECORDS, AND TO ELECTRONIC DELIVERY OF NOTICES, POLICIES, AND RECORDS OF TRANSACTIONS INITIATED OR COMPLETED BY US OR VIA THE SITE.
You hereby waive any rights or requirements under any statutes, regulations, rules, ordinances, or other laws in any jurisdiction which require an original signature or delivery or retention of non-electronic records, or to payments or the granting of credits by any means other than electronic means.
MISCELLANEOUS
These Terms and Conditions and any policies or operating rules posted by us on the Site constitute the entire agreement and understanding between you and us. Our failure to exercise or enforce any right or provision of these Terms and Conditions shall not operate as a waiver of such right or provision.
These Terms and Conditions operate to the fullest extent permissible by law. We may assign any or all of our rights and obligations to others at any time. We shall not be responsible or liable for any loss, damage, delay, or failure to act caused by any cause beyond our reasonable control.
If any provision or part of a provision of these Terms and Conditions is determined to be unlawful, void, or unenforceable, that provision or part of the provision is deemed severable from these Terms and Conditions and does not affect the validity and enforceability of any remaining provisions.
There is no joint venture, partnership, employment or agency relationship created between you and us as a result of these Terms and Conditions or use of the Site. You agree that these Terms and Conditions will not be construed against us by virtue of having drafted them.
You hereby waive any and all defenses you may have based on the electronic form of these Terms and Conditions and the lack of signing by the parties hereto to execute these Terms and Conditions.
CONTACT US
[email protected]
March 2019, 24(3): 1273-1295. doi: 10.3934/dcdsb.2019016
On optimal control problem for an ill-posed strongly nonlinear elliptic equation with $p$-Laplace operator and $L^1$-type of nonlinearity
Peter I. Kogut 1 and Olha P. Kupenko 2,3
Oles Honchar Dnipro National University, Department of Differential Equations, Gagarin av., 72, 49010 Dnipro, Ukraine
Dnipro University of Technology, Department of System Analysis and Control, Yavornitskii av., 19, 49005 Dnipro, Ukraine
Institute for Applied System Analysis, National Academy of Sciences and Ministry of Education and Science of Ukraine, Peremogy av., 37/35, IASA, 03056 Kyiv, Ukraine
To the memory of our big Friend and Teacher V. S. Mel'nik
Received December 2017; Revised March 2018; Published March 2019; Early access January 2019.
We study an optimal control problem for one class of non-linear elliptic equations with $p$-Laplace operator and $L^1$-nonlinearity. We deal with the case of nonlinearity where we cannot expect to have a solution of the state equation for every given control. After defining a suitable functional class in which we look for solutions, we reformulate the original problem and prove the existence of optimal pairs. In order to ensure the validity of such reformulation, we provide its substantiation using a special family of fictitious optimal control problems. The idea to involve the fictitious optimization problems was mainly inspired by the brilliant book of V.S. Mel'nik and V.I. Ivanenko "Variational Methods in Control Problems for the Systems with Distributed Parameters", Kyiv, 1988.
Keywords: Existence result, optimal control, $p$-Laplace operator, elliptic equation, fictitious control.
Mathematics Subject Classification: Primary: 49J20, 49K20; Secondary: 58J37.
Citation: Peter I. Kogut, Olha P. Kupenko. On optimal control problem for an ill-posed strongly nonlinear elliptic equation with $p$-Laplace operator and $L^1$-type of nonlinearity. Discrete & Continuous Dynamical Systems - B, 2019, 24 (3) : 1273-1295. doi: 10.3934/dcdsb.2019016
L. Boccardo and F. Murat, Almost everywhere convergence of the gradients of solutions to elliptic and parabolic equations, Nonlinear Anal., Theory, Methods, Appl., 19 (1992), 581-597. doi: 10.1016/0362-546X(92)90023-8.
E. Casas, O. Kavian and J. P. Puel, Optimal control of an ill-posed elliptic semilinear equation with an exponential nonlinearity, ESAIM: Control, Optimization and Calculus of Variations, 3 (1998), 361-380. doi: 10.1051/cocv:1998116.
E. Casas, P. I. Kogut and G. Leugering, Approximation of optimal control problems in the coefficient for the $p$-Laplace equation. I. Convergence result, SIAM Journal on Control and Optimization, 54 (2016), 1406-1422. doi: 10.1137/15M1028108.
S. Chandrasekhar, An Introduction to the Study of Stellar Structures, Dover Publications, Inc., New York, N. Y., 1957.
M. G. Crandall and P. H. Rabinowitz, Some continuation and variational methods for positive solutions of nonlinear elliptic eigenvalue problems, Arch. Rational Mech. Anal., 58 (1975), 207-218. doi: 10.1007/BF00280741.
J. Dolbeault and R. Stańczy, Non-existence and uniqueness results for supercritical semilinear elliptic equations, Annales Henri Poincaré, 10 (2010), 1311-1333. doi: 10.1007/s00023-009-0016-9.
R. Ferreira, A. De Pablo and J. L. Vazquez, Classification of blow-up with nonlinear diffusion and localized reaction, J. Differential Equations, 231 (2006), 195-211. doi: 10.1016/j.jde.2006.04.017.
D. A. Franck-Kamenetskii, Diffusion and Heat Transfer in Chemical Kinetics, Second edition, Plenum Press, 1969.
H. Fujita, On the blowing up of the solutions to the Cauchy problem for $u_t = Δ u+u^{1+α}$, J. Fac. Sci. Univ. Tokyo Sect. IA, Math., 13 (1966), 109-124.
T. Gallouët, F. Mignot and J. P. Puel, Quelques résultats sur le problème $-Δ u = λ e^u$, C. R. Acad. Sci. Paris, Série I, 307 (1988), 289–292.
I. M. Gelfand, Some problems in the theory of quasi-linear equations, Amer. Math. Soc. Transl., Ser. 2, 29 (1963), 295–381. doi: 10.1090/trans2/029/12.
V. I. Ivanenko and V. S. Mel'nik, Variational Methods in Control Problems for the Systems with Distributed Parameters, Naukova Dumka, Kyiv, 1988 (in Russian).
D. Kinderlehrer and G. Stampacchia, An Introduction to Variational Inequalities and Their Applications, Academic Press, New York, 1980.
P. I. Kogut and G. Leugering, Optimal Control Problems for Partial Differential Equations on Reticulated Domains. Approximation and Asymptotic Analysis, Series: Systems and Control, Birkhäuser, Boston, 2011. doi: 10.1007/978-0-8176-8149-4.
P. I. Kogut, R. Manzo and A. O. Putchenko, On approximate solutions to the Neumann elliptic boundary value problem with non-linearity of exponential type, Boundary Value Problems, 2016 (2016), 1-32. doi: 10.1186/s13661-016-0717-1.
P. I. Kogut and A. O. Putchenko, On approximate solutions to one class of non-linear Dirichlet elliptic boundary value problems, Visnyk DNU. Series: Mathematical Modelling, Dnipropetrovsk: DNU, 24 (2016), 27-25.
P. I. Kogut and V. S. Mel'nik, On one class of extremum problems for nonlinear operator systems, Cybern. Syst. Anal., 34 (1998), 894-904.
P. I. Kogut and V. S. Mel'nik, On weak compactness of bounded sets in Banach and locally convex spaces, Ukrainian Mathematical Journal, 52 (2001), 837-846. doi: 10.1007/BF02591778.
P. I. Kogut and O. P. Kupenko, On attainability of optimal solutions for linear elliptic equations with unbounded coefficients, Visnyk DNU. Series: Mathematical Modelling, Dnipropetrovsk: DNU, 20 (2012), 63-82.
O. P. Kupenko and R. Manzo, On optimal controls in coefficients for ill-posed non-linear elliptic Dirichlet boundary value problems, Discrete and Continuous Dynamic Systems. Series B, 23 (2018), 1363-1393. doi: 10.3934/dcdsb.2018155.
J.-L. Lions, Some Methods of Solving Non-Linear Boundary Value Problems, Dunod-Gauthier-Villars, Paris, 1969.
F. Mignot and J. P. Puel, Sur une classe de problèmes non linéaires avec nonlinéarité positive, croissante, convexe, Comm. in PDE, 5 (1980), 791-836. doi: 10.1080/03605308008820155.
I. Peral, Multiplicity of Solutions for the p-Laplacian, Second School of Nonlinear Functional Analysis and Applications to Differential Equations, Miramare–Trieste, 1997.
R. G. Pinsky, Existence and nonexistence of global solutions for $u_t=Δ u+a(x) u^p$ in $\mathbb{R}^d$, J. of Differential Equations, 133 (1997), 152-177. doi: 10.1006/jdeq.1996.3196.
D. H. Sattinger, Monotone methods in nonlinear elliptic and parabolic boundary value problems, Indiana Univ. Math., 21 (1972), 979-1000. doi: 10.1512/iumj.1972.21.21079.
2020, 16: 81-107. doi: 10.3934/jmd.2020004
Counting square-tiled surfaces with prescribed real and imaginary foliations and connections to Mirzakhani's asymptotics for simple closed hyperbolic geodesics
Francisco Arana-Herrera
Department of Mathematics, Stanford University, 450 Jane Stanford Way, Stanford, CA 94305-2125, USA
Received: April 20, 2019; Revised: October 6, 2019; Published: April 2020
We show that the number of square-tiled surfaces of genus $ g $, with $ n $ marked points, with one or both of its horizontal and vertical foliations belonging to fixed mapping class group orbits, and having at most $ L $ squares, is asymptotic to $ L^{6g-6+2n} $ times a product of constants appearing in Mirzakhani's count of simple closed hyperbolic geodesics. Many of the results in this paper reflect recent discoveries of Delecroix, Goujard, Zograf, and Zorich, but the approach considered here is very different from theirs. We follow conceptual and geometric methods inspired by Mirzakhani's work.
Keywords: Counting, square-tiled surfaces, simple closed geodesics, Mirzakhani, Masur-Veech volumes.
Mathematics Subject Classification: Primary: 30F60; Secondary: 32G15.
Citation: Francisco Arana-Herrera. Counting square-tiled surfaces with prescribed real and imaginary foliations and connections to Mirzakhani's asymptotics for simple closed hyperbolic geodesics. Journal of Modern Dynamics, 2020, 16: 81-107. doi: 10.3934/jmd.2020004
J. Athreya, A. Bufetov, A. Eskin and M. Mirzakhani, Lattice point asymptotics and volume growth on Teichmüller space, Duke Math. J., 161 (2012), 1055-1111. doi: 10.1215/00127094-1548443.
J. S. Athreya, A. Eskin and A. Zorich, Right-angled billiards and volumes of moduli spaces of quadratic differentials on $\mathbb{CP}^1$, Ann. Sci. Éc. Norm. Supér. (4), 49 (2016), 1311–1386.
F. Bonahon, The geometry of Teichmüller space via geodesic currents, Invent. Math., 92 (1988), 139-162. doi: 10.1007/BF01393996.
F. Bonahon, Geodesic laminations on surfaces, in Laminations and Foliations in Dynamics, Geometry and Topology (Stony Brook, NY, 1998), Contemp. Math., vol. 269, Amer. Math. Soc., Providence, RI, 2001, 1–37. doi: 10.1090/conm/269/04327.
V. Delecroix, E. Goujard, P. Zograf and A. Zorich, Square-tiled surfaces of fixed combinatorial type: Equidistribution, counting, volumes of the ambient strata, arXiv e-prints, 2016, arXiv: 1612.08374.
V. Delecroix, E. Goujard, P. Zograf and A. Zorich, Enumeration of meanders and Masur–Veech volumes, arXiv e-prints, 2017, arXiv: 1705.05190.
V. Delecroix, E. Goujard, P. Zograf and A. Zorich, Masur–Veech volumes, frequencies of simple closed geodesics and intersection numbers of moduli spaces of curves, arXiv e-prints, 2019, arXiv: 1908.08611.
A. Eskin and A. Okounkov, Asymptotics of numbers of branched coverings of a torus and volumes of moduli spaces of holomorphic differentials, Invent. Math., 145 (2001), 59-103. doi: 10.1007/s002220100142.
V. Erlandsson, H. Parlier and J. Souto, Counting curves, and the stable length of currents, arXiv e-prints, 2016, arXiv: 1612.05980.
V. Erlandsson, A remark on the word length in surface groups, Trans. Amer. Math. Soc., 372 (2019), 441-455. doi: 10.1090/tran/7561.
V. Erlandsson and J. Souto, Counting curves in hyperbolic surfaces, Geom. Funct. Anal., 26 (2016), 729-777. doi: 10.1007/s00039-016-0374-7.
V. Erlandsson and C. Uyanik, Length functions on currents and applications to dynamics and counting, arXiv e-prints, 2018, arXiv: 1803.10801.
A. Fathi, F. Laudenbach and V. Poénaru, Thurston's Work on Surfaces, Translated from the 1979 French original by Djun M. Kim and Dan Margalit, Mathematical Notes, vol. 48, Princeton University Press, Princeton, NJ, 2012.
B. Farb and D. Margalit, A Primer on Mapping Class Groups, Princeton Mathematical Series, vol. 49, Princeton University Press, Princeton, NJ, 2012.
F. P. Gardiner, Teichmüller Theory and Quadratic Differentials, Pure and Applied Mathematics (New York), A Wiley-Interscience Publication, John Wiley & Sons, Inc., New York, 1987.
F. P. Gardiner and H. Masur, Extremal length geometry of Teichmüller space, Complex Variables Theory Appl., 16 (1991), 209-237. doi: 10.1080/17476939108814480.
J. Hubbard and H. Masur, Quadratic differentials and foliations, Acta Math., 142 (1979), 221-274. doi: 10.1007/BF02395062.
J. H. Hubbard, Teichmüller Theory and Applications to Geometry, Topology, and Dynamics. Vol. 2. Surface Homeomorphisms and Rational Functions, Matrix Editions, Ithaca, NY, 2016.
S. P. Kerckhoff, The asymptotic geometry of Teichmüller space, Topology, 19 (1980), 23-41. doi: 10.1016/0040-9383(80)90029-4.
M. Kontsevich, Intersection theory on the moduli space of curves and the matrix Airy function, Comm. Math. Phys., 147 (1992), 1-23. doi: 10.1007/BF02099526.
G. Levitt, Foliations and laminations on hyperbolic surfaces, Topology, 22 (1983), 119-135. doi: 10.1016/0040-9383(83)90023-X.
E. Lindenstrauss and M. Mirzakhani, Ergodic theory of the space of measured laminations, Int. Math. Res. Not. IMRN, 2008 (2008), Art. ID rnm126, 49 pp. doi: 10.1093/imrn/rnm126.
G. A. Margulis, On Some Aspects of the Theory of Anosov Systems, With a survey by Richard Sharp: Periodic orbits of hyperbolic flows, Translated from the Russian by Valentina Vladimirovna Szulikowska, Springer Monographs in Mathematics, Springer-Verlag, Berlin, 2004. doi: 10.1007/978-3-662-09070-1.
B. Martelli, An introduction to geometric topology, arXiv e-prints, 2016, arXiv: 1610.02592.
H. Masur, Interval exchange transformations and measured foliations, Ann. of Math. (2), 115 (1982), 169–200. doi: 10.2307/1971341.
H. Masur, Ergodic actions of the mapping class group, Proc. Amer. Math. Soc., 94 (1985), 455-459. doi: 10.1090/S0002-9939-1985-0787893-5.
M. Mirzakhani, Simple geodesics and Weil-Petersson volumes of moduli spaces of bordered Riemann surfaces, Invent. Math., 167 (2007), 179-222. doi: 10.1007/s00222-006-0013-2.
M. Mirzakhani, Ergodic theory of the earthquake flow, Int. Math. Res. Not. IMRN, 2008 (2008), Art. ID rnm116, 39 pp. doi: 10.1093/imrn/rnm116.
M. Mirzakhani, Growth of the number of simple closed geodesics on hyperbolic surfaces, Ann. of Math. (2), 168 (2008), 97–125. doi: 10.4007/annals.2008.168.97.
M. Mirzakhani, Counting Mapping Class group orbits on hyperbolic surfaces, arXiv e-prints, 2016, arXiv: 1601.03342.
L. Monin and V. Telpukhovskiy, On normalizations of Thurston measure on the space of measured laminations, Topology Appl., 267 (2019), 106878, 12 pp. doi: 10.1016/j.topol.2019.106878.
A. Papadopoulos, Geometric intersection functions and Hamiltonian flows on the space of measured foliations on a surface, Pacific J. Math., 124 (1986), 375-402. doi: 10.2140/pjm.1986.124.375.
R. C. Penner and J. L. Harer, Combinatorics of Train Tracks, Annals of Mathematics Studies, vol. 125, Princeton University Press, Princeton, NJ, 1992. doi: 10.1515/9781400882458.
I. Rivin, Geodesics with one self-intersection, and other stories, Adv. Math., 231 (2012), 2391-2412. doi: 10.1016/j.aim.2012.07.018.
K. Rafi and J. Souto, Geodesic currents and counting problems, Geom. Funct. Anal., 29 (2019), 871-889. doi: 10.1007/s00039-019-00502-7.
W. A. Veech, Gauss measures for transformations on the space of interval exchange maps, Ann. of Math. (2), 115 (1982), 201–242. doi: 10.2307/1971391.
U. Wolf, The action of the mapping class group on the pants complex, preprint, 2009.
S. Wolpert, On the Weil-Petersson geometry of the moduli space of curves, Amer. J. Math., 107 (1985), 969-997. doi: 10.2307/2374363.
M. Wolf, On realizing measured foliations via quadratic differentials of harmonic maps to $\mathbf{R}$-trees, J. Anal. Math., 68 (1996), 107-120. doi: 10.1007/BF02790206.
U. Wolf, Die Aktion der Abbildungsklassengruppe auf dem Hosenkomplex [The action of the mapping class group on the pants complex], Ph.D. thesis, 2009.
Figure 1. Example of a quadratic differential in the principal stratum of $ \textbf{Re}^{-1}([\gamma_1]) \subseteq Q\mathcal{M}_{2,0} $ for a (non-separating) simple closed curve $ \gamma_1 $ in $ S_{2,0} $
Figure 2. No escape of mass property in the real period coordinate chart (b) associated to the polygon representation (a), representing a flat pillowcase in the principal stratum of $ \mathrm{Re}^{-1}(\gamma_1) \subseteq Q\mathcal{T}_{0,4} $. The blue region covers $ K_\epsilon $ and the gray region covers $ \widehat{E}(\gamma_1) \backslash K_\epsilon $
| CommonCrawl |
Difference between revisions of "Colloquia/Fall18"
= Mathematics Colloquium =
All colloquia are on Fridays at 4:00 pm in Van Vleck B239, '''unless otherwise indicated'''.
The calendar for spring 2019 can be found [[Colloquia/Spring2019|here]].
==Spring 2019==

date | speaker | title | host(s)
Jan 25 | [http://www.users.miamioh.edu/randrib/ Beata Randrianantoanina] (Miami University Ohio) WIMAW | Some nonlinear problems in the geometry of Banach spaces and their applications | Tullia Dymarz
Jan 30 '''Wednesday''' | Lillian Pierce (Duke University) | Short character sums | Boston and Street
Jan 31 '''Thursday''' | [http://www.math.tamu.edu/~dbaskin/ Dean Baskin] (Texas A&M) | Radiation fields for wave equations | Street
Feb 1 | [https://services.math.duke.edu/~jianfeng/ Jianfeng Lu] (Duke University) | TBA | Qin
Feb 5 '''Tuesday''' | [http://www.math.tamu.edu/~alexei.poltoratski/ Alexei Poltoratski] (Texas A&M University) | TBA | Denisov
Feb 8 | [https://sites.math.northwestern.edu/~anaber/ Aaron Naber] (Northwestern) | A structure theory for spaces with lower Ricci curvature bounds | Street
Feb 22 | [https://people.math.osu.edu/cueto.5/ Angelica Cueto] (Ohio State) | TBA | Erman and Corey
March 4 | [http://www-users.math.umn.edu/~sverak/ Vladimir Sverak] (Minnesota), Wasow lecture | TBA | Kim
March 8 | [https://orion.math.iastate.edu/jmccullo/index.html Jason McCullough] (Iowa State) | TBA | Erman
March 15 | Maksym Radziwill (Caltech) | TBA | Marshall
March 29 | Jennifer Park (OSU) | TBA | Marshall
April 5 | Ju-Lee Kim (MIT) | TBA | Gurevich
April 12 | Evitar Procaccia (TAMU) | TBA | Gurevich
April 19 | [http://www.math.rice.edu/~jkn3/ Jo Nelson] (Rice University) | TBA | Jean-Luc
April 26 | [https://www.brown.edu/academics/applied-mathematics/faculty/kavita-ramanan/home Kavita Ramanan] (Brown University) | TBA | WIMAW
May 3 | Tomasz Przebinda (Oklahoma) | TBA | Gurevich

==Spring 2017==

date | speaker | title | host(s)
'''Monday, January 9, 9th floor''' | [http://www.stat.berkeley.edu/~racz/ Miklos Racz] (Microsoft) | Statistical inference in networks and genomics | Valko
January 13, B239 | [https://math.berkeley.edu/people/faculty/mihaela-ifrim/ Mihaela Ifrim] (Berkeley) | Two dimensional water waves | Angenent
'''Tuesday, January 17, B139''' | [https://web.math.princeton.edu/~fabiop/ Fabio Pusateri] (Princeton) | The Water Waves problem |
Friday, January 20 | [http://math.mit.edu/~sraskin/ Sam Raskin] (MIT) | Tempered local geometric Langlands | Arinkin
'''Monday, January 23, B239''' | [http://www.math.umd.edu/~tdarvas/ Tamas Darvas] (Maryland) | Geometry on the space of Kahler metrics and applications to canonical metrics | Viaclovsky
January 27 | Reserved for possible job talks | |
February 3, 9th floor | Melanie Matchett Wood (UW-Madison) | Random groups from generators and relations |
Monday, February 6, B239 (Wasow lecture) | Benoit Perthame (University of Paris VI) | Models for neural networks; analysis, simulations and behaviour | Jin
February 10 (WIMAW lecture), B239 | Alina Chertock (NC State Univ.) | Numerical Method for Chemotaxis and Related Models | WIMAW
February 17, 9th floor | [http://web.math.ucsb.edu/~ponce/ Gustavo Ponce] (UCSB) | The Korteweg-de Vries equation vs. the Benjamin-Ono equation | Minh-Binh Tran
Monday, February 20, 9th floor | [https://lsa.umich.edu/math/people/postdoc-faculty/cochraam.html/ Amy Cochran] (Michigan) | Mathematical Classification of Bipolar Disorder | Smith
February 24 | | |
March 3, B239 | [http://www.math.utah.edu/~bromberg/ Ken Bromberg] (University of Utah) | Renormalized volume for hyperbolic 3-manifolds | Dymarz
'''Tuesday, March 7, 4PM, 9th floor (Distinguished Lecture)''' | [http://pages.iu.edu/~temam/ Roger Temam] (Indiana University) | On the mathematical modeling of the humid atmosphere | Smith
'''Wednesday, March 8, 4PM, B239''' | Roger Temam (Indiana University) | Weak solutions of the Shigesada-Kawasaki-Teramoto system |
March 10 | '''No Colloquium''' | |
'''Wednesday, March 15, 4PM''' | [http://verso.mat.uam.es/web/ezuazua/zuazua.html Enrique Zuazua] (Universidad Autónoma de Madrid) | Control and numerics: Recent progress and challenges | Jin & Minh-Binh Tran
| [https://services.math.duke.edu/~pierce/ Lillian Pierce] (Duke University) | p-torsion in class groups of number fields of arbitrary degree | M. Matchett Wood
| '''Spring Break''' | |
'''Wednesday, March 29 at 3:30PM (Wasow)''' | [https://math.nyu.edu/faculty/serfaty/ Sylvia Serfaty] (NYU) | Microscopic description of Coulomb-type systems | Tran
April 7 | [http://www.math.uiuc.edu/~schenck/ Hal Schenck] | | Erman
| Wilfrid Gangbo | | Feldman & Tran
| [http://www.math.stonybrook.edu/~mde/ Mark Andrea de Cataldo] (Stony Brook) | | Maxim
| [http://users.cms.caltech.edu/~hou/ Thomas Yizhao Hou] | TBA | Qin

==Fall 2017==

date | speaker | title | host(s)
September 8 | | |
September 15 | | |
'''Wednesday, September 20''', LAA lecture | Andrew Stuart (Caltech) | |
October 6 | | |
October 13 | | |
| [http://cims.nyu.edu/~pgermain/ Pierre Germain] (Courant, NYU) | |
November 3 | | |
November 10 | Reserved for possible job talks | |
'''Thanksgiving break''' | | |
December 1 | | |
== Abstracts ==
===Beata Randrianantoanina (Miami University Ohio)===

Title: Some nonlinear problems in the geometry of Banach spaces and their applications.

Abstract: Nonlinear problems in the geometry of Banach spaces have been studied since the inception of the field. In this talk I will outline some of the history, some of modern applications, and some open directions of research. The talk will be accessible to graduate students of any field of mathematics.

===Lillian Pierce (Duke University)===

Title: Short character sums

Abstract: A surprisingly diverse array of problems in analytic number theory have at their heart a problem of bounding (from above) an exponential sum, or its multiplicative cousin, a so-called character sum. For example, both understanding the Riemann zeta function or Dirichlet L-functions inside the critical strip, and also counting solutions to Diophantine equations via the circle method or power sieve methods, involve bounding such sums. In general, the sums of interest fall into one of two main regimes: complete sums or incomplete sums, with this latter regime including in particular "short sums." Short sums are particularly useful, and particularly resistant to almost all known methods. In this talk, we will see what makes a sum "short," sketch why it would be incredibly powerful to understand short sums, and discuss a curious proof from the 1950's which is still the best way we know to bound short sums. We will end by describing new work which extends the ideas of this curious proof to bound short sums in much more general situations.

===Dean Baskin (Texas A&M)===

Title: Radiation fields for wave equations

Abstract: Radiation fields are rescaled limits of solutions of wave equations near "null infinity" and capture the radiation pattern seen by a distant observer. They are intimately connected with the Fourier and Radon transforms and with scattering theory. In this talk, I will define and discuss radiation fields in a few contexts, with an emphasis on spacetimes that look flat near infinity. The main result is a connection between the asymptotic behavior of the radiation field and a family of quantum objects on an associated asymptotically hyperbolic space.

===Aaron Naber (Northwestern)===

Title: A structure theory for spaces with lower Ricci curvature bounds.

Abstract: One should view manifolds (M^n,g) with lower Ricci curvature bounds as being those manifolds with a well behaved analysis, a point which can be rigorously stated. It thus becomes a natural question, how well behaved or badly behaved can such spaces be? This is a nonlinear analogue to asking how degenerate can a subharmonic or plurisubharmonic function look like. In this talk we give an essentially sharp answer to this question. The talk will require little background, and our time will be spent on understanding the basic statements and examples. The work discussed is joint with Cheeger, Jiang and with Li.

=== September 16: Po-Shen Loh (CMU) ===

Title: Directed paths: from Ramsey to Pseudorandomness

Abstract: Starting from an innocent Ramsey-theoretic question regarding directed paths in graphs, we discover a series of rich and surprising connections that lead into the theory around a fundamental result in Combinatorics: Szemeredi's Regularity Lemma, which roughly states that every graph (no matter how large) can be well-approximated by a bounded-complexity pseudorandom object. Using these relationships, we prove that every coloring of the edges of the transitive N-vertex tournament using three colors contains a directed path of length at least sqrt(N) e^{log^* N} which entirely avoids some color. The unusual function log^* is the inverse function of the tower function (iterated exponentiation).

=== September 23: Gheorghe Craciun (UW-Madison) ===

Title: Toric Differential Inclusions and a Proof of the Global Attractor Conjecture

Abstract: The Global Attractor Conjecture says that a large class of polynomial dynamical systems, called toric dynamical systems, have a globally attracting point within each linear invariant space. In particular, these polynomial dynamical systems never exhibit multistability, oscillations or chaotic dynamics.

The conjecture was formulated by Fritz Horn in the early 1970s, and is strongly related to Boltzmann's H-theorem.

We discuss the history of this problem, including the connection between this conjecture and the Boltzmann equation. Then, we introduce toric differential inclusions, and describe how they can be used to prove this conjecture in full generality.

=== September 30: Akos Magyar (University of Georgia) ===

Title: Geometric Ramsey theory

Abstract: Initiated by Erdos, Graham, Montgomery and others in the 1970's, geometric Ramsey theory studies geometric configurations, determined up to translations, rotations and possibly dilations, which cannot be destroyed by finite partitions of Euclidean spaces. Later it was shown by ergodic and Fourier analytic methods that such results are also possible in the context of sets of positive upper density in Euclidean spaces or the integer lattice. We present a new approach, motivated by developments in arithmetic combinatorics, which provides new results as well as new proofs of some classical results in this area.

=== October 14: Ling Long (LSU) ===

Title: Hypergeometric functions over finite fields

Abstract: Hypergeometric functions are special functions with a lot of symmetries. In this talk, we will introduce hypergeometric functions over finite fields, originally due to Greene, Katz and McCarthy, in a way that is parallel to the classical hypergeometric functions, and discuss their properties and applications to character sums and the arithmetic of hypergeometric abelian varieties. This is a joint work with Jenny Fuselier, Ravi Ramakrishna, Holly Swisher, and Fang-Ting Tu.

=== Tuesday, October 25, 9th floor: Stefan Steinerberger (Yale) ===

Title: Three Miracles in Analysis

Abstract: I plan to tell three stories: all deal with new points of view on very classical objects and have in common that there is a miracle somewhere. Miracles are nice but difficult to reproduce, so in all three cases the full extent of the underlying theory is not clear and many interesting open problems await. (1) An improvement of the Poincare inequality on the Torus that encodes a lot of classical Number Theory. (2) If the Hardy-Littlewood maximal function is easy to compute, then the function is sin(x). (Here, the miracle is both in the statement and in the proof). (3) Bounding classical integral operators (Hilbert/Laplace/Fourier-transforms) in L^2 -- but this time from below (this problem originally arose in medical imaging). Here, the miracle is also known as 'Slepian's miracle' (this part is joint work with Rima Alaifari, Lillian Pierce and Roy Lederman).

=== October 28: Linda Reichl (UT Austin) ===

Title: Microscopic hydrodynamic modes in a binary mixture

Abstract: Expressions for propagation speeds and decay rates of hydrodynamic modes in a binary mixture can be obtained directly from spectral properties of the Boltzmann equations describing the mixture. The derivation of hydrodynamic behavior from the spectral properties of the kinetic equation provides an alternative to Chapman-Enskog theory, and removes the need for lengthy calculations of transport coefficients in the mixture. It also provides a sensitive test of the completeness of kinetic equations describing the mixture. We apply the method to a hard-sphere binary mixture and show that it gives excellent agreement with light scattering experiments on noble gas mixtures.

===Monday, October 31: Kathryn Mann (Berkeley) ===

Title: Groups acting on the circle

Abstract: Given a group G and a manifold M, can one describe all the actions of G on M? This is a basic and natural question from geometric topology, but also a very difficult one -- even in the case where M is the circle, and G is a familiar, finitely generated group.

In this talk, I'll introduce you to the theory of groups acting on the circle, building on the perspectives of Ghys, Calegari, Goldman and others. We'll see some tools, old and new, some open problems, and some connections between this theory and themes in topology (like foliated bundles) and dynamics.

===November 7: Gaven Martin (New Zealand Institute for Advanced Study) ===
Title: Siegel's problem on small volume lattices

Abstract: We outline in very general terms the history and the proof of the identification of the minimal covolume lattice of hyperbolic 3-space as the 3-5-3 Coxeter group extended by the involution preserving the symmetry of this diagram. This gives us the smallest regular tessellation of hyperbolic 3-space. This solves (in three dimensions) a problem posed by Siegel in 1945. Siegel solved this problem in two dimensions by deriving the signature formula identifying the (2,3,7)-triangle group as having minimal co-area.

There are strong connections with arithmetic hyperbolic geometry in the proof, and the result has applications in the maximal symmetry groups of hyperbolic 3-manifolds in much the same way that Hurwitz's 84g-84 theorem and Siegel's result do.
===Wednesday, November 16 (9th floor): Kathryn Lindsey (U Chicago) ===
Title: Shapes of Julia Sets
Abstract: The filled Julia set of a complex polynomial P is the set of points whose orbit under iteration of the map P is bounded. William Thurston asked "What are the possible shapes of polynomial Julia sets?" For example, is there a polynomial whose Julia set looks like a cat, or your silhouette, or spells out your name? It turns out the answer to all of these is "yes!" I will characterize the shapes of polynomial Julia sets and present an algorithm for constructing polynomials whose Julia sets have desired shapes.
===November 18: Andrew Snowden (University of Michigan)===
Title: Recent progress in representation stability
Abstract: Representation stability is a relatively new field that studies somewhat exotic algebraic structures and exploits their properties to prove results (often asymptotic in nature) about objects of interest. I will describe some of the algebraic structures that appear (and state some important results about them), give a sampling of some notable applications (in group theory, topology, and algebraic geometry), and mention some open problems in the area.
===Monday, November 21: Mariya Soskova (University of Wisconsin-Madison)===
Title: Definability in degree structures
Abstract: Some incomputable sets are more incomputable than others. We use Turing reducibility and enumeration reducibility to measure the relative complexity of incomputable sets. By identifying sets of the same complexity, we can associate to each reducibility a degree structure: the partial order of the Turing degrees and the partial order of the enumeration degrees. The two structures are related in nontrivial ways. The first has an isomorphic copy in the second and this isomorphic copy is an automorphism base. In 1969, Rogers asked a series of questions about the two degree structures with a common theme: definability. In this talk I will introduce the main concepts and describe the work that was motivated by these questions.
===Friday, December 2: Hao Shen (Columbia)===
Title: Singular Stochastic Partial Differential Equations - How do they arise and what do they mean?
Abstract: Systems with random fluctuations are ubiquitous in the real world. Stochastic PDEs are default models for these random systems, just as PDEs are default models for deterministic systems. However, a large class of such stochastic PDEs were poorly understood until very recently: the presence of very singular random forcing as well as nonlinearities render it challenging to interpret what one even means by a ``solution". The recent breakthroughs by M. Hairer, M. Gubinelli and other researchers including the speaker not only established solution theories for these singular SPDEs, but also led to an explosion of new questions. These include scaling limits of random microscopic models, development of numerical schemes, ergodicity of random dynamical systems and a new approach to quantum field theory. In this talk we will discuss the main ideas of the recent solution theories of singular SPDEs, and how these SPDEs arise as limits of various important physical models.
===Monday, December 5: Botong Wang (UW-Madison)===
Title: Enumeration of points, lines, planes, etc.
Abstract: It is a theorem of de Bruijn and Erdos that n points in the plane determine at least n lines, unless all the points lie on a line. This is one of the earliest results in enumerative combinatorial geometry. We will present a higher dimensional generalization of this theorem, which confirms a "top-heavy" conjecture of Dowling and Wilson in 1975. I will give a sketch of the key ideas of the proof, which are the hard Lefschetz theorem and the decomposition theorem in algebraic geometry. I will also talk about a log-concave conjecture on the number of independent sets. These are joint works with June Huh.
=== Friday, December 9: Aaron Brown (U Chicago) ===
''Lattice actions and recent progress in the Zimmer program''
Abstract: The Zimmer Program is a collection of conjectures and questions regarding actions of lattices in higher-rank simple Lie groups on compact manifolds. For instance, it is conjectured that all non-trivial volume-preserving actions are built from algebraic examples using standard constructions. In particular—on manifolds whose dimension is below the dimension of all algebraic examples—Zimmer's conjecture asserts that every action is finite.
I will present some background, motivation, and selected previous results in the Zimmer program. I will then explain two of my results within the Zimmer program:
(1) a solution to Zimmer's conjecture for actions of cocompact lattices in SL(n,R) (joint with D. Fisher and S. Hurtado);
(2) a classification (up to topological semiconjugacy) of all actions on tori whose induced action on homology satisfies certain criteria (joint with F. Rodriguez Hertz and Z. Wang).
=== Monday, December 19: Andrew Zimmer (U Chicago) ===
''Metric spaces of non-positive curvature and applications in several complex variables''
Abstract: In this talk I will discuss how to use ideas from the theory of metric spaces of non-positive curvature to understand the behavior of holomorphic maps between bounded domains in complex Euclidean space. Every bounded domain has a metric, called the Kobayashi metric, which is distance non-increasing with respect to holomorphic maps. Moreover, this metric often satisfies well-known non-positive curvature type conditions (for instance, Gromov hyperbolicity or visibility) and one can then use these conditions to understand the behavior of holomorphic maps. Some of what I will talk about is joint work with Gautam Bharali.
=== Monday, January 9: Miklos Racz (Microsoft) ===
''Statistical inference in networks and genomics''
Abstract: From networks to genomics, large amounts of data are increasingly available and play critical roles in helping us understand complex systems. Statistical inference is crucial in discovering the underlying structures present in these systems, whether this concerns the time evolution of a network, an underlying geometric structure, or reconstructing a DNA sequence from partial and noisy information. In this talk I will discuss several fundamental detection and estimation problems in these areas.
I will present an overview of recent developments in source detection and estimation in randomly growing graphs. For example, can one detect the influence of the initial seed graph? How good are root-finding algorithms? I will also discuss inference in random geometric graphs: can one detect and estimate an underlying high-dimensional geometric structure? Finally, I will discuss statistical error correction algorithms for DNA sequencing that are motivated by DNA storage, which aims to use synthetic DNA as a high-density, durable, and easy-to-manipulate storage medium of digital data.
=== Friday, January 13: Mihaela Ifrim (Berkeley) ===
''Two dimensional water waves''
The classical water-wave problem consists of solving the Euler equations in the presence of a free fluid surface (e.g the water-air interface). This talk will provide an overview of recent developments concerning the motion of a two dimensional incompressible fluid with a free surface. There is a wide range of problems that fall under the heading of water waves, depending on a number of assumptions that can be applied: surface tension, gravity, finite bottom, infinite bottom, rough bottom, etc., and combinations thereof. We will present the physical motivation for studying such problems, followed by the discussion of several interesting mathematical questions related to them. The first step in the analysis is the choice of coordinates, where multiple choices are available. Once the equations are derived we will discuss the main issues arising when analysing local well-posedness, as well as the long time behaviour of solutions with small, or small and localized data. In the last part of the talk we will introduce a new, very robust method which allows one to obtain enhanced lifespan bounds for the solutions. If time permits we will also introduce an alternative method to the scattering theory, which in some cases yields a straightforward route to proving global existence results and obtaining an asymptotic description of solutions. This is joint work with Daniel Tataru, and in part with John Hunter.
=== Tuesday, January 17: Fabio Pusateri (Princeton) ===
''The Water Waves problem''
We will begin by introducing the free boundary Euler equations which are a system of nonlinear PDEs modeling the motion of fluids, such as waves on the surface of the ocean. We will discuss several works done on this system in recent years, and how they fit into the broader context of the study of nonlinear evolution problems. We will then focus on the question of global regularity for water waves, present some of our main results - obtained in collaboration with Ionescu and Deng-Ionescu-Pausader - and sketch some of the main ideas.
=== Friday, January 20: Sam Raskin (MIT) ===
''Tempered local geometric Langlands ''
The (arithmetic) Langlands program is a cornerstone of modern representation theory and number theory. It has two incarnations: local and global. The former conjectures the existence of certain "local terms," and the latter predicts remarkable interactions between these local terms. By necessity, the global story is predicated on the local.
Geometric Langlands attempts to find similar patterns in the geometry of curves. However, the scope of the subject has been limited by a meager local theory, which has not been adequately explored.
The subject of this talk is a part of a larger investigation into local geometric Langlands. We will give an elementary overview of the expectations of this theory, discuss a certain concrete conjecture in the area (on "temperedness"), and provide evidence for this conjecture.
=== Monday, January 23: Tamas Darvas (Maryland) ===
''Geometry on the space of Kahler metrics and applications to canonical metrics''
A basic problem in Kahler geometry, going back to Calabi in the 50's, is to find Kahler metrics with the best curvature properties, e.g., Einstein metrics. Such special metrics are minimizers of well known functionals on the space of all Kahler metrics H. However these functionals become convex only if an adequate geometry is chosen on H. One such choice of Riemannian geometry was proposed by Mabuchi in the 80's, and was used to address a number of uniqueness questions in the theory. In this talk I will present more general Finsler geometries on H, that still enjoy many of the properties that Mabuchi's geometry has, and I will give applications related to existence of special Kahler metrics, including the recent resolution of Tian's related properness conjectures.
=== Friday, February 3: Melanie Matchett Wood (UW-Madison) ===
''Random groups from generators and relations''
We consider a model of random groups that starts with a free group on n generators and takes the quotient by n random relations. We discuss this model in the case of abelian groups (starting with a free abelian group), and its relationship to the Cohen-Lenstra heuristics, which predict the distribution of class groups of number fields. We will explain a universality theorem, an analog of the central limit theorem for random groups, that says the resulting distribution of random groups is largely insensitive to the distribution from which the relations are chosen. Finally, we discuss joint work with Yuan Liu on the non-abelian random groups built in this way, including the existence of a limit of the random groups as n goes to infinity.
=== Monday, February 6: Benoit Perthame (University of Paris VI) ===
''Models for neural networks; analysis, simulations and behaviour''
Neurons exchange information via discharges, propagated by membrane potential, which trigger firing of the many connected neurons. How to describe large networks of such neurons? What are the properties of these mean-field equations? How can such a network generate a spontaneous activity?

Such questions can be tackled using nonlinear integro-differential equations. These are now classically used in the neuroscience community to describe neuronal networks or neural assemblies. Among them, the best known is certainly Wilson-Cowan's equation, which describes spiking rates arising in different brain locations.

Another classical model is the integrate-and-fire equation that describes neurons through their voltage using a particular type of Fokker-Planck equations. Several mathematical results will be presented concerning existence, blow-up, and convergence to steady state, for the excitatory and inhibitory neurons, with or without refractory states. Conditions for the transition to spontaneous activity (periodic solutions) will be discussed.

One can also describe directly the spike time distribution, which seems to encode the neuronal information more directly. This leads to a structured population equation that describes at time $t$ the probability to find a neuron with time $s$ elapsed since its last discharge. Here, we can show that small or large connectivity leads to desynchronization. For intermediate regimes, sustained periodic activity occurs. A common mathematical tool is the use of the relative entropy method.

This talk is based on works with K. Pakdaman and D. Salort, M. Caceres, J. A. Carrillo and D. Smets.
=== February 10: Alina Chertock (NC State Univ.) ===
''Numerical Method for Chemotaxis and Related Models''
Chemotaxis is a movement of micro-organisms or cells towards the areas of high concentration of a certain chemical, which attracts the cells and may be either produced or consumed by them. In its simplest form, the chemotaxis model is described by a system of nonlinear PDEs: a convection-diffusion equation for the cell density coupled with a reaction-diffusion equation for the chemoattractant concentration. It is well-known that solutions of such systems may develop spiky structures or even blow up in finite time provided the total number of cells exceeds a certain threshold. This makes the development of numerical methods for chemotaxis systems an extremely delicate and challenging task.

In this talk, I will present a family of high-order numerical methods for the Keller-Segel chemotaxis system and several related models. Applications of the proposed methods to multi-scale and coupled chemotaxis–fluid systems will also be discussed.
=== Friday, February 17: Gustavo Ponce(UCSB) ===
''The Korteweg-de Vries equation vs. the Benjamin-Ono equation''
In this talk we shall study the <math>k</math>-generalized Korteweg-de Vries <math>(k</math>-KdV) equation
<math>\partial_t u + \partial_x^3u +u^k\,\partial_xu=0,\;\;\;\;\;\;\;x,t\in\Bbb R,\, k\in \Bbb Z^+, </math>
and the <math>k</math>-generalized Benjamin-Ono (<math>k</math>-BO) equation
<math>\partial_t u-\partial_x^2\mathcal {H} u+u^k\,\partial_x u=0, \;\;\;\;\;\;\;x,t\in\Bbb R,\, k\in \Bbb Z^+,</math>
where <math>\mathcal {H}</math> denotes the Hilbert transform,
<math>\mathcal {H} f(x)=\frac{1}{\pi}\, {p.v.}\big(\frac{1}{x}\ast f\big)(x)=(-i\,sgn(\xi) \widehat{f}(\xi))^{\vee}(x).</math>
The goal is to review and analyze results concerning solutions of the initial value problems associated with these equations. These include a comparison of the local and global well-posedness and unique continuation properties, as well as special features of the special solutions of these models.
=== Monday, February 20, Amy Cochran (Michigan) ===
''Mathematical Classification of Bipolar Disorder''
Bipolar disorder is a chronic disease of mood instability. Longitudinal patterns of mood are central to any patient description, but are condensed into simple attributes and categories. Although these provide a common language for clinicians, they are not supported by empirical evidence. In this talk, I present patient-specific models of mood in bipolar disorder that incorporate existing longitudinal data. In the first part, I will describe mood as a Bayesian nonparametric hierarchical model that includes latent classes and patient-specific mood dynamics given by discrete-time Markov chains. These models are fit to weekly mood data, revealing three patient classes that differ significantly in attempted suicide rates, disability, and symptom chronicity. In the second part of the talk, I discuss how combined statistical inferences from a population do not support widely held assumptions (e.g. mood is one-dimensional, rhythmic, and/or multistable). I then present a stochastic differential equation model that does not make any of these assumptions. I show that this model accurately describes the data and that it can be personalized to an individual. Taken together, this work moves forward data-driven modeling approaches that can guide future research into precise clinical care and disease causes.
=== Friday, March 3, Ken Bromberg (Utah)===
"Renormalized volume for hyperbolic 3-manifolds"
Motivated by ideas in physics, Krasnov and Schlenker defined the renormalized volume of a hyperbolic 3-manifold. This is a way of assigning a finite volume to a hyperbolic 3-manifold that has infinite volume in the usual sense. We will begin with some basic background on hyperbolic geometry and hyperbolic 3-manifolds before defining renormalized volume, with the aim of explaining why this is a natural quantity to study from a mathematician's perspective. At the end we will discuss some joint results with M. Bridgeman and J. Brock.
=== Tuesday, March 7: Roger Temam (Indiana University) ===
''On the mathematical modeling of the humid atmosphere''
The humid atmosphere is a multi-phase system, made of air, water vapor, cloud-condensate, and rain water (and possibly ice / snow, aerosols and other components). The possible changes of phase due to evaporation and condensation make the equations nonlinear, non-continuous (and non-monotone) in the framework of nonlinear partial differential equations.
We will discuss some modeling aspects, and some issues of existence, uniqueness and regularity for the solutions of the considered problems, making use of convex analysis, variational inequalities, and quasi-variational inequalities.
=== Wednesday, March 8: Roger Temam (Indiana University) ===
''Weak solutions of the Shigesada-Kawasaki-Teramoto system''
We will present a result of existence of weak solutions to the Shigesada-Kawasaki-Teramoto system, in all dimensions. The method is based on new a priori estimates, the construction of approximate solutions and passage to the limit. The proof of existence is completely self-contained and does not rely on any earlier result.
Based on an article with Du Pham, to appear in Nonlinear Analysis.
=== Wednesday, March 15: Enrique Zuazua (Universidad Autónoma de Madrid) ===
''Control and numerics: Recent progress and challenges''
In most real life applications, Mathematics not only faces the challenges of modelling (typically by means of ODE and/or PDE), analysis, and computer simulations, but also the need for control and design.
And the successful development of the needed computational tools for control and design cannot be achieved by simply superposing the state of the art on Mathematical and Numerical Analysis. Rather, it requires specific tools, adapted to the very features of the problems under consideration, since stable numerical methods for the forward resolution of a given model, do not necessarily lead to stable solvers of control and design problems.
In this lecture we will summarize some of the recent work developed in our group, motivated by different applications, that have led to different analytical and numerical methodologies to circumvent these difficulties.
The examples we shall consider are motivated by problems of different nature and lead to various new mathematical developments. We shall mainly focus on the following three topics:
- Inverse design for hyperbolic conservation laws,
- The turnpike property: control in long time intervals,
- Collective behavior: guidance by repulsion.
We shall also briefly discuss the convenience of using greedy algorithms when facing parameter-dependence problems.
This lecture has been conceived for a broad audience. Accordingly, unnecessary technicalities will be avoided.
=== Wednesday, March 29 at 3:30PM (Wasow): Sylvia Serfaty (NYU)===
''Microscopic description of Coulomb-type systems''
We are interested in systems of points with Coulomb, logarithmic or more generally Riesz interactions (i.e. inverse powers of the distance). They arise in various settings: an instance is the classical Coulomb gas which in some cases happens to be a random matrix ensemble, another is vortices in the Ginzburg-Landau model of superconductivity, where one observes in certain regimes the emergence of densely packed point vortices forming perfect triangular lattice patterns named Abrikosov lattices, a third is the study of Fekete points which arise in approximation theory. After reviewing the motivations, we will take a point of view based on the detailed expansion of the interaction energy to describe the microscopic behavior of the systems. In particular a Central Limit Theorem for fluctuations and a Large Deviations Principle for the microscopic point processes are given. This allows to observe the effect of the temperature as it gets very large or very small, and to connect with crystallization questions. The main results are joint with Thomas Leblé and also based on previous works with Etienne Sandier, Nicolas Rougerie and Mircea Petrache.
== Past Colloquia ==

[[Colloquia/Blank|Blank]]

[[Colloquia/Fall2018|Fall 2018]]

[[Colloquia/Spring2018|Spring 2018]]

[[Archived Fall 2016 Colloquia|Fall 2016]]
Retrieved from "https://www.math.wisc.edu/wiki/index.php?title=Colloquia/Fall18&oldid=16709" | CommonCrawl |
A composite method for improving the resolution of passive radar target recognition based on WiFi signals
Xiaokun Zheng (ORCID: orcid.org/0000-0002-1629-4344), Ting Jiang & Wenling Xue
To improve the resolution of passive radar target recognition based on WiFi signals, a composite preamble scheme is proposed. In addition, a composite approach that combines cancelation and decomposition in an interior scenario is studied. With this method, short sensing signals, such as chirps requiring no extra bandwidth, are overlaid onto a long WiFi preamble (802.11a) in the time domain. After detection and coarse synchronization using the short preamble at the Rx side, most of the chirp signal is removed prior to fine channel estimation (CE). Although the slight influence on CE may cause a fractional increase in the relative constellation error, the method achieves higher communication performance than normal multiplex integration. Finite-difference time-domain (FDTD) simulations show that, when the same time-domain features are extracted, the composite signal provides a better recognition effect than the WiFi preamble alone. The method is useful for health-care applications, such as standing/lying detection for the elderly in toilets/bathrooms, and for device-free people counting.
The Internet of Things (IoT) is being propelled by the convergence of sensing, communication, computing, and control. Integrated smart sensing and communication is a hot issue for communication terminals. It is important in the fields of health care, the military, transportation [1], industrial monitoring, border patrol, etc. The key to signal integration is to design a waveform that simultaneously provides the functionalities of communication and sensing.
Traditional radar technology, such as bistatic radar [2], was initially used for smart sensing. Later, in [3, 4], WiFi signals were used for human fall detection and gesture recognition; [3] reported 87% or 90% fall detection accuracy when the magnitude of the CSI or the phase difference across multiple antennas was used, and [5, 6] discussed multistatic passive radar using WiMAX and RFID; however, their sensing resolutions were generally rather limited. Yu Guo, Usman Mahmood, Zhong, et al. [7–9] used deep learning/SVM classification based on privileged information or fourth-order cumulants to improve recognition correctness. On the other hand, to achieve higher resolution, certain signals suitable for sensing have been used to recognize targets and transmit data simultaneously. For example, LFM-MSK or FMCW waveforms [10] based on chirp signals are used for integrated systems with relatively low data rates; in [11], pulse-waveform OFDM was used for high-resolution detection, although its synchronization is costly.
Various multiplexing-based methods, such as time [1], frequency [12], code [13], and spatial-division multiplex integration [14], have also been proposed, whereby both the data rate and the sensing resolution can achieve relatively high performance. Moreover, cancelation technology is used to reduce interference between the sensing carrier and the transfer sub-carrier in MIMO-OFDM integrated systems [15]. Mark Roberton proposed a chirp spread-spectrum signal [16], with the help of de-spreading to reduce mutual interference; however, a long pseudo-random code is then generally needed, and the data rate is affected. In [17], CP-based OFDM high-resolution imaging, which requires a sufficient cyclic prefix, was proposed. Multiplex mode generally requires more resources.
A composite integration is proposed in this paper. Our main contributions are as follows. (1) A composite preamble scheme is proposed to improve the target recognition resolution of passive radar using WiFi. Thereby, a higher communication performance than normal multiplex integration can be obtained. The composition can be applied at the Tx side only. It can also be applied at both the Tx and Rx sides, which means that the preamble at the Tx and Rx sides can be replaced by the same composite preamble simultaneously. (2) The synthesis can be performed in the time domain, or in the frequency domain, which possesses good autocorrelation, anti-fading performance, and a sharper ambiguity function. Thus, the channel estimation of the long preamble is retained, and its recognition resolution is improved. (3) In this paper, a time-domain composite at the Tx side combined with cancelation and decomposition is studied. We verified that the method can improve the resolution of object recognition for interior scenes. The scene can be empty; it can also include some fixed arrangement. We are adding more pertinent recognition tests for application-specific scenarios; next, we will also focus on applying a suitable composite preamble (such as the frequency-domain composite) at both the Tx and Rx sides, in which case the effects of deletion and residue would no longer need to be considered.
In this paper, we emphasize the time domain. The waveform of the data section varies with the transmitted data, and it is rather complex for use in recognition tasks [4]. Thus, we use the preamble, and a short sensing signal, such as a chirp with no extra bandwidth, is overlaid onto the long preamble of 802.11a WiFi [18]. After detection and coarse sync at the Rx side, the pre-processed or pre-faded chirp signal is removed according to k·Tstep, based on the coarse sync and using a parallel correlator [19, 20]. Here, Tstep is the delay increment step, k=0,1,2…. Then, the residual jamming signal is decomposed when the long preamble is correlated with local sequences at many corresponding delays (Fig. 2b). OFDM can be used to transfer data at high speed in the data section, and the sensing part is easy to extract. A small constant-amplitude overlapping chirp at the transmitter has a minimal effect on the peak-to-average-power ratio (PAPR). The power of the overlapping signal can be easily adjusted according to the sensing precision and CE precision, and the system change is smaller than for time multiplex integration [1]. The FDTD simulation shows that the composite signal provides better recognition performance than simply using the WiFi preamble. Although there is a slight influence on frequency offset (FO) estimation and frequency-domain equalization (FDE) [21], which may cause a fractional increase in the relative constellation error (RCE) [18], the method still has a relatively high data transmission capacity. The method is useful for device-free people counting [22] at an entrance, such as making a distinction between adults and children, and for fall detection in toilets/bathrooms, etc.; for example, in the simulation in this paper, human standing/lying recognition accuracy was nearly 100% when the preamble signal was used.
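The removal step can be sketched as follows. This is a minimal illustration, assuming the Rx already has coarse timing and a local replica of the overlaid chirp; the correlator-bank layout, the delay-step handling, and the least-squares amplitude estimate are illustrative choices rather than the exact procedure of [19, 20].

```python
import numpy as np

def cancel_chirp(rx, s_local, fs, T_step, n_delays):
    """Remove the overlaid chirp from coarse-synced Rx samples.

    A bank of correlators tests delays k*T_step (k = 0, 1, ...);
    the strongest branch gives the chirp position, and a
    least-squares complex amplitude is subtracted there.
    """
    step = max(1, int(round(T_step * fs)))   # delay step in samples
    n = len(s_local)
    best_k, best_val = 0, -1.0
    for k in range(n_delays):                # parallel correlator bank
        seg = rx[k * step : k * step + n]
        if len(seg) < n:
            break
        val = abs(np.vdot(s_local, seg))     # correlation magnitude
        if val > best_val:
            best_k, best_val = k, val
    seg = rx[best_k * step : best_k * step + n]
    a = np.vdot(s_local, seg) / np.vdot(s_local, s_local)  # LS amplitude
    cleaned = rx.copy()
    cleaned[best_k * step : best_k * step + n] -= a * s_local
    return cleaned, best_k
```

The residual left after this subtraction is what the long-preamble correlations at the corresponding delays must then absorb, which is why the decomposition step follows.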
The remainder of this paper is organized as follows. The scheme of the signal composite integration is introduced in Section 2. The principle and scheme of the separation method are introduced in Section 3. Section 4 presents the FDTD-based simulation results and discussion. The conclusion is provided in Section 5.
Methods of the composition
To improve the recognition resolution of WiFi signals and reduce the impact on communication, a short sensing signal is overlaid onto the WiFi long preamble, with neither extra bandwidth nor extra time slots required, as shown in Fig. 1. The data section varies with the transmitted data; it is too complex to be used for recognition [5]. Therefore, we use the preamble. The duration or power of the overlapped signal can be adjusted.
Signal composition integration. a Overlaying the sensing signal onto WiFi. b Ambiguity function with different fslope. c Autocorrelation of composition
The composite preamble signal is expressed as Eqs. (1) and (2), where La(t) is the time-domain form of the 802.11a long preamble, L is the frequency-domain sequence, L = {00000,1,1,−1…1,1,000000}, and \(\text {La(t)} = \sum \limits _{k} L_{k}{\cdot }\exp(j2{\pi }k\Delta _{F}t)= \) IFFT(L) [18]. s(t) is the short sensing signal; TL and Tchrp are the durations of La(t) and s(t), respectively.
$$ P_{r}(t) = \left\{ \begin{array}{ll} L_{a}(t) & 0\le t\le T_{L}-T_{\text{chrp}}\\ L_{a}(t) + s(t) & T_{L}-T_{\text{chrp}} < t\le T_{L}\\ 0 & \text{else}\\ \end{array}\right. $$
$$ P_{rr} (t) = IFFT [L +k_{s} \cdot abs(FFT (s(t)))] $$
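To make Eqs. (1) and (2) concrete, a minimal numerical sketch of the time-domain composition follows. It is not the system's actual code: the ±1 subcarrier pattern, the sample spacing, and the chirp power scaling (Pchrp relative to PL is omitted) are placeholders; only the structure of Eq. (1) is reproduced.

```python
import numpy as np

T_L = 6.4e-6        # long-preamble duration, per Table 2
T_CHRP = 0.5e-6     # chirp duration, per Table 2

def long_preamble(n_fft=64, periods=2):
    # Placeholder frequency-domain sequence L: +/-1 on the 52 used
    # subcarriers, 0 elsewhere (the exact 802.11a sign pattern is omitted).
    L = np.zeros(n_fft, dtype=complex)
    L[np.r_[1:27, n_fft - 26:n_fft]] = 1.0
    la = np.tile(np.fft.ifft(L), periods)    # La(t) = IFFT(L), two periods
    t = np.arange(la.size) / 20e6            # native 20 MHz sample spacing
    return t, la

def chirp(t, f0, f_slope):
    # s(t) = U(t) exp(-j 2*pi*(f0*t + 0.5*f_slope*t^2)), constant envelope
    return np.exp(-2j * np.pi * (f0 * t + 0.5 * f_slope * t * t))

t, la = long_preamble()
s = np.zeros_like(la)
tail = t > (T_L - T_CHRP)                    # overlay only the last T_chrp
s[tail] = chirp(t[tail] - (T_L - T_CHRP), f0=0.0, f_slope=4e13)  # 40 MHz/us
p_r = la + s                                 # composite preamble Pr(t)
```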
Figure 1c shows that both compositions have good autocorrelation. However, because the overlaid signal is an envelope signal, the anti-fading performance of composition (1) is slightly worse when CE is performed, so (1) is not quite suitable for directly replacing the original preamble at both the Tx and Rx sides. Therefore, in this paper we emphasize the time-domain composition at the Tx side, where the sensing signal s(t) is removed and decomposed at the Rx side. Composition in the f-domain (2) will be researched next.
Here, we select a chirp as the sensing signal, and we use the composite fragment La(t)+s(t) for recognition. s(t) can be expressed as \(s(t) = U(t){e^{- j2\pi t\left (f_{0} + {\textstyle {1 \over 2}}{f_{\text {slope}}} \cdot t\right)}}\), where fslope is the slope and U(t) is a roughly constant-envelope pulse in this paper. If the power of the chirp Pchrp is less than 1.2 times the long-preamble power PL, the recognition correctness will not increase noticeably (Fig. 4a); therefore, Pchrp should be > 1.2PL. On the other hand, if Pchrp is too large, it will seriously affect the fine FO estimation and RCE. For example, when Pchrp>2PL, the FO estimation correctness greatly decreases and FDE cannot be completed (see Figs. 4b and 7a). Moreover, when Pchrp=2PL, a satisfactory increase in the recognition correctness can be obtained (Fig. 9). Thus, 1.2PL<Pchrp≤2PL is recommended.
With the same power Pchrp, a longer Tchrp results in a more serious influence on FO estimation and FDE. In the case of Pchrp=1.2PL, when Tchrp = 1 us, the correctness will greatly decrease as well (Fig. 4b). Thus, since Pchrp cannot be made smaller, Tchrp ≤ 1 us is required. On the other hand, if Tchrp is too short, an insufficient number of sample points will be obtained, and the recognition task will not be performed accurately. As described by the USRP (Universal Software Radio Peripheral) data sheet [23], if we set the front-end sample rate to fs = 200 MHz and Tchrp < 0.25 us, fewer than 50 sample points can be used for recognition, which cannot achieve complete complex recognition. In addition, considering that general radar signal processing boards support higher fs, we emphasize fs = 200 and 400 MHz and Tchrp = 0.5 us in the FDTD simulation in this paper.
With the same duration and bandwidth, the fslope of a chirp signal modulated with single-sideband (SSB) technology can be greater than with DSB. 802.11a uses a bandwidth of 20 MHz [18]; when the chirp signal is modulated with DSB, its fslope can be at most approximately 20 MHz/us (here, Tchrp = 0.5 us), so that it occupies a bandwidth of less than 20 MHz. On the other hand, when modulated with SSB, its frequency change rate fslope can be up to approximately 40 MHz/us. Equation (3) is the ambiguity function of the chirp signal [24]. We can see that a larger fslope results in a more pronounced spike and higher resolution. Figure 1b shows the ambiguity function for different fslope without extra bandwidth.
$$ \left| x(\tau,{f_{d}}) \right|^{2} = \left\{ \begin{array}{ll} \left| (T_{\text{chrp}} - \left| \tau \right|) \cdot \mathrm{sinc}\left((f_{d} - f_{\text{slope}}\tau)(T_{\text{chrp}} - \left| \tau \right|)\right) \right|^{2} & \left| \tau \right| < T_{\text{chrp}} \\ 0 & \text{else} \\ \end{array}\right. $$
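For readers who want to reproduce the qualitative behaviour of Fig. 1b, a small sketch of the zero-Doppler cut of Eq. (3) follows; the sinc convention (NumPy's normalized sinc) and the parameter values are assumptions of this sketch, not values from the reference implementation.

```python
import numpy as np

def ambiguity_sq(tau, f_d, t_chrp, f_slope):
    # Eq. (3); np.sinc is the normalized sinc sin(pi x)/(pi x), assumed
    # here to match the convention of the ambiguity function in [24].
    dur = np.clip(t_chrp - np.abs(np.asarray(tau, float)), 0.0, None)
    return (dur * np.sinc((f_d - f_slope * tau) * dur)) ** 2

tau = np.linspace(-0.5e-6, 0.5e-6, 1001)
cut_dsb = ambiguity_sq(tau, 0.0, 0.5e-6, 2e13)  # ~20 MHz/us (DSB)
cut_ssb = ambiguity_sq(tau, 0.0, 0.5e-6, 4e13)  # ~40 MHz/us (SSB): sharper
```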
Methods of cancelation and decomposition
The composite signal can be used for recognition. In addition, when channel estimation (CE) is performed, the chirp signal should be removed. Figure 2 shows the scheme for extracting the sensing part and for removing it when CE is performed. After detection and coarse sync by the short preamble at the Rx side, the pre-processed chirp is removed according to k·Tstep based on both the coarse sync (k=0,1,2,…) and the sliding parallel correlator [20, 21, 25]. Then, the residual jamming is decomposed when the long preamble is correlated with the local sequence and the FFT is taken for FDE.
Separate sensing and CE based on 802.11a. a Separate FO estimate, FDE and sensing. b Removal based on sliding parallel correlator
Removing overlapped signal
To guarantee the channel estimation performance of the synthesized preamble, before the received signal is correlated with multiple local sequences at different delays k·Tstep, the corresponding-phase chirp is removed according to k·Tstep based on the coarse sync (k=0,1,2,…). Here, we use a sliding parallel correlator to fine-tune the sync. Based on the coarse sync, the parallel correlators do not require too many resources. In addition to removing the chirp signal as sent from the Tx side, one can consider removing a pre-faded or pre-processed chirp signal. Here, the pre-processed signal is defined as
$$ S_{\text{pre}}(f) = H(f) \cdot S(f)+N_{i}(f) $$
where S(f) is the chirp signal on the Tx side, and H(f) is the transfer function of the interior scene without the sensing object and without people (the scene can be empty or include some fixed arrangement). Spre(f) can be recorded in advance for a given scenario, where the locations of the transmitter and receiver are fixed and there are neither sensing objects nor people. The interior scene can be viewed as a waveguide; we use the FDTD method [26, 27] to achieve accurate computation.
The insertion of the sensing object will disturb the electric field distribution. Let H′(f) denote the new transfer function, and let the received signal be Sr(f). Then,
$$ S_{r}(f)= H^{\prime}(f) \cdot S(f)+N_{i}(f) $$
After removing the transmitted S(f) or the pre-processed Spre(f), the remainders are Srest(f) or Sp_rest(f), respectively:
$$\begin{array}{*{20}l} &S_{\text{rest}}(f) =[H(f)-1] \cdot S(f)+N_{i}(f), \quad \text{and}\\ &S_{\text{p\_rest}}(f) =[H(f)-H^{\prime}(f)]\cdot S(f)+[N_{i}(f)-N(f)] \end{array} $$
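A schematic NumPy sketch of this removal step (Eqs. (4)–(6)) is given below; the function and array names are this sketch's own, and the decision rule simply mirrors the comparison described in the next paragraph.

```python
import numpy as np

def residual(s_r, reference):
    # Eqs. (5)-(6): subtract either the transmitted spectrum S(f) or the
    # pre-recorded S_pre(f) from the received spectrum S_r(f).
    return s_r - reference

def choose_removal(s_r, s_tx, s_pre):
    # Compare the mean residual energies and keep the smaller one.
    r_tx, r_pre = residual(s_r, s_tx), residual(s_r, s_pre)
    if np.mean(np.abs(r_pre) ** 2) < np.mean(np.abs(r_tx) ** 2):
        return "pre-processed", r_pre
    return "transmitted", r_tx
```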
In real applications, Srest(f), Sp_rest(f) or their mean values can be measured and then compared to decide which signal should be removed. In this paper, the sensing objects are mainly hollow balls with diameters of 21 cm and 22 cm, for which the disturbance is small, so we select the pre-processed signal for removal in our simulation test.
Figure 3 shows the Spre(f) and Sr(f) signals recorded by the recorder of the FDTD simulation; here, after the FDTD calculation, we introduce noise with an SNR of 6.5 dB. In addition, when we perform the FO estimation, FDE and recognition tests, SNRs in [−4, 16] dB are considered in Figs. 4, 9, and 10. The dotted line represents Spre(f), and the solid line represents Sr(f) with the object. We can see the difference caused by the insertion of the sensing object. Table 1 shows the removal residue in our study.
Difference in received signal caused by sensed object
Frequency offset estimation result. a Canceling and decomposing vs. only de-spreading. b Powers of chirps overlapped are different
Table 1 Residue vs. sync precision
There are certain requirements on the front-end sample rate at the Rx side. Considering the 20 MHz bandwidth limit of 802.11a WiFi, the main parameters of the chirp overlapping and removal in our study are listed in Table 1.
Residual decomposition and marginal impact on CE
There is a residual after removal due to the sync precision and nonlinear variations. The residual is decomposed when the long preamble is correlated for the time sync and fine FO estimation; it is also decomposed when the FFT is performed for frequency-domain equalization. Here, we focus on fine FO estimation for illustration. We use a step-search method, where fstep is the step size of the search. Pr(t) is the composite preamble at the transmitter, as shown in Eq. (1). \(P_{r}^{\prime \prime }(t)\) is the received composite preamble at the receiver after coarse offset correction, and its discrete form is \(P_{r}^{\prime \prime }(n) = P_{r}({nT}_{s})e^{j2\pi f_{\Delta }{nT}_{s}}\). Here, fΔ denotes the residual fractional offset, fΔ∈[fmin,fmax].
Let \(L_{ak}(n) = L_{a}(n)e^{j2\pi \left (f_{\min } + k \cdot f_{\text{step}}\right){nT}_{s}}\), k=0,1,2,…, denote the local sequences with different FOs. Then,
$$\begin{array}{*{20}l} z_{k} & = \sum\limits_{n=1}^{l}P_{r}^{\prime\prime}(n) \cdot L_{ak}^{*}(n)\\ & = \sum\limits_{n=1}^{l-i}L_{a}(n)e^{j2\pi f_{\Delta}{nT}_{s}} \cdot L_{ak}^{*}(n) \\&\quad+ \sum\limits_{n=l-i+1}^{l}\left(L_{a}(n)+ s(n-l+i)\right)e^{j2\pi f_{\Delta}{nT}_{s}} \cdot L_{ak}^{*}(n) \\ & = \sum\limits_{n = 1}^{l} \left| L_{a}(n) \right|^{2}e^{j2\pi(f_{\Delta}-f_{\min}-{kf}_{\text{step}}){nT}_{s}}+\delta \end{array} $$
Equation (7) shows the correlation of the received composite preamble with the different local FO sequences. l is the number of sample points of the entire received long preamble (Pr′′(t)), and i is the number of samples of the composite fragment \(\phantom {\dot {i}\!}[L_{a}(t) + s(t)]e^{j2\pi f_{\Delta }{nT}_{s}}\). In this paper, i=200 and l=2560 when fs=400 MHz, Tchrp=0.5 us, TL=6.4 us; see Table 2 in Section 4. When (fmin+k·fstep)=fΔ, zk attains its maximum value \(\sum \limits _{n = 1}^{l} \left | L_{a}(n) \right |^{2}\), and fΔ can be estimated as \( f_{\Delta } = f_{\min} + \big(\underset {k}{\arg\max} \left | z_{k} \right |\big) \cdot f_{\text {step}}\).
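A compact sketch of this step search is shown below; the search bounds and the function name are this sketch's own, while fstep = 6.4 kHz follows Section 4.

```python
import numpy as np

def fine_fo_estimate(p_rx, la_local, fs, f_min, f_max, f_step=6.4e3):
    # Step search of Eq. (7): correlate the received preamble with local
    # sequences L_ak(n) = La(n) exp(j 2 pi (f_min + k f_step) n Ts) and
    # return the offset whose |z_k| is largest.
    n = np.arange(p_rx.size)
    ts = 1.0 / fs
    offsets = np.arange(f_min, f_max + f_step, f_step)
    z = [np.abs(np.sum(p_rx * np.conj(la_local)
                       * np.exp(-2j * np.pi * f * n * ts)))
         for f in offsets]
    return offsets[int(np.argmax(z))]

# e.g., fine_fo_estimate(p_rx, la, fs=400e6, f_min=-100e3, f_max=100e3)
```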
Table 2 Integrated signal parameters
δ in Eq. (7) determines the influence of the overlap:
$$\begin{aligned} \delta &= \sum\limits_{n=l-i+1}^{l}s(n-l+i) \cdot L_{a}^{*}(n) \le \sum\limits_{n=l-i+1}^{l} \left|s(n-l+i)\cdot L_{a}^{*}(n)\right| \end{aligned} $$
When |La(n)| is a roughly constant envelope, the induced relative error Δδ can be estimated as
$$\begin{array}{*{20}l} \Delta\delta & = \frac{\delta }{\sum\limits_{n=1}^{l}\left|L_{a}(n)\right|^{2}} \; \le \; \frac{\sum\limits_{n=l-i+1}^{l} \left|s[(n-l+i)T_{s}]\cdot L_{a}(n)^{*}\right|} {\sum\limits_{n=1}^{l}\left|L_{a}(n)\right|^{2}} \\& \le \;\frac{\sum\limits_{n=l-i+1}^{l}\left|s[(n-l+i)]\right|} {\sum\limits_{n=1}^{l}\left|L_{a}(n)\right|} \approx \frac{i\cdot\frac{2}{\pi}\sqrt{2\bar{p}_{\text{chrp}}}} {{l\cdot\sqrt{\overline{P}_{L}}}} \\ & \le \frac{i}{l}\cdot\sqrt{\frac{\bar{p}_{\text{chrp}}} {\overline{P}_{L}}} = \frac{T_{\text{chrp}}}{T_{L}}\cdot \sqrt{\frac{\bar{p}_{\text{chrp}}}{\overline{P}_{L}}} \end{array} $$
After removal, Δδ can be estimated as
$$ \Delta\delta \le \;\; \frac{T_{\text{chrp}}}{T_{L}} \cdot \sqrt{\frac{\bar{p}_{\text{chrp}}\cdot w}{\overline{P}_{L}}} $$
where \(\bar {p}_{L}\) and \(\bar {p}_{\text {chrp}}\) are the average powers of the preamble and chirp, \(\frac {2}{\pi }\sqrt {2\bar {p}_{\text {chrp}}}\) in Eq. (8) is the mean value of the constant envelope |Chirp|, and w is the residue rate.
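As a quick worked instance of the two bounds above, using the Table 2 values \(T_{\text{chrp}}=0.5\) us, \(T_L=6.4\) us, and \(\bar{p}_{\text{chrp}}=2\overline{P}_L\) (the residue rate \(w=0.1\) below is an illustrative assumption, not a measured value):
$$ \Delta\delta \le \frac{0.5}{6.4}\sqrt{2} \approx 0.110, \qquad \Delta\delta \le \frac{0.5}{6.4}\sqrt{2 \times 0.1} \approx 0.035. $$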
In this paper, we focus on the influence of the removal and the residual on fine FO estimation and equalization; the influence on the time sync is relatively small. We also recommend using the composite signal directly as the long preamble for the fine time sync, FDE, etc., thereby replacing the original preamble at both the Tx side (transmitter) and Rx side (receiver). This is one of the next steps in our research.
Extracting sensing part for recognition
The composite sensing signal is extracted after the time sync, and the composite signal fragment is used directly for recognition:
$$ L_{a}(t)+s(t)\quad T_{L}-T_{\text{chrp}}<t \le T_{L} $$
Combined with a current recognition algorithm based on UWB or WiFi [9, 28], it can be used to sense the shape and size of an object.
We first simulate the performance of FO estimation and FDE, and then, according to the results, we calculate the effect on the RCE. Next, according to Table 90 "Allowed RCE versus data rate" in 802.11a, we obtain the allowed rate under different RCE values and evaluate the transmission. Finally, we test the target recognition using the preamble signal.
Frequency offset estimate
Fine FO estimation and FDE are the main functions of the long preamble. The impact of the composition on FO estimation increases the rotation error of the constellation. So in this section, we simulate FO estimation using the composite preamble, and Fig. 4 is drawn. Then, according to the result, we calculate the effect on the RCE. To consider the influence on fine FO estimation, we use MATLAB 2014a; in the next two sections, we use the eastwave FDTD electromagnetic platform.
802.11a [18] is the reference for the simulation parameters. The long preamble La is the IFFT of L={000000, L−26,26, 00000}, where L−26,26={1,1,−1,…,−1,1,1,1,1}, and the long preamble includes two L periods, 2TFFT. On the Rx side, it is correlated with the local sequences for fine FO estimation based on a step-search method, the coarse offset having been corrected based on the short preamble. We set the front-end sample rate fs=400 MHz, Tstep=2Ts=5 ns, and the power of the overlapped chirp Pchrp to 1.2 to 2 times the long-preamble power PL.
Because the increase in the error rate is not noticeable when the FO error εf<10 kHz [29], we set fstep=6.4 kHz. In this way, if the FO estimate falls in the correct interval, εf will be at most 3.2 kHz, which meets the requirement of εf<10 kHz. Even if the estimate falls in an adjacent interval, εf<6.4 kHz only slightly increases the rotation error of the constellation and will not affect the follow-up equalization estimation (see Fig. 7 and the relevant explanation in the first paragraph of Section 4.2.3). See Table 2 for the other parameters.
Figure 4 shows the result of FO estimation using the composite preamble vs. the original preamble. Figure 4a shows that the FO estimation accuracy is decreased by approximately 3–4% under high-signal-to-noise-ratio (SNR) conditions after overlap and removal, and the accuracy is greater than when only de-spreading is performed [16]. Because Δf·t=Δϕ, an FO estimation error Δf increases the rotation error of the constellation (Δϕ). When Δϕ is very small, Δr/r ≈ sinΔϕ ≈ Δϕ (see the small graph inserted in Fig. 4, where α≈90°); here r is the magnitude of a constellation point and Δr is the magnitude of the constellation error. Hence, the 3–4% Δf error can be considered to produce a corresponding increase in RCE. Since the small increase in εf does not affect the completion of the following FDE, we can simply add the RCE increase derived in this section to the increase derived in the next section to obtain the total RCE increase, without further consideration of mutual influence.
Figure 4b shows the fΔ accuracy curves for chirps of varying powers, where the power is easy to adjust according to the estimation and recognition precision. The curves flatten at high SNR, exhibiting a flooring effect [30]. This shows that, to further improve estimation and recognition concurrently, we must improve the removal effect.
Frequency-domain equalization
FDTD scenario and calculation settings
The FDTD simulation provides greater accuracy than a general multi-path model [31]. We use the eastwave FDTD simulation platform, China's first parallel full-vector electromagnetic simulation platform. The platform is based on a strict FDTD and physical optics model; its calculation speed is 10 to 100 times higher than that of similar tools worldwide. The FDTD simulation domain was set to 2.9×1.5×2 m based on our lab, where the distance between the RX and TX antennas is 1.6 m and the length of an antenna is 12 cm. The sensed object is placed in the middle. Figure 5 shows the FDTD scenario.
FDTD interior scenario settings. a recognition scenario; b 3D human body model
The wall of the lab is made of ground glass, and its reflectivity is always large; therefore, the perfectly matched layer (PML) of the absorption boundary is set to 6 [26]. See Table 3 for the other FDTD calculation parameters.
Table 3 FDTD calculation settings
FDTD signal settings
Referring to 802.11a and the overlapping scheme in Fig. 1, we set our FDTD excitation source as shown in Fig. 6. The data of the FDTD recorder are output to MATLAB 2014 for processing. Under 802.11a, \(L_{a}(t)=w_{\text {Tlong}}(t)\sum \limits _{k=-26}^{26} {L_{k}} \exp \left (j2\pi k\Delta _{F}(t-T_{G12})\right)\), where wTlong(t) is the time window and TG12 is the guard interval. The discrete form of the long preamble is La(k)=wTlong(k)·IDFT64(L), and
$$w_{\text{Tlong}}(k)=\left\{ \begin{array}{ll} 1 \quad 1 \le k \; \le 62\\ 0.5 \quad k=0,63 \end{array}\right. $$
FDTD excitation source settings
The RF signal is rRF(t)=Re(r(t))·cos(wct)−Im(r(t))·sin(wct) [18]; thus, the I channel is assigned Re[ifft(L)], and the Q channel is assigned Im[ifft(L)]. We then obtain the integrated signal shown in Fig. 6.
Equalization results
Figure 7a is the FFT of the received preamble under varying fractional frequency offset compensations (FFOC). The figure shows that the preamble can achieve FDE even when the fractional FO estimate fΔ has a slight error (when fΔ has a slight error, such as εf=6.4 kHz here, the FFOC changes as well and the curve is slightly different; however, FDE can still be achieved). [29] also indicates that a certain error in fΔ is allowed. In addition, from the perspective of the sum of squared deviations (SSD), the line is flattest when FFOC = −120 kHz, and thus −120 kHz may provide the best performance. SSD is defined as \(\sum \limits _{n=1}^{64}\left [d_{n}-E(d)\right ]^{2}\), where d denotes the curve and E(d) is its mean value.
Channel equalization. a FFT of preamble at Rx side after different FO compensations. b Integrated signal vs. only WiFi
Figure 7b shows the FFT of the WiFi preamble (solid line, sn) and the FFT of the composite preamble after removal on the Rx side (dotted line, dn), both after compensation at −120 kHz. mn (marked line) is the WiFi preamble at the transmitter. Thus, sn−mn is the equalization coefficient, and dn−sn is the equalization error due to the residual. We define the relative equalization error as:
$$ \sqrt{\Sigma\left[d_{n}-s_{n}\right]^{2} / \Sigma\left[s_{n}-m_{n}\right]^{2}}=\:\Delta J \qquad n=1,2,3\ldots,64 $$
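A one-line NumPy sketch of this error measure follows; the array names mirror the dn, sn, mn of Fig. 7b and are illustrative.

```python
import numpy as np

def relative_equalization_error(d, s, m):
    # Delta J over the 64 subcarrier points of Fig. 7b
    return np.sqrt(np.sum((d - s) ** 2) / np.sum((s - m) ** 2))
```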
By computation, ΔJ=9.5%, which slightly increases the RCE; taking into account the 3% increase in phase-rotation RCE caused by the frequency-offset error mentioned in Section 4.1, this leads to an increase in the RCE from −25 dB to −16 dB or from −19 dB to −13 dB. Accordingly, the data rate decreases from 54 Mbps to 24 Mbps or from 36 Mbps to 18 Mbps (Table 4). (If the sync precision improves to Tstep = 2.5 ns, the data rate decreases from 54 Mbps only to approximately 48 Mbps.)
Table 4 Equalization error leads to a fractional increase in RCE
Assessment of the communication
According to the test results and the analytical calculations in Sections 4.1 and 4.2.3, the FO estimation error increases the rotation error of the constellation, and the equalization compensation error causes diffusion of the constellation. Combining the two results gives the total RCE increase due to the composition. Then, according to Table 90 in 802.11a (Table 4 in this paper), we obtain the allowed rate under different RCE values; Table 5 is obtained and Fig. 8 is drawn.
Assessment of the composite method vs. time multiplex integration
Table 5 Rbmax Performance comparison
Table 5 shows the Rbmax comparison of some super-resolution design methods occupying the same 20 MHz of bandwidth, except for the time-division integration [1], which uses 100 MHz of bandwidth.
Figure 8 shows the assessment of the communication capability compared to time-division (TD) multiplex integration when both are applied to the same OFDM modulation based on 802.11a. Here, the communication duty ratio (C duty ratio) is the communication proportion under TD multiplex integration. For the TD integration mode, the transmission capacity increases with the C duty ratio, and the increase slows as the C duty ratio goes from 2/3 to 3/4, from 3/4 to 4/5, and so on. The data rate of the composition mode is not very different from that of the TD mode when the C duty ratio of the TD mode is more than 3/4. In addition, the composition mode has a certain advantage when the RCE is less than −22 dB, in which case we say that it has a high transmission capacity.
High-resolution recognition
We use the output of the FDTD simulation in Section 4.2 to compare the recognition effect of using only WiFi with that of using the composite signal. Two groups of experiments were performed. One group attempts to distinguish between a basketball (Φ25 cm) and a volleyball (Φ21 cm); the other group attempts to distinguish between a football (Φ22 cm) and a volleyball. Besides identifying balls with different diameters, we also added a human standing/lying recognition simulation. The height of the 3D human body model is set to 1.55 m (see Fig. 5b), and the simulation result is drawn in Fig. 9b.
Based on the normalization of the long preamble, time-domain features of the composite signal fragment were extracted, including the energy, excess delay, rms delay, maximum value, standard deviation, and peak value. The MATLAB 2014 SVM classifier (the svmtrain and svmpredict functions) was used in both experiments, with a radial basis kernel and the parameters c and g set to 1 and 10, respectively. A total of 600 training samples and 1400 test samples were selected for each ball. The εr of the ball was set to 3, and the porosity was set to 0.97. The wall of the lab is made of ground glass, so the absorption-boundary PML is set to 6.
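For readers without MATLAB, a rough scikit-learn analogue of this classifier setup is sketched below; the feature extraction itself is scene-specific and omitted, and the function name is this sketch's own.

```python
from sklearn.svm import SVC

def train_and_score(x_train, y_train, x_test, y_test):
    # RBF kernel with c=1, g=10, mirroring the MATLAB svmtrain settings;
    # each feature row holds the six time-domain features listed above.
    clf = SVC(kernel="rbf", C=1.0, gamma=10.0)
    clf.fit(x_train, y_train)
    return clf.score(x_test, y_test)  # fraction recognized correctly
```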
Figure 9a shows the identification results for the basketball and volleyball; if test data of the basketball are input into the SVM model and the output is basketball, the result is recorded as correct. The solid line indicates the correct recognition rate of the basketball, and the dashed line indicates the correct recognition rate of the volleyball. Figure 10 shows the result for the football/volleyball, and Fig. 9b shows the result for human standing/lying.
Basketball/volleyball recognition and standing/lying recognition. a ball recognition; b standing/lying recognition
Football and volleyball recognition
Figure 9a shows that under the same SNR conditions, the composite signal achieves a higher correctness. The resolution enhancement is slight when a 1.2PL chirp is overlapped; the enhancement is more obvious when a 2PL chirp is overlapped. However, the impact on channel estimation must be strictly controlled, so there is a certain restriction on the chirp power and the enhancement has a certain limit. In Fig. 10, the recognition effect is enhanced more clearly as the difference between the objects becomes smaller; there, the recognition cannot be done using the original WiFi signal alone. Figure 10 also shows that under the same power and bandwidth, overlapping an SSB signal is more efficient. The method is useful for interior target recognition; for example, Fig. 9b shows nearly 100% standing/lying recognition accuracy in our simulation when the preamble signal was used.
In this paper, an overlapped composition method was studied to improve the recognition resolution of WiFi signals, together with a separation method based on cancelation and decomposition. The power of the overlapping signal can be easily adjusted according to the sensing resolution and CE precision. The FDTD simulation shows that the composite signal achieves better target recognition resolution than using the WiFi preamble alone. In addition, the method has a relatively high communication capacity.
For future work, we will consider whether a suitable composite sequence can replace the original preamble at both the Tx and Rx sides, with the sequence used directly for recognition and CE. For example, we may research f-domain composition or convolution next; there would then be no need to consider the effects of deletion and residues, so the chirp power could be increased much more.
CE:
Channel estimation
FDTD:
Finite-difference time-domain
FDE:
Frequency-domain equalization
FO:
Frequency offset
IOT:
Internet of things
FFOC:
Fractional frequency offset compensate
PAPR:
Peak-to-average-power ratio
PML:
Perfectly matched layer
RCE:
Relative constellation error
Rx:
Receiver side
SSB:
Single-side band
SSD:
Sum of squared deviations
SVM:
Support vector machine
SNR:
Signal-to-noise ratio
Tx:
Transmitter side
USRP:
Universal software radio peripheral
UWB:
Ultra wideband
H. Liang, W. Ke, 24-GHz integrated radio and radar system capable of time-agile wireless communication and sensing. IEEE Trans. Microw. Theory Tech. 60(3), 619–31 (2012).
A. Farina, Guest editorial: special issue on bistatic and MIMO radars and their applications in surveillance and remote sensing. IET Radar Sonar Navig. 8(2), 73–4 (2014).
C. C. Hongbo Jiang, Smart home based on WiFi sensing: a survey. Dig. Antennas Propag. Soc. Int. Symp. 6, 13317–25 (2018).
F. Adib, D. Katabi, See through walls with WiFi!. ACM SIGCOMM 43(4), 75–86 (2013).
Q. Pu, S. Gollakota, S. Gupta, in Proceedings of the 19th Annual International Conference on Mobile Computing & Networking. Whole-home gesture recognition using wireless signals (ACM MobiCom, 2013), pp. 27–38.
T. H. Tegan Webster, Passive multistatic radar experiment using WiMAX signals of opportunity. Part 2: multistatic velocity backprojection. IET Radar Sonar Navig. 10(2), 248–55 (2016).
H. X. Yu Guo, Learning using privileged information for HRRP-based radar target recognition. IET Sign. Process. 12(2), 188–97 (2018).
Z. K. Usman Mahmood, A deep learning framework using passive WiFi sensing for respiration monitoring (IEEE GLOBECOM, 2017).
Y. Zhong, T. Jiang, Z. Zhou, in IEEE ICC Workshop on Radar and Sonar Networks. A novel gesture recognition method by Wi-Fi communication signal based on fourth-order cumulants (IEEE ICCW, London, 2015), pp. 10567–10571.
L. Zhipeng, Waveform research on integration of radar and communication. PhD thesis (Beijing Institute of Technology, China, 2015).
L. Yongjun, A super-resolution design method for integration of OFDM radar and communication. J. Electron. Inf. Technol. 38(2), 425–33 (2016).
A. K. Mishra, M. Inggs, in IEEE Electronics, Computing and Communication Technologies. FOPEN capabilities of commensal radars based on whitespace communication systems (IEEE CONECCT, Bangalore, 2014), pp. 1–5.
H. Takase, M. Shinriki, in 15th International Radar Symposium (IRS). A dual-use radar and communication system with complete complementary codes (IEEE IRS, Gdansk, 2014), pp. 16–47.
S. Sen, OFDM radar space-time adaptive processing by exploiting spatio-temporal sparsity. IEEE Trans. Sign. Process. 61(1), 118–30 (2013).
L. SY, in IEEE Radar Conference, Cincinnati. MIMO OFDM radar with communication and interference cancellation features (2014), pp. 19–23.
E. R. B. Mark Roberton, Integrated radar and communication based on chirp spread-spectrum techniques (IEEE, Philadelphia, 2003).
Z. Tianxian, X. Xianggen, OFDM synthetic aperture radar imaging with sufficient cyclic prefix. IEEE Trans. Geosci. Remote Sens. 53(1), 394–404 (2015).
IEEE standard 802.11a Part 11, Wireless LAN medium access control (MAC) and physical layer (PHY) specifications (1999).
C. Knapp, G. Carter, The generalized correlation method for estimation of time delay. IEEE Trans. Acoust. Speech Signal Process. 24(4), 320–7 (1976).
Z. X. Liang Ruihai, Design and implementation of real-time estimator of time difference of arrival based on parallel correlation. Application of Electronic Technique 37(2), 91–94 (2011). https://doi.org/10.16157/j.issn.0258-7998.2011.02.043.
R. D. A. Gusmao, N. Esteves, On frequency domain equalization and diversity combining for broadband wireless communications. IEEE Trans. Commun. 51(7), 1029–33 (2003).
J. D. S. Iker Sobron, Device-free people counting in IoT environments: new insights, results and open challenges. IEEE Internet Things J. PP(99), 1–6 (2018).
F. A. Hamza, The USRP under 1.5X magnifying lens. https://www.gnuradio.org/doc/doxygen/index.html.
H. Minsheng, Computation of the ambiguity function of PD radar. Informatization Res. 35(2), 22–25 (2009).
T. J. P. S. G. Glisic, New PN code acquisition scheme for CDMA networks with low signal-to-noise ratios. IEEE Trans. Microw. Theory Tech. 47(2), 300–10 (1999).
T. Ohtani, A stability improvement technique using PML condition for the three-dimensional nonuniform mesh nonstandard FDTD method. IEEE Trans. Magn., 1569–72 (May 2013).
G. S. S. G. Maloney, in Digest on Antennas and Propagation Society International Symposium. Accurate computation of the radiation from simple antennas using the finite-difference time-domain method (IEEE, San Jose, CA, 1989), pp. 42–45.
T. J. Zhou Ge, in IEEE Global Conference on Signal and Information Processing (GlobalSIP). A new method of dynamic gesture recognition using Wi-Fi signals based on DWT and SVM improved by DTW (2015), pp. 1214–1218.
L. X. Li Shuo, Research on 802.11a frequency deviation measurement. Application of Electronic Technique 31(5), 48–50 (2005).
Z. K. Wangwen Bo, Wideband wireless communication OFDM technology (Posts & Telecom Press, Beijing, 2003).
W. Yang, Multipath model for UWB indoor LOS environments. J. Commun. 26(10), 24–28 (2005).
This work is supported by the National Natural Science Foundation of China (No. 61671075) and the Major Program of the National Natural Science Foundation of China (No. 61631003). The authors thank the anonymous reviewers for their helpful comments, which were used to improve the quality of the paper.
Xiaokun Zheng and Ting Jiang contributed equally to this work.
Key Labs of Universal Wireless Communications, Beijing University of Posts and Telecommunications, Beijing, 100876, China
Xiaokun Zheng, Ting Jiang & Wenling Xue
College of Electronics Information and Engineering, Hebei University, Wusi East Road, Baoding, 071002, China
This work was conducted by Xiaokun Zheng as part of his Ph.D. studies, advised by Professor Ting Jiang. All authors read and approved the final manuscript.
Correspondence to Xiaokun Zheng.
Xiaokun Zheng was born in Baoding city, China, in 1962. He received his B.S. and M.S. degrees from Northwestern Polytechnical University, Xi'an, in 2001. Since 2012, he has been pursuing his Ph.D. degree at Beijing University of Posts and Telecommunications. Since 2003, he has been a lecturer at the College of Electronics, Hebei University. His research interests include short-range wireless communication and wireless sensor networks.
Prof. Ting Jiang was born in Weiyuan city, Sichuan Province, in 1962. He received his Ph.D. degree from Yanshan University, Qinhuangdao, in 2003. Since 2009, he has been a Professor with the Key Labs of Universal Wireless Communications, Beijing University of Posts and Telecommunications. His research interests include wireless broadband interconnection, short-distance wireless communication technologies, and wireless sensor networks. He has hosted two National Science Foundation projects, one National Major Technical Project and many enterprise projects.
Xue Wenling was born in Baoding, Hebei Province, China, in 1975. She received her B.S. degree in Computer Applications from Hebei University, Baoding, China, in 2001. She is currently a PhD student at the Key Laboratory of Universal Wireless Communications, Beijing University of Posts and Telecommunications, Beijing, China. Her research interests include target detection and classification and signal processing.
Zheng, X., Jiang, T. & Xue, W. A composite method for improving the resolution of passive radar target recognition based on WiFi signals. J Wireless Com Network 2018, 215 (2018) doi:10.1186/s13638-018-1224-0
Recognition method
Improve correctness
Composite signal
Why do people say that Grover's algorithm does not parallelize well?
I've seen several sources, including NIST, claim that Grover's algorithm is unlikely to be useful for attacking a symmetric-key algorithm like AES-128 or a hashing algorithm because "Grover's algorithm does not parallelize well."
But I don't understand that claim. According to this answer and this paper, $k$ different quantum computers running Grover's algorithm in parallel can search an $N$-element database in $\theta(\sqrt{N/k})$ time steps. Whereas $k$ classical computers running in parallel require $\theta(N/k)$ time steps.
My guess is that when people say that "Grover's algorithm doesn't parallelize well," what they really mean is: "The relative speedup of running Grover's algorithm on $k$ parallel quantum computers compared to one quantum computer, which is $\theta(\sqrt{k})$, is asymptotically smaller than the relative speedup of running classical brute-force search on $k$ classical computers compared to one classical computer, which is $\theta(k)$."
But it seems to me that this quantity doesn't really capture what we intuitively mean by "parallelize well". I suppose that the speedup contributed by parallelization itself is smaller in the quantum case than in the classical case, but the quantum case already starts out so far ahead that you still end up with an asymptotically faster runtime on $k$ parallel quantum computers than on $k$ parallel classical computers. Given any number $k < N$ of computers to search a given database, you would always prefer for them to be quantum rather than classical computers, because the number of time steps is asymptotically smaller in the quantum case. So (while I acknowledge that this is largely a matter of semantics) I would say that Grover's algorithm parallelizes better than classical brute-force search, not worse.
More practically speaking: if you are willing to wait some maximum time $T_\text{max}$ to find the answer, and each classical or quantum oracle query requires time $T_C \ll T_\text{max}$ or $T_Q \ll T_\text{max}$ respectively, then searching an $N$-element database fast enough would require $N \frac{T_C}{T_\text{max}}$ parallel classical computers but only $N \left( \frac{T_Q}{T_\text{max}} \right)^2$ parallel quantum computers - an improvement that's asymptotic in $T/T_\text{max}$.
(It's true that if you could somehow combine the quadratic sequential Grover speedup with the linear relative speedup of classical parallelization to get a runtime $\theta \left(\frac{\sqrt{N}}{k}\right)$ for unstructured search - which has proven to be impossible under the standard model of quantum computing - then you'd only need $\sqrt{N} \frac{T_Q}{T_\text{max}}$ parallel quantum computers. This would give you a resource cost reduction that's asymptotic in $N$ rather than in $T/T_\text{max}$, which is probably more useful.)
Getting back to the task of breaking AES-128: I agree that it's probably impractical to do so with quantum computers for the foreseeable future. But that's simply because quantum computers will be very resource intensive, and it will probably be unaffordable to build as many parallel quantum computers as we can build parallel classical computers. But I don't see what that has to do with Grover's algorithm itself; that's a totally separate assumption about the resource costs of building quantum computers.
So is there in fact any meaningful sense in which quantum computers won't be able to attack AES-128 "because Grover's algorithm does not parallelize well," as opposed to simply "because quantum computers will be much more expensive than classical computers"?
grovers-algorithm
classical-computing
speedup
tparker
Problems that parallelize well have a total cost that stays roughly the same as you apply more machines. You want to be doing roughly the same number of operations, just spread out. Grover search isn't like that; if you spread it over $k$ computers then it needs $\sqrt{k}$ times more operations to solve the problem. That means $\sqrt{k}$ times more energy used which means $\sqrt{k}$ times more money spent. The more parallelization you use, the worse the advantage over classical gets.
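To put numbers on this, here is a small sketch of the total query counts (illustrative only; not tied to any particular hardware model):

```python
import math

# Total oracle queries summed over k machines: classical brute force stays
# ~N no matter how it is split, while parallel Grover does k * sqrt(N/k)
# = sqrt(N*k) queries, growing with k.
def total_queries(n, k):
    return k * (n / k), k * math.sqrt(n / k)

for k in (1, 2 ** 10, 2 ** 20):
    c, g = total_queries(2.0 ** 64, k)
    print(f"k={k}: classical {c:.3g}, Grover {g:.3g}")
```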
Craig Gidney
It seems to me that that would be very much problem-dependent. If you care much more about runtime than about energy, hardware costs, etc., then (as I explained above) Grover's algorithm parallelizes better than classical brute-force search. I agree that for most problems, that won't necessarily be the appropriate utility function, but there might be some problems for which saving time is more important than saving hardware resources, in which case the optimal approach would be Grover's algorithm parallelized over as many quantum computers as possible.
– tparker
@tparker It does not parallelize better. With classical brute force, using $k$ machines reduces the time by $k$. With Grover, using $k$ machines only reduces it by $\sqrt{k}$. That's enormously less efficient; a huge cost to pay for a space-time tradeoff. Also, as $k$ increases, it's not limiting to something quantum, it's just becoming classical brute force.
– Craig Gidney
You're correct, if you measure the gains to parallelization by the proportional reduction in resources gained from parallelization. But this quantity seems to me to be of limited operational utility. The only reason that brute-force search parallelizes "better" is because it starts from such an inefficient baseline; Grover's algorithm parallelizes "worse" because it's already more efficient at every combination of $N$ and $k$, so there are fewer gains left to be had from parallelization. To me, the more relevant fact is not comparing relative gains but absolute performance, and at the ...
algorithmic level, Grover's algorithm is much more efficient than brute-force search for every choice of $N$ and $k$. In the real world, a massively parallel Grover's algorithm is probably impractical to build, but that just reflects the practical engineering challenges of building quantum computers; it isn't a result that can be derived from Grover's algorithm itself. In the abstract, there's no reason to assume that 1 is the optimal elasticity of substitution for the space-time tradeoff; it depends on the relative costs of runtime vs. hardware for a given application.
@tparker The comparison to brute-force is a red herring, I think. An algorithm parallelizes well/poorly relative to itself, in some sense. If you have X times the machines, how many times faster do you solve the problem? Perfect parallelism means X times; it doesn't matter what kinds of machines you're using, or what alternatives exist to your algorithm.
– mbrig
A more relevant consideration is not the total number of computers involved, but rather the total number of computational steps over all machines (or a very similar quantity: area-time, the total number of bits/qubits times the runtime). For classical computing this is directly proportional to the total energy consumption, and in most cases directly proportional to the economic cost of the computation (whether you pay for total server time on multiple machines, or the opportunity cost of occupying a server cluster with this computation). The same arguments apply to quantum computing, more or less: there is an energy cost for error correction that is (very likely) proportional to (number of qubits)*(total runtime). Even with a huge breakthrough in error correction, there is still this opportunity cost.
So, let's consider the calculation you did above, where we need $N\frac{T_C}{T_{max}}$ classical computers and $N\frac{T_Q^2}{T_{max}^2}$ quantum computers. Multiplying by the total runtime (which is $T_{max}$ for both) and the size of each computer (let's call it $S_C$ and $S_Q$) gives the area-time costs:
$$ \text{Classical}: NT_CS_C\text{, Quantum:} \frac{NT_Q^2S_Q}{T_{max}}$$
Where does this leave us? With a few points:
When $T_{max}$ is fixed, the total cost of classical and quantum search both grow proportional to the size of the search space. That is: the asymptotic quantum advantage is gone in this context. A fixed $T_{max}$ is a realistic constraint: in cryptography it translates to "how soon do we need to obtain these secrets". Granted, we might be able to drop $T_C$ or $T_Q$ (e.g., if processor speeds improve), but if $T_Q$ is dropping faster than $T_C$, that's not really an algorithmic advantage.
Generally, the way people think about and talk about classical algorithms assumes a great deal of parallelism. When people say DES is insecure because the keyspace is only $2^{56}$, so you can run an attack in time (on the order of) $2^{56}$, they don't really mean time: even on a modern, fast, 3 GHz processor, that would still take 90 years. It's insecure because people can easily parallelize the attack and they only need $2^{56}$ operations (which is readily affordable). Saying that Grover "doesn't parallelize well" emphasizes that we need to break from that way of thinking. More concretely, the "square-root speedup" that people often talk about isn't real: sequential attacks will never happen, and the advantage diminishes with parallelism.
For a bit of perspective, a somewhat reasonable value of $T_{max}$ is $2^{40}$ operations, which NIST estimated as the number of sequential operations on a surface code with current quantum hardware in a year. Then Grover needs $2^{80}$ qubits to break AES-128. With $2^{80}$ processors, classical computers only need $2^{48}$ sequential evaluations to break AES-128. So if classical computers can compute 256 sequential evaluations of AES in less time than one surface code cycle, there is no quantum advantage. Even if not, the quantum advantage is very very small.
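For concreteness, the exponent arithmetic behind these numbers, as a small sketch (query counts only, using the machine-count formulas from the question, and ignoring per-machine qubit overhead):

```python
# log2 of the keyspace and of the sequential-step budget
n, t_max = 128, 40
grover_machines = n - 2 * t_max     # N*(T_Q/T_max)^2 -> 2^48 machines
classical_steps = n - 80            # with 2^80 processors -> 2^48 steps
print(f"2^{grover_machines} Grover machines vs. "
      f"2^{classical_steps} sequential classical evaluations")
```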
Sam Jaques
I've been directed to this question from this related question on crypto.stackexchange.com. Apologies if I'm a little unfamiliar with the praxis of this group.
With regard to the definition of "parallelises well", I concur with Craig Gidney's answer that in my circles at least this means that, up to a certain bound for $k$, applying $k$ times as many computational resources reduces run time by a factor of roughly $k$.
This does not mean that Grover's algorithm does not parallelise at all, but it does mean that one needs to be careful about how one describes attacks. For example, bringing $2^{20}$ quantum computers to bear on a 128-bit search space results in an attack of $2^{74}$ operations rather than the $2^{64}$ of a single quantum computer.
With regard to the security of AES, as with all statements of complexity theoretic security this is a statement about economics. The statement that AES-128 is secure against classical computation does not mean that even given unlimited resources there is no way to deploy time and classical compute resources to brute force exhaust a 128-bit key space, rather that such a computation is economically infeasible for a certain timeframe. Likewise, when I say that AES-128 is secure against quantum attack, I mean that it is economically infeasible to deploy the resources required within a given timeframe.
Economic costing by necessity involves a certain level of assumption about the rate of development of technology, e.g., with classical computing, that the cost of resources does not significantly outstrip Moore's law. My own belief is that 7 billion logical qubits (as compared to, say, the roughly 2330 needed to attack elliptic curve systems or the 6189 needed to attack RSA), running for 1 year (as opposed to 1 hour for the elliptic curve and 8 hours for the factoring estimates, both using clock rates several orders of magnitude smaller), is an infeasible rate of economic development for cryptographically relevant timescales. I similarly believe that achieving another 45 or so bits of speed-up¹ in classical compute power is going to be economically infeasible on similar timescales, though maybe less infeasible than the required growth for quantum attacks.
¹ I'm not sure what the exact model for current classical resources should be, but the worldwide Bitcoin network is currently operating at around $2^{68}$ hashes per second.
Daniel S
I agree with everything in this answer. My only (very limited) claim is that it's somewhat misleading to say that the main reason why Grover's algorithm is unlikely to affect the security of AES is that it "doesn't parallelize well". (I know that you don't make this claim, but other people do.) In my opinion, the main reason why Grover's will probably not be effective is the cost and engineering difficulty of building quantum computers (as compared to classical ones), and any results that can be directly derived from Grover's algorithm itself are only partially relevant.
The fact is that if we could make quantum computers as fast and as cheap as classical computers, then we might still be debating the semantics of whether Grover's algorithm "parallelizes well", but we'd also probably be running Grover's algorithm in a massively parallel fashion, not in serial, on real-world problems.
Matematicheskii Sbornik
Mat. Sb., 2008, Volume 199, Number 10, Pages 63–86 (Mi msb3935)
Natural differential operations on manifolds: an algebraic approach
P. I. Katsylo (a), D. A. Timashev (b)
a Scientific Research Institute for System Studies of RAS
b M. V. Lomonosov Moscow State University, Faculty of Mechanics and Mathematics
Abstract: Natural algebraic differential operations on geometric quantities on smooth manifolds are considered. A method for the investigation and classification of such operations is described, the method of IT-reduction. With it the investigation of natural operations reduces to the analysis of rational maps between $k$-jet spaces, which are equivariant with respect to certain algebraic groups. On the basis of the method of IT-reduction a finite generation theorem is proved: for tensor bundles $\mathscr{V},\mathscr{W}\to M$ all the natural differential operations $D\colon\Gamma(\mathscr{V})\to\Gamma(\mathscr{W})$ of degree at most $d$ can be algebraically constructed from some finite set of such operations. Conceptual proofs of known results on the classification of natural linear operations on arbitrary and symplectic manifolds are presented. A non-existence theorem is proved for natural deformation quantizations on Poisson manifolds and symplectic manifolds.
Bibliography: 21 titles.
DOI: https://doi.org/10.4213/sm3935
Sbornik: Mathematics, 2008, 199:10, 1481–1503
UDC: 514.74+512.815.7
MSC: Primary 58A32, 53D55; Secondary 15A72, 81S10
Citation: P. I. Katsylo, D. A. Timashev, "Natural differential operations on manifolds: an algebraic approach", Mat. Sb., 199:10 (2008), 63–86; Sb. Math., 199:10 (2008), 1481–1503
Related presentations:
On differential characteristic classes of metrics and connections
D. A. Timashev, October 8, 2014 16:45
E. V. Ponomareva, "Classification of double flag varieties of complexity 0 and 1", Izv. Math., 77:5 (2013), 998–1020
E. G. Puninskiy, "Natural operators on tensor fields", Moscow University Mathematics Bulletin, 69:5 (2014), 225–228
D. A. Timashev, "On differential characteristic classes of metrics and connections", J. Math. Sci., 223:6 (2017), 763–774
Navarro A., Navarro J., Prieto C.T., "Natural Operations on Holomorphic Forms", Arch. Math.-Brno, 54:4 (2018), 239–254
Gordillo-Merino A., Navarro J., Sancho P., "A Remark on the Invariant Theory of Real Lie Groups", Colloq. Math., 156:2 (2019), 295–300
Lightmetrica
Implementation of VCM/UPS in Lightmetrica: Part 1
Development Lightmetrica
(This article is written for the ray-tracing camp 4 Advent Calendar)
One of the major purposes of developing Lightmetrica is to offer researchers a way to compare existing techniques without unnecessary implementation work. So we are trying to implement as many rendering techniques as possible in one framework. A recent update of Lightmetrica includes an implementation of a set of rendering techniques based on photon density estimation. These techniques originate from photon mapping, developed by Jensen [1996]. Since the invention of photon mapping, various extensions have been actively developed.
Specifically, photon mapping based techniques are known to be efficient for scenes containing specular-diffuse-specular (SDS) paths. This kind of lighting effect is important for some scenes; e.g., caustics on a surface under water, seen from the outside, is a typical example. Rendering techniques based on independent path sampling (e.g., bidirectional path tracing [Veach & Guibas 1994]) often have difficulty rendering these scenes. This is because independent sampling techniques sample SDS paths with low probability if the size of the light source is small. If the light source is a spatially degenerate one (e.g., a point light source), the sampling becomes theoretically impossible and requires a special extension of the path space to handle these cases [Kaplanyan et al. 2011].
Recent advances in photon mapping based techniques have succeeded in incorporating bidirectional path sampling into the framework of photon density estimation. This technique was independently developed by two research groups: vertex connection and merging (VCM) by Georgiev et al. [2012] and unified path sampling (UPS) by Hachisuka et al. [2012]. So in this article, as a mark of respect to both groups, we call this technique VCM/UPS.
They integrated two approaches, originally developed independently with different formulations, into one, so that they are combinable via multiple importance sampling (MIS) [Veach & Guibas 1995]. MIS is a widely used technique in rendering research for combining two or more ways of sampling. For example, path tracing with next event estimation often combines two different ways of sampling a path, with a compensation for the probability measure: sampling from the density associated with the BSDF, and direct sampling of the light sources.
Also, the combined approach is naturally extended to the progressive photon density estimation framework of Knaus and Zwicker [2011]. Progressive photon density estimation was originally developed by Hachisuka et al. [2008, 2009], achieving consistent estimation of photon mapping. The technique was later simplified by Knaus and Zwicker [2011], and both the VCM and UPS papers utilize this framework to achieve progressive estimation.
In Lightmetrica, we implemented various photon mapping based rendering algorithms to make it possible to compare algorithms in a consistent manner, without introducing uncertainty into the comparison. Our implementation includes:
renderer::pm
Photon mapping [Jensen et al. 1996]
renderer::ppm
Progressive photon mapping [Hachisuka et al. 2008]
renderer::sppm
Stochastic progressive photon mapping [Hachisuka et al. 2009]
renderer::vcm
Vertex connection and merging [Georgiev et al. 2012]
Unified path sampling [Hachisuka et al. 2012]
Bidirectional photon mapping [Vorba 2011]
We achieved the implementation of VCM/UPS with only ~800 lines of code, excluding shared code. Because the main focus of our framework is not optimization, we attempted to write the code so that it can be understood easily. The implementation follows the mathematical formulation, so that the connection between the formulation and the implementation is easy to grasp. For an optimized implementation utilizing the recursive formulations, we refer the reader to the technical report by Georgiev [2012] and to smallvcm.
In this article, I attempt to describe the implementation of these techniques, from the theoretical background to the implementation details. The description is based on the implementation at the time this article was written and might change with subsequent updates.
Light transport simulation
Here we will introduce the formulation of light transport. We begin with the well-known path integral formulation by Veach [1997]. In this formulation, the pixel intensity can be written as $$ \begin{equation} I = \int_{\Omega} f(\bar{x}) d\mu(\bar{x}), \end{equation} $$ where $\bar{x}=\mathbf{x}_1\dots\mathbf{x}_{k+1}$ is a path of length $k$ (that is, with $k+1$ vertices), $\Omega$ is the set of all paths of any length, and $\mu$ is the product area measure. $f$ is the measurement contribution function. See Veach [1997] for the detailed definitions.
Designing path sampling techniques
We will design two types of path sampling techniques: vertex connection, which constructs a path by connecting two vertices from different subpaths, and vertex merging, which constructs a path by merging two path vertices within a specified range. In this discussion, we focus on sampling paths of length $k$.
Vertex connection: In order to sample a path with vertex connection, we first sample two subpaths originating from the light source and the sensor, and connect one path vertex from each subpath to construct a full path. Given a path $\bar{x}$ of length $k$, there are $k+2$ ways of sampling the path. These sampling strategies are often indexed by the numbers of vertices $s$ and $t$, counted from the light source and the sensor respectively (so that $s+t=k+1$). The pdf for sampling with strategy $(s,t)$ is defined as $$ \begin{equation} p^{VC}_{s,t}(\bar{x}) = p_L(\mathbf{x}_{1}\dots\mathbf{x}_{s}) \cdot p_E(\mathbf{x}_{k+1}\dots\mathbf{x}_{s+1}), \end{equation} $$ where $$ \begin{align} p_L(\mathbf{x}_{1}\dots\mathbf{x}_{s}) &= \begin{cases} 1, & s = 0 \\ p(\mathbf{x}_1) \prod_{i=1}^{s-1} p(\mathbf{x}_{i}\to\mathbf{x}_{i+1}), & \text{otherwise}, \end{cases} \\ p_E(\mathbf{x}_{k+1}\dots\mathbf{x}_{s+1}) &= \begin{cases} 1, & t = 0 \\ p(\mathbf{x}_{k+1}) \prod_{i=1}^{t-1} p(\mathbf{x}_{k+1-i}\to\mathbf{x}_{k-i}), & \text{otherwise}. \end{cases} \end{align} $$ Here $p$ is the pdf defined with respect to the area measure.
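As a minimal illustration of the strategy pdf above (not the actual Lightmetrica interface; the array conventions are this sketch's own, assuming per-vertex area pdfs are already available):

```python
import numpy as np

def pdf_vc(pdf_light, pdf_eye, s, t):
    # pdf_light[0] = p(x_1), pdf_light[i] = p(x_i -> x_{i+1});
    # symmetrically, pdf_eye holds the per-vertex pdfs from the sensor.
    p_l = float(np.prod(pdf_light[:s])) if s > 0 else 1.0
    p_e = float(np.prod(pdf_eye[:t])) if t > 0 else 1.0
    return p_l * p_e
```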
Vertex merging: The essential point of VCM/UPS is treating photon density estimation as a path sampling technique. We can think of the path introduced in the process of photon density estimation as an extended path. As with vertex connection, we consider two subpaths originating from the light source and the sensor. Let the subpaths be $x_L=\mathbf{x}_1\dots\mathbf{x}_s\mathbf{x}^{*}_{s+1}$ and $x_E=\mathbf{x}_{s+1}\dots\mathbf{x}_{k+1}$. Here we consider one extra path vertex $\mathbf{x}^{*}_{s+1}$, which is the position of a photon in the context of photon mapping. We define the extended path $\bar{x}=\mathbf{x}_1\dots\mathbf{x}_s\mathbf{x}^{*}_{s+1}\mathbf{x}_{s+1}\dots\mathbf{x}_{k+1}$ by concatenating the two subpaths. The evaluation of the measurement contribution function $f$ for the extended path simply ignores the vertex $\mathbf{x}^{*}_{s+1}$; that is, it evaluates $f(\mathbf{x}_1\dots\mathbf{x}_s\mathbf{x}_{s+1}\dots\mathbf{x}_{k+1})$.
However, in order to combine the extended path with paths generated by vertex connection, we need to express its pdf with respect to the same measure. We therefore view density estimation as a process that decides whether to merge two vertices by Russian roulette within the given radius $r$. This amounts to marginalizing the density of the photon position over the area within radius $r$ of $\mathbf{x}_{s+1}$, which lets us express the pdf of the extended path in terms of $p^{VC}_{s,t}(\bar{x})$: $$ \begin{align} p^{VM}_{s,t}(\bar{x}) &= p^{VC}_{s,t}(\bar{x}) \cdot \mbox{Pr}(\| \mathbf{x}_{s+1} - \mathbf{x}^{*}_{s+1} \| < r) \\ &= p^{VC}_{s,t}(\bar{x}) \int_{A_r} p(\mathbf{x}_{s}\to\mathbf{x}) d\mathbf{x}, \end{align} $$ where $A_r$ is the set of points on the scene surfaces within distance $r$ of $\mathbf{x}_{s+1}$. If a constant kernel with radius $r$ is used, this can be approximated as $$ \begin{equation} p^{VM}_{s,t}(\bar{x}) \approx \pi r^2 \, p(\mathbf{x}_{s}\to\mathbf{x}^{*}_{s+1}) \cdot p^{VC}_{s,t}(\bar{x}). \end{equation} $$
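In code, the approximation is a single multiplication. The sketch below assumes the hypothetical `transition_pdf` and `p_vc` helpers from the previous sketch; `verts[s - 1]` is $\mathbf{x}_s$ in the 0-based list.

```python
import math

# Sketch of the approximate merging pdf under a constant kernel of radius r.
# photon is x*_{s+1}; transition_pdf and p_vc_fn are the hypothetical helpers above.
def p_vm(verts, photon, s, t, r, transition_pdf, p_vc_fn):
    # pi r^2 * p(x_s -> x*_{s+1}) * p^VC_{s,t}(x)
    return math.pi * r * r * transition_pdf(verts[s - 1], photon) * p_vc_fn(verts, s, t)
```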
Combined estimator
Now we can design an estimator of $I$ that combines vertex connection and vertex merging via multiple importance sampling. We combine a single path from vertex connection with $N_{VM}$ paths from vertex merging, because vertex merging allows efficient path reuse through a spatial data structure for range queries such as a kd-tree or a hash grid. Although we could likewise use multiple paths for vertex connection, we decided not to in our implementation because vertex connection involves relatively expensive ray–triangle intersection (visibility) queries. The combined estimator can be written as $$ \begin{equation} \langle I \rangle = \underbrace{ \sum_{s,t\geq 0} w_{VC,s,t}(\bar{x}) \frac{f(\bar{x})}{p^{VC}_{s,t}(\bar{x})}}_{\langle I \rangle_{VC}} + \underbrace{ \sum_{l=1}^{N_{VM}} \sum_{s,t\geq 2} \chi_{s,t}(\bar{x}_l) w_{VM,s,t}(\bar{x}_l) \frac{f(\bar{x}_l)}{p^{VM}_{s,t}(\bar{x}_l)}}_{\langle I \rangle_{VM}}, \end{equation} $$ where $\chi_{s,t}$ is the characteristic function that equals one if $\| \mathbf{x}_{s+1} - \mathbf{x}^{*}_{s+1} \| < r$ and zero otherwise. $w_{v,s,t}$ is the power heuristic weight defined as $$ \begin{equation} w_{v,s,t}(\bar{x}) = \frac{p^{v}_{s,t}(\bar{x})^\beta}{ \sum_{s',t'\geq 0} p^{VC}_{s',t'}(\bar{x})^\beta + N_{VM} \sum_{s',t'\geq 2} p^{VM}_{s',t'}(\bar{x})^\beta }. \end{equation} $$ Note that the $1/N_{VM}$ averaging factor for the merged paths is absorbed into the $N_{VM}$ term in the weight denominator. If $\beta=1$, the weight reduces to the balance heuristic. In the actual implementation, the summation $\sum_{l=1}^{N_{VM}}$ and the selection of paths satisfying $\chi_{s,t}$ are carried out via the range query structure.
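The weight itself is straightforward to compute once the competing pdfs are gathered; here is a minimal, self-contained sketch (the pdf lists are assumed to be precomputed by the caller).

```python
# Power-heuristic weight for one strategy, given the pdfs of all competing
# strategies (p_vc_all for vertex connection, p_vm_all for vertex merging).
def mis_weight(p_strategy, p_vc_all, p_vm_all, n_vm, beta=2.0):
    denom = sum(p ** beta for p in p_vc_all) \
          + n_vm * sum(p ** beta for p in p_vm_all)
    return (p_strategy ** beta) / denom
```

With `beta=1.0` this reduces to the balance heuristic, matching the remark above.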
(continued in part 2)
[Veach and Guibas 1994] E. Veach and L. Guibas, Bidirectional estimators for light transport, In Eurographics Workshop on Rendering, 1994.
[Veach 1997] E. Veach, Robust Monte Carlo methods for light transport simulation, PhD Thesis, Stanford University, 1997.
[Jensen 1996] H. W. Jensen, Global illumination using photon maps, In Rendering Techniques '96, 1996.
[Hachisuka et al. 2008] T. Hachisuka, S. Ogaki and H. W. Jensen, Progressive photon mapping, In ACM Transactions on Graphics (Proc. SIGGRAPH Asia), 2008.
[Hachisuka and Jensen 2009] T. Hachisuka and H. W. Jensen, Stochastic progressive photon mapping, In ACM Transactions on Graphics (Proc. SIGGRAPH Asia), 2009.
[Knaus and Zwicker 2011] C. Knaus and M. Zwicker, Progressive photon mapping: A probabilistic approach, In ACM Transactions on Graphics, 2011.
[Vorba 2011] J. Vorba, Bidirectional photon mapping, In CESCG, 2011.
[Georgiev et al. 2012] I. Georgiev, J. Krivanek, T. Davidovic, and P. Slusallek, Light transport simulation with vertex connection and merging, In ACM Transactions on Graphics (Proc. SIGGRAPH Asia), 2012.
[Georgiev 2012] I. Georgiev, Implementing Vertex Connection and Merging, Technical Report, 2012.
Oversummering juvenile and adult Semipalmated sandpipers in Perú gain enough survival to compensate for foregone breeding opportunity
Eveling A. Tavera ORCID: orcid.org/0000-0003-0058-65691,2,3,
Glenn E. Stauffer4,
David B. Lank1 &
Ronald C. Ydenberg1
Movement Ecology volume 8, Article number: 42 (2020) Cite this article
Age at maturity and the timing of first breeding are important life history traits. Most small shorebird species mature and breed as 'yearlings', but have lower reproductive success than adults. In some species, yearlings may defer northward migration and remain in non-breeding regions ('oversummering') until they reach 2 years of age. Some adults also oversummer. Oversummering would be favoured by natural selection if survival were as a result raised sufficiently to compensate for the missed breeding opportunity. Several thousand Semipalmated Sandpipers (Calidris pusilla) spend the non-breeding period at Paracas, Perú, including individuals with long bills (likely from eastern Arctic breeding populations ~ 8000 km distant) and short bills (likely from western Arctic breeding populations, up to 11,000 km distant), with short-billed birds more likely to oversummer. We tested the prediction that oversummering birds have higher survival than migrants, and that the magnitude of this higher survival for oversummering birds is enough to compensate for their lost breeding season.
We used a Multi-State Mark-Recapture model based on 5 years of encounter data (n = 1963 marked birds, and 3229 resightings) obtained year-round at Paracas, Perú, to estimate seasonal (i.e. breeding and non-breeding) survivorship for migrant and oversummering birds. We calculated the magnitude of the oversummering survival advantage required to compensate, for both yearlings and adults, based on published measures of annual survival and reproductive success. Using bill length as a proxy for migration distance, we investigated whether migratory survival is distance-dependent.
We estimate that 28% of yearlings and 19% of adults oversummer. Survival is higher for oversummering birds than for migrants, and the oversummering survival advantage is greater for adults (0.215) than for yearlings (0.140). The theoretical thresholds predicted by the size of the missed reproductive opportunity are 0.240 for adults and 0.134 for yearlings. Migratory survival decreases and the oversummering rate increases with migration distance, as assessed by culmen length.
Our results support the life history hypothesis that oversummering raises survival enough to compensate for the loss of a breeding opportunity. Greater migration distance lowers survival and increases the probability of oversummering.
Life history theory predicts that natural selection acts on the age of maturity through its effects on survivorship and reproductive success [1]. The age of first breeding can have a substantial effect on population growth rate, and cases in which individuals forgo early breeding opportunities are therefore of intrinsic interest [2]. Among small shorebird species, most individuals attempt to breed in their first year of life, e.g. Dunlin Calidris alpina [3], Temminck's Stint Calidris temminckii [4], Least Sandpiper Calidris minutilla [5] and Semipalmated Sandpiper Calidris pusilla [6]. As in birds in general, first year breeders have lower reproductive success than older individuals [7]. In Semipalmated and Western Sandpipers Calidris mauri young breeders have later hatch dates, smaller egg sizes, lower nesting success and lower fecundity than adults [6] (Kwon E, et al. Age-specific fecundity and population dynamics of Western Sandpipers Calidris mauri. In prep.). The lower reproductive payoffs for young birds help to favor delayed breeding, and factors that increase the risk or cost of migration further raise the survival advantage of delayed breeding.
'Oversummering' is a term used to describe when individuals in a typically migratory shorebird species defer migration and remain on the non-breeding grounds during the breeding season [8]. (As in most literature, 'breeding season' here refers to the boreal spring and summer.) Oversummering has been variously attributed to sexual immaturity [9, 10]; helminthic infestation [8]; sterility, injuries or illness [11]; less efficient foraging [12]; flight cost on primary wear [13]; behavioral adaptations to distance-dependent costs [14,15,16] and poor success in the first breeding attempt [17]. Summers et al. [17] found that among five species of shorebirds with groups spending the non-breeding season in Britain or South Africa, a large proportion of South African birds showed no preparation for migration (molt to breeding plumage and mass gain). They inferred that these birds oversummered, attributing this to distance-dependent migration risk. Migration distance has also been used as a factor to explain intraspecific differences in the age of first breeding within Western Sandpipers and Sanderlings Calidris alba. Juveniles at more southerly non-breeding areas, further from arctic breeding grounds, neither molt into breeding plumage nor migrate northward, while those at more northerly locations do so [18,19,20,21,22,23].
In this study we test the hypothesis that oversummering provides a survival advantage over migration. We predict that oversummering enhances survivorship of those individuals doing so by enough to offset the expected fitness cost of their foregone breeding opportunity. We also evaluate whether migratory survival falls with distance as previous investigators have suggested, and if so, whether oversummering is as predicted more prevalent when migrations are longer.
Semipalmated Sandpipers perform an annual return migration between South American non-breeding regions and Arctic breeding areas ranging from Alaska eastward across the Canadian tundra [20, 24]. Adults undergo a full molt after southward migration, upon (or just before) returning to non-breeding sites. Juveniles migrate a full month later than adults and do not molt, though some later undertake a partial wing molt (replacing 1–6 primaries; termed 'partial post-juvenal wing molt, or 'PPW') during the pre-migratory period (January – March). At our study area at Paracas, Perú, at the southern edge of the non-breeding range, many young birds oversummer [18], as do some adults.
This migratory dichotomy provides an opportunity to compare the survival of oversummering and migrant birds. To do so we develop a multi-state mark-capture-resighting (MSMR) model with two age classes (juveniles/yearlings, adults) and two migration strategies (oversummer, migrate). We predict that oversummering birds have higher survival than migrants during the breeding season (April – September). Further, since adults have a greater probability of breeding successfully than juveniles, those that oversummer should gain more in terms of survivorship than young birds by doing so. Finally, among birds that do migrate, we investigate whether migratory survival is distance dependent, using bill length as a proxy for migration distance (see below). If so, birds presumed to be from western Arctic breeding populations (short bills and long migrations) should be more likely to oversummer than those presumed to be from eastern Arctic breeding populations (long bills and shorter migrations).
Study site
We captured, marked, released, and resighted Semipalmated Sandpipers between October 2014 and March 2019, at the Paracas National Reserve in Perú, a natural protected area located in the department of Ica, 250 km south of Lima city (Fig. 1). The work was conducted on La Aguada beach (13° 51′ 35′′S, 76° 16′ 16′′ W), an intertidal mudflat ~ 2 km long and surrounded by coastal desert. The broad near-shore section of the mudflat has no vegetation and is inundated only on the highest monthly tides. The intertidal mudflat follows the fringe of the bay, is ~ 50 m wide, and is inundated twice daily by tides of ~ 1.5 m in height.
Location of the study site, La Aguada Beach at Paracas National Reserve, Department of Ica, Perú
Capture, marking and resighting
Fieldwork was conducted during both the non-breeding season (October to March; termed 'winter') and the migration/breeding season (April to September; termed 'summer'). During non-breeding seasons, we conducted seven-day capture-resighting 'field campaigns' during the new moon phase of each month. Shorebirds were captured at night (2000 – 0600 h) with mist-nets, beginning 3 h after the evening high tide and ending 3 h before the subsequent high tide. Captured birds were marked on the right tarsus with an incoloy metal band obtained from the CORBIDI Bird-Banding Program (the Peruvian bird-banding scheme). A three-character-coded yellow flag was placed on the left tibia (e.g. 3AT), following the Pan-American Shorebird Program protocol [25], to identify individuals and enable resightings. Each morning, 3 persons each spent 3 h (0600 - 0900 h) surveying the entire study area, locating and identifying (by telescope) marked individuals. During breeding seasons we did no mist-netting, but carried out a 5-day resighting-only field campaign each month. All capture, handling and marking methods were approved by regulatory committees for animal welfare and permitting agencies for wildlife research.
Upon initial capture, birds were assigned to an age category based on plumage characteristics and date. Young-of-the-year are first seen in Paracas in September, and are considered 'juveniles' until April 1 of the following year (~ 10 months of age) when they by definition become 'yearlings'. They are recognizable by plumage, particularly the retained juvenile-type inner greater coverts [20, 26]. Field campaigns during the summer months are 'resighting only' and by the time mist-netting resumes in October of each year, all yearlings have completed molt into adult plumage and are easily distinguished from newly-arrived juveniles. Adult plumage is distinct, recognizable by the shape and coloration of newly molted primaries [20, 26].
Culmen length was measured using a dial caliper (mm). Semipalmated Sandpipers have a cline in bill length across their breeding range, with average bill length shorter in western breeding populations [24, 27]. The distribution of bill lengths at Paracas encompasses the full range, and is slightly left-skewed (towards shorter bills [18];). These data suggest that Semipalmated Sandpipers at Paracas include birds from western (~ 11,000 km distant on a great circle route) as well as eastern Arctic breeding populations (~ 8000 km).
Multi-state model structure
We used a multi-state mark-recapture (MSMR [28, 29];) model to estimate the survivorship of adults and yearling migrant and oversummering birds during winter (October – March) and summer (April – September). The four states are: J (juvenile or yearling); A (adult); M1(migrant yearling – unobservable state); and M2 (migrant adult – unobservable state), and the model also estimates the proportions of adults and juveniles that oversummer or migrate. A total of 1963 birds was captured, marked, and resighted in the analysis, which included data from 54 monthly 'field campaigns' conducted from October 2014 through March 2019. Birds marked prior to October 2014 were treated as having been marked when first resighted after 1 October 2014. Marked birds were subsequently resighted 5163 times, after multiple sightings within field campaigns were consolidated. Each of the 54 monthly field campaigns was assigned to one of five annual 'sampling occasions' (Winter 1, Winter 2, Spring, Summer, Fall; Fig. 2), for a total of 22 throughout the study. Repeat observations of the same individual during field campaigns within a sampling occasion were further consolidated, producing 3229 independent resighting records (Tables 4 and 5 in Appendix 1).
State transitions for Semipalmated Sandpipers at Paracas, Perú. Solid arrows denote compulsory transitions, and dashed arrows denote probabilistic transitions estimated by the multi-state mark-recapture (MSMR) model. Sampling occasions (Winter 1, Winter 2, Spring, Summer and Fall) are not of equal length; splitting winter into Winter 1 and Winter 2 makes the two six-month seasons directly comparable (S_Winter vs. S_Summer). States are defined in the text: J (juvenile/yearling), A (adult), M1 (migrant yearling), and M2 (migrant adult)
The structure of the model, with arrows indicating transitions, is shown in Fig. 2. Young-of-the year enter the model in Winter 1 as juveniles. Yearling birds transition to state A in Winter 1. All birds retain the stage assigned in Winter 1 when progressing to Winter 2. At the end of Winter 2, individuals either oversummer (remaining at Paracas), or migrate, in which case they transition to the unobservable states (M1 for yearlings, M2 for adults). The MSMR model estimates ψJM1 (the probability that a yearling migrates), and derives ψJJ (the probability that a yearling oversummers) as its complement (1 - ψJM1). Similarly, the model estimates ψAM2 (the probability that an adult migrates), and derives ψAA (the probability that an adult oversummers) as its complement (1 – ψAM2). Hence, ψJM1, and ψAM2 are the only transition probabilities estimated by the model. Spring, Summer and Fall are each 2 months long, and all birds retain their state with probability 1.0 as they progress through these successive stages. The cycle repeats beginning at Winter 1. Note that sampling occasions are not of equal length. Winter 1 (Oct, Nov, Dec) and Winter 2 (Jan, Feb, Mar) are 3 months long, while Spring (April, May), Summer (June, July) and Fall (August, Sept) are each 2 months long. Survivorship for the 6-month 'summer' is based on the Spring, Summer, and Fall sampling occasions, while the 6-month 'winter season' includes Winter1 and Winter 2 sampling occasions.
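To clarify the structure, the transition rules of Fig. 2 can be written down directly; the following Python sketch is an illustration of the model structure, not the authors' RMark code. Only `psi_JM1` and `psi_AM2` are free parameters; every other transition is fixed.

```python
# Transition rules between sampling occasions, keyed by the occasion being
# left. States: J (juvenile/yearling), A (adult), M1/M2 (unobservable migrants).
def transitions(leaving, psi_JM1, psi_AM2):
    if leaving == "Winter2":   # end of Winter 2: migrate or oversummer
        return {"J": {"M1": psi_JM1, "J": 1 - psi_JM1},
                "A": {"M2": psi_AM2, "A": 1 - psi_AM2}}
    if leaving == "Fall":      # survivors re-enter Winter 1; yearlings and
        return {"J": {"A": 1.0}, "A": {"A": 1.0},   # migrants become adults
                "M1": {"A": 1.0}, "M2": {"A": 1.0}}
    # Winter1, Spring, Summer: all states retained with probability 1
    return {s: {s: 1.0} for s in ("J", "A", "M1", "M2")}
```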
We competed a set of 36 versions of the basic model (Table 1), generated by combinations of 12 structures for annual survival (S), and three structures for the probability of resighting (p). There is a single structure for the transition probabilities. The model structures evaluating survival rates include all possible combinations of the one-way effects and two-way interactions, excepting the strategy*season interaction (impossible because strategies exist only in summer). The three-way interaction is not considered. Detection probability varies in three possible ways: by age, by season, or by age and season. We set the detection probability to zero for the unobservable states. Survival, detection and transition probabilities were constrained to be equal across the 5 years of observations, because models allowing annual variability failed to converge reliably.
Table 1 Set of models fitted for Semipalmated Sandpiper survival analysis. There are twelve structures for annual survival (S), three structures for probability of resighting (p), and a single structure for transition probability (ψ), not shown here. The 36 models are presented in ascending order by ΔAICc
It is not possible to estimate unique survival rates for unobservable states in MSMR models [30], and hence it is typically necessary to set the survival probability of an unobservable state equal to that of one of the observable states. However, the combination of imposed constant annual survival and the structural determinism in transition probabilities, including the constraint that all individuals in unobservable states become observable in Winter 1 (Fig. 2), enables the estimation of age-specific survival probabilities for the unobservable states.
We fitted models in program MARK [31] using the "Rmark" interface [32] within program R, version 3.5.1 [33]. Model selection is based on Akaike's information criterion, corrected for the effective sample size (AICc [34];). All models are used to estimate parameters and confidence intervals.
The goodness-of-fit (GOF) tests available for MSMR models assume time-varying survival and fully observable states [35, 36]. Neither of these conditions holds in our model, and we therefore could not conduct GOF tests.
Survival as a function of migration distance
The breeding destination of any individual Semipalmated Sandpiper at Paracas is unknown, but there is a strong relationship between breeding location and mean culmen length [24, 27], with bills shorter in western Arctic (~ 11,000 km migration) than in eastern Arctic (~ 8000 km) breeding populations. We use culmen length as a proxy for migration distance, and calculate survival in relation to migration distance as follows.
The relationship between culmen length and annual survival was previously estimated for yearlings by adding culmen length as a covariate to the encounter history and using an open robust design multistate model [37]. This produced a non-significant slope (survival probability/mm of culmen) of − 0.0048. However, this slope estimate combines oversummering and migrant yearlings, which, as we hypothesize, may differ in survival. We can decompose the estimate and calculate culmen length-specific survivorship rates for migrants by recognizing that, for each culmen size class, the survival estimate is composed of the survival of migrants and non-migrants, weighted by their proportion of the population. Denoting survival in culmen length class i as Si, the proportions of migrants and oversummerers as Pmi and Poi, and the survival of migrants and oversummerers as Wmi and Woi, then
$$ S_i = \left(Pm_i \cdot Wm_i\right) + \left(Po_i \cdot Wo_i\right) \quad (1) $$
The proportions of migrant and oversummering yearlings in size class i are estimated based on pre-migratory molt patterns ([18]; see Appendix 2). With the reasonable assumption that the survival of oversummering birds (Wo_i) is independent of culmen length class, the only unknown parameter in Eq. 1 is the survival of migrant yearlings (Wm_i). Solving for Wm_i gives
$$ Wm_i = \frac{S_i - \left(Po_i \cdot Wo_i\right)}{Pm_i} \quad (2) $$
We apply this procedure to estimate the survival of yearling migrants in each culmen size class. We are unable to perform a parallel analysis for adults because we lack a marker of their migratory status comparable to that provided by the pre-migratory molt patterns of yearlings.
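As a concrete check of Eq. 2, the following short Python snippet reproduces the 17 mm culmen class entry quoted in Table 6 (Appendix 2); the numbers are taken from that table, not new data.

```python
# Worked example for the 17 mm culmen length class (values from Table 6).
S_i  = 0.710          # summer survival of the whole 17 mm class
Pm_i = 0.57           # estimated proportion of migrants in the class
Po_i = 1.0 - Pm_i     # proportion oversummering (0.43)
Wo_i = 0.81           # summer survival of oversummering yearlings

Wm_i = (S_i - Po_i * Wo_i) / Pm_i   # Eq. 2
print(round(Wm_i, 3))               # -> 0.635, matching Table 6
```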
Predicting the oversummering survival advantage
Our prediction is that the behavioural decision to migrate or not depends on the extra survival gained by oversummering (the 'survival advantage') being large enough to offset the foregone reproduction. The estimation of this theoretical threshold value is based on a simple life history model (see Appendix 3) that expresses the foregone breeding opportunity as a proportion of expected lifetime reproductive success. The predicted thresholds are 0.134 for yearlings, and 0.240 for adults. Individuals should migrate when the survival advantage they would gain by oversummering falls below this threshold, and oversummer when it exceeds it.
Survival estimates
The 1963 marked Semipalmated Sandpipers were resighted 3229 times within sampling occasions after marking. The percentage of birds not seen after first capture was on average 43% (annual range 31–55%). Marked birds were re-encountered on 1 to 15 subsequent sampling occasions (mean 1.64). The number of years a marked bird was re-encountered subsequent to initial capture averaged 0.85, ranging from 0.40 to 1.14 annually (excluding the logical zero from the final year). Further detail on the distribution of encounter periods is given in Appendix 1.
The model competition is summarized in Table 1. In the most informative model, survival varies by age and strategy, and by their interaction: juveniles have lower survival than adults, and migrants have lower survival than oversummering birds. The age by strategy interaction arises because the age difference in survival is non-existent for migrants, as shown by the survival estimates given in Table 3. Adult survival exceeds that of yearlings in both winter (adult 0.904; yearling 0.829) and summer (adult 0.894; yearling 0.810) by about 8%, but the survival of migrant adults and yearlings does not differ (adult 0.679; yearling 0.670) (Fig. 3).
Seasonal (six-month) survivorship estimates of migrant and oversummering Semipalmated Sandpiper adults (left) and yearlings (right). Vertical lines are 95% confidence intervals
Detection probability in all of the most informative models varies by age, season, and their interaction; it ranges from 0.171 to 0.594 and is higher during winter than summer for both age classes. During the summer period, adults have a higher detection probability than juveniles (Table 2).
Table 2 Detection probabilities (model averaged) of juveniles and adults in the (six-month) Winter (October to March) and Summer (April to September) seasons UCL/LCL Upper/Lower Confidence Limits (95%)
The top model carries 41% of model weight, and is 1.74 AICc units better than the next best model (Table 1). The second most informative model is identical in structure but excludes the interaction, carries 17.1% of the weight, and is 0.04 AICc units better than the third most informative model. This model includes the age, season and strategy effects, and the age by season interaction. It carries 16.8% of the weight. Seventy-five percent of the weight is in the top three models, which are all very similar.
We estimated annual survival as the product of the appropriate seasonal estimates in Table 3, namely (winter*summer) for oversummering birds, and (winter*migration) for migrants. The annual survival (Oct – Sept) estimated by this method is, for migrant yearlings 0.555, for oversummering yearlings 0.671, for adult migrants 0.614, and for oversummering adults 0.808.
Table 3 Seasonal (six-month) survival estimates (model averaged) for juvenile/yearlings (J) and adults (A) during Winter (October to March), Summer (April to September) and migration seasons. UCL/LCL = Upper/Lower 95% Confidence Limits
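These products are simple to verify; the snippet below recomputes them from the seasonal estimates in Table 3 (a quick arithmetic check, not new analysis).

```python
# Annual survival (Oct - Sept) as products of the Table 3 seasonal estimates.
winter    = {"yearling": 0.829, "adult": 0.904}
summer    = {"yearling": 0.810, "adult": 0.894}   # oversummering birds
migration = {"yearling": 0.670, "adult": 0.679}   # migrants

for age in ("yearling", "adult"):
    print(age,
          "oversummering:", round(winter[age] * summer[age], 3),
          "migrant:",       round(winter[age] * migration[age], 3))
# yearling oversummering: 0.671 migrant: 0.555
# adult oversummering: 0.808 migrant: 0.614
```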
Oversummering
The probability that a yearling migrates (transition probability ψJM1) is estimated at 0.72 (LCL: 0.67; UCL: 0.77), while the probability that an adult migrates (transition probability ψAM2) is estimated at 0.81 (LCL: 0.79; UCL: 0.82). The (complementary) rates of oversummering are 0.28 (yearling) and 0.19 (adult).
Our main prediction is that oversummering gives a survival advantage large enough to offset the reproduction necessarily foregone by oversummering. For yearlings, the difference between the estimated survival (see Table 3) of migrants (0.670) and oversummering individuals (0.810) is 0.140 (95% CI 0.118–0.162). For adults, the difference between the estimated survival of migrants (0.679) and oversummering individuals (0.894) is 0.215 (95% CI 0.169–0.261). Thus, the estimated survival advantages for adults and yearlings both closely match and do not differ significantly from the threshold values predicted (adults 0.240; yearlings 0.134; see Appendix 3).
Survival in relation to migration distance
Based on pre-migratory molt patterns measured at Paracas [18], the probability of migration rises with culmen length in all years (Appendix 2), demonstrating that longer-billed (shorter migration distance) yearlings are more likely to migrate. We entered migration probabilities based on this relationship into Eq. 1 to estimate summer season survivorship in each culmen length class. As a sensitivity analysis, we varied summer season survival of oversummering yearlings between our estimate of 0.81 and the higher estimate of 0.93 for yearling Western Sandpipers oversummering at Paracas [37].
Results are presented in Fig. 4. In all cases, the calculated survival of migrants falls off steeply for the short culmen classes (presumed to be western Arctic breeders with longer migration distance), but is level for longer-billed, shorter-distance migrant birds presumed to be from central and eastern Arctic breeding sites.
Calculated survival of migrant yearling Semipalmated Sandpipers at Paracas, in relation to culmen length (mm). Method described in the text. Longer culmens are associated with eastern breeding populations and shorter migration distance. Oversummering survival adjusted to 0.81 (upper line), 0.87 (middle line), or 0.93 (lower line)
Of the several thousand Semipalmated Sandpipers that spend the non-breeding season at Paracas, Perú, an estimated 72% of yearlings and 81% of adults migrate northward to breed, with the remainder oversummering. We estimate that the summer (April – September) survival probability of oversummering yearlings is 0.140 higher than that of migrant yearlings, while that of oversummering adults is 0.215 higher than that of migrant adults. These estimates are statistically indistinguishable from the values theoretically required to compensate oversummering birds for their foregone expected breeding success. These results support the hypothesis that oversummering is a life history tactic undertaken when the survival advantage gained equals or outweighs in fitness terms the lost breeding opportunity. We assume that individual decisions to migrate or not are flexible, dependent on individual situations, including probable migration distance, and governed by mechanisms evolved by natural selection. Since our study is observational, we could not assign individual birds to oversummering or migratory strategies at random, as would be done in a true experimental study. Thus, the comparison is imperfect as a quantification of the consequences for individuals of migration vs. oversummering. There also may be biases in permanent emigration. Despite these caveats, we view these results as demonstrating the adjustment of a behavioural threshold at fitness equivalency between conditional alternative tactics of oversummering and migration, driven by the substantial survivorship advantage for oversummering birds of both ages.
It has long been hypothesized that oversummering provides a survival advantage over migration [17]. Though consistent with the hypothesis, previous comparisons of survivorship are confounded either by age or by location. For example, in Western Sandpipers at Paracas, oversummering yearlings have higher survival (0.83) than migrant adults (0.70; [37]). But this comparison is confounded by age because at this location all adults migrate and all yearlings oversummer [22]. Survivorship comparisons have also been made between overwintering groups at locations where (some) individuals oversummer (e.g. yearling male Western Sandpipers in Mexico; survival 0.65) and those where all oversummer (Chitré; survival 0.83), but this comparison is confounded by location. Reneerkens et al. [38] found that the apparent annual survival of Sanderlings wintering in tropical West Africa (Mauritania: 0.74 and Ghana: 0.75) was lower than at three European sites (0.84, 0.84 and 0.87), even though those from the tropics often oversummered. These estimates pool migrant and oversummering birds within locations, and so if the result from our study applies, would underestimate the survival of oversummering birds and overestimate that of migrants.
Oversummering is a form of 'partial migration', though reversed from the usual system in that migrants leave non-breeding rather than breeding areas. Buchan et al. [39] review published comparisons in partial migration systems of the survival of migrants and non-migrants, in birds and other taxa. They assembled 129 effect size estimates from 23 studies, of which 73% report a survival advantage for residents (i.e. non-migrants), 22% for migrants, and 5% report equal survival. The 'persistently higher' fitness advantage in birds is associated with survival, and not breeding success.
Our finding that survival falls with migration distance, although anticipated, comes with a number of caveats that must be borne in mind. First, the breeding location of any individual bird observed at Paracas is not known, but inferred from the association between the mean culmen length and breeding location. The correlation between migration distance and culmen length is therefore indirect. With the variation around population averages, a bird with the overall mean culmen length of 18 mm, although most likely to breed in the centre of the range, could possibly breed at any location.
The overall proportion of yearling migrants (72%) was estimated by our MSMR model, and we estimated the proportion of migrants within each culmen size class based on the incidence of partial post-juvenal wing (PPW) molt. This is a minimum estimate, because it has been established [40] that some individuals migrate without PPW, though the number of recaptures is too small to establish a reliable estimate of its frequency. The data are clear that PPW at Paracas occurs with higher frequency among long-billed birds, but the quantitative relationship of culmen length to migratory tendency retains some uncertainty. Our model estimates that 19% of adults oversummered, which is higher than we expected. Accounts of oversummering in the shorebird literature refer almost exclusively to young birds. The sole published measure of adult incidence of which we are aware is 8% of Semipalmated Sandpipers from Brazil ([41]). Based on the migration distance pattern we documented for juveniles, the high proportion more likely relates to Paracas lying on the southern edge of the species' wintering range. Nevertheless, the paucity of previous descriptions is curious.
We calculated that migratory survival is higher for long-billed (short-distance, eastern-breeding) migrant yearlings than for short-billed (long-distance, western-breeding) birds, consistent with the migration distance hypothesis. The calculated relationship (Fig. 4) is not linear: survival seems high and steady for birds with culmens longer than ~ 19 mm, and falls off quickly below that length. Though consistent with our hypothesis, we emphasize that this result must be viewed as tentative.
The life history model (Appendix 3) calculates the threshold survival advantage required for oversummering to match the reproductive cost of a missed breeding season. Oversummering is favoured at values above the threshold, and migration below. The survival advantages estimated from the data (adults 0.215, for yearlings 0.140) are statistically indistinguishable from the calculated threshold values of s* (for adults 0.240, for yearlings 0.134). Substantial numbers of both adults (19%) and yearlings (28%) oversummer, and we therefore presume that migration is flexible. We hypothesize that the migration decision is condition-dependent: individuals evaluate based on their own condition and circumstances whether they lie above or below the threshold. Under this hypothesis, those that migrate (the majority: 81% of adults and 72% of yearlings) decided that their migratory prospects were good enough that the extra survival that would be gained by oversummering is less than the threshold. The minority that oversummer, in contrast, decided that their migratory prospects were poor enough that the extra survival that would be gained by oversummering lies above the threshold. Factors contributing to the variance around our measured estimates of the thresholds likely include annual differences in the food availability at Paracas that supports migratory preparation (e.g. due to ENSO values [42]), differences in the proportions of birds from different breeding sites (because migratory distance has a strong influence on oversummering), the frequency distribution of pre-migratory condition in the overwinter population, and the accuracy with which individuals are able to assess their own condition.
Our data indicate that migration distance is an important consideration affecting migratory survival, with longer migration making oversummering more advantageous. Supporting this, a recent study by Martínez-Curci et al. (2020; [43]) showed higher percentages of oversummering yearling (53%) and adult (46%) Red Knots at a very distant non-breeding site in Argentina. Additional rigors such as long ocean crossings or predators may amplify the effect of distance. Physiological condition likely also bears on the decision, including factors previously suggested, such as health [9,10,11], or plumage condition [13]. Foraging ability and food conditions [12], such as those engendered by ENSO events or other ecological conditions, may also play an important role in the evaluation of whether a migration is worth the risk [39].
The data show that adults have higher survival than juvenile/yearlings, whether measured on an annual basis, or by seasons. Age differences of this kind have often been reported, so this is not unexpected. Among shorebirds, young birds are disadvantaged through foraging competition with adults, and so are expected to have poorer survival. For example, juvenile Red Knots Calidris canutus canutus at Mauritania are displaced by adults in dyadic interactions and are forced to use more dangerous feeding areas [44]. Wintering juvenile Redshanks Tringa totanus on a Scottish estuary are socially constrained by adults to feed on salt marshes, where higher exposure to raptors elevates the mortality rate [45]. But our seasonal comparisons reveal an interesting wrinkle, in that the survival difference between migrant adults and yearlings is non-existent.
The lower survival of young birds is often attributed to lack of experience in coping with migration, foraging and predators [46,47,48,49]. For example, juveniles are assumed to be naive about avoiding dangerous sites [50], or to be less capable than adults at finding good habitats [51, 52]. Our survival estimates show that yearlings and adults differ little or not at all in migratory survival, which suggests little influence of competition or inexperience in this phase of the annual cycle. Note however that juveniles on their initial southward migration are not represented in our model.
Our results support the life history hypothesis that both oversummering juvenile and adult birds compensate for the loss of a breeding opportunity with higher survivorship than migrant birds. Migration distance has been previously identified as a factor associated with migratory propensity, and our data support this conclusion. Other factors are likely also important in affecting the decision to oversummer. The Semipalmated Sandpipers studied at Paracas may be particularly sensitive to changes in other factors, since both strategies are currently maintained in the population. Factors affecting pre-migratory body condition, such as El Niño may affect the annual trade-off [53], and climate change could alter the balance over the longer term. Heightened migratory danger from increasing falcon populations [14,15,16] could also do so.
Stearns SC. The evolution of life histories. London: Oxford University Press; 1992.
Lee AM, Reid JM, Beissinger SR. Modelling effects of nonbreeders on population growth estimates. J Anim Ecol. 2017;86:75–87.
Holmes RT. Breeding ecology and annual cycle adaptations of the red-backed sandpiper (Calidris alpina) in northern Alaska. Condor. 1966;68:3–46.
Hilden O. Population dynamics in Temminck's stint Calidris temminckii. Oikos. 1978;30:17–28.
Miller EH. Egg size in the least sandpiper, Calidris minutilla, on Sable Island, Nova Scotia. Canada Ornis Scand. 1979;10:10–6.
Gratto CL, Cooke F, Morrison RIG. Nesting success of yearling and older breeders in the Semipalmated sandpiper Calidris pusilla. Can J Zool. 1983;61:1133–7.
Saether BE. Age-specific variation in the reproductive performance of birds. Curr Ornithol. 1990;7:251–83.
McNeill R, Diaz MT, Villeneuve A. The mystery of shorebird over-summering: a new hypothesis. Ardea. 1994;82:143–51.
Eisenmann E. Northern birds summering in Panama. Wilson Bulletin. 1951;63:181–5.
Soto-Montoya E, Carmona R, Gómez M, Ayala-Pérez V, Arce N, Danemann GD. Over-summering and migrant red knots at Golfo de Santa Clara, Gulf of California, Mexico. Wader Study Group Bull. 2009;116(Suppl 3):191–4.
Wetmore A. Our migrant shorebirds in southern South America. U.S.D.A. Tech. Bull. No. 26; 1927.
Puttick GM. Foraging behaviour and activity budgets of curlew sandpipers. Ardea. 1979;67:111–22.
O'Hara PD. The role of feather wear in alternative life history strategies of a long-distance migratory shorebird, the Western sandpiper. Ph.D: Dissertation, Simon Fraser University, Burnaby, BC, Canada; 2002.
Lank DB, Butler RW, Ireland J, Ydenberg RC. Effects of predation danger on migration strategies of sandpipers. Oikos. 2003;103:303–19.
Ydenberg RC, Butler RW, Lank DB, Smith BD, Ireland J. Western sandpipers have altered migration tactics as peregrine falcon populations have recovered. Proc R Soc B. 2004;271:1263–9.
Ydenberg RC, Butler RW, Lank DB. Effects of predator landscapes on the evolutionary ecology of routing, timing and molt by long-distance migrants. J Avian Biol. 2007;38:523–9.
Summers RW, Underhill LG, Prys-Jones RP. Why do young waders in southern Africa delay their first return migration to the breeding grounds? Ardea. 1995;83:351–7.
Tavera EA, Lank DB, González PM. Effects of migration distance on life history strategies of Western and Semipalmated sandpipers in Perú. J. Field Orn. 2016;87(Suppl 3):293–308.
O'Hara PD, Fernández G, Becerril F, De La Cueva H, Lank DB. Life history varies with migratory distance in Western sandpipers Calidris mauri. J Avian Biol. 2005;36:191–202.
Hicklin P, Gratto-Trevor CL. Semipalmated Sandpiper (Calidris pusilla). In: Poole A, editor. The Birds of North America Online. Cornell Lab of Ornithology, Ithaca; 2010.
Myers JP. A test of three hypotheses for latitudinal segregation of the sexes in wintering birds. Can J Zool. 1981;59:1527–34.
Fernández G, O'Hara PD, Lank DB. Tropical and subtropical Western sandpipers (Calidris mauri) differ in life history strategies. Ornitol Neotrop. 2004;15:385–94.
Myers JP, Maron TJ, Sallaberry AM. Going to extremes: why do sanderlings migrate to the Neotropics? Neotrop Orni. 1985;36:520–35.
Harrington BA, Morrison RIG. Semipalmated sandpiper migration in North America. Stud Avian Biol. 1979;2:83–100.
Myers JP, Maron JC, Ortiz E, Castro GV, Howe MA, Morrison RIG, Harrington BA. Rationale and suggestions for a hemispheric colour-marking scheme for shorebirds: a way to avoid chaos. Wader Study Group Bull. 1983;38:30–2.
Prater AJ, Marchant JH, Vuorinen J. Guide to the identification and ageing of Holarctic waders. B.T.O. Guide 17. Maud and Irvine Ltd., Tring, UK; 1977.
Gratto-Trevor CL, Morrison RIG, Mizrahi DS, Lank DB, Hicklin P, Spaans AL. Migratory connectivity of Semipalmated sandpipers: winter distribution and migration routes of breeding populations. Waterbirds. 2012;35:83–95.
Nichols JD, Kendall W. The use of multi-state capture-recapture models to address questions in evolutionary ecology. J Appl Stat. 1995;22:835–46.
Lebreton JD, Pradel R. Multistate recapture models: modelling incomplete individual histories. J Appl Stat. 2002;29:353–69.
Schaub M, Gimenez O, Schmidt BR, Pradel R. Estimating survival and temporary emigration in the multistate capture-recapture framework. Ecol. 2004;85:2107–13.
White GC, Burnham KP. Program MARK: Survival estimation from populations of marked animals. Bird Study. 1999;46:120–38.
Laake J.L. RMark: An R Interface for Analysis of Capture–Recapture Data with MARK. AFSC Processed Rep 2013–01, 25 p. Alaska Fisheries Science Center, NOAA, National Marine Fisheries Service, Seattle, Washington, USA; 2013.
R Development Core Team. R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. http://www.r-project.org; 2012.
Burnham KP, Anderson DR. Model selection and multimodel inference: a practical information-theoretic approach. 2nd ed. Fort Collins: Colorado State University; 2002.
Gimenez O, Lebreton JD, Choquet R, Pradel R. R2ucare: an R package to perform goodness-of-fit tests for capture–recapture models. Methods Ecol Evol. 2018;9:1749–54.
Kendall WL. Coping with unobservable and mis–classified states in capture–recapture studies. Anim Biodiv Conserv. 2004;27:97–107.
Tavera EA. Survivorship and Life History Strategies in relation to migration distance in Western and Semipalmated sandpipers in Perú. PhD thesis, Simon Fraser University, Canada; 2020.
Reneerkens J, Versluijs TS, Piersma T, Alves JA, Boorman M, Corse C, Gil O, Hallgrimsson GT, Lang J, Loos B, Ntiamoa-Baidu Y, Nuoh AA, Potts PM, Horn J, Lok T. Low fitness at low latitudes: wintering in the tropics increases migratory delays and mortality rates in an Arctic breeding shorebird. J Anim Ecol. 2019. https://doi.org/10.1111/1365-2656.13118.
Buchan C, Gilroy JJ, Caty I, Franco AMA. Fitness consequences of different migratory strategies in partially migratory populations: a multi-taxa meta-analysis. J Anim Ecol. 2019. https://doi.org/10.1111/1365-2656.13155.
Gratto CL, Morrison RIG. Partial post-juvenile wing molt of the Semipalmated sandpiper Calidris pusilla. Wader Study Group Bull. 1981;33:33–7.
Fedrizzi CE, Azevedo SM, Lazerda de Larrazabal ME. Body mass and acquisition of breeding plumage of wintering Caliidris pusilla (Linnaeus) (Aves, Scolopacidae) in the coast of Pernambuco, north-eastern Brazil. Rev. Bra. Zool. 2004;21(Suppl 2):249–52.
Quispe-Ccalluari C, Tam J, Demarcq H, Chamorro A, Espinoza-Morriberon D, Romero C, Dominguez N, Ramos J, Oliveros-Ramos R. An index of coastal thermal effects of El Niño southern oscillation on the Peruvian upwelling ecosystem. Int J Climatol. 2018;38:3191–201.
Martínez-Curci NS, Isacch JP, D'Amico VL, Rojas P, Castresana GJ. To migrate or not: drivers of over-summering in a long-distance migratory shorebird. J Avian Biol. 2020. https://doi.org/10.1111/jav.02401.
Van den Hout PJ, van Gils JA, Robin F, Van der Geest M, Dekinga A, Piersma T. Interference from adults forces young red knots to forage for longer and in dangerous places. Anim Behav. 2014;88:137–46.
Cresswell W. Flocking is an effective anti-predation strategy in redshanks, Tringa totanus. Anim Behav. 1994;47:433–42.
Lima SL. Predation risk and unpredictable feeding conditions: determinants of body mass in birds. Ecology. 1986;67:377–85.
Kus BE, Ashman P, Page GW, Stenzel LE. Age related mortality in a wintering population of dunlin. Auk. 1984;101:69–73.
Anderson DR, Burnham KP, White GC. Problems in estimating age-specific survival rates from recovery data of birds ringed as young. J Anim Ecol. 1985;54:89–98.
Sandercock BK, Gratto-Trevor CL. Local survival in Semipalmated sandpipers Calidris pusilla breeding at La Pérouse Bay, Canada. Ibis. 1997;139(Suppl 2):305–12.
Dierschke V. High profit at high risk for juvenile dunlins Calidris alpina stopping over at Helgoland (German bight). Ardea. 1998;86:59–69.
Ralph CJ. The disorientation and possible fate of young passerine coastal migrants. Bird-Band. 1978;49:237–47.
Rappole JH, Ramos MA, Oehlenschlager RJ, Warner DW, Barkan CP. Timing of migration and route selection in north American songbirds. In: Drawe DL, editor. Proceedings of the first welder Wildlife Foundation symposium. Welder Wildlife Foundation, Sinton, Texas; 1979. p. 199–214.
O'Hara PD, Hasse BJM, Elner RW, Smith BD, Kenyon JK. Are population dynamics of shorebirds affected by El Niño/southern oscillation (ENSO) while on their non-breeding grounds in Ecuador?. Estuar. Coast Shelf S 2007;74:96–108.
Weiser EL, Lanctot RB, Brown SC, Gates RH, Bêty J, Boldenow ML, Brook RW, Brown GS, English WB, Flemming SA, Franks SE, Gilchrist HG, Giroux MA, Johnson A, Kendall S, Kennedy LV, Koloski L, Kwon E, Lamarre JF, Lank DB, Latty CJ, Lecomte N, Liebezeit JR, McGuire RL, McKinnon L, Nol E, Payer D, Perz J, Rausch J, Robards M, Saalfeld ST, Senner NR, Smith PA, Soloviev M, Solovyeva D, Ward DH, Woodard PF, Sandercock BK. Annual adult survival drives trends in Arctic-breeding shorebirds but knowledge gaps in other vital rates remain. The Condor. 2018; doi: https://doi.org/10.1093/condor/duaa026.
A number of people played key roles in data collection. We would like to give special thanks to the CORBIDI shorebird banding crew: E. Ortiz, Y. Tenorio, R. Huayanca, M. Antezana, T. Poma, and to all the hundreds of volunteers for their contributions to the 55 field campaigns carried out for this study. We deeply appreciate CORBIDI's Director, Thomas Valqui, for his support and first-hand cooperation during all the years of fieldwork. We are grateful to the staff of Paracas National Reserve, especially to P. Saravia for assistance in obtaining the research permits, and for much other support and cooperation. We thank M. Drever and Environment and Climate Change Canada for valuable scientific advice and support, Wendy Palen and David Green for statistical guidance, and Jan van Gils for his suggestions. This study was carried out under permit of the Peruvian National Service of Protected Natural Areas (SERNANP-RNP).
Funding was provided primarily by the Neotropical Migratory Bird Conservation Act Program (NMBCA) administered by the U.S. Fish and Wildlife Service. Also, by the Centre for Wildlife Ecology (CWE) at Simon Fraser University, by Environment and Climate Change Canada (ECCC) and by the U. S. Forest Service. For all sources, funding role only covered logistic expenses.
Centre for Wildlife Ecology, Department of Biological Sciences, Simon Fraser University, 8888 University Drive, Burnaby, British Columbia, V5A 1S6, Canada
Eveling A. Tavera, David B. Lank & Ronald C. Ydenberg
Centro de Ornitología y Biodiversidad – CORBIDI, Santa Rita 105, Of. 202, Huertos de San Antonio, Surco, Lima 33, Lima, Peru
Eveling A. Tavera
Present address: Centre for Wildlife Ecology, Department of Biological Sciences, Simon Fraser University, 8888 University Dr., Burnaby, BC, V5C2G2, Canada
Wisconsin Department of Natural Resources, 107 Sutliff Ave, Rhinelander, WI, 54501, USA
Glenn E. Stauffer
David B. Lank
Ronald C. Ydenberg
This paper developed from chapters of EAT's PhD thesis at Simon Fraser University. EAT was responsible for gathering and compiling the field data. GES and EAT designed and ran the MSMR model and model competitions. EAT, RCY and DBL contributed to conceptual and analytical design of life history comparisons, interpretation of the results and editing the manuscript. EAT wrote the manuscript with contributions from RCY and DBL. All authors read and approved the final manuscript.
Correspondence to Eveling A. Tavera.
Capture and sampling methods performed followed guidelines recommended by the Canadian Council on Animal Care and was approved by the Animal Care Committee of Simon Fraser University (permit 1043B-03).
Encounters of marked Semipalmated Sandpipers at Paracas, Perú
Table 4 The number of sampling occasions in which birds marked in each year were (re-) encountered, ranging from 1 (= initial capture only), to 16 (includes initial capture). A total of 1963 were marked, of which 855 (43.6%) were encountered only during the sampling period when intially captured
Table 5 Mean number of sampling occasions and mean number of years in which individually-marked Semipalmated Sandpipers were subsequently re-encountered at Paracas, Perú, both by year birds were initially marked
Estimating relationships between culmen length, the probability of migration and survival
Tavera (2020; [37]) analyzed the relationship between culmen length and annual survival by adding culmen length as a covariate to the encounter history of marked Semipalmated Sandpipers at Paracas, using an open robust design multistate model to estimate survival. The resultant slope (− 0.0048; change in survival probability/mm of culmen) is slightly but non-significantly negative, which runs counter to the expectation for migrants. However, this slope estimate pools oversummering and migrant yearlings, which we hypothesize differ in survival.
The procedure to make separate survival estimates for migrants and oversummering birds is described in the text, and is encapsulated in Eq. 1. This requires information on the proportion of migrants in each culmen size class, derived as follows. Tavera ([37]; see her Table 7) measured the proportion of yearlings undergoing partial post-juvenal wing molt (PPW) at Paracas during the pre-migratory period. Three years of data all show higher rates of PPW at longer culmen lengths, with the strongest relationship and the narrowest confidence limits occurring in 2015 when PPW was most common (44.4%), shown below (Fig. 5 in Appendix 2). Gratto and Morrison [40] observed that some yearlings may migrate without any PPW, and as no individuals undergo PPW and then oversummer (Tavera unpubl. data), the incidence of PPW appears to be a minimal estimate of the proportion of migrants.
To estimate the proportion of migrants in each culmen size class, we adjusted the estimate of PPW in each size class upward, requiring that its incidence increases smoothly (i.e. no inflection points) over all culmen length size classes. Values were adjusted until (i) the overall proportion of migrants and (ii) the overall survival match the values estimated by the MSMR model (0.72 and 0.70, respectively). The final vector of proportions ('prop. Migrants' column in Table 6 in Appendix 2) is not a unique solution, but with the known distribution of culmen lengths at Paracas ('proportion' column in Table 6 in Appendix 2, from Fig. 2 in [18]), only a narrow range of size class proportions is able to satisfy these criteria. These estimates were entered into Eq. 1. The calculations are summarized in Table 6 in Appendix 2.
Table 6 Calculations of the survival of migrant Semipalmated Sandpipers at Paracas in each culmen length size class. For example, survival in the 17 mm culmen length size class is 0.710. These constitute 23% of the birds at Paracas. We estimate using the above procedure that 57% are migrants, so 43% oversummer. Oversummer survival is 0.81. Therefore, if overall survival of the size class is 0.710, migrant survival is 0.635, and oversummering gives a survival advantage of 18%
Probability of Partial Post-Juvenal Wing Molt (PPW) in juvenile Semipalmated Sandpipers captured at Paracas, during the 2015 pre-migratory season [18]. The relationship is computed with the co-variate day-of-year set to 168.1, breeding plumage index set to 1.54, and mass to 24.63 g, values that minimize bias. The probability of PPW ranges from 0.22 for the shortest bills to 0.83 for the longest
How large must the oversummering survival advantage be to compensate for a missed breeding season?
Several thousand Semipalmated Sandpipers spend the boreal winter at Paracas. Northward migration to breeding sites begins in April, but some yearlings (i.e. birds born the previous summer) as well as some adults do not migrate but remain at Paracas during the boreal summer ('oversummer').
A hypothesis for oversummering is that survival is higher than for migration. There is also a cost to oversummering, because a breeding opportunity is foregone. The survival advantage for oversummering must therefore be high enough, in fitness terms, to compensate. As implied by previous studies of shorebird oversummering (see Introduction), we hypothesize that the ability to undertake the breeding migration is condition-dependent. Due to relatively poor condition, (perhaps due to parasitic infection, low wing or plumage quality, or low fat stores [42]) some individuals decide to oversummer, trading off the fitness benefit of higher survival against the fitness cost of a foregone breeding opportunity. In this Appendix we estimate the magnitude of the survival advantage required to compensate.
We term the expected Lifetime Reproductive Success (LRS) of an adult A. We assume that this is independent of whether a bird migrated or oversummered as a yearling (i.e. no carry-over). The expected reproductive success of a breeding yearling is termed R, and is likely lower than that of adults. We use the value of 0.76 from Weiser et al. (2018; [54]) for the annual survival of adult Semipalmated Sandpipers. (This is slightly higher than that estimated at Paracas, where mortality and permanent emigration cannot be distinguished.) With this level of survival, an adult Semipalmated Sandpiper expects to survive for 1/(1–0.76) = 4.2 years.
Gratto et al. (1983; [6]) measured Semipalmated Sandpiper reproductive success: mean adult clutch size is 3.9 and hatching success 77%, while yearling clutch size is 3.8 and hatching success 44%. The expected annual reproductive success of adults is therefore 3.00, and of yearlings 1.67. The expected LRS of an adult, A, is (3.00 * 4.2) = 12.5.
We term the breeding season (April – August) survival of a migrant yearling x, and denote the additional survival gained by oversummering as s (i.e. the survival advantage). We seek the value of s at which the fitness of a yearling's life history with oversummering is equal to that of a life history with migration. Designate this threshold value of s as s*. Oversummering is favored if s > s*, and migration if s < s*.
To find s*, we reason as follows: from the decision point (i.e. in March or April, when a bird commits to one strategy or the other) an oversummering yearling expects to survive until October and reach adulthood with probability x + s, and from that point expects LRS of A. Expected fitness is (x + s)A. From the same decision point, yearling migrants survive the breeding season and reach adulthood with probability x, and expect LRS of A if they do so. Reproductive success of the first breeding attempt is R, so expected fitness is R + xA. At the threshold these alternatives have equal fitness, so (x + s)A = R + xA. Solving for s gives s* = R/A. The term R/A can be interpreted as the proportion of expected adult LRS that a yearling foregoes by oversummering.
The predicted threshold survival advantage s* = R/A = (1.67/12.5) = 0.134. The threshold survival advantage required for adults to oversummer is predicted to be higher because they forego more reproduction by oversummering than do yearlings. Following the logic above we calculate s* for adults as 0.240.
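The arithmetic above is easily reproduced (a quick check using only values quoted in this Appendix; the variable names are ours):

```python
adult_survival = 0.76
life_expectancy = 1 / (1 - adult_survival)   # ~4.2 breeding seasons
R_adult = 3.9 * 0.77                         # ~3.00 young hatched per season
R_yearling = 3.8 * 0.44                      # ~1.67
A = R_adult * life_expectancy                # expected adult LRS, ~12.5

s_star_yearling = R_yearling / A             # ~0.134
s_star_adult = R_adult / A                   # ~0.240
print(round(A, 1), round(s_star_yearling, 3), round(s_star_adult, 3))
```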
Note also that yearling migratory survival is estimated to be lower in smaller culmen size classes (Fig. 4), presumably because the migration distance is on average longer. This would enlarge the survival advantage of oversummering, assuming that all else is equal. We can estimate the survival advantage for each culmen size class by subtracting the estimated survival of migrants in that size class from the measured oversummering survival of 0.81. Assuming that neither R nor A is affected by culmen length, the threshold value s* remains the same (0.13). The pattern of survival advantage therefore matches migratory propensity (estimated from PPW), with more oversummering in smaller culmen length classes (Table 6 in Appendix 2).
Finally, we add that this basic theoretical calculation of the threshold does not consider the option of PPW. We presume this would somewhat broaden the conditions favoring migration (i.e. lower s*), because PPW enables an improvement in migratory preparedness (specifically, the condition of the primaries) at a cost lower than that of a full wing molt.
Tavera, E.A., Stauffer, G.E., Lank, D.B. et al. Oversummering juvenile and adult Semipalmated sandpipers in Perú gain enough survival to compensate for foregone breeding opportunity. Mov Ecol 8, 42 (2020). https://doi.org/10.1186/s40462-020-00226-6
Multi-state mark-recapture model
Migratory strategy
Distance-dependent
How to prove the equivalence of two different definitions of S-operator? i.e. $\Omega_+(\Omega_-)^\dagger= e^{i \alpha}(\Omega_-)^\dagger\Omega_+$
I read there are two definitions of the [S-operator](https://en.wikipedia.org/wiki/S-matrix#The_S-matrix):
The first one (e.g (8.49) in Greiner's Field Quantization) is:
$$S_{fi}\equiv \langle \Psi_p^{-}| \Psi_k^{+}\rangle$$
where $|\Psi_p^{-}\rangle$ is a state in the Heisenberg picture which is $| p \rangle$ at $t=+\infty$ when you calculate $|\Psi_p^{-}\rangle$ in the Schrödinger picture, called the out state. $| \Psi_k^{+}\rangle$ is a state in the Heisenberg picture which is $| k \rangle$ at $t=-\infty$, called the in state.
So $$S_{fi}\equiv \langle \Psi_p^{-}| \Psi_k^{+}\rangle= \langle p|(\Omega_-)^\dagger\Omega_+|k \rangle$$
In this case the S-operator $\hat S=(\Omega_-)^\dagger\Omega_+$,
where Møller operator
$$\Omega_+ = \lim_{t\rightarrow -\infty} U^\dagger (t) U_0(t)$$
$$\Omega_- = \lim_{t\rightarrow +\infty} U^\dagger (t) U_0(t)$$
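Indeed (a standard intermediate step, assuming a time-independent $H$ so that $U(t')U^\dagger(t)=e^{-iH(t'-t)}$):
$$(\Omega_-)^\dagger\Omega_+=\lim_{t'\rightarrow +\infty,\ t\rightarrow -\infty} U_0^\dagger(t')\,U(t')\,U^\dagger(t)\,U_0(t)=\lim_{t'\rightarrow +\infty,\ t\rightarrow -\infty} U_I(t',t)$$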
So $$S=U_I(\infty,-\infty)$$
Another definition (e.g (9.14) (9.17) (9.99) in Greiner's Field Quantization) is :
$$S_{fi}\equiv \langle \Psi_p^{-}| \Psi_k^{+}\rangle\equiv\langle \Psi_p^{-}| \hat S ^\prime |\Psi_k^{-}\rangle=\langle \Psi_p^{+}| \hat S ^\prime |\Psi_k^{+}\rangle$$
where S-operator
$\hat S ^\prime |\Psi_p^{-}\rangle =|\Psi_p^{+}\rangle$ that is $\hat S^\prime = \Omega_+(\Omega_-)^\dagger$.
It seems that these two definitions are different, but many textbooks derive the same Dyson formula for both S-operators.
https://en.wikipedia.org/wiki/S-matrix#The_S-matrix
How to prove: $$\Omega_+(\Omega_-)^\dagger= e^{i \alpha}(\Omega_-)^\dagger\Omega_+$$
related to this question: http://physics.stackexchange.com/questions/105152/there-are-two-definitions-of-s-operator-or-s-matrix-in-quantum-field-theory-a
s-matrix-theory
quantum-mechanics
asked Apr 1, 2017 in Theoretical Physics by Alienware (185 points) [ revision history ]
edited Apr 1, 2017 by Alienware
I don't think the claimed equality holds. Probably different sources use slightly different definitions of the Moeller operators.
commented Apr 2, 2017 by Arnold Neumaier (13,989 points) [ no revision ]
@ArnoldNeumaier The first definition is obviously right. In many textbooks you can find that the second definition still has the form of a Dyson formula up to a phase, and the authors give proofs, e.g. Greiner's Field Quantization (9.14) (9.17) (9.99) and Hatfield's QFT (7.43) (7.63)-(7.90).
commented Apr 2, 2017 by Alienware (185 points) [ no revision ]
The first definition gives the S-matrix in the interaction picture (a unitary operator in asymptotic space), the second formula gives the S-matrix in the Heisenberg picture, a unitary operator in Heisenberg space. These usually (e.g., if there is more than one channel) operate in distinct Hilbert spaces. Note that $\Omega_\pm$ go from the asymptotic Hilbert space to the Heisenberg Hilbert space (which are in general distinct).
See Thirring's Course on Mathematical Physics, Vol. 3, Definition (3.4.23) (p.138 in the first edition).
answered Apr 3, 2017 by Arnold Neumaier (13,989 points) [ revision history ]
edited Apr 3, 2017 by Arnold Neumaier
So the S-operators in these two definitions are two different operators. But do they differ only by a phase? I ask because I have found in different textbooks that these two different operators have the same Dyson series.
@Alienware: In general they cannot be compared at all since they act on different spaces. To compare them one must identify the asymptotic Hilbert space in some way with the Heisenberg Hilbert space, and there is no canonical way to do so. To actually compute S-matrix elements to compare with cross sections or other collision data you need the interaction S-matrix. | CommonCrawl |
Hermitian adjoint of 4-gradient in Dirac equation
I'm having issues deriving the Dirac adjoint equation, $$\overline{\psi}(i\gamma^{\mu}\partial_{\mu}+m)=0.\tag{1}$$ I started by taking the Hermitian adjoint of all components of the original Dirac equation, giving me $$\psi^{\dagger}(-i\gamma^{\mu\dagger}\partial_{\mu}^{\dagger}-m)=0.\tag{2}$$ The adjoint of the gamma matrices is defined to be $\gamma^{\mu\dagger}=\gamma^0\gamma^{\mu}\gamma^0$, so no issues there. Now intuitively, I would think that the adjoint of the 4-gradient would be $\partial_{\mu}^{\dagger}=-\partial_{\mu}$. In non-relativistic quantum mechanics, it can be shown that first-derivative operators are anti-Hermitian, so for example, $\left(\frac{d}{dx}\right)^{\dagger}=-\frac{d}{dx}$. So I would think this would be the same case for the 4-gradient, but apparently it isn't. Among the many derivations I've gone over, for example on page 77 here, it is claimed that the 4-gradient is self-adjoint. Could someone please explain why my intuition is incorrect?
quantum-field-theory operators differentiation dirac-equation dirac-matrices
Qmechanic♦
connorp
$\begingroup$ Depends on the space. If you're taking adjoints in $L^2$, then yes, it's anti-Hermitian. But here you're taking adjoints in spinor space (that is, transposing and conjugating spinors and matrices), so the derivative is unaffected. $\endgroup$ – Javier Aug 13 '16 at 23:02
$\begingroup$ @Javier Could you explain why exactly the derivative is unaffected? I'm not very familiar with spinor space. Thanks. $\endgroup$ – connorp Aug 14 '16 at 16:42
Expanding on my comment.
The basic idea is that what you mean by adjoint depends on the vector space being considered. For example, we might have $\mathbb{C}^n$ as our space, with the usual inner product; in that case, the adjoint of a vector or matrix is the transpose conjugate. Note that technically, taking adjoint of a vector doesn't return a vector, because row vectors and column vectors belong to different spaces.
We could also use $L^2(\mathbb{R}^n)$ as our vector space. Its elements are functions, and the inner product is defined by
$$(f,g) = \int d^n x\ f^* g$$
You can take adjoints here too, using the inner product defined above. It can be shown that the derivative operator, which is a linear transformation on $L^2$, is anti-Hermitian.
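A quick numerical illustration of that statement (a minimal NumPy sketch, assuming a uniform grid with periodic boundaries; not part of the original argument): the central-difference matrix approximating $d/dx$ is exactly antisymmetric, i.e. anti-Hermitian with respect to the discrete $L^2$ inner product.

```python
import numpy as np

n, h = 8, 0.1   # grid points and spacing (arbitrary small example)
# (D f)_i = (f_{i+1} - f_{i-1}) / (2h), with periodic wrap-around
D = (np.roll(np.eye(n), 1, axis=1) - np.roll(np.eye(n), -1, axis=1)) / (2 * h)
assert np.allclose(D.T, -D)   # D is antisymmetric, hence anti-Hermitian
```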
Now to the Dirac equation. The vector space (that is, spinor space) being considered here is $\mathbb{C}^4$, not $L^2$. That is, $\psi$ is a vector because it has four components, not because it's a function. The fact that its components are functions is irrelevant here. When we take adjoints, we transpose and conjugate vectors and matrices. The derivative is an operator if you think about what it does to functions, but it is not a $4\times4$ matrix; it does nothing to spinors. Therefore, the particular adjoint we're doing here doesn't affect it.
Javier
Perhaps the following argument is more convincing:
The Dirac equation$^1$ $$ (i\gamma^{\mu}\stackrel{\rightarrow}{\partial}_{\mu}-m)\psi~=~0 \tag{A}$$ is by the fundamental lemma of variational calculus equivalent to $$ \forall \phi:\quad 0~=~\int d^4x~\bar{\phi}(i\gamma^{\mu}\stackrel{\rightarrow}{\partial}_{\mu}-m)\psi, \tag{B}$$ where $\phi$ is an arbitrary (off-shell) Dirac spinor.
Hermitian conjugation in Dirac spinor space leads to $$ \forall \phi:\quad0~=~\int d^4x~\bar{\psi}(-i\stackrel{\leftarrow}{\partial}_{\!\mu}~\gamma^{\mu}-m)\phi, \tag{C}$$ which is equivalent to $$ \bar{\psi}(i\stackrel{\leftarrow}{\partial}_{\!\mu}~\gamma^{\mu}+m)~=~0,\tag{D}$$ cf. the above comment by Javier.
On the other hand, if we also integrate (C) by parts, we get (after discarding boundary terms) $$ \forall \phi:\quad 0~=~\int d^4x~\bar{\psi}(i\gamma^{\mu}\stackrel{\rightarrow}{\partial}_{\mu}-m)\phi, \tag{E}$$ where the derivative now acts on $\phi$.
$^1$ We use the following conventions: $$ \bar{\psi} ~=~ \psi^{\dagger}\gamma^0 , \qquad\gamma^{\mu\dagger}~=~\gamma^0\gamma^{\mu}\gamma^0, \qquad (\gamma^0)^2~=~{\bf 1}_{4\times 4}. \tag{F}$$
Qmechanic♦
Trigonometric Functions and Equations
Many students struggle to master trigonometry, the study of relationships involving right-angled triangles. The six trigonometric functions of an angle in a right-angled triangle are used to find unknown side lengths and angles from known ones.
Sine, cosine, and tangent are the three primary trigonometric functions; all the other trigonometric functions can be derived from them.
$\sin\theta$
The sine function is the ratio of the opposite side length to that of the hypotenuse.
$\frac{\text{opposite}}{\text{hypotenuse}}$ OR $\frac{1}{\text{cosec }\theta}$
$\cos\theta$
The cosine function is the ratio of the adjacent side length to that of the hypotenuse.
$\frac{\text{adjacent}}{\text{hypotenuse}}$ OR $\frac{1}{\sec\theta}$
$\tan\theta$
The tangent function is the ratio of the opposite side length to the adjacent side length.
This function can also be represented in the form of sine and cosine.
$\frac{\text{opposite}}{\text{adjacent}}$ OR $\frac{\sin\theta}{\cos\theta}$ OR $\frac{1}{\cot\theta}$
The values of trigonometric functions of a general angle cannot be readily calculated unless we are dealing with special angles. Let's start off with the primary trigonometric functions.
Trigonometric Ratios of Special Angles
A special angle is an angle where the exact value of one of the trigonometric ratios is known. The trigonometric ratios for the special angles are essential in applications and can be deduced with a little geometry, along with some knowledge of surds.
In this section, we will find the exact values of the special angles. We can use a simple method for doing so, based on the fact that certain basic geometrical shapes have special angles. The special property of the right-angled triangle can also be utilized to find these special angles.
The first shape we have here is an equilateral triangle. An equilateral triangle has three equal-length sides and three angles which are all the same measure, in this case, $60^\circ$.
Since trigonometric ratios have to revolve around a right-angled triangle, we can draw a straight line from the apex of the equilateral triangle to its base to split the triangle into two equal right-angled triangles. This gives us a triangle with angles of $30^\circ$, $60^\circ$ and $90^\circ$.
Let the shortest side of the triangle be $1$ unit. The hypotenuse will then be $2$ units, as it is twice the length of the shortest side. To find the height of the triangle, we are going to use Pythagoras' Theorem.
$\begin{aligned}
c&=\sqrt{{{a}^{2}}+{{b}^{2}}} \\
{{c}^{2}}&={{a}^{2}}+{{b}^{2}} \\
{{2}^{2}}&={{h}^{2}}+{{1}^{2}} \\
{{h}^{2}}&=4-1 \\
h&=\sqrt{3}
\end{aligned}$
$30$ degrees

$\theta=30{}^\circ $ or $\frac{\pi }{6}$ rad

$\begin{aligned}
\sin \theta &=\frac{\text{opposite}}{\text{hypotenuse}} \\
& =\frac{1}{2}
\end{aligned}$

$\begin{aligned}
\cos \theta &=\frac{\text{adjacent}}{\text{hypotenuse}} \\
& =\frac{\sqrt{3}}{2}
\end{aligned}$

$\begin{aligned}
\tan \theta &=\frac{\text{opposite}}{\text{adjacent}} \\
& =\frac{1}{\sqrt{3}} \\
& =\frac{1}{\sqrt{3}}\cdot \frac{\sqrt{3}}{\sqrt{3}} \\
& =\frac{\sqrt{3}}{3}
\end{aligned}$

For the $60{}^\circ $ angle of the same triangle, the opposite and adjacent sides swap, giving $\sin 60{}^\circ =\frac{\sqrt{3}}{2}$, $\cos 60{}^\circ =\frac{1}{2}$ and $\tan 60{}^\circ =\frac{\sqrt{3}}{1}=\sqrt{3}$.
Next is an isosceles triangle, a triangle in which at least two of the sides are congruent (equal in length).
We can draw a diagonal line from corner to corner of the square. Then we see that the square is split into two congruent right-angled triangles. This gives us a triangle with angles of $45^\circ$, $45^\circ$ and $90^\circ$.
The original square has sides of length $1$ unit. Since this is an isosceles right-angled triangle, we can use Pythagoras' Theorem to find the hypotenuse.
$\begin{aligned}
c&=\sqrt{{{1}^{2}}+{{1}^{2}}} \\
& =\sqrt{2}
\end{aligned}$

$45$ degrees

$\theta=45{}^\circ $ or $\frac{\pi }{4}$ rad

$\begin{aligned}
\sin \theta &=\frac{\text{opposite}}{\text{hypotenuse}} \\
& =\frac{1}{\sqrt{2}}
\end{aligned}$

$\begin{aligned}
\cos \theta &=\frac{\text{adjacent}}{\text{hypotenuse}} \\
& =\frac{1}{\sqrt{2}}
\end{aligned}$

$\begin{aligned}
\tan \theta &=\frac{\text{opposite}}{\text{adjacent}} \\
& =\frac{1}{1} \\
& =1
\end{aligned}$
Trigonometric Values of Special Angles

| $\theta $ | $\sin \theta $ | $\cos \theta $ | $\tan \theta $ |
| --- | --- | --- | --- |
| $30{}^\circ $ or $\frac{\pi }{6}$ rad | $\frac{1}{2}$ | $\frac{\sqrt{3}}{2}$ | $\frac{1}{\sqrt{3}}=\frac{\sqrt{3}}{3}$ |
| $45{}^\circ $ or $\frac{\pi }{4}$ rad | $\frac{1}{\sqrt{2}}$ | $\frac{1}{\sqrt{2}}$ | $1$ |
| $60{}^\circ $ or $\frac{\pi }{3}$ rad | $\frac{\sqrt{3}}{2}$ | $\frac{1}{2}$ | $\frac{\sqrt{3}}{1}=\sqrt{3}$ |
Without using a calculator, find the exact value of $\frac{\sin \frac{\pi }{4}}{\cos \frac{\pi }{3}}$.
$\begin{aligned}
\frac{\sin \frac{\pi }{4}}{\cos \frac{\pi }{3}}&=\frac{\frac{1}{\sqrt{2}}}{\frac{1}{2}} \\
& =\frac{1}{\sqrt{2}}\times \frac{2}{1} \\
& =\frac{2}{\sqrt{2}} \\
& =\sqrt{2}
\end{aligned}$
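The result can also be checked symbolically (a quick SymPy sketch, separate from the exam method above):

```python
from sympy import sin, cos, pi, sqrt, simplify

expr = sin(pi / 4) / cos(pi / 3)        # (1/sqrt(2)) divided by (1/2)
assert simplify(expr - sqrt(2)) == 0    # equals sqrt(2) exactly
```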
Basics of Angles
Clockwise and Anticlockwise Rotations
We have all seen objects rotate or revolve, and we know that they can do so in two different ways. The direction in which a plane deviates from a line is called an angle. It can be positive or negative. We know that the hands of a clock go in a clockwise direction but in trigonometry, angles are always measured in the anticlockwise direction from the positive $x$-axis.
The angle measured in an anticlockwise direction has a positive value.
The angle measured in a clockwise direction has a negative value.
Basic Angle, $\alpha$
The basic acute angle or the reference angle, $\alpha$, is the acute angle that the given angle makes with the $x$-axis.
First quadrant: $\left( \theta =\alpha \right)$
Second quadrant: $\left(\theta =180{}^\circ -\alpha \right)$ OR $\left(\theta =\pi -\alpha \right)$
Third quadrant: $\left(\theta =180{}^\circ +\alpha \right)$ OR $\left(\theta =\pi +\alpha \right)$
Fourth quadrant: $\left(\theta =360{}^\circ -\alpha \right)$ OR $\left(\theta =2\pi -\alpha \right)$
Express $\sin \theta$, $\cos \theta$ and $\tan \theta$ in terms of the ratios of its basic angles when $\theta=140^\circ $.
Basic angle, $\alpha =180{}^\circ -140{}^\circ =40{}^\circ $
$\begin{aligned}
\sin 140{}^\circ &=\sin 40{}^\circ \\
\cos 140{}^\circ &=-\cos 40{}^\circ \\
\tan 140{}^\circ &=-\tan 40{}^\circ
\end{aligned}$
Finding Trigonometric Ratios
ASTC Rules
We can easily solve trigonometric questions by identifying the 4 quadrants and calculating the angles in degrees or radians. The four quadrants of the coordinate plane are important for trigonometric calculations, and it's helpful to have a quick visual reference for angles; the quadrants are numbered anticlockwise as shown in the figure.
The easiest way to remember is by using the mnemonic ASTC – All Students Take Calculus.
1st Quadrant: If a given angle is in the first quadrant ($0^\circ$ to $90^\circ$), then all trigonometric ratios of that angle will be positive. The letter 'A' in ASTC indicates that all trigonometric ratios in the first quadrant are positive.

2nd Quadrant: If a given angle is in the second quadrant ($90^\circ$ to $180^\circ$), then all trigonometric ratios of that angle will be negative except for sine. The letter 'S' in ASTC indicates that only sine in the second quadrant is positive.

3rd Quadrant: If a given angle is in the third quadrant ($180^\circ$ to $270^\circ$), then all trigonometric ratios of that angle will be negative except for tangent. The letter 'T' in ASTC indicates that only the tangent in the third quadrant is positive.

4th Quadrant: If a given angle is in the fourth quadrant ($270^\circ$ to $360^\circ$), then all trigonometric ratios of that angle will be negative except for cosine. The letter 'C' in ASTC indicates that only cosine in the fourth quadrant is positive.
Any time you are given an angle measurement in degrees, you can use this ASTC rule to figure out which quadrant your ratio will be in. We know that trigonometric ratios can be expressed in both positive and negative forms, but only one of those forms is true for any given angle.
First Quadrant: $\left( \theta =\alpha \right)$
$\begin{aligned}
\sin \theta &=\sin \alpha \\
\cos \theta &=\cos \alpha \\
\tan \theta &=\tan \alpha
\end{aligned}$

Second Quadrant: $\left(\theta =180{}^\circ -\alpha \right)$ OR $\left(\theta =\pi -\alpha \right)$
$\begin{aligned}
\sin \theta &=\sin \alpha \\
\cos \theta &=-\cos \alpha \\
\tan \theta &=-\tan \alpha
\end{aligned}$

Third Quadrant: $\left(\theta =180{}^\circ +\alpha \right)$ OR $\left(\theta =\pi +\alpha \right)$
$\begin{aligned}
\sin \theta &=-\sin \alpha \\
\cos \theta &=-\cos \alpha \\
\tan \theta &=\tan \alpha
\end{aligned}$

Fourth Quadrant: $\left(\theta =360{}^\circ -\alpha \right)$ OR $\left(\theta =2\pi -\alpha \right)$
$\begin{aligned}
\sin \theta &=-\sin \alpha \\
\cos \theta &=\cos \alpha \\
\tan \theta &=-\tan \alpha
\end{aligned}$
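The ASTC rule and the basic-angle relations above are mechanical enough to encode directly (a small Python sketch; the function names are our own):

```python
def astc_signs(theta_deg):
    """Signs of (sin, cos, tan) from the ASTC rule, for angles in degrees."""
    q = int(theta_deg % 360 // 90) + 1           # quadrant number 1..4
    return {1: (+1, +1, +1), 2: (+1, -1, -1),
            3: (-1, -1, +1), 4: (-1, +1, -1)}[q]

def basic_angle(theta_deg):
    """Acute reference angle alpha that theta makes with the x-axis."""
    t = theta_deg % 360
    return min(t % 180, 180 - t % 180)

assert astc_signs(140) == (+1, -1, -1) and basic_angle(140) == 40
assert astc_signs(210) == (-1, -1, +1) and basic_angle(210) == 30
```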
Find the exact value of each of the following.
(a) $\tan 210^\circ$
(b) $\cos \frac{8}{3}\pi $
(c) $\sin 330^\circ$

(a) $210^\circ$ lies in the third quadrant, where tangent is positive.
Basic angle, $\alpha =210{}^\circ -180{}^\circ =30{}^\circ $
$\begin{aligned}
\tan 210{}^\circ &=\tan 30{}^\circ \\
& =\frac{\sqrt{3}}{3}
\end{aligned}$

(b) Since $\frac{8}{3}\pi -2\pi =\frac{2}{3}\pi $, the angle lies in the second quadrant, where cosine is negative.
Basic angle,
$\begin{aligned}
\alpha &=\pi -\frac{2}{3}\pi \\
& =\frac{1}{3}\pi
\end{aligned}$
$\begin{aligned}
\cos \left( \frac{8}{3}\pi \right)&=-\cos \left( \frac{1}{3}\pi \right) \\
& =-\frac{1}{2}
\end{aligned}$

(c) $330^\circ$ lies in the fourth quadrant, where sine is negative.
Basic angle, $\alpha =360{}^\circ -330{}^\circ =30{}^\circ $
$\begin{aligned}\sin 330{}^\circ&=-\sin \left( 30{}^\circ\right) \\& =-\frac{1}{2}\end{aligned}$
Cotangent, Secant and Cosecant Ratios
The other trigonometric functions, cotangent, secant, and cosecant, can be derived from the primary trigonometric functions. The reciprocal of sine is cosecant, the reciprocal of cosine is secant, and the reciprocal of tangent is cotangent; each of these ratios is the reciprocal of one of the primary ratios.
$\cot\theta$
The cotangent function is the ratio of the adjacent side length to the opposite side length.
$\frac{1}{\tan\theta}$ OR $\frac{\text{adjacent}}{\text{opposite}}$.
$\sec\theta$
The secant function is the ratio of the hypotenuse to the adjacent side length.
$\frac{1}{\cos\theta}$ OR $\frac{\text{hypotenuse}}{\text{adjacent}}$
$\text{cosec }\theta$
The cosecant function is the ratio of the hypotenuse to the opposite side length.
$\frac{1}{\sin\theta}$ OR $\frac{\text{hypotenuse}}{\text{opposite}}$
Tip: Look at the third letter to determine the reciprocal of the trigonometric ratio.
${{\cos }^{n}}\theta ={{\left( \cos \theta \right)}^{n}},{{\sin }^{n}}\theta ={{\left( \sin \theta \right)}^{n}},{{\tan }^{n}}\theta ={{\left( \tan \theta \right)}^{n}}$ for $n\ge 1$,
${{\cos}^{2}}\theta ={{\left( \cos\theta \right)}^{2}}$
${{\sin}^{3}}\theta ={{\left( \sin\theta \right)}^{3}}$
${{\tan}^{5}}\theta ={{\left( \tan \theta \right)}^{5}}$
${{\cos }^{-1}}\theta \ne {{\left( \cos \theta \right)}^{-1}},{{\sin }^{-1}}\theta \ne {{\left( \sin \theta \right)}^{-1}}$ and ${{\tan }^{-1}}\theta \ne {{\left( \tan \theta \right)}^{-1}}$; here the superscript $-1$ denotes the inverse function, not the reciprocal:
$y={{\cos }^{-1}}x\Leftrightarrow \cos y=x$
$y={{\sin }^{-1}}x\Leftrightarrow \sin y=x$
$y={{\tan }^{-1}}x\Leftrightarrow \tan y=x$
Given that $\cos \theta=\frac{2}{7}$ and $\theta$ is acute, without using a calculator, find
(i) $\sec \theta$,
(ii) $\cot \theta$,
(iii) $\text{cosec }\theta$.
Use Pythagoras' Theorem to find the opposite side:
$\begin{aligned}
\sqrt{{{7}^{2}}-{{2}^{2}}}&=\sqrt{45} \\
& =\sqrt{9\cdot 5} \\
& =3\sqrt{5}
\end{aligned}$

(i)
$\begin{aligned}
\sec \theta &=\frac{1}{\cos \theta } \\
& =\frac{7}{2}
\end{aligned}$

(ii)
$\begin{aligned}
\cot \theta &=\frac{1}{\tan \theta } \\
& =\frac{1}{\frac{3\sqrt{5}}{2}} \\
& =\frac{2}{3\sqrt{5}}\cdot \frac{\sqrt{5}}{\sqrt{5}} \\
& =\frac{2}{15}\sqrt{5}
\end{aligned}$

(iii)
$\begin{aligned}
\text{cosec }\theta &=\frac{1}{\sin \theta } \\
& =\frac{1}{\frac{3\sqrt{5}}{7}} \\
& =\frac{7}{3\sqrt{5}}\cdot \frac{\sqrt{5}}{\sqrt{5}} \\
& =\frac{7}{15}\sqrt{5}
\end{aligned}$
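Parts (i)-(iii) can be confirmed symbolically (a SymPy sketch):

```python
from sympy import sqrt, Rational, simplify

cos_t = Rational(2, 7)
sin_t = sqrt(1 - cos_t ** 2)     # theta is acute, so sin > 0: 3*sqrt(5)/7
sec_t, cot_t, cosec_t = 1 / cos_t, cos_t / sin_t, 1 / sin_t

assert sec_t == Rational(7, 2)
assert simplify(cot_t - 2 * sqrt(5) / 15) == 0
assert simplify(cosec_t - 7 * sqrt(5) / 15) == 0
```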
Principal Values
The principal value of an inverse trigonometric function is the value obtained when the original function is restricted to a domain on which its inverse exists. The principal values of $x$ are defined as follows.

$y=\cos x$: $0{}^\circ \le x\le 180{}^\circ $ or $0\le x\le \pi $
$y=\sin x$: $-{{90}^{{}^\circ }}\le x\le {{90}^{{}^\circ }}$ or $-\frac{\pi }{2}\le x\le \frac{\pi }{2}$
$y=\tan x$: $-{{90}^{{}^\circ }} < x < {{90}^{{}^\circ }}$ or $-\frac{\pi }{2} < x < \frac{\pi }{2}$
Write down the principal value of ${{\tan }^{-1}}\left( -\sqrt{3} \right)$, leaving your answer in radians as a multiple of $\pi $.
$\begin{aligned}
{{\tan }^{-1}}\left( -\sqrt{3} \right)&=-60{}^\circ \\
& =-60{}^\circ \times \frac{\pi }{180{}^\circ } \\
& =-\frac{1}{3}\pi
\end{aligned}$
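A quick numerical check (Python's math.atan returns exactly the principal value):

```python
import math

# principal value of arctan(-sqrt(3)) is -60 degrees = -pi/3
assert math.isclose(math.atan(-math.sqrt(3)), -math.pi / 3)
```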
Algebra of Matrices
Find A, if \[A\left[ \begin{matrix}
1 & 2 & 3 \\
2 & 5 & 7 \\
2 & 0 & 1 \\
\end{matrix} \right]=\left[ \begin{matrix}
3 & -2 & -1 \\
-4 & 1 & -1 \\
2 & 0 & 1 \\
\end{matrix} \right]\]
Hint: Multiply both sides on the right by the inverse of the matrix that multiplies A on the LHS. Find that inverse step by step, then multiply it with the matrix on the RHS.
Complete step-by-step answer:
Given in the question:
\[A\left[ \begin{matrix}
1 & 2 & 3 \\
2 & 5 & 7 \\
2 & 0 & 1 \\
\end{matrix} \right]=\left[ \begin{matrix}
3 & -2 & -1 \\
-4 & 1 & -1 \\
2 & 0 & 1 \\
\end{matrix} \right]\]
Suppose, \[\left[ \begin{matrix}
1 & 2 & 3 \\
2 & 5 & 7 \\
2 & 0 & 1 \\
\end{matrix} \right]=B\]
B is a $3\times 3$ (read as 3 by 3) matrix, which means B contains 3 rows and 3 columns.
\[\left. \underbrace{\left[ \begin{matrix}
1 & 2 & 3 \\
2 & 5 & 7 \\
2 & 0 & 1 \\
\end{matrix} \right]}_{\text{Columns}} \right\}\text{Rows}\left( 3\times 3 \right)\]
Given here, \[A\times B=\left[ \begin{matrix}
3 & -2 & -1 \\
-4 & 1 & -1 \\
2 & 0 & 1 \\
\end{matrix} \right]\]
Since we do not know matrix A, we have to work with matrix B and the final product matrix present on the RHS.
To move matrix B from the LHS to the RHS, we multiply B by its own inverse $ {{B}^{-1}} $, which yields the identity matrix.
\[\text{Identity matrix I = }\left[ \begin{matrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
\end{matrix} \right]\]
Any matrix, when multiplied by its own inverse, always results in the identity matrix; this is a fundamental matrix property.
So, \[B\times {{B}^{-1}}=I\]
Now we multiply by $ {{B}^{-1}} $ on the right of both the LHS and the RHS, so that the equation remains balanced.
\[\begin{align}
& A\times \left( B\times {{B}^{-1}} \right)=\left[ \begin{matrix}
3 & -2 & -1 \\
-4 & 1 & -1 \\
2 & 0 & 1 \\
\end{matrix} \right]\times {{B}^{-1}} \\
& \Rightarrow A\times I=\left[ \begin{matrix}
3 & -2 & -1 \\
-4 & 1 & -1 \\
2 & 0 & 1 \\
\end{matrix} \right]\times {{B}^{-1}} \\
\end{align}\]
Now, we will find $ {{B}^{-1}} $; there are general rules to find the inverse of any matrix.
Step 1:- Find minors of each element of the matrix.
For example, \[\left[ \begin{matrix}
{{a}_{1}} & {{a}_{2}} & {{a}_{3}} \\
{{b}_{1}} & {{b}_{2}} & {{b}_{3}} \\
{{c}_{1}} & {{c}_{2}} & {{c}_{3}} \\
\end{matrix} \right]\]
Minor of $ {{1}^{st}} $ element $ {{a}_{1}} $ can be found out by eliminating the row and the column containing it:
\[\text{Minor of }{{\text{a}}_{\text{1}}}=\left( \begin{matrix}
{{b}_{2}} & {{b}_{3}} \\
{{c}_{2}} & {{c}_{3}} \\
\end{matrix} \right)\text{ can be solved as }{{\text{b}}_{2}}{{c}_{3}}-{{b}_{3}}{{c}_{2}}\]
Therefore, for matrix B the minor of each element is computed by deleting the row and column containing it, e.g. the minor of the first element is
\[\left| \begin{matrix}
5 & 7 \\
0 & 1 \\
\end{matrix} \right|=5\]
Collecting all nine minors gives:
\[\left( \begin{matrix}
5 & -12 & -10 \\
2 & -5 & -4 \\
-1 & 1 & 1 \\
\end{matrix} \right)\]
Step 2:- Matrix of co-factors.
Co-factors are just the minors with +ve and -ve signs attached alternately.
\[\Rightarrow \left( \begin{matrix}
5\left( + \right) & -12\left( - \right) & -10\left( + \right) \\
2\left( - \right) & -5\left( + \right) & -4\left( - \right) \\
-1\left( + \right) & 1\left( - \right) & 1\left( + \right) \\
\end{matrix} \right)=\left( \begin{matrix}
5 & 12 & -10 \\
-2 & -5 & 4 \\
-1 & -1 & 1 \\
\end{matrix} \right)\]
Step 3:- Adjugate: transpose the matrix of co-factors (interchange the elements across the main diagonal).
\[\Rightarrow \left( \begin{matrix}
5 & -2 & -1 \\
12 & -5 & -1 \\
-10 & 4 & 1 \\
\end{matrix} \right)\]
Step 4:- Multiply by 1/determinant.
The determinant is a single number computed from the matrix.
\[D={{a}_{1}}\times \left( \text{minors of }{{\text{a}}_{\text{1}}} \right)-{{a}_{2}}\times \left( \text{minors of }{{\text{a}}_{2}} \right)+{{a}_{3}}\times \left( \text{minors of }{{\text{a}}_{3}} \right)\]
D is computed for the original matrix B:
\[B=\left[ \begin{matrix}
1 & 2 & 3 \\
2 & 5 & 7 \\
2 & 0 & 1 \\
\end{matrix} \right]\]
\[\begin{align}
& D=1\times \left| \begin{matrix} 5 & 7 \\ 0 & 1 \\ \end{matrix} \right|-2\times \left| \begin{matrix} 2 & 7 \\ 2 & 1 \\ \end{matrix} \right|+3\times \left| \begin{matrix} 2 & 5 \\ 2 & 0 \\ \end{matrix} \right| \\
& \Rightarrow 1\left( 5 \right)-2\left( -12 \right)+3\left( -10 \right) \\
& \Rightarrow 5+24-30 \\
& \Rightarrow -1 \\
& {{B}^{-1}}=\frac{1}{D}\times \left( \text{Adjugate obtained in step 3} \right) \\
& \Rightarrow \frac{1}{-1}\times \left( \begin{matrix}
5 & -2 & -1 \\
12 & -5 & -1 \\
-10 & 4 & 1 \\
\end{matrix} \right)=\left( \begin{matrix}
-5 & 2 & 1 \\
-12 & 5 & 1 \\
10 & -4 & -1 \\
\end{matrix} \right) \\
\end{align}\]
So, we have,
\[AI=\left[ \begin{matrix}
3 & -2 & -1 \\
-4 & 1 & -1 \\
2 & 0 & 1 \\
\end{matrix} \right]\times \left( \begin{matrix}
-5 & 2 & 1 \\
-12 & 5 & 1 \\
10 & -4 & -1 \\
\end{matrix} \right)\]
Matrices can only be multiplied if and only if:
Number of rows of first matrices = Number of columns of second matrices.
Here, both matrices to be multiplied are $ \left( 3\times 3 \right) $
Therefore, can be multiplied:
\[\Rightarrow \left[ \begin{matrix}
Elements of rows of first matrix will be multiplied by the elements of columns of second matrix:
\left( 3\times -5 \right)+\left( -2\times -12 \right)+\left( -1\times 10 \right) & \left( 3\times 2 \right)+\left( -2\times 5 \right)+\left( -1\times -4 \right) & \left( 3\times 1 \right)+\left( -2\times 1 \right)+\left( -1\times 1 \right) \\
\left( -4\times 5 \right)+\left( 1\times -12 \right)+\left( -1\times 10 \right) & \left( -4\times 2 \right)+\left( 1\times 5 \right)+\left( -1\times 4 \right) & \left( -4\times 1 \right)+\left( 1\times 1 \right)+\left( -1\times 1 \right) \\
\left( 2\times -5 \right)+\left( 0\times -12 \right)+\left( 1\times 10 \right) & \left( 2\times 2 \right)+\left( 0\times 5 \right)+\left( 1\times -4 \right) & \left( 2\times 1 \right)+\left( 0\times 1 \right)+\left( 1\times -1 \right) \\
Thus, \[AI=\left( \begin{matrix}
According to the matrix properties, AI = A.
Whenever an identity matrix \[\left[ \begin{matrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
\end{matrix} \right]\] is multiplied with any other matrix, the product is that matrix itself.
\[\therefore \text{ A}\times \text{I=A}\]
Therefore, the required matrix A is \[A=\left( \begin{matrix}
-1 & 0 & 2 \\
-2 & 1 & -2 \\
0 & 0 & 1 \\
\end{matrix} \right)\]
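The result can be verified numerically (a quick NumPy check of the matrices worked out above; not part of the hand method):

```python
import numpy as np

B = np.array([[1, 2, 3], [2, 5, 7], [2, 0, 1]])      # matrix multiplying A
C = np.array([[3, -2, -1], [-4, 1, -1], [2, 0, 1]])  # given product A x B

A = C @ np.linalg.inv(B)          # A = C * B^(-1), as derived above
print(np.round(A).astype(int))    # [[-1  0  2] [-2  1 -2] [ 0  0  1]]
assert np.allclose(A @ B, C)      # confirms A x B reproduces the RHS
```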
Note: You should know all the properties of matrices, that is, how to find the inverse of a matrix and how to multiply two matrices. You must be very careful while doing the calculation, because the calculation part is complex here. It is important to note that we cannot divide by a matrix to get A directly, so we must name the matrix B and then apply the concepts of the inverse of a matrix and the identity matrix to solve such questions. Usually students make mistakes while writing the cofactor matrix by interchanging the signs of terms, so be very careful while solving that portion.
Isochronous cluster synchronization in delay-coupled VCSEL networks subjected to variable-polarization optical injection with time delay signature suppression
Liyue Zhang, Wei Pan, Lianshan Yan, Bin Luo, Xihua Zou, and Mingfeng Xu
Liyue Zhang,1 Wei Pan,1,* Lianshan Yan,1 Bin Luo,1 Xihua Zou,1 and Mingfeng Xu2
1Center for Information Photonics and Communications, Southwest Jiaotong University, Chengdu, Sichuan 611756, China
2State Key Laboratory of Optical Technologies on Nano-Fabrication and Micro-Engineering, Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu, Sichuan 610209, China
*Corresponding author: [email protected]
Liyue Zhang https://orcid.org/0000-0001-6679-4411
Lianshan Yan https://orcid.org/0000-0002-3633-7161
Xihua Zou https://orcid.org/0000-0002-7240-4229
Liyue Zhang, Wei Pan, Lianshan Yan, Bin Luo, Xihua Zou, and Mingfeng Xu, "Isochronous cluster synchronization in delay-coupled VCSEL networks subjected to variable-polarization optical injection with time delay signature suppression," Opt. Express 27, 33369-33377 (2019)
Revised Manuscript: October 22, 2019
Manuscript Accepted: October 22, 2019
The isochronous cluster synchronization with time delay (TD) signature suppression in delay-coupled vertical-cavity surface-emitting laser (VCSEL) networks subject to variable-polarization optical injection (VPOI) is theoretically and numerically studied. Based on the inherent symmetries of the network topology, parameter spaces for stable cluster synchronization are presented, and zero-lag synchronization is achieved for VCSELs in the same clusters. Additionally, the TD signature reduction for the dynamics of VCSELs in the stable clusters is systematically discussed. It is shown that both a moderate polarizer angle and frequency detuning between different clusters strengthen the TD signature suppression. Moreover, the isochronous cluster synchronization with TD signature concealment is also verified in another VPOI-VCSEL network with a different topology, indicating the generality of the proposed results. Our results shed new light on the research of chaos synchronization and chaos-based secure communications in VCSEL networks.
Chaos synchronization in delay-coupled semiconductor lasers (SLs) has received considerable interest for its potential applications in secure communications [1–6], high-speed random number generators (RNGs) [7–10], generation of neuron-like dynamics [11], chaotic radar [12,13], reservoir computing [14], etc. Different from most previous works that focused only on synchronization patterns in simple scenarios with two or three SLs [15,16], there have been some pioneering works extending the investigations to network realizations of SLs, and diverse synchronization patterns have been reported in complex SL networks [17–23]. More recently, cluster synchronization in SL networks, a new synchronization pattern in which SLs are divided into disjoint subsets based on the inherent symmetry of the network topology, has gradually gained interest [20–23]. In cluster synchronization, SLs within the same cluster can achieve isochronous synchronization while the dynamics among different clusters are asynchronous [24,25]. However, the isochronous cluster synchronization regime in vertical cavity surface-emitting laser (VCSEL) networks subjected to variable-polarization optical injection (VPOI) has never been reported and still deserves further study.
From another viewpoint, the chaotic dynamics of delay-coupled SLs possess detectable and recurrent features, which correspond to the optical round-trip time in the external cavity. As a result, this time delay (TD) signature unavoidably leads to serious security issues in the practical applications of chaos synchronization, as the chaotic dynamics of SLs are correlated with themselves at the delay time. For example, the security of chaotic communication systems will degrade seriously if an eavesdropper can retrieve the TD signature [26]. Moreover, the TD signature induces recurrent information and thus reduces the randomness of RNGs [27], and it can also compromise the accuracy of chaotic ranging and chaotic radar. Unfortunately, it has been shown that the TD signature can be directly extracted by statistical analysis of the intensity time-series of lasers [28,29], and, what is even worse, a TD signature concealed successfully in the time domain can also be retrieved from the phase of the laser emission [30]. Recently, more and more strategies have been proposed to suppress the TD signature in two different ways. On the one hand, the TD signature can be reduced by modifying the structure of the system, such as SLs with double optical feedback [31], dual-path optical injection [32], fiber Bragg grating feedback [33], and incoherent delayed self-interference of laser emission [34]. On the other hand, it can also be concealed by taking advantage of the interactions between different polarization modes of VCSELs [35,36].
Nevertheless, studies on TD signature reduction in network scenarios are still sorely lacking and most related works are constrained to three lasers [37,38], greatly limiting the scope of practical applications of chaos synchronization. Here, we both theoretically and numerically investigate the isochronous cluster synchronization in complex VCSEL networks subject to VPOI with TD signature suppression. The parameter spaces for stable cluster synchronization are numerically studied, and the TD signature reduction in the VPOI-VCSEL networks is then discussed systematically. Moreover, the generality of our results is validated in a different topology of VPOI-VCSEL network.
2. Theoretical model
For the theoretical model, spin-flip model (SFM) is adopted and extended to the network scenarios by taking into account the delay-coupled VCSELs with VPOI as follows [35,39]:
(1)$$\begin{aligned}\dot{E_{mx}}=&k(1+i\alpha)\left[(N_m-1)E_{mx}+in_mE_{my}\right]-(\gamma_a+i\gamma_p)E_{mx}\\ &+\sigma\sum_{l=1}^{D_{s}}A_{ml}E_{lx}(t-\tau_{in})cos^2(\theta_{pl})e^{{-}i(w_{l}\tau_{in}+\Delta wt)}\\ &+\sigma\sum_{l=1}^{D_{s}}A_{ml}E_{ly}(t-\tau_{in})cos(\theta_{pl})sin(\theta_{pl})e^{{-}i(w_{l}\tau_{in}+\Delta wt)} \end{aligned}$$
(2)$$\begin{aligned} \dot{E_{my}}=&k(1+i\alpha)\left[(N_m-1)E_{my}-in_mE_{mx}\right]+(\gamma_a+i\gamma_p)E_{my}\\ &+\sigma\sum_{l=1}^{D_{s}}A_{ml}E_{lx}(t-\tau_{in})cos(\theta_{pl})sin(\theta_{pl})e^{{-}i(w_{l}\tau_{in}+\Delta wt)}\\ &+\sigma\sum_{l=1}^{D_{s}}A_{ml}E_{ly}(t-\tau_{in})sin^2(\theta_{pl})e^{{-}i(w_{l}\tau_{in}+\Delta wt)} \end{aligned}$$
(3)$$ \dot{N_{m}}=\gamma_N[\mu-N_m(1+|E_{mx}|^2+|E_{my}|^2)+in_m(E_{mx}E_{my}^*-E_{my}E_{mx}^*)]$$
(4)$$ \dot{n_{m}}=-\gamma_sn_m-\gamma_N[n_m(|E_{mx}|^2+|E_{my}|^2)+iN_m(E_{my}E_{mx}^*-E_{mx}E_{my}^*)] $$
where $E_x$ and $E_y$ denote the linear polarizations of the XP and YP components. $N$ is the total carrier inversion between conduction and valence bands, while $n$ accounts for the difference between carrier inversions with opposite spins. $A$ is the adjacency matrix that illustrates the topology of VCSEL network, $A_{ml}=1$ if $\textrm {VCSEL}_m$ is directly coupled to $\textrm {VCSEL}_l$, and $A_{ml}=0$ otherwise. $D_s$ represents the network size, and $D_s$=9 for the networks in Fig. 1. $\sigma$ is the uniform coupling strength among VCSELs, $\mu$ is the normalized current factor ($\mu =1~\textrm {corresponds to threshold current}$), $\alpha$ is the linewidth enhancement factor, $\tau _{in}=1.25\textrm {ns}$ is the coupling delay, $\theta _p$ is the variable polarizer angle (with respect to XP), and $w_m=2\pi c/\lambda _m$ is the central frequency of VCSEL with central wavelength $\lambda _m=850\textrm {nm}$. $\Delta w=2\pi \Delta f$ and $\Delta f=f_m-f_l$ represents the frequency detuning between VCSELs. The other typical VCSEL parameters include field decay rate $k=300\textrm {ns}^{-1}$, total carrier decay rate $\gamma _N=1\textrm {ns}^{-1}$, linear dichroism $\gamma _a=1\textrm {ns}^{-1}$, linear birefringence $\gamma _p=30\textrm {ns}^{-1}$, and spin-flip rate $\gamma _s=50\textrm {ns}^{-1}$ [35].
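For readers who want to reproduce the dynamics, Eqs. (1)-(4) can be integrated with a simple fixed-step scheme. The following is a minimal Euler sketch (not the authors' code: the constant injection phase factor $e^{-iw_l\tau_{in}}$ is dropped, zero detuning and identical optical frequencies are assumed, and a production run would use a finer step or a dedicated delay-differential integrator):

```python
import numpy as np

# --- parameters quoted in the text (rates in 1/ns, times in ns) ---
k, alpha = 300.0, 2.0           # field decay rate, linewidth enhancement factor
ga, gp = 1.0, 30.0              # linear dichroism, linear birefringence
gN, gs = 1.0, 50.0              # total carrier decay rate, spin-flip rate
mu, sigma = 2.5, 20.0           # normalized current factor, coupling strength
th = np.deg2rad(50.0)           # polarizer angle theta_p
tau, dt = 1.25, 1e-4            # coupling delay and integration step
d = int(round(tau / dt))        # delay expressed in steps

# adjacency matrix of the network in Fig. 1(a) (nodes 0..8)
edges = [(0, 1), (0, 2), (0, 7), (0, 8), (1, 3), (1, 4), (2, 5), (2, 6)]
n = 9
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# 2x2 polarization projection of the VPOI injection terms in Eqs. (1)-(2)
P = np.array([[np.cos(th) ** 2, np.cos(th) * np.sin(th)],
              [np.cos(th) * np.sin(th), np.sin(th) ** 2]])

rng = np.random.default_rng(1)
buf = 1e-3 * (rng.standard_normal((d + 1, 2, n))
              + 1j * rng.standard_normal((d + 1, 2, n)))  # ring buffer of (Ex, Ey)
Nc, nc = np.full(n, mu), np.zeros(n)                      # carrier variables N, n
cur, steps = d, int(10.0 / dt)                            # integrate 10 ns
out = np.empty((steps, n))

for t in range(steps):
    E = buf[cur]                        # current fields, shape (2, n)
    Edel = buf[(cur + 1) % (d + 1)]     # fields delayed by tau_in
    inj = sigma * (P @ Edel) @ A.T      # polarization-projected injection
    dEx = k * (1 + 1j * alpha) * ((Nc - 1) * E[0] + 1j * nc * E[1]) \
        - (ga + 1j * gp) * E[0] + inj[0]
    dEy = k * (1 + 1j * alpha) * ((Nc - 1) * E[1] - 1j * nc * E[0]) \
        + (ga + 1j * gp) * E[1] + inj[1]
    I = np.abs(E[0]) ** 2 + np.abs(E[1]) ** 2
    cross = E[0] * np.conj(E[1]) - E[1] * np.conj(E[0])   # Ex Ey* - Ey Ex*
    dN = gN * (mu - Nc * (1 + I) + (1j * nc * cross).real)
    dn = -gs * nc - gN * (nc * I - (1j * Nc * cross).real)
    cur = (cur + 1) % (d + 1)           # advance ring buffer
    buf[cur] = np.stack([E[0] + dt * dEx, E[1] + dt * dEy])
    Nc, nc = Nc + dt * dN, nc + dt * dn
    out[t] = I                          # total intensities |Ex|^2 + |Ey|^2
```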
Fig. 1. Schematic diagrams of delay-coupled VPOI-VCSEL networks with two different network topologies.
Figure 1 presents two different delay-coupled VPOI-VCSEL networks, and the same-colored VCSELs are classified into the same cluster or synchronizable sub-cluster. Mathematically, the VCSEL network can be described as a graph $g=(V(g),E(g))$, where $V(g)$ is the vertex set and $E(g)$ is the set of edges, and two vertices are adjacent if there is an edge between them. Then we can represent the network as an adjacency matrix, and a symmetry of the network is a permutation of the vertices that leaves the adjacency matrix unchanged [40]. As VCSELs in the same cluster are mapped into each other by the symmetry operations, we can separate the VCSELs of the network into different clusters after examining all the permutations of the network vertices. As mentioned before, a permutation of VCSELs within the same cluster preserves the adjacency matrix of the network, and thus these VCSELs share the same dynamical equations. Therefore, if VCSELs in the same cluster start with the same initial conditions, isochronous synchronization is maintained indefinitely. Otherwise, the stability of isochronous cluster synchronization depends on the parameter choice of the VCSEL network for random initial conditions [24].
(5)$$A=\begin{pmatrix} 0 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 \\ 1 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}\\$$
Based on the inherent symmetries of the network topology, the network in Fig. 1(a) can be divided into three nontrivial clusters that contain more than one VCSEL, which are defined as cluster I ($\textrm {VCSEL}_2$ and $\textrm {VCSEL}_3$), cluster II ($\textrm {VCSEL}_4$, $\textrm {VCSEL}_5$, $\textrm {VCSEL}_6$, and $\textrm {VCSEL}_7$), and cluster III ($\textrm {VCSEL}_8$ and $\textrm {VCSEL}_9$). It is worth noting that there exists partial synchronization in cluster II for a wide range of the parameter space. Therefore, we can further divide cluster II into two sub-clusters, i.e. cluster II$_a$ ($\textrm {VCSEL}_4$ and $\textrm {VCSEL}_5$) and cluster II$_b$ ($\textrm {VCSEL}_6$ and $\textrm {VCSEL}_7$), for convenience.
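The cluster decomposition quoted above can also be obtained computationally, by brute-forcing the automorphism group of the nine-node graph of Eq. (5) and collecting the vertex orbits (a sketch; enumerating all $9!$ permutations takes a few seconds):

```python
import numpy as np
from itertools import permutations

# adjacency matrix of the network in Fig. 1(a), nodes numbered 1..9
edges = [(1, 2), (1, 3), (1, 8), (1, 9), (2, 4), (2, 5), (3, 6), (3, 7)]
n = 9
A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i - 1, j - 1] = A[j - 1, i - 1] = 1

orbit = list(range(n))                       # orbit label of every vertex
for p in permutations(range(n)):
    P = np.array(p)
    if np.array_equal(A[np.ix_(P, P)], A):   # p is a graph automorphism
        for v in range(n):
            a, b = orbit[v], orbit[p[v]]     # v and p(v) share an orbit
            if a != b:
                lo, hi = min(a, b), max(a, b)
                orbit = [lo if o == hi else o for o in orbit]

clusters = {}
for v, o in enumerate(orbit):
    clusters.setdefault(o, []).append(v + 1)
print(list(clusters.values()))               # [[1], [2, 3], [4, 5, 6, 7], [8, 9]]
```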
3. Numerical results and discussions
The root-mean square (RMS) synchronization error is adopted to evaluate the synchronization quality of clusters in network. The values of RMS are obtained by the calculation between the intensity time series of VCSELs ($I_T=|E_x|^2+|E_y|^2$) in same cluster with random initial conditions as follows [20,24]:
(6)$$RMS=\frac{\sum_{m=1}^{D_{c}}\sqrt{\left \langle\left[{I_{Tm}(t)-\hat{I}_{T}(t)}\right]^{2}\right \rangle}}{D_{c}\left\langle\hat{I}_{T}(t)\right\rangle}$$
where $D_c$ is the dimension of the cluster (the number of VCSELs it contains), $\left \langle \cdot \right \rangle$ denotes the time average, and $\hat {I}_{T}(t)=\sum _{m=1}^{D_{c}}I_{Tm}(t)/D_{c}$. The threshold value of RMS for stable isochronous cluster synchronization is set to be 0.01, which means that stable isochronous cluster synchronization is assumed to be achieved for $\textrm {RMS}\;<\;0.01$.
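In code, Eq. (6) amounts to the following (a sketch assuming the squared deviation under the root and a time-averaged mean intensity in the denominator, as written above):

```python
import numpy as np

def rms_sync_error(I):
    """RMS synchronization error of Eq. (6); I has shape (D_c, T):
    one intensity time series per VCSEL of the candidate cluster."""
    Ihat = I.mean(axis=0)                          # cluster-averaged trace
    dev = np.sqrt(((I - Ihat) ** 2).mean(axis=1))  # RMS deviation per laser
    return dev.sum() / (I.shape[0] * Ihat.mean())

# stable isochronous cluster synchronization is assumed for values below 0.01
```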
To explore the parameter spaces for stable isochronous cluster synchronization, Fig. 2 presents the values of RMS for different clusters in network as function of coupling strength $\sigma$, current factor $\mu$, polarizer angle $\theta _{p}$ for optical injection and linewidth enhancement factor $\alpha$ which is the internal parameter of VCSELs. It is shown that the stable isochronous cluster synchronization can be obtained in a wide range of parameters space, which validates that the topology of network plays an important role on the synchronization scheme of VPOI-VCSEL networks. Furthermore, there is an intra-cluster deviation for cluster II. With the modulation of parameters, the cluster splits into two sub-clusters, i. e. cluster II$_a$ ($\textrm {VCSEL}_4$ and $\textrm {VCSEL}_5$) and cluster II$_b$ ($\textrm {VCSEL}_6$ and $\textrm {VCSEL}_7$). As shown in Figs. 2a(2)–2a(4) and Figs. 2b(2)–2b(4), the parameter spaces of stable cluster synchronization for cluster II$_a$ and cluster II$_b$ are much wider than that for cluster II.
Fig. 2. a(1)-a(5) The values of RMS as functions of coupling strength $\sigma$ and current factor $\mu$ for different clusters in network with $\theta _{p}= 50^\circ$ and $\alpha =3$. b(1)-b(5) The values of RMS as function of linewidth enhancement factor $\alpha$ and polarizer angle $\theta _{p}$ with $\sigma =35\textrm {ns}^{-1}$ and $\mu =2.5$.
Figure 3 presents the dynamical evolution and bifurcation for the intensity of VCSELs in cluster II. When linewidth enhancement factor $\alpha =1$, all the four VCSELs of cluster II synchronize isochronously as shown in Fig. 3a(1). For $\alpha =2$ (Fig. 3a(2)), cluster II has split into two smaller sub-clusters, each of which includes two VCSELs. And when $\alpha =3$ (Fig. 3a(3)), these four VCSELs lose synchrony eventually. Moreover, Figs. 3b(1)–3b(4) illustrate the intra-cluster deviation as function of network parameters systematically, which clearly show that there is a bifurcation of cluster II with the modulation of parameters in network. Meanwhile, cluster II$_a$ and cluster II$_b$ can still achieve stable isochronous synchronization in a wide range of parameter spaces after the intra-cluster bifurcation.
Fig. 3. The dynamical evolution of VCSELs in cluster II for different values of linewidth enhancement factor $\alpha$ with $\sigma =35\textrm {ns}^{-1}$, $\mu =2.5$ and $\theta _{p}= 50^\circ$, for $\alpha =1$ (a(1)), $\alpha =2$ (a(2)), and $\alpha =3$ (a(3)). b(1)-b(4) RMS values of cluster II, II$_a$ and II$_b$ (intra-cluster deviation) as function of $\alpha , \sigma , \mu$ and $\theta _p$.
On the basis of the synchronization in cluster II$_a$ and cluster II$_b$, we now demonstrate the simultaneous TD signature suppression in such a VPOI-VCSEL network. To estimate TD signatures both in the intensity and phase series of VCSELs in cluster II$_a$ and cluster II$_b$, the time-dependent auto-correlation function (ACF) is introduced and defined as follows [28,30]:
(7)$$C_T= \frac{\Big\langle\left[I_{T}(t)-\left\langle{I_{T}(t)}\right\rangle\right]\cdot{\left[I_{T}(t+{\Delta} t)-\left\langle{I_{T}(t+{\Delta} t)}\right\rangle\right]}\Big\rangle}{\sqrt{\left\langle\left[I_{T}(t)-\left\langle{I_{T}(t)}\right\rangle\right]^2\right\rangle\cdot{\left\langle\left[I_{T}(t+{\Delta} t)-\left\langle{I_{T}(t+{\Delta} t)}\right\rangle\right]^2\right\rangle}}}$$
where $\langle {\cdot } \rangle$ denotes time average, ${\Delta} t \in [-5, 5]~\textrm {ns}$ denotes the lag time, and the time series used for calculation are selected within $t=[40,450]~\textrm {ns}$, which is long enough to exclude transients. $I_{T}$ denotes the total output of the VCSELs and is replaced by the phase series when calculating $C_T (\varphi )$. For a given value of $\Delta t$, the ACF measures a linear relationship between $I_{T} (t)$ and $I_{T} (t+{\Delta} t)$.
The dynamical evolution and TD signature identification (both in intensity and phase domain) of VCSELs in cluster II$_a$ and cluster II$_b$ with different values of frequency detuning $\Delta f$ are presented in Fig. 4. Here, $\Delta f$ denotes the frequency detuning between VCSELs within cluster I and the other clusters. It can be seen from Fig. 4b(2) and Fig. 4c(2) that there exist peaks at $\Delta t$=2.5 ns, which corresponds to the optical round-trip time in the external cavity, i.e. twice the coupling delay, $2\times \tau _{in}$=2.5 ns, for $\Delta f$=0 GHz. Moreover, TD signatures are significantly suppressed both in intensity and phase by inducing frequency detuning.
Fig. 4. Intensity time series of cluster II$_a$ and II$_b$ (a(1)-a(3)); ACF of intensity (b(1)-b(3)) and phase series (c(1)-c(3)) for different frequency detuning $\Delta f$; RMS, $R_c$ and $R_C (\varphi )$ as function of frequency detuning $\Delta f$ (d(1)-d(3)), with $\alpha =2$, $\sigma =20\textrm {ns}^{-1}$, $\mu =2.5$ and $\theta _{p}= 50^\circ$.
Furthermore, in order to evaluate the TD signature quantitatively, the peak-signal-to-mean ratio is introduced [35,41]:
(8)$$R_C= \frac{\textrm{max}(|C_T|)}{\left\langle |C_T (\Delta t)|\right\rangle}$$
where max($|C_T|$) is the maximum value of $|C_T|$ in the vicinity of the TD signature $\tau$, and $\left \langle |C_T (\Delta t)|\right \rangle$ denotes the mean value. $R_C$ is calculated in the vicinity of the optical round-trip time $\tau$=2.5 ns, and the maximum value of $C_T$ at $\Delta t$=0 is excluded. Figs. 4d(1)–4d(3) present the RMS, $R_C$, and $R_C (\varphi )$ as a function of frequency detuning, respectively. The results clearly show that the synchronization of VCSELs in clusters II$_a$ and II$_b$ is still preserved in the presence of frequency detuning, while the TD signature reduction is remarkably improved by introducing frequency detuning between clusters.
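Both quantities can be evaluated from a recorded intensity (or phase) series as follows (a sketch: the lag grid and the $\pm 0.25$ ns search window around $\tau$ are our choices, not specified in the text):

```python
import numpy as np

def acf(x, lag):
    """Normalized autocorrelation of Eq. (7) at an integer lag (in samples)."""
    a, b = x[:len(x) - lag], x[lag:]
    a, b = a - a.mean(), b - b.mean()
    return (a * b).mean() / np.sqrt((a ** 2).mean() * (b ** 2).mean())

def peak_to_mean(x, dt, tau=2.5, win=0.25):
    """R_C of Eq. (8): peak |ACF| near the round-trip time tau over the mean
    |ACF|; x is sampled every dt ns (resample a long series first for speed)."""
    lags = np.arange(1, int(5.0 / dt))            # lag times up to 5 ns
    C = np.array([acf(x, int(l)) for l in lags])
    near = np.abs(lags * dt - tau) < win          # window around tau
    return np.abs(C[near]).max() / np.abs(C).mean()
```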
We further investigate the influence of the polarizer angle on the TD signature suppression. The dynamical evolution and the calculated $C_T$ and $C_T (\varphi )$ of VCSELs in cluster II$_a$ and II$_b$ are presented in Fig. 5 for three distinct choices of $\theta _p$, which clearly indicate that the TD signature can be successfully concealed both in the intensity and phase series with a moderate optical injection polarizer angle. Furthermore, the values of RMS, $R_C$ and $R_C (\varphi )$ as a function of $\theta _p$ are calculated in Figs. 6a(1)–6a(3), respectively. It is shown that the TD signature is quite sensitive to the polarizer angle, and $R_C$ reaches its minimum at intermediate polarizer angles. The low values of $R_C$ for intermediate polarizer angles can be explained by the evolution of the polarization states of the VCSELs in the network. To provide physical insight into the suppression of the TD signature at critical values of $\theta _p$, Fig. 6(b) shows the Poincar$\acute {\textrm {e}}$ sphere for the dynamics of $\textrm {VCSEL}_4$ with $\theta _p=2^\circ$ and $\theta _p=50^\circ$. It can be seen that the XP mode dominates the dynamics for $\theta _p=2^\circ$, as most of the points reside near the X-axis. However, for $\theta _p=50^\circ$, there is no dominant polarization mode and the points spread all over the sphere. Hence, the interaction between the two polarization modes contributes to the suppression of the TD signature.
Fig. 5. Intensity time series of cluster II$_a$ and II$_b$ (a(1)-a(3)); ACF of intensity (b(1)-b(3)) and phase series (c(1)-c(3)) for different polarizer angles $\theta _p$ with $\sigma =20\textrm {ns}^{-1}, \alpha =2, \mu =2.5$ and $\Delta f=35\textrm {GHz}$.
Fig. 6. a(1)-a(3) The RMS, $R_C$ and $R_C (\varphi )$ of VCSELs in cluster II$_a$ and II$_b$ as function of optical injection polarizer angle $\theta _p$ with $\sigma =20\textrm {ns}^{-1}, \alpha =2, \mu =2.5$ and $\Delta f=35\textrm {GHz}$. b(1)-b(2) Evolution of polarization states described by the Poincar$\acute {\textrm {e}}$ sphere for delay-coupled VCSEL network with different polarizer angles $\theta _p$.
Finally, the generality of our proposed results are validated in another delay-coupled VPOI-VCSEL network with different topology (Fig. 1(b)). Figure 7 show the dynamical evolution of the cluster containing $\textrm {VCSEL}_{6, 7, 8, 9}$ and the corresponding ACF calculated from intensity and phase series, respectively. It is demonstrated that, again, the isochronous synchronization of two sub-clusters and TD signature suppression are simultaneously implemented, which indicates that our result are applicable to different network topologies.
Fig. 7. Dynamical evolution (a) and calculation of ACF in intensity (b) and phase (c) series for VCSELs in network of Fig. 1(b), with $\sigma =20\textrm {ns}^{-1}, \alpha =2, \mu =2.5, \Delta f=35\textrm {GHz}$, and $\theta _p=50^\circ$.
In conclusion, the isochronous cluster synchronization and TD signature suppression are achieved simultaneously in delay-coupled VCSEL networks subjected to VPOI. The influence of network parameters on the stability of cluster synchronization and TD signature reduction are investigated systematically. The generality of proposed results are validated in different topologies of VCSEL network. Our results offer a new insight on the chaos-based applications in VCSEL networks.
National Natural Science Foundation of China (61775185); Sichuan Province Science and Technology Support Program (2018HH0002, 2019JDJQ0022); the "111" Plan (B18045).
1. A. Argyris, D. Syvridis, L. Larger, V. Annovazz-Lodi, P. Colet, I. Fischer, J. Garcia Ojalvo, C. R. Mirasso, L. Pesquera, and K. A. Shore, "Chaos-based communications at high bit rates using commercial fibre-optic links," Nature 438(7066), 343–346 (2005). [CrossRef]
2. N. Jiang, A. K. Zhao, C. P. Xue, J. M. Tang, and K. Qiu, "Physical secure optical communication based on private chaotic spectral phase encryption/decryption," Opt. Lett. 44(7), 1536–1539 (2019). [CrossRef]
3. N. Q. Li, W. Pan, L. S. Yan, B. Luo, X. H. Zou, and S. Y. Xiang, "Enhanced two-channel optical chaotic communication using isochronous synchronization," IEEE J. Sel. Top. Quantum Electron. 19(4), 0600109 (2013). [CrossRef]
4. P. Li, Q. Cai, J. G. Zhang, B. J. Xu, Y. M. Liu, A. Bogris, K. A. Shore, and Y. C. Wang, "Observation of flat chaos generation using an optical feedback multi-mode laser with a band-pass filter," Opt. Express 27(13), 17859–17867 (2019). [CrossRef]
5. X. H. Zou, W. L. Bai, W. Chen, P. X. Li, B. Lu, G. Yu, W. Pan, B. Luo, L. S. Yan, and L. Y. Shao, "Microwave photonics for featured applications in high-speed railways: Communications, detection, and sensing," J. Lightwave Technol. 36(19), 4337–4346 (2018). [CrossRef]
6. X. H. Zou, F. Zou, Z. Z. Cao, B. Lu, X. L. Yan, G. Yu, X. Deng, B. Luo, L. S. Yan, W. Pan, J. P. Yao, and A. M. J. Koonen, "A multifunctional photonic integrated circuit for diverse microwave signal generation, transmission, and processing," Laser Photonics Rev. 13, 1800240 (2019). [CrossRef]
7. A. Uchida, K. Amano, M. Inoue, K. Hirano, S. Naito, H. Someya, I. Oowada, T. Kurashige, M. Shiki, and S. Yoshimori, "Fast physical random bit generation with chaotic semiconductor lasers," Nat. Photonics 2(12), 728–732 (2008). [CrossRef]
8. C. P. Xue, N. Jiang, K. Qiu, and Y. X. Lv, "Key distribution based on synchronization in bandwidth-enhanced random bit generators with dynamic post-processing," Opt. Express 23(11), 14510–14519 (2015). [CrossRef]
Socio-economic, epidemiological and geographic features based on GIS-integrated mapping to identify malarial hotspots
Abdul Qayum1,2,
Rakesh Arya3,
Pawan Kumar4 &
Andrew M Lynn1
Malaria Journal volume 14, Article number: 192 (2015) Cite this article
Malaria is a major health problem in the tropical and subtropical world. In India, 95% of the population resides in malaria endemic regions and it is major public health problem in most parts of the country. The present work has developed malaria maps by integrating socio-economic, epidemiology and geographical dimensions of three eastern districts of Uttar Pradesh, India. The area has been studied in each dimension separately, and later integrated to find a list of vulnerable pockets/villages, called as malarial hotspots.
The study has been done at village level. Seasonal variation of malaria, comparison of epidemiology indices and progress of the medical facility were studied. Ten independent geographical information system (GIS) maps of socio-economic aspects (population, child population, literacy, and work force participation), epidemiology (annual parasitic index (API) and slides collected and examined) and geographical features (settlement, forest cover, water bodies, rainfall, relative humidity, and temperature) were drawn and studied. These maps were overlaid based on computed weight matrix to find malarial hotspot.
It was found that the studied dimensions were inter-weaving factors for malaria epidemic and closely affected malaria situations as evidenced from the obtained correlation matrix. The regions with water logging, high rainfall and proximity to forest, along with poor socio-economic conditions, are primarily hotspot regions. The work is presented through a series of GIS maps, tables, figures and graphs. A total of 2,054 out of 8,973 villages studied were found to be malarial hotspots and consequently suggestions were made to the concerned government malaria offices.
With developing technology, information tools such as GIS, have captured almost every field of scientific research especially of vector-borne diseases, such as malaria. Malarial mapping enables easy update of information and effortless accessibility of geo-referenced data to policy makers to produce cost-effective measures for malaria control in endemic regions.
Malaria is a parasitic protozoal disease caused by parasites of Plasmodium genus. The parasite belongs to the diverse group of unicellular eukaryotes called protozoa. The genus has 250 Plasmodium species, but Plasmodium falciparum and Plasmodium vivax [1] are two key species found in the Indian sub-region. Falciparum malaria is the most severe form worldwide [2-4], but P. vivax is the most important species in the study area of the present work [5]. Malaria is a major health problem in the tropical and subtropical world. Around 2.5 million malaria cases are reported annually from Southeast Asia, of which India alone contributes 76% in malaria incidence [6].
Eighty-nine percent of the Indian population resides in malaria-endemic regions. It is a public health problem in most part of the country. Various actions, including passive surveillance of malaria by primary health centres (PHCs), community health centres (CHCs), malaria clinics, use of artemisinin-based combination therapy (ACT), and introduction of intervention such as rapid diagnostic tests (RDTs) for malaria cases, have been taken under the directorship of the National Vector Borne Disease Control Programme (NVBDCP), New Delhi [7]. Mathematical analysis has established the progress and achievement of these action plans (Figure 1 (1.1 and 1.2)). In India, the number of malaria cases reported has decreased from 2.93 million (1995) to 1.08 million (2012), while the number of malarial deaths has decreased from 1,151 (1995) to 519 (2012). From year 2003 to 2012 total malarial cases, annual parasitic index (API) and number of deaths due to malaria has persistently decreased (Figure 1 (1.3)).
Malaria situation in India and annual deaths. 1.1 Year wise malarial cases. 1.2 Year wise total malarial deaths. 1.3 Malaria situation in India.
Around 27% of Indians live in high-transmission zones where malarial cases are above one per 1,000 persons [7]. Researchers working in the malaria field appreciate that it is a focal disease and the topography of the land is an important consideration in understanding the local epidemiological situation [8]. Such high malaria incidence is primarily because of the drug resistance of its parasites [9]. There are various other reasons, including excessive deforestation [10], indiscriminate use of pesticides in agriculture, demographic shifts, for this enhanced rate of spread of this deadly disease. For vector-borne diseases, factors such as proportion of infectious mosquitoes, vector population density, infecting rates after biting, vicinity of breeding grounds, climatic factors particularly rainfall and relative humidity (RH), are known to have a strong influence on the biology of mosquitoes. To establish seasonal variation and annual variation, geographical information system (GIS) mapping was carried out [11]. In the terai region of Eastern Uttar Pradesh the spread of vector-borne diseases has become uncontrolled especially during the rainy seasons [5].
To find malarial hotspot sites, various works was done at macroscopic level by Srivastava et al. for tribal states of India [12], by Nath et al. for Sonitpur District Assam, by Daasha et al. for Koraput District in Orissa [13], by Srivastava et al. for Mewat region, Haryana [14], by Agarwal et al. for Gwalior City, by Srivastava et al. for Kheda District in Gujrat [15], Yadav et al. for Udalguri District in Assam. However, much work has to be done by widening the horizon of inclusion of malaria causing factors and there has to be work at village level.
Malaria control action plans are dying out due to improper implementation, inadequate surveillance and lack of geo-referenced information to pinpoint the trouble spots for timely preventive actions [3]. The present work emphasizes the analysing of the malaria epidemic situation and attaches various dimensions of socio-economic situations, epidemiological circumstances and geographic conditions to develop an integrated map based on the application of GIS (Figure 2). GIS has been widely accepted as a mapping device for anti-malarial plants to develop geo-referenced attributes of all such plants with anti-plasmodic actions [16]. Studies have already been done for geographic association with malaria prevalence and have established that a positive correlation for malaria exists with proximity to water bodies [17]. In Huang-Huai, China it was found that 74% of malaria cases were located within 60 m of water bodies and the risk rate among the people living there was significantly higher than elsewhere [18]. The socio-economic data, as well as quantitative and qualitative information on health facilities, have spatial basis and can be integrated [3]. GIS mapping has already been done for the study area [5] at PHC/CHC level but the aim here was to extend this up to the villages. Socio-economic and physico-chemical factors could also be important causes of malaria endemicity in the study region. The data in this work have been acquired from Landsat Thematic Mapper, from Census India 2011 and epidemiological data were collected from district malaria offices (DMO).
Major segments in the work.
With developing technology, the role of tools such as GIS has captured almost every field of scientific research, be it vector-borne diseases [5], forest fire management [11], water harvesting, hydrology, flood prone areas or climate change issues. It has become a principal tool in malarial mapping [2,3,13-15,19,20], and helps with quick retrieval of information and map generation to highlight hotspots of malaria incidence. . Hotspot refers to an area or geographical region of relatively higher importance which is based on parameters such as symptomatic cases and asymptomatic cases. It signifies for the region of focused intervention by the authorities to utilize the limited resources optimally for combating the malaria. The present work is an amalgamation of ten parameters, of which socio-economic (workforce participation (WFP), population, child population and literacy), geographical features (settlement, forest cover, water body, rainfall, RH and temperature) and epidemiology (API and number of slides collected and examined) are the three dimensions (Figures 2 and 3). After overlaying all these parameters, the most vulnerable villages were selected.
Dimensions in GIS-integrated mapping approach.
The objective of the current study was to develop socio-economic and climatic factors, geographical parameters, and clinical-data based GIS-integrated databank. It includes finding a list of all those villages/pockets (so-called malarial hotspots) where preferential allotments of government anti-malarial policies are required, which is GIS-integrated output based on the factors directly affecting malaria dynamics. These maps will help the authorities in reducing malarial risk in the area and hotspots will help in devising and designing strategic malaria control measures.
The Indian Council of Agricultural Research (ICAR) study area (Figure 4) falls in the agro-ecological sub-region of 'eastern plain, hot, sub-humid (moist) eco-region'. The area comprises three eastern Uttar Pradesh (UP) districts Gorakhpur (26°13′N to 27°29′N, 83°05′E to 83°56′E and altitude 69 m), Kushinagar (26°39′N to 27°15′N, 83°38′E to 84°15′E and altitude 75 m) and Maharajganj (26°59′N to 27°19′N, 83°09′E to 83°45′E and altitude 66 m). Major soil types found are sandy loam, clay loam and alluvial loam soil, with a total area of 9,291 sq km of which 3.82% is the State area. It lies in the north eastern corner of the most populous state and comprises a large stretch lying to the north of River Rapti, which is a tributary of the Gandak River, and is also surrounded by River Rohini on the northern side, which is its major source of water. There is an international border with Nepal. The study area is a highly dense region of UP State (average population density 1210/sq km) and is home to more than 10.67 million people [21]. It is evident from GIS map (Figure 5) that the villages are settled at a distance of less than a km, densely populated and thus of concern for the malaria epidemic study.
Location of study area.
Geographical location of villages of the study area.
Socio-economics
Socio-economic and demographic data are collected based on recent Census 2011 [21] and Economics and Statistics Division, Government of Uttar Pradesh, India. Various factors such as population, income per capita, total household, number of workers, population living below poverty line, etc., (Table 1) are taken into consideration and added as new fields to the spatial databank of the area in ArcGIS 10 environment to generate socio-economic indicator maps (Figure 6) for population (Figure 6 (6.1)), child population (Figure 6 (6.2)), WFP (Figure 6 (6.3)) and literacy (Figure 6 (6.4)).
Table 1 Socio-Economical Features of the study area
Socio-economic indicator maps. 6.1 General population distribution. 6.2 Child population (up to six years old). 6.3 Work force participation. 6.4 Literacy.
Epidemiology and clinical data
Epidemiological indices [10] API = (total positive cases for infection/population size) × 1,000 and SPR = (total positive cases/ total slides examined) ×100, are obtained for Kushinagar and Maharajganj and have been plotted to study monthly variation of API for year 2013, while a comparative annual plot of SPR vis-à-vis API was done for Gorakhpur. This makes a better picture towards establishing how API and SPR are inter-related. On a monthly basis, malaria incidence in terms of positive cases of P. vivax and P. falciparum and number of slides examined and collected was obtained from DMO for years 2012 and 2013 at PHC and CHC level. A spatial databank was created in ArcGIS 10 using geo-referenced data of study area obtained through satellite imagery. Inverse distance weightage (IDW) spatial analysis was conducted on clinical data to develop epidemiology maps (Figure 7) including API (Figure 7 (7.1)) and number of slides collected and examined map during year 2013 (Figure 7 (7.2)).
GIS maps for epidemiology. 7.1 API 2013. 7.2 Slide collected and examined 2013.
Geographical features and climatic data
The features conducive for malarial mosquito proliferation, such as water bodies, annual rainfall, RH [22], forest cover, temperature [23], and settlements [24], are collected for the study area using satellite imagery technique. Climate information variable was obtained from the Climate Research Unit (CRU), UK. Shape files of temperature, RH and rainfall were obtained from using ArcMap six discrete GIS maps (Figure 6 (6.1-6.6)).
Data generation
For the socio-economic parameters, such as population, child population, WFP and literacy, data have been generated at PHC level and settlement map (Figure 5) was produced. The rationale used was to first find percentage settlement in any PHC and percentage settlement of district containing that PHC and then dividing former by latter and multiplying with WFP of that district to generate WFP of PHC. All PHCs, all other districts were similarly calculated for the remaining three socio-economic indicators.
GIS-integrated mapping
A range of geographical features comprising six layers was imported to ArcMap 10 environment. The entire study area, including all CHCs/PHCs, was geo-referenced through numerous GPS coordinates, adjusting the corresponding points in the software environment. Through ArcGIS 10 this set of information was used to develop maps for all the villages in the study area. The analysis was done using ArcMap™ GIS to describe primary risk factors associated with malaria endemicity. Later, the registered sub-centres with their GPS co-ordinates were imported in to ArcGIS environment and spatial data was linked with their attributes. Similarly, various other GIS maps were developed based on socio-economic parameters as well as geographical features as per the schematic flowchart (Figure 8). API value of year 2013 was interpolated using the IDW method to map vulnerable zones in the study area. False colour composite (FCC) imagery (MIR-Red, NIR-Green and Green-Blue) were used to locate general land use in the study area. Vulnerable zones were overlaid on general land to analyse possible malaria causes.
Schematic flowchart: GIS-integrated mapping of socio-economic, geographical features and epidemiology.
GIS-integrated mapping involves following steps
GIS layers for 12 malaria factors (Figure 3) were created individually based on natural breaks classes.
These individual layers were integrated in to three categories (Figure 9. (91-9.3)) as per Table 2, using standard weights and mathematical equation:
$$ \begin{array}{l}\mathrm{M}\mathrm{H}\mathrm{S}={\mathrm{HS}}_{\mathrm{se}}\times {\mathrm{HS}}_{\mathrm{e}}\times {\mathrm{HS}}_{\mathrm{gf}}\hfill \\ {}HS={\displaystyle \prod_1^n}n=\left( factor 1\right)\left( factor 2\right)\dots ..\left( factor\;n\right)\hfill \end{array} $$
where, MHS = Malaria Hotspot, HSse = Hotspot for socio-economic, HSe = Hotspot for epidemiology, HSgf = Hotspot for geographical features.
To obtain integrated malarial hotspot (Figure 9 (9.4)), all 12 individual layers are combined using multiplicative function (above equation) and weights and output was categorized using natural breaks. Multiplicative function is used to optimize the respective ranks (Table 2) of each malaria factor or GIS layers.
Overlays of epidemiology, socio-economic and geographical features. 9.1 GIS-integrated epidemiology. 9.2 GIS-integrated socio-economic. 9.3 GIS-integrated geographical features. 9.4 GIS-integrated malarial hotspot.
Table 2 Integrated factors for malaria hotspot identification (Weight Matrix)
Weight matrix was used to produce three layers (L13, L14 and L15) initially and later these layers were integrated to produce malaria hotspot (Layer L16). The detailed process followed:
PHC wise layers of all 12 factors (Figure 2) L1, L2..... L12 was created.
All layers were integrated using Boolean operator 'Union' to get layer L13.
$$ {\mathrm{L}}_{13}={\displaystyle {\cup}_{i=1}^2\left({\mathrm{B}}_{\mathrm{E}}\right)}\in \mathrm{all}\kern0.5em {\mathrm{L}}_{\mathrm{i}} $$
BE stands for epidemiology for all layers from L1-L2
Thus, layer L13 = {BE: BE ∈ Layers Li; i = 1, 2} (Figure 9 (9.1))
Similarly, for Socio-economic factors (BS)
$$ {\mathrm{L}}_{14}={\displaystyle {\cup}_{i=3}^6\left({\mathrm{B}}_{\mathrm{s}}\right)}\in \mathrm{all}\kern0.5em {\mathrm{L}}_{\mathrm{i}} $$
Thus, layer L14 = {BS: BS ∈ Layers Li; i = 3, 6} (Figure 9 (9.2))
And for geographical/climatic factors (BG)
$$ {\mathrm{L}}_{15}={\displaystyle {\cup}_{i=7}^{12}\left({\mathrm{B}}_{\mathrm{G}}\right)}\in \mathrm{all}\kern0.5em {\mathrm{L}}_{\mathrm{i}} $$
Thus, layer L15 = {BG: BG ∈ Layers Li; i = 7, 12} (Figure 9 (9.3))
Malaria hotspot was obtained by integrating layers L13, L14 and L15 using Boolean operator 'Union' to get layer L16
$$ {\mathrm{L}}_{16}={\displaystyle {\cup}_{k=13}^{15}\left({\mathrm{B}}_{\mathrm{E},\mathrm{S},\mathrm{G}}\right)}\in \mathrm{all}\kern0.5em {\mathrm{L}}_{\mathrm{k}}; $$
BE,S,G stand for epidemiology, socio-economic and geographical factors. Thus, required malaria hotspot is Layer L16 = {BE,S,G: BE,S,G ∈ each layers Lk; k = 13, 14, 15 (Figure 9 (9.4)).
Rationale behind weight matrix
It was found in general that malaria incidence was related to land use pattern, water use, higher than average rainfall, greater forest coverage, presence of abandoned water reservoirs, and poor socio-economic status [25]. Weight matrix (Table 2) was constructed based on inputs from experts (Table 3), research findings of related study regions and different regions as well [22,23,25-30], and evidence-based weighting method [26]. Experts were asked to write extent of impact of land-use pattern on all malaria factors in terms of high, moderate, low and negative impact. Evidence-based weighting method was adopted which specifies the malaria relationship with selected factors through weights. However, selected factors were decided based on scrutiny of a series of journals and research articles. Experts suggested vulnerable factors in malaria incidence in response to feedback form. Weight system was derived based on a response of a questionnaire sent to malaria experts familiar with geo-graphics of the study region. The information from journals was combined with the expert opinions based on the relative weighting for a particular malaria factor and its frequency of repetition in various research publications with the suggestion made by the experts. A score out of 100 corresponding to these observations was assigned each malaria factor to constitute weight factors.
Table 3 Correlation matrix
Malarial hotspot identification
A malarial risk map was prepared by overlaying ten basic maps (Figure 7 (7.1 and 7.2), Figures 6 (6.1-6.4), 13 (13.1-13.6)). Overlay was done based on the weight matrix. After collecting and evaluating expert response and integrating it with findings of a variety of researches, all the factors were divided into four categories, ranked 1-4. Factors ranked 1 were considered 'low', ranked 2 'moderate', ranked 3 'high', and ranked 4 'very high' (Table 2). The cut-off values of categories were decided using natural breaks found within the final data in the study area. Natural Breaks classes were used as it is based on natural groupings inherent in the data. Class breaks are identified that best group similar values and that maximize the differences between classes. The features are divided into classes whose boundaries are set where there are relatively big differences in the data values.
Socio-economy
Socio-economic and geographical features
Factors such as socio-economic (work participation, economy, urban-rural population, households etc) and geographical features (land cover type, geographical profile, rainfall, forests etc) have good impacts on malaria situation and thus, tabulated (Table 1).
Land use pattern
The major use of land (Figure 10) is in open/current fallow (49.3%) and agriculture land (37.8%). Most of the land is either for cultivation or is under forest area. Considering available rainfall intensity, the region is good for rice cultivation. Rice fields [31] and forests provide excellent breeding space for mosquitoes. Land use pattern indicates study region could be a malaria potent zone.
Land use distribution.
Epidemiology and rainfall
The study area was bifurcated and the API and SPR for Gorakhpur plotted (Figure 11 (11.1)) to understand its annual co-variation. It was observed that these malarial indices were synchronous in general. For Maharajganj and Kushinagar seasonal variation of malaria incidence, slides collected and examined as well as monthly rainfall was plotted for years 2012 and 2013. It was observed during the rainy seasons (July-October) that malarial incidence was relatively high for both districts in 2012 as well as 2013 (Figure 12 (12.1,12.2)) indicating possible strong correlation of rainfall with malaria. Seasonal variation of malaria during 2012 and 2013 (Figure 11 (11.2, 11.3)) was also similar in general. There was an increase in number of slides collected and examined in 2013 over 2012 and also during rainy seasons more slides were collected and examined (Figure 11 (11.4,11.5)) indicating medical facilities in terms of slides collected and examined had increased to reduce early detection of malaria incidence to reduce malarial deaths.
Epidemiology study: Seasonal variation and health facility. 11.1 Variation of SPR and API for Gorakhpur. 11.2 Seasonality of malaria for Kushinagar. 11.3 Seasonality of malaria for Maharajganj. 11.4 Progress of medical facility for Maharajganj. 11.5 Progress of medical facility for Kushinagar.
Rainfall vs. malaria cases plot. 12.1 Seasonality of malaria-rainfall, Kushinagar 12.2 Seasonality of malaria-rainfall, Maharajganj.
GIS based study
Land classification
Reveals 8,973 villages/settlement units in the study area (Figure 5). (A list of all village pockets/settlements is provided as Additional file).
GIS maps for epidemiology
Maps are produced for API 2013 (Figure 7 (7.1)) and health facility indicator in terms of slides collected and examined (Figure 7 (7.2)). These two maps were overlaid to produce an integrated map for epidemiology (Figure 9 (9.1)).
Socio-economic indicator maps
Four socio-economic elementary maps on general population distribution, child population (up to six years old), WFP and literacy (Figure 5 (5.1-5.4)) were produced. Elementary maps were overlaid in GIS environment to produce an integrated map of socio-economic indicators (Figure 9 (9.2)).
Geographical indicator maps
Six elementary maps (Figure 13 (13.1-13.6)) covering major geographical malaria-related factors, including vegetation, water bodies, rainfall, settlements, temperature, and RH were developed to establish possible links between these indicators and malaria. The elementary maps were basic units in developing an GIS-integrated geographical indicator map (Figure 9 (9.3)).
Geographical indicator maps. 13.1 Forest land and other vegetation. 13.2 Water bodies. 13.3 Rainfall intensity. 13.4 Land settlements. 13.5 Annual average temperature. 13.6 Annual relative humidity.
GIS-integrated maps
To understand spatial distribution of malaria aspects, four layers of socio-economic factors, two layers of epidemiology (clinical) factors and six layers of environment and geographic factors were rated, weighted and ranked (Table 2) on the basis of their importance on malaria incidence. Overlaying of these layers using calculated weights yielded malaria risk map in four classes by natural breaks using ArcGIS 10 software (Figure 9 (9.4)). These classes were very high-risk (2,054), high-risk (2,280), moderate-risk (1,981), and low-risk (2,658). Very high-risk constitutes malarial hotspot and all villages in this class were extracted (Additional file 1).
Correlation matrix
Matrix was drawn against various malarial factors to find any inter-weaving nature and to establish any possible relationship between these parameters (Table 4). The matrix was computed by overlaying layers of malarial hotspot, epidemiology, geographics and socio-economics under the ArcGIS environment. It was observed that epidemiology and geographic features were related to malaria incidence by 55%, socio-economic factors were also largely (54%) related to geographic features, while socio-economics were not a major factor in determining malaria incidence in a given locality; the major factors remain epidemiology and geographic features.
Table 4 Malarial hotspot identification: Classic case of consistent stakeholders and land use pattern
Malarial hotspot classes
Overlay analysis revealed a total of 2,054 out of 8,973 villages studied were found to be malarial hotspots (Figure 14) and a list of all such villages/pockets is supplied as Additional file 1.
Number of villages in various malarial hotspot classes.
Socio-economic finding
It is necessary to identify population at risk, their economic level and access to medical facilities for managing an accurate malaria control programme. Since malaria is an environment-dependent disease and hence, by integration of these data with socio-economic and community health levels, it is possible to establish an early warning system for malaria epidemics. The area has 32.4% as total work participation, including 16.4% as marginal worker and has large population below the poverty line (BPL). Literacy level is 52.17% while access to the medical facility is poor, which is the major reason for the poor health recovery due to malarial incidence. The region belongs to low socio economic zones with monthly income ~70.3 USD and 78% population agrarian. The region has 1,436,878 total households in which 87.98% are rural, while 12.02% are urban; the demographic divide lies with 51.25% males, 48.75% females and 3,462,855 works in the entire study area.
Analysis of epidemiology indices and maps
Malaria incidence in the selected study area is not very prominent if compared with the prevalence in African countries. Instead of API and SPR, 100API and 100SPR was plotted annually for Gorakhpur to highlight numerical values of these epidemiological indices (Figure 11 (11.1)). 100API is algebraic multiplication of API by 100 to magnify the existing API. This is highly useful for the region where API is not so high and magnification eases the study of API variation. These two are plotted on common axis system to find any possible relationship between API and SPR. Theoretically, these are directly related, i.e., 'sail, swim and sink together', but observation reveals peculiarity of 'no proportionate relationship'. However, a major section of the plot is in consistency with the theory and the partial mismatch is because of the error in data collection from the DMO.
Malaria incidence of year 2012 with 2013 was compared and also seasonal and monthly variation of malaria cases for Kushinagar and Maharajganj was plotted (Figure 11 (11.2-11.5)). Epidemiology data for year 2012 was kept to verify the predictive model and results obtained in the study confer with the malaria observed in the villages of 'very high' or 'high' incidence. GIS mapping for year 2012 for same geographic region was done by and result was compared with the predictive model in the current study. In both districts malaria incidence is relatively high during months July-September (rainy season) in both the years. This establishes positive correlation of malaria cases with rainfall. In rainy seasons the number of breeding sites increases (because of water logging) leading to growth of malaria vectors. However, from 2012 to 2013 there was no significant increase in malaria eradication for the studied area. In general, it remained unchanged and hence the study area demands deeper investigation of the current malaria situation to bring change and satisfactory health achievements.
For Maharajganj, the plot of the number of slides collected and examined for year 2013 showed continuous increase, which is clear indicator of fine discharge of government medical facility. This further indicates the penetration of health facilities to the public. A similar plot for Kushinagar witnessed similar monthly increases in number of slides collected and examined in the same year. This generates a ground for generalization of discharge of health measures for the whole region. Both the districts of the study area could be declared healthy against the health facility available. It is important to note that the medical facility profile of Maharajganj (Figure 11 (11.4)) and Kushinagar (Figure 11 (11.5)) is very similar and is indicative of governmental schemes reinforcing the expansion of the health infrastructure in the study area.
For the microscopic study of malaria incidence in a locality of less critical or malaria-vulnerable areas, a new term of malaria part per million (MPPM) can be introduced and conceptualized (since API for these areas remain in fraction). This is 1,000 times API and serves as numerical convenience for study of malaria of vulnerable localities, after magnifying the obtained API data by 1,000. Although there is no universally accepted definition of 'malaria vulnerable zones', it can be noted that for these zones API is low, generally in fraction.
Malarial hotspot identification factors were studied across land use pattern (Table 3). It was found that all types of land use, except barren land, impacts malaria incidence heavily considering epidemiology as one dimension while barren land itself has almost no impact on any of the three dimensions. It was further observed that settlement's aquatic ecosystems and forest/tree cover had good impact on almost all malaria-affecting factors. Land-use pattern plays crucial role in determining host-vector dynamics.
Geographical profile
Excess rainfall shows negative correlation [23] with malaria incidence as rain can flush out mosquito larvae [27] and positive correlation with temperature and RH [28]. The map helps identification of breeding places of mosquito larva. It was found that water bodies and forest land nearby human habitation was the main breeding site. Average monthly rainfall and temperature variation were plotted (Figure 15) based on the data obtained from CRU. All the districts reflected similar behaviour (Figure 15 (15.1-15.6)). Temperature bands were plotted with maximum, minimum for each point in a year, which indicates variation of above 12°C in a given day and ranging from 7°C to 41°C.
Rainfall and temperature monthly variation. 15.1 Monthly variation of rainfall and temperature - Kushinagar. 15.2 Monthly variation of temperature - Kushinagar. 15.3 Monthly variation of rainfall and temperature- Gorakhpur. 15.4 Monthly variation of temperature - Gorakhpur. 15.5 Monthly variation of rainfall and temperature - Maharajganj. 15.6 Monthly variation of temperature - Maharajganj.
GIS analysis
GIS-integrated model possesses well mix of both symptomatic and asymptomatic cases with larger emphasis on the former. For symptomatic cases, slides were collected for patients with malaria symptoms for years 2012 and 2013, and were examined for P. vivax and P. falciparum positive cases. In GIS-integrated output (Figure 9 (9.4)), a suitable weight based on the matrix (Table 2) is given to both cases to account for developing malaria hotspot. While, indirect parameters including breeding grounds for vectors such as water bodies, high settlement areas and forests; factors for survival of larvae such as rainfall, temperature and RH; other factors such as capacity to afford medical facility indicated through socio-economy parameters, are considered for asymptomatic cases. Under GIS environment, spatial distinction can be easily seen in symptomatic cases (Figure 9 (9.1)) and asymptomatic cases (Figure 9 (9.2 and 9.2)). Moreover, GIS as mapping tool is used to integrate these two cases to bring out malaria hotspot (Figure 9 (9.4)) as key element for early malaria warning system.
Although API of the region falls below the national average, geographical characteristics, proximity to the Himalayan region (major reason for heavy rainfall) and poor socio-economic conditions make the region sensitive to various vector borne diseases. The study region has been epidemic for vector-borne diseases, such as Japanese encephalitis, dengue and chikungunya. It is important to analyse the results to establish control measures against the deadly disease. It was found that the region in the vicinity of Partawal, Fazil Nagar, Motichak, Ramkola and Padrauna PHCs had higher API (Figure 7 (7.1)) over other regions and therefore demands strategic monitoring of government malaria intervention. The number of slides collected and examined was not from the regions of high malaria incidence but from all the regions and were collected uniformly (Figure 7 (7.2)). There must be a spatial shift in slides collection and examination to the region where the API index is high and the additional collection of slides has to be done for these regions of relatively high importance.
PHCs in Gorakhpur are highly populated in comparison to PHCs of other districts. Excess population poses a threat to malaria incidence and hence it possesses relatively higher weight. Regions in the vicinity of wet grass-lands, fresh-water swamp forest and terai swampy grass are superior breeding sites for mosquitoes and thus assume more weight, constituting a malaria-sensitive zone. Malaria incidence is likely to be high in eastern Nichlaul, Mithaura and Laxmipur PHC region in times to come.
Water bodies play a pivotal role in malaria dynamics. Vicinity to water bodies is very important for malaria incidence. It varies inversely with distance from the water body (Figure 13 (13.2)). Based on the distance, factor weights are designed (Table 2). Most of the study area falls within cultivable lands. Rice is one of the major crops in the region requiring lots of water which makes a virtual water reservoir and high chance of malaria breeding sites. It was reported that mosquito breeding in rice fields is inversely proportional to the distance from village during a study in Madla District, Madhya Pradesh [31]. However, the precise role of rice fields in maintaining high malaria transmission could not be established but the rice fields contributed significant vector populations and thus high probability of malaria cases is expected.
Moderate rainfall can provide the conditions for breeding of Anopheles mosquito and enhances malaria hazard. The soil of Maharajganj and Kushinagar is clay and alluvial loam, which holds water and little additional rain, leads to water logging. Thus, breeding sites are generated and this is the reason that this region has relatively high malaria incidence (Figure 7 (7.1)). It was found that Gorakhpur region has relatively low malaria incidence, the prime reason being heavy rainfall as it is the district with the highest annual rainfall in the whole state. The rainfall flushes out the larvae and excess rainfall possesses lower weight while the moderate (74-95 mm) rainfall that falls in northern Gorakhpur and southern Maharajganj and Kushinagar has high rank and these regions are malaria sensitive with respect to rainfall criteria.
The study area is home to 10,690,142 people with a geographical area of 9,291 sq km and population density of 1,151 per sq km. This amount of land is home to the near equivalent of countries such as Greece, Portugal and Sweden, etc. Land settlement is very dense (Figures 5, 13 (13.4)) making the region highly vulnerable. Gorakhpur City has maximum settlements but almost no malaria incidence is observed, because of better socio-economics and high rainfall.
Southern Gorakhpur City has the maximum of average temperature (averaged annually) among various PHCs/CHCs because of the presence of heavy industry and industrial effluents, while minimum average temperature is found in northern Maharajganj as it has rich forest cover which acts as a sink for warm gases. Temperatures above 32°C have maximum impact on larvae growth but the study region has 26.7°C as maximum average temperature (Figure 13 (13.5)) and thus it is not a major malaria factor, and thus weighted inferiorly.
In the entire study region, RH showed almost no variation (63-68%) and <60% is critical for mosquitoes [22]; thus, it had least impact on vector population and was weighted insignificantly and no significant geographical distinction could be made based on the RH in the study area.
Health facility hotspots and vulnerable villages
Considering 12-odd factors on the same piece of land for its spatial distribution analysis is the biggest challenge that GIS is capable for. At the same spatial coordinates there might be many contradicting parameters, e.g., forest area, vegetation and rainfall are positive (high rank) factors while population and WFP are negative (less rank) factors. Thus, the weight system (Table 2) has evolved to accommodate two conflicting factors in developing integrated maps towards malarial hotspot identification. Net factor is obtained by weighted multiplication of various malarial factors (Figure 3).
Environmental and climatic factors play a crucial role in influencing malaria incidence and transmission [29]. Sporogenic duration and mosquito survival is highly dependent on temperature. It was claimed that parasitic growth ceases at 16°C or less [30]. Temperatures above 32°C lead to high throughput of vector population. Temperature-induced mosquito deaths occur between 40 and 42°C depending on species [32]. Rainfall does not affect parasites directly but it provides the medium for aquatic mosquito stages and increases RH, which is crucial for mosquito incubation. Monthly average RH below 60% reduces the life of mosquitoes [22]. It was observed that 80 mm average rainfall is crucial for the malarial transmission [30].
This study could be useful in providing basic knowledge of malaria risk factors and to focus control measures on vulnerable populations alone, thus enabling optimal utilization of resources available, which is essential for developing countries with poor socio-economic indicators. Malarial mapping enables easy update of information and effortless accessibility of geo-referenced data to policy makers to produce cost effective measures for malaria control in endemic regions. The success of such control measures mainly depends on the precise identification and geographical reconnaissance of malarial hotspots. Malaria risks maps are a convenient tool for discussing targeted and cost effective control measures with government authorities. GIS enables the generation of revised maps as soon as new data are available.
Malarial cases in the study region could be attributed to rainfall intensity, temperature, forest cover and humidity as malaria-causing factors, as well as a low socio-economic profile of the population. This study has established that there is a close relationship between socio-economic factors, geographical description, demographic data and epidemiology depiction and malaria incidence. It helps in understanding the malaria transmission pattern based on anthropogenic and environmental factors. Health parameter alone may not be complete and reliable for malaria prediction and thus this integrated approach could be a faultless endeavour to judge malarial hotspots precisely and accurately.
Wide-ranging maps were effective in communicating major findings to the local health authorities, district health administrator and authorities of NVBDCP. With improving socio-economic conditions and deeper penetration of health infrastructure, the present hotspots of malaria may drift and thus GIS mapping becomes much crucial as it offers smooth data updating. As soon as new data are entered, the correct map for the changed scenario is ready, whereas this is a major drawback in the current manual system. The hotspot identification based on GIS mapping could be treated as a priority area for monitoring and surveillance of malaria. It is suggested that a databank of malaria incidence, demographic and socio-economic profile and access to health facilities be established for malaria-endemic regions in the country. Adding these factors to a malaria database will identify hotspots for optimal utilization of resources towards significant malaria control.
Using the extrapolation technique for current malaria incidence as well as past, and the hotspot identification used in this study, malaria occurrence could be predicted in future and policy makers could be advised accordingly for effective and optimal distribution of governmental aid for malaria control. Policies need to be streamlined. At present, governmental health aid, such as insecticide-treated mosquito nets and ACT are distributed randomly. These aids have to be distributed in highly targeted fashion, especially when the resources are very limited and need is very high. Similar work has to be extended for the whole land to design a comprehensive governmental plan for developing a 'Malaria National Map'. The work could be integrated with CSIR, New Delhi's ongoing bio-prospecting project of open source drug discovery (OSDDs), Malaria Section, to host these malarial maps with a website which is in development phase. It may be further extended to various other vector-borne diseases such as dengue, filaria, chikungunya, kala-azar and Japanese encephalitis to develop similar maps for designing effective control measures against these vector-borne diseases.
Carlton J, Silva J, Hall N. The genome of model malaria parasites, and comparative genomics. Curr Issues Mol Biol. 2005;7:23–37.
Nath MJ, Bora AK, Yadav K, Talukdar PK, Dhiman S, Baruah I, et al. Prioritizing areas for malaria control using geographical information system in Sonitpur district, Assam, India. Public Health. 2013;127:572–8.
Article CAS PubMed Google Scholar
Yadav K, Nath MJ, Talukdar PK, Saikia PK, Baruah I, Singh L. Malaria risk areas of Udalguri district of Assam, India: a GIS based study. Int J Geogr Inf Sci. 2012;26:123–31.
Saxena R, Nagpal BN, Srivastava A, Gupta SK, Dash AP. Application of spatial technology in malaria research & control: some new insights. Indian J Med Res. 2009;130:125–32.
Qayum A, Lynn AM, Arya R, Jaiswal SK. GIS integrated epidemiological indices for risk area identification towards malaria control measures. Int J Eng Adv Tech. 2013;2:376–81.
WHO. World Malaria Report 2012. Geneva: World Health Organization; 2012. http://www.who.int/malaria/publications/world_malaria_report_2012/en/.
National Vector Borne Disease Control Programme: Malaria situation in India. Delhi, Ministry of Health and Family Welfare, Govt. of India. Available from: http://nvbdcp.gov.in/malaria3.html
Sweeny AW. The Application of GIS in Malaria Control Programs. In: 10th Colloquium of the Spatial Information Research Centre. 1998. p. 315–20.
National Vector Borne Disease Control Programme. Guidelines for Diagnosis and Treatment of Malaria in India. Delhi: Ministry of Health and Family Welfare, Govt. of India; 2009.
Park K. Textbook of Preventive and Social Medicine. 22nd ed. Jabalpur: Banarsidas Bhanot Publisher; 2013. p. 234–5. 248.
Jaiswal RK, Mukherjee S, Raju KD, Saxena R. Forest fire risk zone mapping from satellite imagery and GIS. Int J Appl Earth Obs Geoinf. 2001;4:1.
Srivastava A, Nagpal BN, Joshi PL, Paliwal JC, Dash AP. Identification of malaria hot spots for focused intervention in tribal state of India: a GIS based approach. Int J Health Geog. 2009;8:30.
Daash A, Srivastava A, Nagpal BN, Saxena R, Gupta SK. Geographical information system (GIS) in decision support to control malaria e a case study of Koraput district in Orissa, India. J Vector Borne Dis. 2009;46:72–4.
Qayum A, Lynn A, Arya R. Traditional knowledge system based GIS mapping of antimalarial plants: spatial distribution analysis. J Geogr Inf Syst. 2014;6:478–91. doi: 10.4236/jgis.2014.65041.
Srivastava A, Nagpal BN, Saxena R, Sharma VP. Geographical information system as a tool to study malaria receptivity in Nadiad Taluka, Kheda district, Gujarat, India. Southeast Asian J Trop Med Pub Health. 1999;30:650–6.
Srivastava A, Nagpal BN, Saxena R, Wadhwa TC, Mohan S, Siroha GP, et al. Malaria epidemicity of Mewat region, district Gurgaon, Haryana, India: a GIS based study. Curr Sci. 2004;86:1297–303.
Van der HW, Konradsen F, Amerasinghe PH, Perera D, Piyaratne MK, Amerasinghe FP. Towards a risk map of malaria for Sri Lanka: the importance of house location relative to vector breeding sites. Int J Epidemiol. 2003;32:280–5.
Zhou SS, Zhang SS, Wang JJ, Zheng X, Huang F, Li WD, et al. Spatial correlation between malaria cases and water-bodies in Anopheles sinensis dominated areas of Huang-Huai plain. China Parasit Vectors. 2012;5:106.
Agarwal SA, Sikarwar SS, Sukumaran D. Application of RS & GIS in risk area assessment for mosquito borne diseases- a case study in a part of Gwalior City (M.P.). Int J Advanc Technol Eng Res. 2012;2:1–4.
Musa MI, Shohaimi S, Hashim NR, Krishnarajah I. A climate distribution model of malaria transmission in Sudan. Geospat Health. 2012;7:27–36.
Census of India. Administrative atlas of India. Office of the Registrar General & Census Commissioner, India. New Delhi: Ministry of Home Affairs; 2011. p. 29.
Pampana E. A Textbook of Malaria Eradication. London: Oxford University Press; 1969.
Salehi M, Mohammad K, Farahani MM, Zeraati H. Spatial modeling of malaria incidence rates in Sistan and Baluchistan province, Islamic Republic of Iran. Saudi Med J. 2008;29:1791–6.
Srivastava A, Nagpal BN, Saxena R, Dev V, Subbarao SK. Prediction of Anopheles minimus habitat in India- a tool for malaria management. Int J Geogr Inf Sci. 2005;19:91–7.
Klinkenberg E, Hoek WVD, Amerasinghe FP. A malaria risk analysis in an irrigated area in Sri Lanka. Acta Trop. 2004;89:215–25.
Hanafi-Bojd AA, Vatandoost H, Oshaghi MA, Charrahy Z, Haghdoost AA, Zamani G, et al. Spatial analysis and mapping of malaria risk in an endemic area, south of Iran: A GIS based decision making for planning of control. Acta Trop. 2012;122:132–7.
Martens WJ, Niessen LW, Rotmans J, Jetten TH, McMichael AJ. Potential impact of global climate change on malaria risk. Environ Health Persp. 1995;103:458–64.
Article CAS Google Scholar
Haghdoost AA, Alexander N, Cox J. Modelling of malaria temporal variations in Iran. Trop Med Int Health. 2008;13:1501–8.
Cox J, Craig M, Le Sueur D, Sharp B. Mapping malaria risk in the highlands of Africa. MARA/HIMAL Tech Rep. 1999;114:8.
Adjuik M, Bagayoko M, Binka F, Coetzee M, Cox J, Craig M, et al. Towards an Atlas of Malaria Risk in Africa. First Technical Report of the Mapping Malaria Risk in Africa. Durban: MARA/ARMA; 1998.
Singh N, Singh OP, Soan V. Mosquito breeding in rice fields and its role in malaria transmission in Mandla district, M.P. Indian J Malariol. 1989;26:191–8.
Jepson WF, Moutia A, Courtois C. The malaria problem in Mauritius: the bionomics of Mauritian Anophelines. Bull Entomol Res. 1947;38:177–208.
We would like to acknowledge the computer laboratory facility at the School of Computational & Integrative Sciences, Jawaharlal Nehru University, New Delhi, India. We thank the Director, NVBDCP, New Delhi, for his critical remarks and for suggesting the inclusion of socio-economic factors as one segment of the GIS malariology work, and especially Dr Munish Joshi of NVBDCP for his commendable support during the preparation of this manuscript. Further, we extend our sincere thanks to the Ministry of Environment, Forest and Climate Change.
Centre for Biology & Bioinformatics, School of Computational & Integrative Sciences, Jawaharlal Nehru University, New Delhi, India
Abdul Qayum & Andrew M Lynn
Indira Gandhi National Forest Academy, Dehradun, India
Abdul Qayum
Centre for the Study of Regional Development, Jawaharlal Nehru University, New Delhi, India
Rakesh Arya
Nepalganj Medical College, Banke, Nepal
Andrew M Lynn
Correspondence to Pawan Kumar.
AQ proposed the idea, designed the overall architecture of the work, performed the analysis, and drafted the manuscript. AML was responsible for the implementation of the idea, channelled the work, and provided critical evaluation. RA was responsible for the GIS database, demographic data, and map preparation. PK collected the epidemiological data and other related information. All authors read and approved the manuscript.
List of malaria hotspots based on the GIS-integrated approach.
Qayum, A., Arya, R., Kumar, P. et al. Socio-economic, epidemiological and geographic features based on GIS-integrated mapping to identify malarial hotspots. Malar J 14, 192 (2015). https://doi.org/10.1186/s12936-015-0685-4
Unit 4: Anthropogenic Climate Change
Environmental Science B
Workbook 16.1
1. What effect does the increase in greenhouse gases have on the atmosphere that contributes to global warming?
Greenhouse gases trap extra heat close to the surface of Earth.
2. Which statement is true about the relationship between Earth's average surface temperature and the amount of carbon dioxide in the atmosphere?
The average surface temperature is increasing as the amount of carbon dioxide in the atmosphere increases.
3. If air pollution was reduced, what would happen to the greenhouse effect?
The greenhouse effect would decrease but still occur naturally.
4. Which statements are true about the effect of pollution on global warming?
- Some pollutants, like chlorofluorocarbons, increase global warming by trapping sunlight near Earth's surface.
- Some pollutants affect global warming by reflecting sunlight back into space or by causing raindrops to form.
Since the Industrial Revolution, human activities have released additional carbon dioxide into the atmosphere. What effect has additional atmospheric carbon dioxide had upon ocean life?
This carbon dioxide is absorbed by ocean water, causing ocean acidification and loss of coral reefs.
The following map shows the global chlorophyll concentrations from 2002-2004. The most concentrated areas of chlorophyll are near the continental coastlines, where water is generally shallower. Which claim is supported by this data?
Increased carbon dioxide levels will result in more chlorophyll concentrated along the continental coastlines because the amount of phytoplankton in oceans increases, disrupting habitats and resources available to ocean life.
Carbon dioxide is found in ice cores analyzed by scientists. Which statement best explains a related concern about ice cap and glacial melting?
Carbon dioxide is released when the ice melts and the released carbon dioxide adds to global warming.
Investigators used data from the Argo array to model ocean currents and speed. What global trend have they discovered?
Ocean current energy has increased.
1. Which are greenhouse gases that contribute to global climate change as they increase in the atmosphere?
- methane
- nitrous oxide
- carbon dioxide
2. According to this graph, what change in the atmosphere over past decades is contributing to global climate change?
steady increase in atmospheric carbon dioxide
3. What happens over time to the average global temperature as concentrations of carbon dioxide, methane, and nitrous oxide increase in the atmosphere?
The average global temperature increases.
4. Which statement describes the relationship between air pollution and the greenhouse effect?
If air pollution increases, the greenhouse effect increases.
5. Which air pollutant helps decrease global warming?
some aerosols because they reflect sunlight back into space
6. How have Earth's systems been modified due to the burning of fossil fuels and deforestation?
The increase in atmospheric carbon dioxide has led to global warming and changes in the biosphere.
7. In recent times, changes to the amount of _[blank]_ in Earth's atmosphere creates feedbacks that cause changes to other Earth systems, such as the ocean.
carbon dioxide
8. Which statements describe an effect of pollution on glacial and ice cap melting?
- The Greenland ice sheet has been shrinking.
- There are fewer glaciers than there were 100 years ago.
9. When temperatures increase and the ice cap melts, what will happen to local coastal areas?
Coastal areas will flood.
10. What is a consequence of increased temperature related to hurricanes?
The frequency of strong hurricanes has increased over the past several decades.
1. According to this graph, what is the approximate increase in atmospheric methane from 1750, prior to the Industrial Revolution, to 2000?
1000 ppb
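As a rough check, taking the commonly cited values of about 700 ppb in 1750 and about 1750 ppb in 2000:

$$\Delta\mathrm{CH_4}\approx 1750\ \mathrm{ppb}-700\ \mathrm{ppb}=1050\ \mathrm{ppb}\approx 1000\ \mathrm{ppb}.$$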
2. According to this graph, between which two dates did the average global temperature increase at the greatest rate, which correlates to more droughts occurring?
3. Study the graph.
4. Using the data, what would the forecasted approximate average global temperature change be from the year 2000 to the year 2100 if global emissions become low?
Climate change will increase during this century regardless of low or high emission growth.
1. Why do scientists rely on models to support the theory of human-caused global warming?
Models can make predictions by using data from the past and present
2. Student A claims that all organisms will be able to adapt to global warming. Student B claims some organisms will be able to adapt to global warming.
Which student claim is correct and why?
Student B is correct because some organisms will be able to reproduce, evolve, and adapt in time, but others may not.
3. Why do people view global warming as caused by humans?
Carbon dioxide, a greenhouse gas, has increased in the atmosphere due to humans burning fossil fuels.
4. Why are models better predictors now than in the past?
Technology has improved modeling.
1. Which view of global warming looks at the present in the context of all of geologic history?
Global warming is due to natural causes.
2. Which piece of evidence supports the view that global warming is a natural occurrence?
There have been periods of global warming in Earth's history.
3. Why do people view global warming as a natural process?
There have been previous warming and cooling periods in Earth's history.
4. The top model shows the ice over present-day Greenland, and the bottom model shows Greenland within the years to come. According to this model of Greenland's ice melt, what is expected to happen to the Greenland ice sheet?
It is expected to decrease in area.
5. What have scientists learned from geoscience data and global climate models?
- The average surface temperature of the Earth is increasing.
- Atmospheric carbon dioxide is increasing.
6. What is the forecasted rate of global climate change and a future impact on Earth?
The rate of global climate change will probably increase and result in an increase in average global temperature.
Unit Session 4
1. Biodiversity is the sum of all the different species of animals, plants, fungi, and microbial organisms living on Earth and the variety of habitats in which they live.
2. Habitat destruction is a threat to biodiversity.
3. Climate change does not affect biodiversity.
4. Protecting our planet starts with us/you!
1. In what ways does the use of a plastic water bottle contribute to one's carbon footprint?
- Carbon dioxide is released when the bottle is manufactured.
- Carbon dioxide is released when the bottle is transported to the store.
2. Which action would reduce one's carbon footprint?
planting a tree
Unit 4 Exam
1 Which gases contribute directly to global climate change when their levels are increased in the atmosphere?
- nitrous oxide
2. This graph shows the concentration of carbon dioxide in trapped bubbles of air in Antarctic ice sheets. What does it indicate about the conditions that contribute to global climate change?
The concentration of carbon dioxide in the atmosphere has greatly increased in recent years.
3. What does this graph indicate about conditions that contribute to global climate change?
The amount of atmospheric carbon dioxide is steadily increasing.
4. What do these diagrams of air pollution over China before and during the COVID-19 pandemic indicate about the conditions that contribute to global climate change?
- The amount of atmospheric nitrogen dioxide decreased over time.
- Air pollution decreased over time.
5. What is the effect of pollution on the greenhouse effect?
Pollutants like methane trap heat from the Sun in the Earth's atmosphere, adding to the greenhouse effect.
6. What are some effects of pollution on global warming?
- Some aerosols can reflect sunlight back into space to decrease global warming.
- Particulates help water condense into clouds, which affects global warming.
- Chlorofluorocarbons increase global warming.
7. Which conclusion can be drawn from these models of ocean acidification?
Oceans are becoming more acidic, and fewer corals can survive.
8. Global warming is related to a change in the occurrence of droughts across the globe. The models below show areas affected by droughts over time. Which conclusion can be drawn from the models?
As global warming increases, the number of droughts increases.
9. Which prediction can be made about the current rate of climate change and its impact on the number of forest fires that occur?
As the rate of climate change increases, the number of forest fires will also increase.
10. Which prediction can be made about the current rate of climate change and its impact on coral reefs?
Coral reefs will decrease as temperatures increase from climate change.
11. The burning of fossil fuels for energy, transportation, and other human needs is affecting Earth's systems. Which explanation best describes one of these effects?
Carbon dioxide is released when fossil fuels are burned, contributing to a rise in the average global temperature due to the greenhouse effect and decreasing the populations of some organisms.
12. Which explanation best describes how deforestation modifies Earth's systems?
NOT - Deforestation removes trees that keep Earth cool. This causes the temperature of the Earth to increase and the polar ice caps to melt.
13. The following map of South America shows the average (or mean) surface air temperature change due to deforestation in the region. The loss of trees allows more heat to be absorbed by the ground, leading to a drier climate for the region. Which claim is supported by this data?
One change to Earth's surface can create feedback that causes changes to other Earth systems.
14. Given what is known about the impact of carbon dioxide in the atmosphere, which claims could you make from the data in this graph?
- Oceans become more acidic as carbon dioxide is dissolved in the water.
- Ice sheets melt as the temperature of the Earth increases due to the impact of carbon dioxide on the greenhouse effect.
- The increase in atmospheric carbon dioxide creates feedbacks.
15. When fossil fuels burn, they react with oxygen to release carbon dioxide and water vapor. What effect do these substances have on global warming?
Carbon dioxide is a greenhouse gas that increases global warming
16. What effect do greenhouse gases have on glacial and ice cap melting?
An increase in greenhouse gases increases the rate of melting.
17. What is the impact of an increase in sea level due to increased melting?
Coastal areas are flooded.
18. The images show the change in Arctic sea ice as the average global temperature increased from 1980 (top) to 2012 (bottom). Which statement best analyzes the impact of global warming during this time?
The oldest and thickest Arctic sea ice is melting as the temperature increases due to global warming.
19. What is the effect of increased atmospheric temperature on the ocean's surface temperature and the occurrence of hurricanes?
The ocean's surface temperature is increasing, providing energy for hurricanes.
20. What is the impact of increased atmospheric temperature on ocean currents?
Ocean currents move with greater energy.
21. Which view of global warming supports making changes to the human way of life?
Global warming is due to human activity.
22. What may contribute to the view that models are not predictors of global warming?
a misunderstanding of how models are constructed
23. Which activity has the highest carbon footprint per individual?
using cars relying on fossil fuels
24. What do humans need to reduce most to positively impact global climate change?
production of greenhouse gases
25. Based on current models, what will probably happen to climate change if we do nothing?
Climate change will increase.
Is the $24$ game NP-complete?
The $24$ game is as follows. Four numbers are drawn; the player's objective is to make $24$ from the four numbers using the four basic arithmetic operations (in any order) and parentheses however one pleases.
Consider the following generalization. Given $n+1$ numbers, determine whether the last one can be obtained from the first $n$ using elementary arithmetical operations as above. This problem admits succinct certificates so is in NP.
Is it NP-complete?
Akhil Mathew
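For concreteness, membership in NP is easy to see from a minimal brute-force checker for the fixed case $n=4$ (the function and example input below are illustrative): an expression tree over the four numbers is a succinct, quickly verifiable certificate, while the search itself grows exponentially with $n$.

    from fractions import Fraction
    from itertools import permutations

    def solvable(numbers, target):
        # True iff `target` is reachable from all of `numbers` using
        # +, -, *, / and any parenthesization. Exact rational arithmetic
        # (Fraction) avoids floating-point error.
        target = Fraction(target)

        def reachable(vals):
            # Every value obtainable by fully combining the tuple `vals`.
            if len(vals) == 1:
                return {vals[0]}
            out = set()
            for perm in set(permutations(vals)):   # every ordering
                for i in range(1, len(perm)):      # every split point
                    for a in reachable(perm[:i]):
                        for b in reachable(perm[i:]):
                            out.update({a + b, a - b, a * b})
                            if b != 0:
                                out.add(a / b)
            return out

        return target in reachable(tuple(Fraction(x) for x in numbers))

    print(solvable([4, 7, 8, 8], 24))   # True, e.g. (7 - 8/8) * 4 = 24

Brute force is fine for four numbers, but the number of expression trees explodes with $n$, which is exactly why the complexity of the generalized problem is the interesting question.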
$\begingroup$ This sounds pretty similar to subset sum (en.wikipedia.org/wiki/Subset_sum_problem), which is NP complete. $\endgroup$ – Simon Nickerson Jul 24 '10 at 7:18
$\begingroup$ Can we not use LaTeX if the formula is simple enough like 'n+1'? And $NP$ is definitely wrong — at least use $\mathrm{NP}$. $\endgroup$ – kennytm Jul 24 '10 at 7:38
$\begingroup$ @Simon: Similar, yes; but can you show a reduction from it? $\endgroup$ – BlueRaja - Danny Pflughoeft Jul 24 '10 at 7:47
$\begingroup$ Not yet, which is why I left a comment rather than an answer. $\endgroup$ – Simon Nickerson Jul 24 '10 at 8:37
$\begingroup$ @Akhil: Looks like we're going to need a hint (I'm assuming you already know the answer.. :) $\endgroup$ – BlueRaja - Danny Pflughoeft Jul 25 '10 at 6:19
This is still WIP. There are a few missing details, still I think it's better than nothing. Feel free to edit in the missing details.
Given a problem of SUBSET-SUM: we have a set $A=\{a_1,a_2,\ldots,a_n\}$ of numbers, and another number $s$. The question we're seeking an answer to is whether or not there's a subset of $A$ whose sum is $s$.
I'm assuming that the 24-game allows you to use rational numbers. Even if it doesn't, I think it is possible to emulate rational numbers with denominators up to size $p$ using integers.
We know that SUBSET-SUM is NP-complete even for integers only. I think the SUBSET-SUM problem is NP-hard even if you allow treating each $a_i$ as a negative number, that is, even if $A$ is of the form $A=\{a_1,-a_1,a_2,-a_2,\ldots,a_n,-a_n\}$. This is still a wrinkle I need to iron out in this reduction.
Obviously, if there's a subset of $A$ with sum $s$, then there's a solution to the 24-problem for reaching $s$ using $A$: a solution using only the $+$ sign.
The problem is what happens if there's no solution using only the $+$ sign, but there is a solution which uses other arithmetic operations.
Let us consider the following construction. Take a prime $p$ which is larger than $n$, the total number of elements in $A$. Given an oracle which solves the 24-problem, and a SUBSET-SUM instance with set $A=\{a_1,a_2,\ldots,a_n\}$ and target $s$, we'll ask the oracle to solve the 24-problem on
$$A=\left\{a_1+\tfrac{1}{p},\ a_2+\tfrac{1}{p},\ \ldots,\ a_n+\tfrac{1}{p}\right\}$$
for the following values:
$$s_1=s+\tfrac{1}{p},\quad s_2=s+\tfrac{2}{p},\quad \ldots,\quad s_n=s+\tfrac{n}{p}.$$
If the solution includes multiplication, we will have a denominator larger than $p$ in the end result, and thus we will not be able to reach any $s_i$.
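To make the denominator bookkeeping explicit (a worked step in the answer's notation), multiplying two shifted elements gives

$$\left(a_i+\frac{1}{p}\right)\left(a_j+\frac{1}{p}\right)=a_ia_j+\frac{a_i+a_j}{p}+\frac{1}{p^2},$$

so every multiplication contributes a $\tfrac{1}{p^2}$ term, and a sum of at most $n<p$ such terms cannot clear that denominator.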
Given an arithmetic expression that contains $a_ia_j=x+\tfrac{1}{p^2}$, it is impossible for the denominator $p^2$ to "cancel" out, since there are at most $n$ elements in the summation, and thus the numerator sitting over $p^2$ can never reach $p$, since $p>n$.
THIS IS NOT QUITE RIGHT! In the expression $a_ia_j-a_ka_l$ the $\tfrac{1}{p^2}$ terms cancel, and therefore our oracle might return an answer which includes two multiplications, one negative and one positive.
What about division? How can we be sure no division will occur? Find another prime $q$ which is different from $p$ and larger than $n$ times the largest $a_i$. Multiply all the numbers by $q$. The set $A$ will be
$$A=\left\{qa_1+\tfrac{1}{p},\ qa_2+\tfrac{1}{p},\ \ldots,\ qa_n+\tfrac{1}{p}\right\}$$
We will look for the following values:
$$s_1=qs+\tfrac{1}{p},\quad s_2=qs+\tfrac{2}{p},\quad \ldots,\quad s_n=qs+\tfrac{n}{p}.$$
In that case, $a_i/a_j$ will be smaller than the minimal element of $A$, and therefore an end result which contains $a_i/a_j$ will never be one of the $s_i$ we're looking for.
Elazar Leibovich
$\begingroup$ I don't understand your proof - just because our oracle gives a solution using multiplication does not mean there's not also a solution using addition; in the same vein, just because there is a solution to the subset-sum doesn't mean it's the same 24-solution our oracle will give. Is this fact dealt with somehow? (if we change the oracle to give every solution, the reduction is simple: append a bunch of 0's and ask the oracle for all the solutions; then check if one of the solutions is of the form a1 + 0*a2 + a3 + a4 + 0*a5 + .... But, that is not the question) $\endgroup$ – BlueRaja - Danny Pflughoeft Jul 27 '10 at 18:11
$\begingroup$ @BlueRaja, If I understood the question correctly, the reduction I need to provide is, given an oracle which solves the 24-problem in polynomial time, show it is possible to solve an NPC problem in polynomial time. This will prove the problem is NP-hard. It is easy to see that if we only allow the use of addition in the 24-problem, then it's parallel to SUBSET-SUM. I'm trying to show we can "force" the 24-problem oracle solver not to use multiplication or division. Is that clearer? $\endgroup$ – Elazar Leibovich Jul 27 '10 at 20:30
$\begingroup$ Oh, I see where the confusion lies: you are assuming that the numbers $a_1, a_2, .., a_n$ given in the problem are restricted to integers, but the oracle which solves the problem is allowed to input rational numbers? Is that correct? $\endgroup$ – BlueRaja - Danny Pflughoeft Jul 27 '10 at 21:04
$\begingroup$ This is an interesting avenue, but I'm not entirely sure about the details. For instance, the first statement about multiplication -- is it possible that sum of the denominators cancel? $\endgroup$ – Akhil Mathew Jul 28 '10 at 3:14
$\begingroup$ Couldn't you cancel a 1/(p^2) term by subtracting another product? $\endgroup$ – yatima2975 Jul 28 '10 at 14:34
There are a few subtleties that will probably affect the final answer.
Are we required to find the solution, or merely establish existence? By analogy, determining if a number has a prime factorization is trivial, but finding its prime factorization is hard.
Is the runtime being measured in terms of $\{a_1,\ldots,a_n,s\}$ or $\{\log a_1,\ldots,\log a_n,\log s\}$? By analogy, SUBSET-SUM is in P in the first case (sketched below), but NP-complete in the second case.
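A sketch of that first case (assuming nonnegative integers; names are illustrative): the classic dynamic program runs in $O(ns)$ time, polynomial in the magnitude of $s$ but exponential in its bit-length.

    def subset_sum(a, s):
        # reachable holds every value r <= s attainable as a subset sum
        reachable = {0}
        for x in a:
            reachable |= {r + x for r in reachable if r + x <= s}
        return s in reachable

    print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True: 4 + 5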
Jeremy Hurwitz
$\begingroup$ 1. Existence is enough (I wasn't talking about the function problem). 2. The second. Well, actually we have to add $n$ to it (I don't think we should take $\log n$). $\endgroup$ – Akhil Mathew Jul 28 '10 at 3:03
$\begingroup$ @Jeff, SUBSET-SUM is never in P. The kind of problems you can never solve in polynomial time, regardless of the actual numbers given to you is called strongly NP-complete en.wikipedia.org/wiki/Strongly_NP-complete $\endgroup$ – Elazar Leibovich Jul 28 '10 at 4:12
$\begingroup$ Oops, it's Jeremy not Jeff. And BTW Jeremy, this kind of post should be in a comment I think. $\endgroup$ – Elazar Leibovich Jul 28 '10 at 4:13
$\begingroup$ Neither of those questions make sense to me - this is not an algorithm, it's a yes or no question. Of course an answer exists: it's either yes, or no. $\endgroup$ – BlueRaja - Danny Pflughoeft Jul 28 '10 at 15:43
$\begingroup$ @ Elazar: Agreed, but I didn't have enough reputation to leave a comment. $\endgroup$ – Jeremy Hurwitz Jul 28 '10 at 21:47
It's worth considering a few ways of showing that the problem is neither in P nor NPC. I've marked this answer "community wiki", so please feel free to add suggestions and flesh out ideas here.
Based on my experience playing the 24 game, it seems that most combinations of numbers are solvable. If we could formalize this, we could show that the 24 game is not NPC. Formally, consider the $2^n$ inputs of length $n$. If all but polynomially many of them are solvable, then the language is sparse and cannot be NPC (unless P=NP).
Jeremy Hurwitz
A Massive Cluster at z = 0.288 Caught in the Process of Formation: The Case of Abell 959
Bîrzan, L.
Rafferty, D.A.
Cassano, R.
Brunetti, G.
Weeren, RJ van
Brüggen, M.
Intema, Huib
Gasperin, F de
Andrade-Santos, F.
Botteon, A.
Röttgering, H.J.A.
Shimwell, T.W.
Bîrzan, L. and Rafferty, D.A. and Cassano, R. and Brunetti, G. and Weeren, R.J.V. and Brüggen, M. and Intema, H.T. et al. A massive cluster at z = 0.288 caught in the process of formation: The case of Abell 959. Monthly Notices of the Royal Astronomical Society. 487 (4): pp. 4775–4789.
10.1093/mnras/stz1456
School of Elec Eng, Comp and Math Sci (EECMS)
This article has been accepted for publication in Monthly Notices of the Royal Astronomical Society ©: 2019 The Author(s). Published by Oxford University Press on behalf of the Royal Astronomical Society. All rights reserved.
The largest galaxy clusters are observed still to be forming through major cluster-cluster mergers, often showing observational signatures such as radio relics and giant radio haloes. Using LOFAR Two-metre Sky Survey data, we present new detections of both a radio halo (with a spectral index of $\alpha_{143}^{1400}=1.48^{+0.06}_{-0.23}$) and a likely radio relic in Abell 959, a massive cluster at a redshift of z=0.288. Using a sample of clusters with giant radio haloes from the literature (80 in total), we show that the radio halo in A959 lies reasonably well on the scaling relations between the thermal and non-thermal power of the system. Additionally, we find evidence that steep-spectrum haloes tend to reside in clusters with high X-ray luminosities relative to those expected from cluster L–M scaling relations, indicating that such systems may preferentially lie at an earlier stage of the merger, consistent with the theory that some steep-spectrum haloes result from low-turbulence mergers. Lastly, we find that halo systems containing radio relics tend to lie at lower X-ray luminosities, relative to those expected from cluster L–M scaling relations, for a given halo radio power than those without relics, suggesting that the presence of relics indicates a later stage of the merger, in line with simulations.
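For readers outside radio astronomy: under the common convention $S_\nu \propto \nu^{-\alpha}$ (assumed here; the paper states its own convention), the quoted two-point spectral index between 143 MHz and 1400 MHz is

$$\alpha_{143}^{1400}=\frac{\log\left(S_{143}/S_{1400}\right)}{\log\left(1400/143\right)},$$

so $\alpha\approx 1.5$ means the halo is roughly 30 times brighter at 143 MHz than at 1.4 GHz.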
Impacts of attending an inclusive STEM high school: meta-analytic estimates from five studies
Barbara Means (orcid.org/0000-0001-5400-0960)1,
Haiwen Wang2,
Xin Wei2,
Viki Young1 &
Emi Iwatani1
International Journal of STEM Education volume 8, Article number: 4 (2021)
Inclusive STEM high schools seek to broaden STEM participation by accepting students on the basis of interest rather than test scores and providing a program sufficient to prepare students for a STEM major in college. Almost nonexistent before the present century, these high schools have proliferated over the last two decades as a strategy for addressing gaps in STEM education and career participation. This study uses a meta-analytic approach to investigate the relationship between attending an inclusive STEM high school and a set of high school outcomes known to predict college entry and declaration of a STEM college major.
Combining effect estimates from five separate datasets of students from inclusive STEM high schools and matched comparison schools, the analysis reported here used data from administrative records and survey data for 9719 students in 94 high schools to obtain estimates of the average impact of attending an inclusive STEM high school on STEM-related high school outcomes. Positive effects for inclusive STEM high schools were found for completion of key STEM courses and for likelihood that students would engage in self-selected STEM activities. Students who attended an inclusive STEM high school also identified more strongly with mathematics and science and were more likely as high school seniors to be very interested in one or more STEM careers. Importantly, these positive impacts were found for low-income, under-represented minority, and female students as well as for students overall. Attending an inclusive STEM high school appeared to have a small positive impact on science test scores for students overall and for economically disadvantaged students, but there were no discernible impacts on mathematics test scores.
These findings suggest that the inclusive STEM high school model can be implemented broadly with positive impacts for students, including low-income, female, and under-represented minority students. Positive impacts on the odds of taking advanced mathematics and science courses in high school and on interest in entering a STEM profession are of particular importance, given the strong association between these variables and entry into a STEM major in college.
Historically, secondary education programs to prepare students for the STEM pipeline—such as selective STEM programs and high schools or selective courses like Advanced Placement science, mathematics, and computer science in regular high schools—have targeted students who could demonstrate a high level of prior academic achievement or aptitude. Recently, however, thinking has changed about how to build America's STEM workforce. The National Academies of Sciences, Engineering, and Medicine, for example, have drawn attention to the clash between the growing need for STEM (science, technology, engineering, and mathematics) expertise on the one hand and US demographic trends on the other (National Academies, 2011; National Academies, 2005). Those demographic groups most likely to pursue STEM studies and careers—middle and high socioeconomic status white and Asian males—comprise a dwindling proportion of the country's population. A 2010 report from the President's Council of Advisors on Science and Technology (PCAST) made the case for moving away from the idea that we can fulfill our needs by selecting for STEM talent to the idea that we must develop STEM talent:
[S]tudies suggest that achieving expertise is less a matter of innate talent than of having the opportunity and motivation to dedicate oneself to the study of a subject in a productive, intellectual way – and for sufficient time – to enable the brain development needed to think like a scientist, mathematician, or engineer. This has important implications for STEM education; it underscores the need to motivate students for long-term study of STEM, and points to the potential for many more students to excel in STEM. (PCAST, 2010)
President Obama's White House developed policies based on this line of thinking, including $80 million in the 2017 federal budget for the creation of "next-generation" high schools (White House Office of Science and Technology Policy, 2016).
The concept of an inclusive STEM high school (ISHS) entails (1) accepting interested students without applying admissions test score or other academic achievement criteria and (2) providing a secondary education program sufficient to prepare all of their students for a STEM major in college.
Almost nonexistent before the present century, inclusive STEM high schools (ISHSs) have proliferated over the last two decades. A 2008 survey of specialized STEM high schools identified over 100 public high schools that described their mission as preparing under-represented minority youth to successfully pursue postsecondary STEM studies (Means, Confrey, House, & Bhanot, 2008). By 2011, the Texas High School Project reported having more than 70 inclusive STEM high schools, North Carolina had at least two dozen according to the North Carolina New Schools Project, and the nonprofit research organization Battelle had teamed with partners in the states of Ohio, Tennessee, and Washington to create and support ISHSs in a STEM learning network within each state. More recently, Rogers-Chapman (2014) generated a list of 221 inclusive STEM high schools in the USA.
There is no single model or accrediting body for inclusive STEM high schools. Some arise from state initiatives, some from district-level strategies, and some from charter school networks. Descriptive studies have found considerable variation across schools that consider themselves ISHSs (LaForce, Noble, King, Holt, & Century, 2014; Lynch et al., 2018), but there are some commonalities. Inclusive STEM high schools are typically small in size (600 or fewer students), with the intent of fostering close relationships among students and between students and their teachers (Lynch, 2015). While they are public schools, most ISHSs are "schools of choice" accepting students from across a school district or geographic area. Case studies of ISHSs (LaForce et al., 2014; Lynch et al., 2018; Scott, 2012) have identified key components characterizing many of them: a rigorous STEM-focused college preparatory curriculum taken by all students; use of project- or problem-based pedagogy; an extensive network of supports for students who need assistance mastering the curriculum; incorporation of career, technology, and life skills into school activities and practices; a supportive school climate; and partnerships with external organizations to support out-of-school STEM experiences.
Published studies of the effectiveness of this high school model have used different samples and analytic approaches and have come to conflicting conclusions. Although test scores are not the best predictor of entering and persisting in STEM majors (Wang, 2013), most of the empirical research on the effectiveness of inclusive STEM schools has focused on test score impacts. Young et al. (2011) examined student outcomes for "T-STEM" high schools in Texas and found slightly but significantly higher 9th-grade math and 10th-grade math and science test scores compared to other Texas schools, after controlling for demographic and prior achievement variables. In contrast, a study analyzing achievement test outcomes for students spending 2 years in one of six Ohio ISHSs, compared to conventional high schools drawing from the same middle schools, found that only two of the six ISHSs had a positive impact on students' science achievement, with the other four having negligible or even negative impacts (Gnagey & Lavertu, 2016). A study by Saw (2017) used a statewide sample with five student cohorts, comparing test scores of students from 42 Texas T-STEM Academies with those of students from all other Texas high schools (1580 unique schools) and found a positive impact of T-STEM attendance for grade 11 mathematics achievement but not for achievement in other subject areas.
The present study contributes to research on the effectiveness of inclusive STEM high schools by (1) applying meta-analytic techniques to a large inclusive STEM high school data set with five student cohorts drawn from three different states, (2) looking at a range of high school outcomes known to be predictive of entry into a STEM college major rather than just mathematics and science test scores, and (3) examining outcomes for student subgroups under-represented in STEM (i.e., low-income, under-represented minority, and female students).
Non-test high school outcomes that predict entry into STEM in college include completion of advanced mathematics and science courses in high school (Adelman, 2006; Chen & Weko, 2009; Federman, 2007; Trusty, 2002; Wang, 2013), a high level of interest in STEM and involvement in STEM-related activities during high school (Andersen & Ward, 2014; Chang, Eagan, Lin, & Hurtado, 2011; Maltese & Tai, 2011; Regan & DeWitt, 2015), and aspiring to enter a STEM career (Legewie & DiPrete, 2014; Radunzel, Mattern, & Westrick, 2016; Tai, Liu, Maltese, & Fan, 2006). If the rationale behind the ISHS model (that it can increase the likelihood that students will become STEM majors in college) is correct, we would expect ISHSs to have a positive impact on these more proximal indicators that can be measured at the end of high school. These relationships are illustrated in the conceptual framework that guided our data collection, shown in Fig. 1.
Fig. 1 Conceptual framework linking inclusive STEM high school experiences to entry into the STEM pipeline
Prior work
Our research team has been studying the relationship between attending an inclusive STEM high school and these high school outcomes since 2012. We have sought to address the policy-relevant question of whether inclusive STEM high schools implemented at scale can in fact prepare a diverse student population for STEM college majors. We have conducted parallel analyses employing propensity score weighting for five student samples drawn from North Carolina, Texas, and Ohio, three states that have large numbers of inclusive STEM high schools. Replicating studies in multiple state contexts is important if research findings are to play a role in guiding education policy. Running parallel studies in the three states allows us to observe the generality of ISHS impacts in multiple student and school samples under different conditions. Previously, we (Means et al., 2017) have described impacts of ISHS for the two most mature samples in our program of research (the Class of 2013 in North Carolina and the Class of 2014 in Texas). The analyses reported here combine data from these cohorts with data from three additional student cohorts—for a total of five cohorts from three states—to obtain estimates of the average ISHS impact on the STEM-related high school outcomes in Fig. 1.
Data-sharing agreements with agencies managing state education data systems precluded combining student-level data from the different states into a single data set for analysis. But by employing a meta-analytic approach, we can increase the total sample size and provide more precise impact estimates than were available from any one of the five individual cohort studies. This is particularly useful when looking at ISHS impacts for various student subgroups, such as low-income or under-represented minority students. Tests of statistical significance within individual cohort studies are highly influenced by sample sizes, which were relatively small for subgroups of interest in some cohorts. In addition, a meta-analytic approach allows us to inspect the variability of outcome estimates across different state contexts and student cohorts. It may be that the ISHS model has positive impacts under some circumstances (e.g., strong state supports in terms of professional development for school leaders) but not others (e.g., when many of the local alternatives to STEM high schools are also schools of choice). If we observe significant heterogeneity in terms of impacts across states and cohorts, we need to worry about generalizing from findings in these three states to possible initiatives in other states and should direct our attention to searching for conditions or practices particular to a state or time period that can help us understand the prerequisites for effective implementation of ISHSs at scale.
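As a sketch of the standard machinery behind such pooling (the paper's exact estimator may differ): given per-cohort impact estimates $\hat\theta_i$ with standard errors $\hat\sigma_i$, a fixed-effect inverse-variance summary is

$$\hat\theta=\frac{\sum_i w_i\hat\theta_i}{\sum_i w_i},\qquad w_i=\frac{1}{\hat\sigma_i^{2}},\qquad \mathrm{SE}\left(\hat\theta\right)=\left(\sum_i w_i\right)^{-1/2},$$

and heterogeneity across cohorts can be assessed with Cochran's $Q=\sum_i w_i\left(\hat\theta_i-\hat\theta\right)^2$, referred to a $\chi^2$ distribution with $k-1$ degrees of freedom for $k$ cohorts.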
The present study
The analyses reported here combine data from two student cohorts each in North Carolina and Texas along with data from a single cohort from Ohio. The findings for the two younger student samples in North Carolina and Texas are of particular interest because these students were surveyed first as 9th graders in the fall of their freshman year of high school and then as seniors in the spring before graduation, allowing us to use their grade 9 reports of STEM-related activities and interests during middle school (as well as the grade 8 achievement covariates used in analyses for all cohorts) as covariates in analyzing their high school outcomes.
The analyses reported here use the combined data from all five student samples to address two research questions:
RQ 1. When findings from the study's five student cohorts are combined, do ISHSs appear to have positive impacts for the STEM course-taking, out-of-class activity, attitude, achievement, and aspirational outcomes in the inclusive STEM high school conceptual framework?
RQ 2. When findings from the study's five student cohorts are combined, do ISHSs appear to enhance STEM-related high school outcomes for low-income, under-represented minority (African American, Hispanic, and Native American), and female students?
State contexts for inclusive STEM high schools
To contextualize our meta-analysis of five student cohorts from three states, we examined aspects of the state environments and policies that could be expected to influence the way in which ISHSs were implemented. These included the demographic and geographic characteristics of their student populations, financial resources, requirements for high school graduation, strength of the K-12 education accountability system, state financial supports for establishing and supporting ISHSs, teacher union presence, charter school policy, and connections between state education and economic development policies. We addressed these issues during interviews with education policymakers at the state, regional, and local levels within each of the three states in our study.
During the timeframe of our study, North Carolina's 600 public high schools were serving a population of around 460,000 students, of whom over 40% were from an under-represented minority (predominantly African American) and half were designated as economically disadvantaged. Roughly 2 out of 5 high schools in North Carolina had been designated as in need of program improvement. In 2006, all of the state's high schools designated as in need of improvement were invited to compete for one of ten $40,000 grants issued by the State Board of Education to support a planning year for creating a new, autonomous STEM-focused high school. The resulting STEM school could be an entirely new school sharing a campus with a larger, traditional high school or it could be a conversion of the entire pre-existing high school. The nonprofit North Carolina New Schools Project was designated as the professional services provider for the ISHS planning process. The New Schools Project, founded in 2003 with funding from the Bill & Melinda Gates Foundation and later known as NC New Schools, offered technical assistance services to support STEM-focused curriculum development and instruction and to connect new STEM high schools to higher education and industry partners. The state Department of Public Instruction had relatively little direct involvement in the creation of North Carolina's ISHSs. These schools emerged instead from a combination of school district initiatives and support from NC New Schools and other nongovernmental education support agencies as well as business and higher education partners (Young et al., 2017). North Carolina ISHSs in our study were all district-run public schools and eight of them were schools-within-a-school created as part of a conversion effort for a larger school previously identified as in need of improvement. The North Carolina school sample did not include any charter schools. (At the time we were recruiting North Carolina schools for our study, North Carolina had a cap of 100 on the total number of charter schools in the state.) Another important piece of the North Carolina context was the state's receipt of one of the first Race to the Top education grants in 2010, bringing in roughly $400 million for educational improvement, including funds for learning technology and for establishing four STEM "anchor schools" focused on career areas important to the state's economy. These schools and their associated "affinity networks" were established subsequent to the school recruiting and the first round of survey data collection for our research, but prior to the second student survey for cohort 2 conducted in 2016. Race to the Top spending may have reduced the contrast between ISHS and comparison school experiences for this second cohort of North Carolina students.
During the same period, Texas had a much larger education system with 1450 public high schools serving nearly 1.5 million students. More than half of these students were designated as coming from low-income homes, and 65% were from under-represented minorities (primarily Hispanic). Texas is a charter friendly state, but one with a strong accountability system. Interest in establishing ISHSs (which are called T-STEM Academies in Texas) arose during conversations between the governor's office and representatives from the Bill & Melinda Gates Foundation. T-STEM was envisioned as a public-private partnership from the beginning, with extensive support from both the Texas Education Agency and the Community Foundation of Texas. The intended nature of an ISHS was more highly specified in Texas than in North Carolina. Detailed T-STEM design and implementation requirements were set forth in a T-STEM Academy Blueprint, and T-STEM Academies risked loss of their funding if they did not comply with blueprint requirements, which included serving a student body of which at least 50% were low-income and under-represented minority students. To support the effective implementation of inclusive STEM high schools, Texas established seven T-STEM Centers dispersed throughout the state to provide needs assessments and tailored technical assistance.
Ohio had the fewest number of public high schools, 506, serving around 520,000 students. These students were 53% low income and 35% minority. Ohio offers another example of ISHSs promoted through a partnership between a state education agency and a private entity: In this case, the Battelle Memorial Institute teamed with the Ohio STEM Learning Network (OSLN). OSLN also received support from the Gates Foundation for establishing ISHSs in Ohio. The state's strategy for supporting ISHSs was to have well-established ISHSs, such as Metropolitan High School in Columbus, Ohio, serve as models for new schools within a regional hub. Each regional hub had higher education and business/industry partners. The OSLN hubs provided technical assistance in the form of collaborations, joint classes, site visits, and educator-to-educator professional learning opportunities (Young et al., 2017). The Ohio Department of Education developed a STEM school designation process, but the designation requirements were less strict than the Texas T-STEM Academies Blueprint. With respect to the student body, for example, a STEM designation in Ohio required that it have a "racial, ethnic, socio-economic, and gender balance reflective of the region" in contrast to the Texas stipulation of explicit representation targets. During the years of our study, Ohio public schools were experiencing reductions in state funding and increased teacher accountability based on their students' test scores in reading and mathematics. Under Governor Kasich, Ohio was friendly to charter schools, and there was considerable tension between public school districts and charter proponents, with both sides claiming that state funding practices advantaged the other sector (Strauss, 2016).
A more extended treatment of the education environment and policies in the three states can be found in Young et al. (2017).
School sampling and recruiting
In each state, our recruiting process began with identifying high schools within the state that met our definition of an inclusive STEM high school and that would have both a grade 12 and a grade 9 class during the year when we planned our survey data collection. For this purpose, we defined an inclusive STEM high school as a secondary school or self-contained school-within-a-school that (a) enrolls students on the basis of interest rather than aptitude or prior achievement, (b) provides students with more intensive STEM preparation than conventional high schools do, and (c) expresses the goal of giving all its students the preparation to succeed in a STEM major in college. Following school and district or charter management organization requirements for approval of research participation, we enlisted as many inclusive STEM high schools as we could in North Carolina and Ohio. In Texas, where there were many more such schools, we continued recruiting until we had 38 willing ISHSs. Study participation in North Carolina and Texas entailed administering surveys to incoming 9th graders and to graduating seniors in the first year of a school's participation in the research and then re-surveying the first of these groups 3.5 years later when they were about to graduate. In Ohio, resources were available for just one study cohort, and surveys were administered to graduating seniors only.
Next, for each ISHS agreeing to participate in our research, we used publicly available school-level data to identify high schools without a STEM focus that served student bodies as similar as possible in terms of demographics and prior achievement profiles for their entering 9th graders. These non-STEM comparison schools were then recruited using the same research application and incentive offers extended to ISHSs. The monetary compensation for school participation depended on school size, with an honorarium of $500 for a small school (enrollment of 600 students or fewer) and $1000 for a larger school (enrollment greater than 600).
Supplementary Figures 1, 2, and 3 in Appendix A-1 provide details on the stages in the recruiting and data collection process and the number of schools remaining in the sample at each stage for each of the three states.
Grade 9 student survey
The main purpose of surveying students entering high school was to obtain reports with as little time lag as possible of students' STEM-related activities and attitudes during middle school. These measures could be used as covariates in analyses of the same students' responses to the Grade 12 Student Survey that they would take in the future. Entering 9th graders were asked to identify the subject of their favorite course in middle school and to indicate whether they had participated in each of eight types of STEM-related activities, such as math and science clubs, competitions, camps, and study groups.
Grade 12 student survey
The survey for graduating seniors was designed to collect data on sociocognitive constructs highlighted in expectancy theory (i.e., science and math identity, interests, academic expectations, and self-efficacy) and on variables shown to predict entry into STEM college majors in prior empirical research. Survey items and scales addressed students' high school experiences and outcomes including STEM courses taken, extracurricular and leisure-time activities related to STEM, overall academic and STEM orientation, academic and personal supports received through their high school and at home, plans for the year following graduation, and interest in STEM careers.
Sources of items and scales for both the Grade 9 and the Grade 12 Student Surveys included the National Center for Education Statistics' High School Longitudinal Study, the Consortium for Chicago School Research's Biennial Chicago Public School Student Survey, and surveys used in SRI's Program Evaluation of the Innovative Technology Experiences for Students and Teachers Program and its Evaluation of the Texas High School Project. Survey scales from these instruments have demonstrated predictive validity with respect to variables such as high school graduation rates (Allensworth, Healey, Gwynne, & Crespin, 2016). Attitudinal constructs were measured through scales of 4 or 5 items using a Likert scale format. The reliability (Cronbach's alpha) of the Grade 12 Student Survey item scales ranged from .71 to .92. The Grade 12 Student Survey measures are provided in Appendix A-2.
For our North Carolina cohorts, student demographic information, grade 8 test scores, high school grade point average, ACT scores, and graduation status were obtained from the North Carolina Education Research Center (NCERDC). With the exceptions of high school GPA and ACT scores, the same kinds of administrative data were available for students in our Texas samples from the Education Research Center at the University of Texas, Austin. For the Ohio sample, we worked directly with the Ohio Department of Education, which linked our survey files to student longitudinal records, stripped personally identifying information, and then returned the linked data sets to us for analysis.
Most of the high school outcome measures in this research, such as STEM course-taking, out-of-class activities, attitudes, aspirations, and mathematics and science grades, were derived from the Grade 12 Student Survey, which was essentially identical across the five individual studies. Mathematics and science achievement test measures, on the other hand, were obtained from state data systems and differed across the three states. North Carolina and Ohio had ACT scores for nearly all of the 12th graders in our samples. The Texas student data system did not include ACT or SAT scores, but it did have subject-area Texas Assessment of Knowledge and Skills (TAKS) scores for students in the Class of 2014 (cohort 3). Texas students in cohort 4 (Class of 2017) did not take the TAKS test in grade 11, and no science and math test scores were available for a majority of this sample. In addition, some of the covariates in our analytic models, such as special education status or English proficiency, were operationalized slightly differently across the three states.
Analytic approach for within-state impact estimates
For analyses of each of the five cohorts, we applied propensity score weighting to make the comparison school student sample as similar as possible to the ISHS student sample in terms of students' prior achievement (mainly grade 8 achievement test scores) and demographic characteristics (including gender, ethnicity, English proficiency, parents' education, and parent employment in STEM). For cohort 2 and cohort 4 studies, where students had been surveyed previously in 9th grade, we also included two prior STEM experience variables from the 9th-grade survey—namely whether STEM was a favorite subject and participation in STEM activities in middle school—in the propensity score weighting and as covariates.
For propensity score weighting, we first posited a logistic regression model with being an ISHS student as the outcome and the abovementioned set of student variables as predictors. For each student, we then calculated a propensity score pi, the estimated probability of being in an ISHS, from this model. We weighted each comparison student by the odds of being in an ISHS, calculated as pi/(1 − pi), and assigned a weight of 1 to each ISHS student. These weights were used in the subsequent analyses.
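As a concrete illustration, the following is a minimal Python sketch of this odds-weighting step on a small synthetic data set; the variable names and data are illustrative assumptions, not the authors' actual code or covariate list.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "ishs": rng.integers(0, 2, n),       # 1 = ISHS student, 0 = comparison
    "female": rng.integers(0, 2, n),
    "math_g8": rng.normal(0.0, 1.0, n),  # standardized grade 8 math score
})

# Logistic regression of ISHS membership on the covariates.
X = sm.add_constant(df[["female", "math_g8"]])
p = sm.Logit(df["ishs"], X).fit(disp=0).predict(X)  # propensity score p_i

# ISHS students get weight 1; comparison students get the odds p_i/(1 - p_i).
df["weight"] = np.where(df["ishs"] == 1, 1.0, p / (1 - p))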
Because students are clustered within high schools, our analyses used hierarchical modeling with school and student levels to compare outcomes for 12th graders in ISHSs to those of 12th graders in comparison schools, adjusting for student demographic characteristics and eighth-grade achievement scores through propensity score weighting as described above. We also adjusted for middle school STEM subject interest and STEM activities for cohorts 2 and 4.
For each set of comparisons, we posited a hierarchical model with student and school levels for the same set of outcomes. The ISHS impact was estimated at the school level. The hierarchical model for student-level outcomes took the following form:
$$ {Y}_{ij}={\beta}_0+{\beta}_1\left({\mathrm{ISHS}}_j\right)+{\beta}_k\left(k\mathrm{th}\ {\mathrm{student}\ \mathrm{covariate}}_{ij}\right)+{\beta}_l\left(l\mathrm{th}\ {\mathrm{school}\ \mathrm{covariate}}_j\right)+{e}_{ij}+{r}_j $$
where i indexes students, j indexes schools, Yij is a student outcome, and ISHS equals 1 for students in an ISHS and 0 for students in a comparison school. eij and rj are student and school random effects, and β1 is the estimated ISHS impact on the student outcome. We included as student-level covariates being female, African American, Hispanic, economically disadvantaged, limited in English proficiency, having a special education designation, having a parent with a bachelor's degree, and grade 8 mathematics, science, and reading achievement scores. We also included the eighth-grade social studies score in cohorts 3 and 4, since it was available in the Texas administrative data. Only students with at least one grade 8 achievement test score were included in the analysis. For missing values on student-level predictors, we used multiple imputation in SAS, imputing each missing value 5 times and combining results with the MIANALYZE procedure. Our model also incorporated school-level covariates: urbanicity, Title I improvement status (as a control for accountability pressure), percent minority students, percent economically disadvantaged students, and the school's average incoming eighth-grade test scores.
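A minimal two-level sketch of this model structure using statsmodels' MixedLM on synthetic data is shown below. It is only an illustration: the variable names are assumed, and it omits the propensity weights and multiple imputation applied in the actual analyses.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 600
df = pd.DataFrame({
    "school_id": rng.integers(0, 30, n),   # j: 30 schools
    "ishs": rng.integers(0, 2, n),         # ISHS indicator (illustrative)
    "math_g8": rng.normal(0.0, 1.0, n),    # example student covariate
})
df["outcome"] = 0.3 * df["ishs"] + 0.5 * df["math_g8"] + rng.normal(0.0, 1.0, n)

# Random intercept r_j for schools; the 'ishs' coefficient plays the role
# of beta_1, the estimated ISHS impact.
result = smf.mixedlm("outcome ~ ishs + math_g8", df, groups=df["school_id"]).fit()
print(result.params["ishs"])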
Meta-analytic approach
Because we wanted to test for average effects of inclusive STEM high schools as a conceptual model across the three states, with a total of five datasets, we performed a fixed-effect meta-analysis calculating the average effect across the five cohorts, using the metan command in Stata. For each outcome, the weighted mean effect was computed by weighting each study cohort's effect estimate by the inverse of its variance. For dichotomous outcomes, effects were estimated as log odds from a logit model. For the achievement test data, we converted ACT scores (in North Carolina and Ohio for cohorts 1, 2, and 5) and TAKS scores (in Texas for cohort 3) into standardized effect sizes using Hedges' g and conducted a meta-analysis across the studies on the mathematics and science test score outcome constructs.
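The fixed-effect pooling itself is simple inverse-variance averaging. The following minimal Python sketch illustrates what the metan command computes for each outcome under a fixed-effect model; the five estimate/standard-error pairs are made-up placeholders, not values from the study.

import numpy as np

def fixed_effect_meta(estimates, std_errors):
    # Weight each cohort's effect by the inverse of its variance,
    # then take the weighted mean; the pooled SE follows directly.
    est = np.asarray(estimates, dtype=float)
    w = 1.0 / np.asarray(std_errors, dtype=float) ** 2
    pooled = np.sum(w * est) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    return pooled, pooled_se

# Placeholder log-odds estimates and standard errors for five cohorts:
pooled, se = fixed_effect_meta([0.90, 0.70, 1.00, 0.80, 0.75],
                               [0.20, 0.25, 0.30, 0.22, 0.35])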
Table 1 provides basic information on the five student samples (cohorts) in the meta-analysis, identifying their state, graduation year, high school achievement test measure, and the number of schools and students in the ISHS and comparison groups. Characteristics of the high schools in each of the five ISHS samples compared to all high schools in their state in the focus year are shown in Table 2. These school-level descriptive data characterize all the students in each school in the study, not just those students in our analytic samples. These school-level data suggest that the ISHSs in each of the five studies were serving higher proportions of low-income and under-represented minority students than were public high schools in their state as a whole, suggesting that ISHSs are indeed expanding the diversity of students exposed to a STEM-focused curriculum. The other major school-level difference apparent in Table 2 is in average school size. One of the essential components of an ISHS is the creation of a close-knit school community, which many educators believe is possible only in a school of relatively small size.
Table 1 Five ISHS impact study cohort samples
Table 2 ISHS and comparison school samples compared to all state high schools
Table 3 summarizes key student characteristics for each ISHS and comparison student analytic sample, both before and after propensity score weighting. Before weighting, there were some significant differences between the ISHS and comparison school student samples in each of the five cohorts. In cohorts 3 and 4, for example, there were more females in comparison schools than in ISHSs. There was a significantly larger proportion of under-represented minority students in ISHSs than in comparison schools in cohorts 1, 2, 4, and 5, and a higher proportion of economically disadvantaged students in the ISHS sample in cohorts 1 and 5. Differences in grade 8 test scores were not large (as would be expected, since similarity of incoming students' grade 8 test scores was a major criterion in selecting schools to recruit for the comparison sample) and favored the comparison school students and the ISHS students equally often. After propensity weighting, the comparison school student sample did not differ from the ISHS student sample on any student characteristic in any of the five studies, with the exception of the percentage of females in the school in the Ohio sample (cohort 5).
Table 3 Descriptive key information on ISHS vs. comparison students who participated in the 12th-grade survey, before and after propensity score weighting, by cohort
High school outcomes
Figures 2, 3, 4, and 5 display the results of the five individual cohort analyses and the meta-analysis for the set of high school outcomes in the ISHS conceptual model (Fig. 1). In each of these figures, squares to the right of the solid vertical line labeled 0 denote positive impacts on the log odds of obtaining the outcome for the student cohort; the ISHS impact for that cohort was statistically significant if the "whiskers" (demonstrating the confidence interval for the impact estimate) extending out from the square do not cross 0. The shaded diamond at the bottom of each figure shows the overall impact estimate obtained in the meta-analysis for that outcome; a dotted vertical line running through the diamond is shown to facilitate comparing impact estimates for individual cohorts to the overall impact estimate. The values for each impact estimate, its confidence interval, and its weight in calculating the overall impact are shown to the right of the figure.
Fig. 2 Estimates of ISHS impacts on STEM course-taking
Fig. 3 Estimates of ISHS impacts on STEM activities and attitudes
Fig. 4 Estimates of ISHS impacts on STEM achievement
Fig. 5 Estimates of ISHS impacts on educational and career aspirations
Tables 4, 5, 6, and 7 provide the meta-analysis estimates of ISHS effects for the same set of outcomes for student subgroups—i.e., under-represented minorities, economically disadvantaged, and female students. (Effect estimates for subgroups in the five individual student cohorts are available from the authors upon request.)
Table 4 Meta-analysis impact estimates for high school advanced course-taking, by student subgroup
Table 5 Impact estimates for high school STEM activity and attitudes, by student subgroup
Table 6 Impact estimates for high school achievement, by student subgroup
Table 7 Impact estimates for education and career aspirations, by student subgroup
Course-taking
As shown in Fig. 2, ISHSs appear to make a difference in the level of mathematics courses students take in high school. Because students need to be ready for calculus when they enter college if they are to complete a STEM college major within 4 years, completion of precalculus or calculus in high school is an important outcome. The estimated ISHS effect on this outcome was positive with log odds of .84, p < .001, corresponding to an odds ratio (OR) of 2.3. This odds ratio suggests that attending an ISHS more than doubles the likelihood that a student will complete precalculus or calculus while in high school. In addition, there was a significant impact across the five cohorts on the OR for having taken chemistry in high school (log odds = .94, OR = 2.6, p < .001), but there was no significant impact on the odds of completing physics (log odds = .19, OR = 1.2, p > .05). Attending an ISHS appeared to increase the odds that a student would complete some kind of technology course in high school (log odds = .47, OR = 1.6, p < .01) and to have a very large impact on the likelihood of taking an engineering course (log odds = 2.29, OR = 9.9, p < .001). Importantly, every ISHS impact estimate for course-taking that was significantly positive for students overall was also significantly positive for under-represented minority, economically disadvantaged, and female students, as shown in Table 4.
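For readers less used to the log-odds metric, each odds ratio reported here is simply the exponential of the estimated log odds; for the precalculus/calculus outcome, for example:

$$ \mathrm{OR}={e}^{\mathrm{log}\ \mathrm{odds}}={e}^{0.84}\approx 2.3 $$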
In summary, ISHS attendance appeared to impact STEM courses completed in high school. Specifically, positive ISHS effects were found for mathematics (completing calculus or precalculus), chemistry, technology courses, and engineering. These positive impacts were found for low-income, under-represented minority, and female students as well as for the ISHS student sample as a whole. The enrollment of low-income, under-represented minority, and female students in such advanced mathematics and science classes within ISHSs contrasts with reports of their typically low participation rates in these courses in US high schools (see https://ocrdata.ed.gov/). These findings for 77 ISHS senior classes across three states appear to confirm the conclusions of earlier qualitative research on ISHSs suggesting that they provide a more rigorous STEM curriculum than do regular high schools (Lynch et al., 2018).
STEM-related activities and attitudes
Figure 3 presents the analytic results for student reports of their participation in STEM activities outside of courses and their attitudes toward mathematics and science. Students who attended ISHSs reported participating in more STEM extracurricular activities overall (estimated difference = .28 on a 4-point scale, p < .001) and engaging in more self-selected STEM-related activities outside of school, such as visiting a science museum (estimated difference = .13 on a 4-point scale, p < .001). Again, the positive ISHS impact estimates obtained for students overall were seen also for under-represented minority, economically disadvantaged, and female students, as shown in Table 5. The level of engagement in STEM activities for these groups of students contrasts with reports of the lower participation rates of under-represented minority and female students in out-of-school STEM activities nationally (see, for example, the responses of 12th graders on the questionnaire administered with the National Assessment of Educational Progress available at https://www.nationsreportcard.gov/sq_students_views_2015/).
However, the ISHS impacts on these outcomes for cohorts 2 and 4, for which we were able to control for middle school engagement in STEM activities, were not statistically significant, with the exception of participation in informal STEM activities by the Texas Class of 2017 (cohort 4). This suggests that the differential inclination to engage in voluntary STEM activities may have pre-dated entry into an ISHS.
In terms of students' attitudes toward science and mathematics, the ISHS experience appears to have a positive influence on students' affinity for these subjects but not on their confidence in their ability to do well in them. ISHS students are more likely than comparison school students to report that their favorite high school subject was in a STEM area (log odds = .52, OR = 1.7, p < .001), and the meta-analysis effect estimates were significantly positive also for under-represented minority, low-income, and female students (see Table 5). ISHS seniors also expressed a stronger identity as a science person (estimated difference = .16 on a 4-point scale, p < .001) and as a mathematics person (estimated difference = .11 on a 4-point scale, p < .001). These positive impacts too held for the three student subgroups. In contrast, ISHS students' sense of efficacy in science and mathematics (expectation that they can do well in the subject) was no higher than that of comparison school students in the overall meta-analysis or in any of the five cohorts (not shown in figure), nor was it significant for any of the subgroups in the meta-analysis (see Table 5).
While it seems clear that ISHS students demonstrate a stronger sense of identification with both science and mathematics than do their peers in other high schools, it may be that this heightened interest is something they brought to their high schools. In those analyses where we were able to control for the extent to which students identified with science and mathematics when they began high school (i.e., in cohorts 2 and 4), the ISHS impact estimate was significantly positive in one case (cohort 2) but not in the other (cohort 4). It makes sense that students who identify with mathematics and science in middle school are more likely to choose to attend an ISHS, and the mixed findings for cohorts 2 and 4 leave uncertainty as to whether attending an ISHS deepens that sense of identity.
Our analyses indicate that ISHS attendance did not enhance students' sense of self-efficacy in mathematics or science relative to that of their peers attending other kinds of high schools. It should be remembered, however, that students tend to take more advanced math and science courses within ISHSs, and it may well be that ISHS students have a better understanding than students taking less advanced courses do of what they do not know. While self-efficacy is regarded as an important predictor of high school course-taking and postsecondary engagement and success in STEM studies in several theoretical models (Lent, Brown, & Hackett, 1994; Simpkins, Davis-Kean, & Eccles, 2006), results from some studies and from international assessment programs suggest that achievement and sense of efficacy do not necessarily go hand in hand (Andersen & Ward, 2014; Chiu, 2017; Maltese & Tai, 2011).
STEM achievement and standardized test scores
Estimates of the ISHS impacts on science and mathematics achievement test scores and self-reported grades are shown in Fig. 4. Although impact estimates did not reach statistical significance for individual cohorts, the overall ISHS impact estimate in the meta-analysis was significantly positive for science achievement test scores (g+ = .12, p < .05). The student subgroup impact estimates in Table 6 show that the positive relationship between ISHS attendance and science test scores is found for economically disadvantaged students as well (g+ = .13, p < .05) but fails to attain statistical significance for under-represented minority (g+ = .10) or female students (g+ = .11). The meta-analysis found no ISHS impact on mathematics test scores (g+ = .02), nor was there a statistically significant relationship between ISHS attendance and math test scores for any of the student subgroups. In summary, ISHS attendance appeared to have a small positive impact on science test scores for students overall and for economically disadvantaged students, and no discernible impact on mathematics test scores in any of the meta-analyses.
Achievement can also be measured by course grades, and this was one area where the pattern of statistically significant ISHS effects differed for student subgroups compared to the overall student sample. The meta-analysis found that for students overall the likelihood of earning high grades (all As or As and Bs) in science and mathematics classes was not significantly greater for ISHS students than for students from comparison high schools (log odds = .28, OR = 1.3 for science; log odds = .17, OR = 1.2 for mathematics), though again, ISHS students were more likely than their peers to be taking advanced courses in these subjects. But there were significant ISHS advantages for some subgroups, as shown in Table 6. Under-represented minority students were more likely to report earning high grades in science classes if they attended an ISHS (log odds = .37, OR = 1.4, p < .01), and economically disadvantaged students were more likely to report earning high grades in both science (log odds = .40, OR = 1.5, p < .001) and mathematics classes (log odds = .40, OR = 1.5, p < .01). The meta-analysis findings for the other three combinations of subgroup and grades were null. One of the tenets of the ISHS model is that all students, including under-represented minority students, can excel in STEM, and this finding is congruent with prior research showing that high expectations enhance academic achievement (Hattie, 2009). It appears that ISHS attendance offers some enhancement of STEM course performance among the kinds of students this educational innovation was intended to benefit.
Education and career aspirations
Figure 5 displays the impact estimates for three key variables related to the likelihood of entering and completing a STEM college major. The first of these is going directly to a 4-year college in the fall after high school graduation. Many low-income students and students of color begin their postsecondary work at 2-year colleges, and statistics show that students who start at a 2-year college, like those who delay college entry altogether, have a lower probability of ever earning a bachelor's degree. Attending an ISHS did not increase the odds of reporting the intent to go directly into a 4-year degree program for students overall or for low-income or under-represented minority students. The one exception to this pattern was a significant relationship between ISHS attendance and planning to enter a 4-year college the next fall for female students (log odds = .33, OR = 1.4, p < .05).
There was a positive ISHS impact on our other measure of educational aspiration. Students who had attended an ISHS were more likely to report that they expect to earn a master's or higher degree (log odds = .40, OR = 1.5, p < .01), and this significant effect was found for under-represented minority, low-income, and female students as well, as shown in Table 7.
Finally, ISHS students were more likely to report that they were very interested in entering a STEM career (log odds = .40, OR = 1.5, p < .001). This latter positive impact was found not only for the combined meta-analytic sample but also for four of the five cohorts, including the two cohorts for which the statistical model controlled for STEM interest during middle school. The positive ISHS effect on STEM career interest was found also for under-represented minority (log odds = .42, OR = 1.5, p < .001), economically disadvantaged (log odds = .40, OR = 1.5, p < .001), and female students (log odds = .43, OR = 1.5, p < .001). This finding is important because interest in a STEM career at the end of high school is a strong predictor of entering and persisting in a STEM major in college (Radunzel et al., 2016).
In summary, ISHS attendance had a positive impact on several key measures of STEM aspirations. Students who attended an ISHS were more likely to expect to earn a graduate degree and were more likely to be very interested in one or more STEM careers. These positive ISHS effects were found in the three student subgroup meta-analyses as well.
Tests of sensitivity and heterogeneity of effects
Additional analyses were run to examine the sensitivity of our findings to the choice of analytic model and to differences in state context or timeframe. For cohorts 2 and 4 (students who had taken a survey in grade 9 as well as grade 12), we conducted a sensitivity analysis by modeling the high school outcomes without the two middle school STEM experience indicators as covariates. After controlling for all of the student-level characteristics used in all five studies, adding controls for prior STEM interest and activity did not change any inferences about ISHS impacts on grade 12 outcomes.
To examine the sensitivity of our results to the choice of a fixed-effects model, we also conducted a random-effects meta-analysis and found the results to be quite similar to those of the fixed-effect analysis, with the direction and statistical significance of all the effect estimates remaining the same.
Given the lack of a definitive, national model of what an inclusive STEM high school is, and the likelihood that both the choices of individual school leaders and communities and the policies of different states will affect ISHS designs and the ways they operate, we wanted to assess the consistency of the ISHS impacts across the five study cohorts. We conducted tests of the heterogeneity of the distribution of the effect estimates using Cochran's Q for each outcome variable (the statistic is defined in the note below) and did not detect statistically significant heterogeneity for any of them. We also examined I² values, representing the percentage of total variation across studies that is due to heterogeneity rather than chance, for the overall sample. All outcomes had I² values less than 25 % except for completed calculus or precalculus (36 %), completed chemistry (42 %), and science self-efficacy (30 %). Heterogeneity in the first two of these variables was likely related to differences in state course-taking requirements, as discussed below.
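In symbols, using the standard formulation of these statistics (with wi the inverse-variance weight for cohort i, the theta-hats the cohort and pooled effect estimates, and k the number of cohorts):

$$ Q={\sum}_{i=1}^k{w}_i{\left({\hat{\theta}}_i-{\hat{\theta}}_{\mathrm{pooled}}\right)}^2,\kern2em {I}^2=\max \left(0,\frac{Q-\left(k-1\right)}{Q}\right)\times 100\% $$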
Across the five study cohorts, our data show first that, as intended, ISHSs attract students from groups under-represented in STEM. The proportion of ISHS students who came from low-income homes was 63% and the proportion from under-represented minorities was 69% across the five study samples. Moreover, the proportions of under-represented minority and low-income students in the ISHSs exceeded those in public high schools in their states as a whole in every state and for every study cohort.
Our findings suggest that nonselective STEM-focused high schools may increase the likelihood that students, including those from groups under-represented in STEM, will leave high school with stronger STEM academic experiences and greater interest in STEM careers than they would have had if they had attended secondary programs without a STEM focus. However, we acknowledge the limitations of propensity score modeling as a basis for causal inference and the possibility that our models did not fully control for a greater initial interest in STEM careers on the part of students who self-selected into ISHSs.
While generally positive in direction, the ISHS impact estimates varied in size for different kinds of outcomes. They appear large for STEM course-taking, STEM identity, and interest in pursuing a STEM career; moderate for general education aspirations; small for science achievement test scores; and absent for mathematics achievement scores and STEM self-efficacy.
These findings have important implications for education policy. They suggest that the inclusive STEM high school model can be implemented broadly with positive impacts for students, including low-income, female, and under-represented minority students. These findings underscore the assumption expressed in the PCAST report that a much broader cross-section of students can experience sustained, advanced instruction in STEM if given the opportunity and suitable support structures.
An important question is whether the ISHS impacts are large enough to have practical import. In particular, policymakers would want to know whether attending a STEM high school increases the likelihood of entering and completing a STEM degree program in college. Postsecondary outcome data were not available for most of the five cohorts included in this meta-analysis. But, as noted above, strong interest in a STEM career at the end of high school predicts entry into a STEM major in college. In addition, as reported elsewhere (Means, Wang, Wei, Iwatani, & Peters, 2018), we have analyzed postsecondary data for cohort 3 and found that the odds of being in a STEM bachelor's degree program 2 years after finishing high school were nearly triple for these Texas ISHS graduates compared to matched graduates of comparison high schools.
These findings also suggest that the ISHS model that emerged over the last 15 years is robust enough to yield similar positive outcomes across a wide range of state contexts. The five cohorts in our analyses incorporate 77 ISHS school samples from three states and four graduation years. As described earlier, student demographics, education policies, and the specific strategies for starting and supporting inclusive STEM high schools varied across the three states. Nevertheless, the overall picture presented by the impact estimates in Figs. 2, 3, and 5 is one of consistent impacts across different contexts and graduation cohorts. For most of the 21 high school outcomes, ISHS impact estimates for all five study cohorts were positive in direction. While the size of the positive impact and of the standard error (and therefore the significance level) differed from sample to sample, the consistency in the direction of the effects suggests that, despite the lack of any national accrediting process or control of the ISHS model, inclusive STEM high schools with the characteristics shown in Fig. 1 typically enhance STEM course-taking and career interest across a range of state contexts. Several factors may have contributed to this consistency.
First, all three of the states where we conducted studies received funding from the Bill & Melinda Gates Foundation to support the creation of inclusive STEM high schools. Funding and ideas came from organizations within each state (legislature, governor's office, education department, science and technology organizations, local foundations) as well, but the Gates Foundation investment was certainly an impetus for starting this work at scale and came with a set of core ideas about the need for new designs for small high schools promoting rigor, relevance, and relationships for students from underserved communities (Gates, 2005). Comparing our findings for inclusive STEM high schools within North Carolina, Ohio, and Texas to those in other states as described by other research teams (LaForce et al., 2014; Lynch et al., 2018; Scott, 2012) does not suggest systematic differences between the two groups of schools, but we must acknowledge the role of the Gates Foundation and the legitimate question of whether implementation of these schools would have been more variable absent the foundation's involvement in the early planning for all three state initiatives.
Another likely contributor to the consistency of ISHS impacts across states was our use of phone interviews to screen potential schools for our ISHS sample to make sure they really were nonselective and had a schoolwide STEM-focused program that all students were expected to complete. Some schools have rebranded themselves as STEM without making any substantive changes in expectations, curriculum, or pedagogy (Eisenhart et al., 2015; Weis et al., 2015) or involve some but not all of their students in intensive STEM coursework. Our screening of potential study schools was designed to exclude such superficial school reform efforts from our study samples, but may have screened out some variants of broad-access STEM schools and programs related to different state policies and incentives (e.g., around career technical education pathways).
One high school outcome category that did seem to be sensitive to state context effects was STEM course-taking. By virtue of the way impacts are estimated, the estimated ISHS impact on the likelihood of taking a specific STEM course was influenced both by practices in ISHSs and by practices in non-STEM high schools, and the latter can change over time as a result of state policy initiatives or other educational trends. For example, the "4 × 4" policy operating in Texas from 2007 to 2014 meant that in those years every high school student had to take 4 years of science and 4 years of mathematics to graduate, and this policy likely reduced the ISHS impact on science and math course-taking for cohort 3. The Texas state legislature repealed this policy during cohort 4's second year of high school.
In summary, these meta-analysis findings provide a positive example of an equity-oriented educational improvement effort with measurable positive impacts. Cohen and Mehta (2017) argue that the weak central control and loosely coupled nature of American public education make system-wide change in core instruction difficult but do open up possibilities for more limited, niche reforms that deviate from usual practices with respect to teaching and learning. Inclusive STEM high schools appear to be one such niche reform—manipulating curriculum, instructional practices, expectations for nondominant student subgroups, and school size and culture in ways that in combination pay off in terms of high school outcomes. The ISHS data suggest that regardless of their demographic background, students who have an interest in STEM can benefit from a rigorous STEM-focused curriculum if provided with the kinds of instruction and supports emphasized in the ISHS model. The next critical question for policy and practice is whether this kind of educational approach can travel beyond its niche—becoming something that low-income, under-represented minority, and female students who are interested in STEM can experience within the typical American high school.
The datasets that support the findings of this study were created by merging survey data collected by SRI International with student-level demographic and test score data obtained from three state organizations. Demographic data for students in the survey samples and their Grade 8 and 10 state test score data were obtained from the North Carolina Education Research Center (NCERDC) at Duke University, the Texas Education Research Center (ERC) at the University of Texas Austin, and the Ohio Department of Education. Because of state restrictions on access to these data and agreements made to obtain access to them, the merged data files used for analysis are not available.
Cochran's Q is calculated as the weighted sum of squared differences between individual study effects and the pooled effect across studies. It is distributed as a chi-square statistic with number of studies minus 1 degrees of freedom.
Abbreviations
ERC: Education Research Center
ISHS: Inclusive STEM high school
NCERDC: North Carolina Education Research Center
SE: Standard error
TAKS: Texas Assessment of Knowledge and Skills
T-STEM Academies: Texas (inclusive) STEM academies
Adelman, C. (2006). The toolbox revisited: Paths to degree completion from high school through college. Washington, DC: U. S. Department of Education.
Allensworth, E. M., Healey, K., Gwynne, J. A., & Crespin, R. (2016). High school graduation rates through two decades of district change: The influence of policies, data records, and demographic shifts. Chicago, IL: University of Chicago Consortium on School Research.
Andersen, L., & Ward, T. J. (2014). Expectancy-value models for the STEM persistence plans of ninth-grade, high-ability students: A comparison between Black, Hispanic, and White students. Science Education, 98(2), 216–242.
Chang, M. J., Eagan, M. K., Lin, M. H., & Hurtado, S. (2011). Considering the impact of racial stigma and science identity: Persistence among biomedical and behavioral science aspirants. The Journal of Higher Education, 82(5), 564–596. https://doi.org/10.1353/jhe.2011.0030.
Chen, X., & Weko, T. (2009). Students who study science, technology, engineering, and mathematics (STEM) in postsecondary education. In Stats in brief. Washington, DC: U.S. Department of Education.
Chiu, M. M. (2017). Self-concept, self-efficacy, and mathematics achievement in 65 regions including the US and Asia. In J. W. Son, T. Watanabe, & J. J. Lo (Eds.), What matters? Research trends in international comparative studies in mathematics education, (pp. 267–288). New York: Springer.
Cohen, D. K., & Mehta, J. D. (2017). Why reform sometimes succeeds: Understanding the conditions that produce reforms that last. American Educational Research Journal, 54(4), 644–690. https://doi.org/10.3102/0002831217700078
Eisenhart, M., Weis, L., Allen, C. D., Cipollone, K., Stich, A., & Dominguez, R. (2015). High school opportunities for STEM: Comparing inclusive STEM-focused and comprehensive high schools in two US cities. Journal of Research in Science Teaching, 52, 763–789.
Federman, M. (2007). State graduation requirements, high school course taking and choosing a technical college major. Topics in Economic Analysis & Policy, 7. https://doi.org/10.2202/1935-1682.1521.
Gates, B. (2005). Prepared remarks for the National Education Summit on High Schools. Downloaded from https://www.gatesfoundation.org/media-center/speeches/2005/02/bill-gates-2005-national-education-summit
Gnagey, J., & Lavertu, S. (2016). The impact of inclusive STEM high schools on student achievement. AERA Open, 2(2), 1–21.
Hattie, J. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. London and New York: Routledge.
LaForce, M., Noble, E., King, H., Holt, S., & Century, J. (2014). The 8 elements of inclusive STEM high schools. Chicago, IL: The University of Chicago.
Legewie, J., & DiPrete, T. A. (2014). Pathways to science and engineering bachelor's degrees for men and women. Sociological Science, 1, 41–48.
Lent, R. W., Brown, S. D., & Hackett, G. (1994). Toward a unifying social cognitive theory of career and academic interest, choice, and performance. Journal of Vocational Behavior, 45(1), 79–122. https://doi.org/10.1006/jvbe.1994.1027.
Lynch, S. J. (2015). Science for all: A new breed of schools is closing achievement gaps among students and may hold the key to a revitalized 21st-century workforce. Scientific American, 313(2). Retrieved from http://www.scientificamerican.com/article/science-for-all/
Lynch, S. J., Peters-Burton, E., Behrens, T., House, A., Ford, M., Spillane, N., … Means, B. (2018). Understanding inclusive STEM high schools as opportunity structures for underrepresented students: Critical components. Journal of Research in Science Teaching. https://doi.org/10.1002/tea.21437.
Maltese, A. V., & Tai, R. H. (2011). Pipeline persistence: Examining the association of educational experiences with earned degrees in STEM among U.S. participants. Science Education, 95(5), 877–907. https://doi.org/10.1002/sce.20441
Means, B., Wang, H., Wei, X., Iwatani, E., & Peters, V. (2018). Broadening participation in STEM college majors: Effects of attending a STEM-focused high school. AERA Open, 4(4), 1–17. https://doi.org/10.1177/2332858418806305.
Means, B., Confrey, J., House, A., & Bhanot, R. (2008). STEM high schools: Specialized science technology engineering and mathematics secondary schools in the U.S. Report prepared for the Bill & Melinda Gates Foundation. Menlo Park, CA: SRI International. Retrieved from http://ctl.sri.com/publications/displayPublicationResults.jsp
Means, B., Wang, H., Wei, X., Lynch, S., Peters, V., Young, V., & Allen, C. (2017). Expanding STEM opportunities through inclusive STEM-focused high schools. Science Education, 101, 681–715. https://doi.org/10.1002/sce.21281.
National Academies (National Academy of Sciences, National Academy of Engineering, & Institute of Medicine) (2005). Rising above the gathering storm. Washington, DC: National Academies Press.
National Academies (National Academy of Sciences, National Academy of Engineering, & Institute of Medicine) (2011). Expanding underrepresented minority participation: America's science and technology talent at the crossroads. Washington, DC: National Academies Press.
President's Council of Advisors on Science and Technology (PCAST) (2010). Prepare and inspire: K-12 education in science technology, engineering and math (STEM) for America's future. Washington, DC: Executive Office of the President.
Radunzel, J., Mattern, K., & Westrick, P. (2016). The role of academic preparation and interest on STEM success. ACT Research Report Series, 2016-8. Iowa City, IA: ACT.
Regan, E., & DeWitt, J. (2015). Attitudes, interest and factors influencing STEM enrolment behaviour: An overview of relevant literature. In E. K. Henriksen, J. Dillon, & J. Ryder (Eds.), Understanding student participation and choice in science and technology education (pp. 63–88). Dordrecht: Springer. https://doi.org/10.1007/978-94-007-7793-4_5
Rogers-Chapman, M. F. (2014). Accessing STEM-focused education: Factors that contribute to the opportunity to attend STEM high schools across the United States. Education and Urban Society, 46(6), 716–737.
Saw, G. (2017). Policy brief: The impact of inclusive STEM high schools on student outcomes: Evidence from Texas STEM Academies. University of Texas at Austin Education Research Center. Retrieved from https://texaserc.utexas.edu/wp-content/uploads/2017/12/70-Brief-Guan-Saw-PB-11.16.17.pdf
Scott, C. (2012). An investigation of science, technology, engineering and mathematics (STEM) focused high schools in the U.S. Journal of STEM Education: Innovations and Research, 13(5), 30–39.
Simpkins, S. D., Davis-Kean, P. E., & Eccles, J. S. (2006). Math and science motivation: A longitudinal examination of the links between choices and beliefs. Developmental Psychology, 42, 70–83. https://doi.org/10.1037/0012-1649.42.1.70.
Strauss, V. (2016). The education mess in Ohio under Governor John Kasich. The Washington Post. Retrieved from https://www.washingtonpost.com/news/answer-sheet/wp/2016/02/10/the-education-mess-in-ohio-under-gov-john-kasich/?noredirect=on&utm_term=.89fadf2dec04
Tai, R. H., Liu, C. Q., Maltese, A. V., & Fan, X. (2006). Planning early for careers in science. Science, 312, 1143–1144.
Trusty, J. (2002). Effects of high school course-taking and other variables on choice of science and mathematics college majors. Journal of Counseling and Development, 80, 464–474.
Wang, X. (2013). Why students choose STEM majors: Motivation, high school learning, and postsecondary context of support. American Educational Research Journal, 50(5), 1081–1121.
Weis, L., Eisenhart, M., Cipollone, K., Stich, A., Nikischer, A., Hanson, J., … Dominguez, R. (2015). In the guise of STEM education reform: Opportunity structures and outcomes in inclusive STEM-focused high schools. American Educational Research Journal, 52(6), 1024–1059.
White House Office of Science and Technology Policy. (Feb 11 2016). STEM for All. Available at https://obamawhitehouse.archives.gov/blog/2016/02/11/stem-all.
Young, V., Adelman, N., Cassidy, L., Goss, K., House, A., Keating, K., et al. (2011). Evaluation of the Texas High School Project. Third comprehensive annual report. Austin, TX: Texas Education Agency.
Young, V., Lynch, S., Means, B., House, A., Peters, V., & Allen, C. (2017). Bringing inclusive STEM high schools to scale: Lessons from three states. Menlo Park, CA: SRI Education.
This work was funded under National Science Foundation grants DRL-1817513 to Digital Promise Global and DRL-1316920 to SRI International. Any opinions, findings, conclusions, or recommendations are those of the authors and do not necessarily reflect the position, policy, or endorsement of their organizations or the funding agency.
Learning Sciences Research, Digital Promise, 2955 Campus Drive, San Mateo, CA, 94403, USA
Barbara Means, Viki Young & Emi Iwatani
SRI Education, 333 Ravenswood Avenue, Menlo Park, CA, 94025, USA
Haiwen Wang & Xin Wei
BM served as principal investigator for the design and conduct of the inclusive STEM high school research project and was the main author for the manuscript. HW led the school sampling, propensity score matching, and design and running of the analytic models. XW ran all analytic models incorporating data from the Texas Education Research Center. VY led the collection of qualitative data concerning state and local policy contexts for inclusive STEM high schools and was chiefly responsible for those portions of the manuscript. EI served as a data analyst and reviewed and contributed to the manuscript. All authors read and approved the final manuscript.
Correspondence to Barbara Means.
Study ethics were overseen by the Institutional Review Board; participants were invited to consent to participate and could opt out of participation at any time with no consequence.
Additional file 1: Appendix A-1.
School Identification, Recruiting, and Participation. Supplementary Figure A-1. School identification and recruiting for Cohorts 1 and 2 in North Carolina. Supplementary Figure A-2. School identification and recruiting for Cohorts 3 and 4 in Texas. Supplementary Figure A-3. School identification and recruiting for Cohort 5 in Ohio
Additional file 2: Appendix A-2. Outcome Measures Used in the Meta-Analysis
Means, B., Wang, H., Wei, X. et al. Impacts of attending an inclusive STEM high school: meta-analytic estimates from five studies. IJ STEM Ed 8, 4 (2021). https://doi.org/10.1186/s40594-020-00260-1
Keywords: meta-analytic models, school effects, STEM schools
Ethnobotanical survey of medicinal plant species used by communities around Mabira Central Forest Reserve, Uganda
Patience Tugume, Esezah K. Kakudidi, Mukadasi Buyinza, Justine Namaalwa, Maud Kamatenesi, Patrick Mucunguzi & James Kalema
Journal of Ethnobiology and Ethnomedicine volume 12, Article number: 5 (2016)
An ethnobotanical study of medicinal plants was carried out in 14 villages adjacent to Mabira Central Forest Reserve (CFR) in Central Uganda between August 2013 and March 2014.
Information was obtained through interviews using semi-structured questionnaires. Field excursions with traditional healers and herbal medicine collectors were carried out. Descriptive statistics were used to present the data, and fidelity ratios and informant consensus agreements were calculated.
A total of 190 plant species in 61 families and 152 genera were reported in the treatment of various health conditions. The family Fabaceae was dominant, representing 14 % of the plant species documented. Vernonia amygdalina was the preferred species for treating malaria. Leaves (68 %) were the parts most frequently used in preparing herbal remedies. Decoctions (29 %) were the most common method of preparation, and the oral route (53 %) the most common mode of administration. Fifty-eight health conditions, grouped in 25 categories, were treated using medicinal plants. Informant consensus agreement was highest for blood system disorders (0.9), which included anaemia, hypertension, and blood cleansing, indicating the homogeneity of informants' knowledge about the remedies used. Vernonia amygdalina and Erythrina abyssinica had 100 % fidelity levels for the treatment of malaria and vomiting, respectively.
The diversity of medicinal plant species used and the associated indigenous knowledge are of great value to the local community, and their conservation and preservation are paramount. The therapeutic uses of the documented plants provide basic data for further research focused on pharmacological studies and conservation of the most important species.
The acceptance and use of herbal medicine is on the increase globally [1–3]. In Africa the situation is no different: over 80 % of the population, particularly in the developing countries, depends directly on plants for their primary healthcare requirements [4, 5]. In East African countries neighbouring Uganda, such as Burundi [6] and Tanzania [7], the proportion of the population using traditional medicine is likewise well above 80 %, particularly in rural areas [6, 7]. Plants form an important part of health care, especially for the rural poor, in Uganda [8]. The Ugandan government has scaled up the use of herbal medicine and is in the process of integrating it into the main health care system [9, 10]. This increased use of herbal medicine reflects the confirmed therapeutic evidence for herbal remedies [11]. It has been reinforced by limited access to modern health services in most developing countries including Uganda, the high cost of modern medicine compared to indigenous herbal medicines, wide socio-cultural acceptance of traditional medicine, and the belief that natural products pose no risk [3, 4, 12, 13].
The increased preference for herbal medicine has consequently propelled the search for pharmaceutical remedies against different ailments from plants [14]. The medicines are collected from the wild, and unsustainable rates of exploitation have negatively affected both the plant resource and the health of the many people who cannot afford orthodox medicine [15–17]. This makes documentation, sustainable utilisation, and conservation essential [3, 18]. The first step in conservation is to document the material traditionally used to treat an ailment [15, 16]. Previous studies have identified and documented numerous medicinal plants for the treatment of various diseases in Uganda [1, 19]; however, these studies targeted specific ailments and did not detail shared uses. A large number of medicinal plants and indigenous uses have not yet been documented. The rich history of African cultures and their innovative utilisation of plants as a source of remedies have been passed down through generations largely by oral tradition [20]. This knowledge is gradually being lost [21] as its custodians die before passing on information to the younger generations. Besides the gradual loss of ethnobotanical knowledge due to lack of documentation, overharvesting of medicinal materials from their natural habitats has been one of the major threats to traditional medicine. In order to conserve wild plant species, there is need for reliable data on their distribution and level of use [22].
The documentation of indigenous knowledge through ethnobotanical studies is important in the conservation and utilization of biological resources [23]. The identification of local names, scientific names, and indigenous uses of plants not only preserves indigenous knowledge but also facilitates future research on the safety and efficacy of medicinal plants in the treatment of various ailments [24]. It is against this background that the utilization of medicinal plants as a source of primary health care by communities adjacent to Mabira CFR is documented here. This will ensure that traditional knowledge about the use of these plants is conserved, facilitate the discovery of new sources of drugs, and promote sustainable use of medicinal plant resources in Uganda. In addition, conservation of medicinal plants will add value to the recreational environment and improve health through sustained ecosystems. This study aimed at collecting data on plant species used to treat different health conditions by communities adjacent to Mabira CFR.
The study area covered human settlement areas around Mabira CFR, some of which were enclaves and others adjacent to the forest. Mabira CFR is located 20 km north of the Lake Victoria shoreline, immediately to the west of the Victoria Nile. The forest reserve lies partly in Buikwe, Mukono, and Kayunga districts and occupies an area of 306 km2 with an altitudinal range of 1070–1340 m above sea level [25]. It is situated between latitudes 0°22′ and 0°35′N and between longitudes 32°56′ and 33°02′E [26] (Fig. 1).
Map of Mabira CFR showing the study villages. The figure shows the location of Mabira CFR in Uganda and specifically highlights the sites of the villages where ethnobotanical surveys of medicinal plants were carried out. The map displays the administrative boundaries, the major road network, and the main physical features in the study area
The forest reserve occupies a gently undulating landscape characterised by numerous flat-topped hills (relics of the ancient African peneplain) and wide, shallow valleys [27]. The topography is such that the land drains to the north, even though the reserve's southern boundary lies only 13 km from the lakeshore. The underlying rocks are composed of micaceous schists and shales of the Buganda-Toro system, with ridges of quartzite and amphibolite. The soils are generally ferralitic sandy clay loams, with black waterlogged clays in the valley bottoms. The climate is tropical, with two rainfall peaks from April to May and October to November and annual rainfall ranging between 1,250 and 1,400 mm. Annual mean temperatures range from a minimum of 16–17 °C to a maximum of 28–29 °C. The vegetation of Mabira CFR has been classified as "medium altitude moist semi-deciduous" [28].
Commercial use of the forest began when some parts were harvested in the early 1900s, and until 1988 intensive coffee/banana agricultural encroachment badly damaged parts of the forest [27]. About 21 % and 26 % of the reserve have been designated as strict nature reserve and buffer zone, respectively, and the forest in these areas is recovering following extensive plantings of native tree species.
The human population living in the forest enclaves was approximately 825,000, with a density of 200–230 people per km2 [29]. The local people are mainly of the Bantu ethnic group, belonging to the following tribes: Baganda, Banyarwanda, Basoga, Bagisu, Bakiga, Banyankole, Bagwere, and Batoro.
The reserve is surrounded by tea and sugarcane plantations, and some local people reside in settlements for labourers on the tea and sugarcane estates [30]. The growing of cash crops other than tea and sugarcane is limited by scarcity of land. However, locals cultivate food crops, mainly for subsistence consumption, such as maize, beans, bananas, groundnuts, sweet potatoes, and vegetables. Livestock rearing is limited to a few households.
Ethical approval of the study was obtained from the Uganda National Council for Science and Technology (UNCST) under registration number SS 3368, after a research license was obtained from the National Forestry Authority (NFA).
This was a field survey targeting custodians of traditional medicine used in the treatment of diseases. Verbal pre-informed consent was obtained from the participants before the interviews. Interviews were conducted in Luganda, the local language of the area, using guided semi-structured questionnaires and with the help of a research assistant conversant with the local language.
Collection of data on medicinal plants used to treat different ailments in the study area followed a slight modification of Martin's procedure [31]. Purposive sampling was used to identify 14 of 27 villages that depend heavily on the forest for primary health care, through a Rapid Rural Appraisal (RRA) with village leaders. Heavy dependence was defined from village council leaders' local experience, i.e., based on the number of individuals who depend wholly on herbal medicine for their livelihoods. The study included villages within 1–5 km of the forest, because distance from the forest influences people's use of forest products. Before entering each village, permission was sought from local leaders after explaining the aim of the study; these leaders gave us the name of the first key informant, while the rest of the respondents were selected by the snowball sampling technique [32, 33]. A total of 36 key informants were selected, with at least two from each village, plus an additional eight knowledgeable herbalists recommended by community members from Naluvule, Bukuku, Buwoola, and Kalagala villages. The informants included primary collectors, vendors, and traditional healers, who are the custodians of indigenous knowledge on herbal medicines. Traditional healers fall into two broad groups: herbalists, who mainly use herbs, and diviners, who also invoke ancestral spirits to guide them in their healing practice [34–36]. The informants provided information on the plants and parts used, ailments treated, mode of preparation and administration, habit, source, and availability of medicinal plants. Field excursions were conducted along forest trails with traditional healers as guides, and voucher specimens of cited medicinal plants were collected.
Preference ranking
Preference ranking [31] was carried out by 12 key informants on the 10 most available medicinal plant species, shortlisted together with the diseases each commonly treats, according to the importance attached to each species in terms of frequency of use and effectiveness (the number of days taken to achieve healing when treating particular diseases successfully). The values assigned to each species were summed across all informants to obtain an overall rank value. The species were then ranked in descending order, with the species with the highest total ranked first.
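For illustration, the summation and ordering step described above can be expressed as a short script. This is a minimal sketch: the species names and informant scores below are hypothetical, not the study's data.

```python
# Minimal sketch of the rank aggregation described above.
# Species names and per-informant scores are hypothetical examples.
rank_scores = {
    "Species A": [10, 9, 8],   # one score per key informant
    "Species B": [7, 9, 10],
    "Species C": [5, 6, 4],
}

# Sum each species' scores across informants, then rank in descending order.
totals = {species: sum(scores) for species, scores in rank_scores.items()}
ranking = sorted(totals.items(), key=lambda item: item[1], reverse=True)
for rank, (species, total) in enumerate(ranking, start=1):
    print(rank, species, total)
```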
Plant identification and processing of voucher specimens
Plant identification was partly carried out in the field using field manuals for plant identification [37, 38]. Voucher specimens were collected and later identified at Makerere University Herbarium. The correctness of the scientific names of species was also checked against the Tropicos database (http://www.tropicos.org, accessed on 12/05/2015).
Descriptive statistics (frequencies and percentages) were used to summarize the data in Microsoft Excel 2013. The ailments treated by the medicinal plants were classified into different categories [39].
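The same frequency-and-percentage summary can be reproduced outside a spreadsheet. The sketch below uses pandas on a hypothetical table of citation records; the column names and rows are illustrative assumptions, not the study's data file.

```python
import pandas as pd

# Hypothetical citation records: one row per (species, plant part) citation.
df = pd.DataFrame({
    "species": ["Vernonia amygdalina", "Carica papaya", "Vernonia amygdalina"],
    "part_used": ["leaves", "leaves", "roots"],
})

# Frequency and percentage of each plant part across all citations.
counts = df["part_used"].value_counts()
percentages = (counts / len(df) * 100).round(1)
print(pd.DataFrame({"n": counts, "%": percentages}))
```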
Informant consensus agreement
The informant consensus factor (Fic) was calculated to indicate the homogeneity of the information, using the formula:
$$ F_{ic} = \frac{N_{ur} - N_{taxa}}{N_{ur} - 1} $$
where Nur = number of use reports and Ntaxa = number of species in each use category. Fic estimates the relationship between the number of use reports minus the number of taxa used and the number of use reports in each category minus one [40].
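The formula translates directly into code. The following is a minimal sketch; the counts passed in the example are hypothetical.

```python
def informant_consensus_factor(n_use_reports: int, n_taxa: int) -> float:
    """Fic = (Nur - Ntaxa) / (Nur - 1); defined only for Nur > 1."""
    if n_use_reports <= 1:
        raise ValueError("Fic requires more than one use report")
    return (n_use_reports - n_taxa) / (n_use_reports - 1)

# Hypothetical example: 15 use reports spread over 5 species.
print(informant_consensus_factor(15, 5))  # -> ~0.71
```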
Fic values are low if plants are chosen randomly, or if informants do not exchange information about their use or disagree about the species used in the treatment of an ailment category. The values are high (close to one) if the species are used by a large proportion of informants, if there are well-defined selection criteria in the community, or if information is exchanged between informants. Medicinal plants presumed to be effective in treating a certain disease therefore have higher Fic values [41].
Fidelity level (FL)
The fidelity level [42] was calculated for each of the 10 preferred species to assess their popularity, based on the key informants who cited them in the treatment of particular ailments: Fidelity Level (FL) = (Ip/Iu) × 100 %, where Ip is the number of informants who suggested the use of a species for the same major ailment and Iu is the total number of informants who mentioned the species for any use.
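As a minimal sketch of this calculation (the counts in the example are hypothetical):

```python
def fidelity_level(ip: int, iu: int) -> float:
    """FL (%) = (Ip / Iu) * 100.

    ip: informants citing the species for the same major ailment.
    iu: informants citing the species for any use.
    """
    if iu <= 0:
        raise ValueError("Iu must be positive")
    return ip / iu * 100

# Hypothetical example: 9 of 12 informants cite one major ailment.
print(fidelity_level(9, 12))  # -> 75.0
```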
Medicinal plant uses
The communities around Mabira CFR use diverse flora in the treatment of various ailments, and the local people possess rich traditional knowledge of medicinal plants (Table 1). Both males and females used medicinal plants, but males were dominant, representing 70 % of the respondents. The ages of the respondents ranged from 25 to 80 years; overall, 46 % of the respondents were below 50 years.
Table 1 Medicinal plants, their habit, parts used, ailments treated, habitat, method of preparation and administration
A total of 190 plant species distributed in 61 families and 152 genera were identified as used. Fabaceae contributed 27 species, followed by Asteraceae (17), Euphorbiaceae (13), Solanaceae (10) and Lamiaceae (9). The genera Solanum and Indigofera contributed five species each, while Ficus, Vernonia and Acacia contributed four species each.
Preferred medicinal plant species
Vernonia amygdalina was ranked highest and regarded as most important in the treatment of malaria in the study area. Table 2 shows the ranking of the ten most important plant species according to the key informants, in decreasing order, together with the values assigned by each informant. The key ailments treated by the preferred medicinal plants were mentioned by the key informants during the interviews.
Table 2 Rank values assigned by each informant for each of the 10 preferred medicinal plants
Growth forms of plants and parts used for medicinal purposes
Different parts of medicinal plants are used to make herbal preparations (Table 3). A high proportion of herbal medicines are made using leaves (77 %) and roots (40 %); other plant parts are not commonly used. Among the 10 preferred medicinal plant species, the bark was predominantly used (seven species), followed by leaves (five) and roots (three) (Table 3), although more than one part was used in some cases. For instance, the leaves, bark and roots of Spathodea campanulata and the leaves, roots and fruits of Tamarindus indica and Phytolacca dodecandra are used to prepare remedies. Herbs made up the highest proportion of medicinal plant species (41 %), followed by trees (28 %), shrubs (22 %), and climbers and grasses (4 %).
Table 3 Plant parts used for medicinal purposes
Source of medicinal plants
Of the recorded medicinal plants, 56 % came from the forest, 14 % were cultivated, 12 % grew in grasslands/woodlands and 18 % in farmlands. The low incidence of medicinal plant gardens was attributed to the need to maintain the secrecy of traditional knowledge and to the belief that cultivated medicinal plants are less potent than plants collected from the wild, the latter therefore being preferred. Medicinal plant species from the forest were mostly members of Fabaceae (40 %) and Euphorbiaceae (54 %), while species from the family Asteraceae were dominant in grasslands (25 %) and fallow (44 %). Most of the medicinal plants grown in home gardens are introduced species that have not been domesticated. These include Callistemon citrinus, Capsicum frutescens and Moringa oleifera, plus fruit tree species that are also medicinal, such as Mangifera indica, Persea americana, Carica papaya and Psidium guajava. Fifty percent of medicinal plant users who harvest for commercial purposes collect plants from the forest.
Methods of preparation and administration
The medicinal plants for treatment of different ailments were prepared and administered using various methods. Decoction was the most commonly used method (29 %), followed by crushing and mixing with water (24 %), use of fresh crushed material (14 %) and burning (9 %) (Fig. 2). In the current study, additives used in herbal medicine preparation included silver fish, ash, salt, alcohol, tea and onions. Salt was used in remedies against toothache and oral wounds, where it is believed to kill germs. For external application, Vaseline, paraffin and ghee were used to reduce friction during application of the remedy.
Fig. 2 Percentage of species prepared using different methods. The figure depicts the percentage of medicinal plant species used for making herbal remedies by each method, according to information obtained from key informant interviews. The total number of species used for calculating the percentages was 190; in some cases herbal remedies from the same medicinal plant species could be prepared using more than one method. The main ingredient used in the preparation of herbal remedies was water, in the case of decoctions and cold infusions. The method of preparation varied according to the plant species, the plant part used and sometimes the condition being treated
Different routes were used in the administration of herbal preparations. The oral route accounted for 61 % of the total species, followed by herbal bath (28 %), rubbing leaves on affected parts (14 %) and inhalation of smoke (5 %). The least used route of administration was the steam bath (2 %).
Ailments treated by medicinal plants
The 58 health conditions recorded were grouped into 25 categories, of which gynaecological conditions, digestive disorders and skin infections featured prominently (Table 4). The numbers of species used to treat the different ailments are summarized in Table 4.
Table 4 Ailment categories treated by different medicinal plants
Individual species treated from one to six ailments each. The species that treated the highest number of ailments were Balanites aegyptiaca, Carica papaya and Dracaena steudneri, each used in the management of six health conditions. Allium sativum, Cissampelos mucronata, Kalanchoe crenata, Lantana trifolia, Solanum anguivi, Tagetes minuta and Vernonia lasiopus were each used in the management of five health conditions. Taxonomic analysis revealed that members of the family Fabaceae were used to treat the highest percentage of ailments (28 %), followed by Solanaceae (24 %), Asteraceae and Euphorbiaceae (19 % each), Amaranthaceae, Balanitaceae and Rutaceae (14 % each), Anacardiaceae, Moraceae, Poaceae and Bignoniaceae (12 % each), and Alliaceae, Caricaceae, Dracaenaceae, Lamiaceae, Menispermaceae, Rosaceae, Rubiaceae, Verbenaceae and Zingiberaceae (10 % each); the remaining families treated less than 10 %.
Informant consensus agreement (Fic)
This technique is designed to highlight species that have healing potential for specific major purposes. The relative importance of each plant species in the treatment of the different ailment categories in Table 5 was analysed using the factor informant consensus (Fic) [41]. Fic values range from 0 to 1: values close to one indicate a high rate of informant consensus on the plant species used against an illness category, whereas values close to zero indicate a low degree of agreement among informants about the use of a plant species for a particular ailment. Fic was calculated for the different ailment categories to test the homogeneity, or consistency, of informants' knowledge about a particular remedy for each ailment category; it indicates which plants are widely used and thus merit further pharmacological and phytochemical studies. The highest Fic (0.9) was scored for blood system disorders. The important plants used for anaemia were Amaranthus dubius and Hibiscus acetosella, while those for high blood pressure included Oxalis corniculata, Canarium schweinfurthii, Sesbania sesban, Vangueria apiculata, Citrus limon and Solanum anguivi. Seven ailment categories had an Fic of zero, since each respondent reported a different species used for the same ailment (Table 5).
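For illustration only, since the paper reports the resulting value rather than the underlying counts, an Fic of 0.9 would arise from, for example, 21 use reports distributed over three species:

$$ F_{ic} = \frac{21 - 3}{21 - 1} = \frac{18}{20} = 0.9 $$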
Table 5 Consensus agreement about uses of medicinal plants for ailment categories
Fidelity Levels (FL) of preferred plant species
For each of the 10 most preferred plant species, a fidelity level (Table 6) was calculated to quantify its importance in treating a major ailment [42]. The FL is based on the number of users of a given plant species for a major ailment: it expresses, as a percentage, the proportion of informants claiming the use of a plant species for the same major ailment relative to the total number of informants who mention the plant for any use, i.e. FL = (Ip/Iu) × 100, where Ip = number of informants who suggested the use of a species for the same major purpose (therapeutic use) and Iu = total number of informants who mentioned the plant species for any use.
Table 6 Fidelity Levels (FL) of most commonly used plants by Key Informants
Table 6 shows fidelity levels greater than 50 % for seven plant species, which highlights the importance of these species in the treatment of the mentioned diseases in the study area. Vernonia amygdalina and Erythrina abyssinica had fidelity levels of 100 % in the treatment of malaria and vomiting, respectively; these high FL values indicate their outstanding preference for treating those conditions.
Characteristics of respondents
Most of the respondents were men, with an average age of 52 years. A common African belief holds that traditional healers should be male [43–45]. The high proportion of key informants who were male and aged 50 years and above is in line with studies in Rwanda [46, 47]. Older people (aged 51–80 years) in society have more knowledge of medicinal plants and their uses owing to long direct contact with plant resources. In contrast, younger people have little interest in traditional medicine in general, and there appears to be a risk of knowledge loss if nothing is done to motivate them: younger people are exposed to modern education and hence are not interested in learning and practicing the ethnomedicinal wisdom that would perpetuate indigenous knowledge. Differences in medicinal plant knowledge among age groups were also reported in other studies in Ethiopia [48, 49].
Diversity of medicinal plants
The high number of species documented indicates that the study area has a diverse flora used in the treatment of various ailments and that the community holds rich traditional knowledge of medicinal plants. This makes Mabira CFR an important source of herbal medicine for the rural communities, since more than half of the mentioned medicinal plants were harvested from the forest. High utilisation of medicinal plant species from forests has also been reported among the Bakonjo and Bamba in the Mt. Rwenzori and Semliki forest areas of Bundibugyo, Western Uganda [50, 51].
The families Fabaceae, Asteraceae, Euphorbiaceae, Lamiaceae and Solanaceae are widely reported in herbal preparations in different parts of Uganda [1, 8, 19, 52, 53], and their widespread use could be attributed to their wide range of bioactive compounds. Asteraceae in particular is reported to contain a large number of bioactive compounds [54, 55], contributing to the high utilization of members of this family for medicinal purposes.
The majority of the plant species documented treated more than one condition. The use of one plant to treat several ailments is probably attributable to the presence of many metabolites in a single plant, and also to the fact that the same molecule can be active against different pathogens. In other instances a combination of plants was used to prepare a herbal remedy against a certain ailment, which illustrates the synergistic effects of such plants: for example, Amaranthus spinosus and Cleome gynandra leaves were used together against fungal infections of the scalp, and Balanites aegyptiaca roots were mixed with leaves of Citrus limon against diarrhoea. Other remedies were monotherapies based on preparations from a single plant; such plants may be palatable, nontoxic and highly effective against the ailments they are used to treat, based on the experience of users.
Most of the medicinal plant species collected and identified in the study area were also used medicinally in other areas of Uganda [1, 19, 56] and other parts of Africa [57] to treat the same or different ailments. The use of the same plant species for similar or different ethnomedicinal purposes in different countries is a reliable indication of the bioactivity potential of the documented plant species [58]. Of the 190 medicinal plant species identified in the current study, 34 species had been identified earlier in Iganga, Eastern Uganda [59], 82 species in Mukono and the Mabira forest areas [60], 22 species in Western Uganda [1], 40 species in Mpigi [52] and 30 species in Oyam, Northern Uganda [8]. A comparison of the ethnomedicinal uses of some plant species used by Mabira CFR communities with other parts of Uganda and other countries is presented in Table 7. Bioactivity studies previously conducted on some of the identified plant species corroborate their ethnobotanical uses. For instance, Capsicum frutescens is used in the management of different cancers, an activity attributed to the presence of capsaicin, which possesses antimutagenic and anticarcinogenic activities [61]. Prunus africana has likewise been found to possess anti-inflammatory and antioxidative activities and compounds such as cytotoxic phenolics, beta-sitostenone and n-docosanol [62], which are important in the management of cancer. Reports of the same ethnomedicinal uses of a plant species across geographical regions and different cultural groups are indicative of the medicinal properties of the species.
Table 7 Relevant literature on previous ethnomedical uses of some medicinal plant species in the current study
Plant parts used
The use of leaves, followed by roots and bark, to make herbal medicine preparations is a common practice in many communities in Uganda, as reported in Mukono [60], Sango Bay in Southern Uganda [16], Western Uganda [1], communities around Kibale National Park [63] and Mpigi [52], and in other countries such as Kenya [64], Ethiopia [65] and Bolivia [66]. The high utilisation of leaves could be attributed to the ease with which they can be obtained in large quantities compared with other plant parts. Leaves are the main photosynthetic organ in plants and are considered a key component of the natural pharmacy, synthesising constituents, particularly those that are more pharmacologically active against diseases [67]. The preference for leaves over other plant parts is thus thought to be due to the accumulation of active ingredients such as tannins and other alkaloids [67]. In contrast, in Oyam district of Northern Uganda, roots were the plant parts most commonly used in herbal medicine preparations, and other parts were underutilized [8]. However, as noted in [68], a clear relationship exists between the part of the plant collected, or the collection method, and the impact on the harvested plant. Collection of bark and roots is damaging and makes species vulnerable to overexploitation: harvesting bark in large quantities can destroy the plant because the protective role of the bark is curtailed, while uprooting plants, especially herbs and shrubs, causes their total destruction. Debarking and uprooting of medicinal plant species thus negatively affect the sustainability of the species in use. For species such as Spathodea campanulata, Tamarindus indica and Phytolacca dodecandra, in which more than one part is used, sustainability would probably be achieved if the harvesting of bark and roots were avoided and the less destructive harvesting of leaves promoted. Even the use of leaves is only less destructive if small quantities are collected. As noted in [69], overharvesting of leaves can lead to the deterioration of medicinal plants, since removal of leaves limits the transition from vegetative to reproductive development, such as flower production and seed/fruit development, which in turn limits the natural regeneration of the plants. Harvesting of roots, on the other hand, is more destructive, as it often involves uprooting whole plants, which consequently affects regeneration for sustainable use.
Herbal preparations made from more than two plant parts of the same plant, such as the bark and roots of Pseudospondias microcarpa, the leaves, bark and roots of Spathodea campanulata, and the leaves, roots and vines of Croton macrostachyus (Table 1), may endanger the species unless mechanisms for sustainable utilisation are put in place. Many studies have shown that the leaves of different plants possess bioactive ingredients against different diseases and pathogens [69–72]. Since harvesting leaves is less destructive than harvesting roots or bark, it is necessary to test leaves for efficacy against the ailments for which roots and bark are mostly harvested, to minimize the danger of overexploitation. For example, the leaves of Vernonia amygdalina have been found to be effective against malaria [73], and thus harvesting of the roots of this species can be avoided.
Habit of medicinal plant species
Herbs were the most common plant life form used for medicinal purposes. The harvesting of herbs, which are in most cases annual, is an indicator that the collection of medicinal plants from the forest is not a major threat to conservation. This could be attributed to their abundance throughout the year, as reported previously in Uganda [15, 19, 53, 63], although shrubs were reported to be commonly used in Northern Uganda [12] and in Ethiopia [74]. The popularity of herbs as a source of herbal therapies is often attributed to their higher content of pharmacologically active ingredients compared with woody plants [8]. Shrubs are preferred because of their availability all year round, since they are relatively drought resistant and are not affected by seasonal variations [65].
The traditional healers interviewed lacked medicinal plant gardens and collected medicinal plants from the forest. A similar trend was reported in Zimbabwe [75]; cultivated plants, however, have been used since ancient times, such as in Iran, and various studies have confirmed the potency of their chemical constituents [14]. Commercial collectors require large volumes, which puts pressure on plant populations. Consequently, overexploitation may lead to the disappearance of many species of economic and other value, posing challenges to their conservation in Uganda's forests [76] and on the African continent as a whole [77].
Herbal medicine preparation and administration
The main route of herbal medicine administration was oral. This mode of administration is commonly used for many herbal remedies, as reported elsewhere [8, 78, 79]. The choice of oral administration may be related to the use of solvents or additives such as water and food, which are commonly believed to serve as a vehicle for transporting the remedies. The additives enhance the extraction of bioactive molecules during remedy preparation; they are also important for minimizing discomfort, improving taste and reducing adverse effects such as vomiting and diarrhoea [80]. Decoctions were cited as the most common method of preparing herbal remedies. Boiling is effective in extracting plant materials and at the same time preserves the herbal remedies for a longer period than cold extraction. However, neither decoctions nor cold extracts offer a long shelf life [81]; as a result, users continuously harvest medicinal plants, putting them under pressure that may lead to overexploitation.
Health conditions treated
Herbal therapies are still preferred in primary health care in Uganda [79] and worldwide [4]. The use of many herbal remedies for the treatment of different ailments has been reported in other studies in Uganda [1, 53] and in other countries such as India [82] and Ethiopia [65]. The diversity of the medicinal plants used thus meets the varied health care needs of the communities of Mabira CFR, since many people cannot afford conventional treatment owing to widespread poverty. The high frequency of treatment of gynaecological conditions, digestive disorders and skin infections indicates the high prevalence of these ailment categories in the study area. Other ailment categories were not commonly treated, implying either their low prevalence or limited traditional knowledge of the medicinal plants used to treat them.
Blood system disorders had the highest informant consensus value (Fic = 0.9). High Fic values are obtained when only one or a few plant species are reported to be used by a high proportion of informants to treat a particular ailment, whereas low Fic values indicate that informants disagree over which plant to use [83]. The high Fic for blood system disorders indicates agreement among respondents on the different plant species used to manage them, as well as their significance. Within this category the main condition treated was hypertension (high blood pressure); its prevalence was confirmed in a third of adults in Mukono district [84], and the respondents attributed it to age and obesity. A screening study of the bioactive constituents of Solanum anguivi fruits, mentioned as one of the remedies against high blood pressure, revealed many bioactive phytochemicals, including alkaloids, flavonoids, tannins, saponins, triterpenoids and phenols. The phenols have the ability to retard lipid oxidation in oils and fatty foods [85], thereby reducing cardiovascular disease. The Fic value of zero in the categories of painful body parts, child care, musculoskeletal pains, abnormalities, body odour, psychiatric disorders and poisonous animal bites implies a lack of agreement on the plant species used in the treatment of such ailments. Fic values close to zero, which are indicative of low informant agreement [86], could be attributed to the use of the same species for many ailments in the community.
Fidelity level
Vernonia amygdalina had a fidelity level of 100 % and was ranked highest in the treatment of malaria, as documented in other parts of Uganda [56, 63]. Its leaf extract has been confirmed to have good antimalarial effects [87, 88], including in in vitro studies [88, 89]. Vernonia amygdalina contains steroid glycosides and sesquiterpene lactones, which are active against Plasmodium falciparum [90, 91]. The species has also been found to be clinically effective in the treatment of malaria patients [92]; in human trials, extracts of Vernonia amygdalina reduced parasitaemia by 32 % [93]. Although Vernonia amygdalina is effective for malaria treatment, it can induce labour in pregnant women [1], thus causing miscarriages, and should therefore be avoided by them. The high fidelity levels [94] of Vernonia amygdalina for malaria and Erythrina abyssinica for vomiting indicate that these two species were considered of great cultural significance. Erythrina abyssinica also has a wide range of uses in Uganda, from the treatment of malaria [95], syphilis [16] and tuberculosis [52] to amoebiasis [19]. In Kenya, E. abyssinica is used to treat mumps [96]; in Mexico, respiratory tract infections [97]; and in Ethiopia, febrile illness [49]. Its use for different ailments is possibly due to its wide range of bioactive compounds [95].
Besides malaria, V. amygdalina has been used in Uganda to treat various diseases: a decoction of its roots and leaves is used to treat syphilis, ulcers and liver problems [1], its stem bark is used to treat tuberculosis [52], and its roots are used to treat cough, abdominal pain, wounds, hernia and headache [8]. The use of V. amygdalina leaves has been reported for treating haemorrhoids in Nigeria [57], malaria in Ghana [98], and bloating, dandruff and impotence in Ethiopia [49]. The 100 % choice by key informants of V. amygdalina and E. abyssinica for the treatment of malaria and vomiting is an indicator of the healing potential of these plant species [99]. These results point to the great potential of V. amygdalina and E. abyssinica as sources of new drugs against malaria and vomiting.
Other species preferred in this study were also medicinally important in other areas against the same or different ailments; the use of the same species in different areas against the same ailment confirms the confidence users have in herbal remedies. Momordica foetida was used in Uganda to treat sexually transmitted infections and abdominal pain [8] and cough [56], and its roots were effective against erectile dysfunction [3]. The stem bark of Warburgia ugandensis was effective against tuberculosis in Mpigi, while both its roots and bark treated erectile dysfunction in Western Uganda [3]; leaves of the same plant were used in Kenya to treat the common cold and sore throat [96]. Alstonia boonei treated haemorrhoids in Nigeria [57]. The widespread reporting of the use of these medicinal plants by different cultural groups in different localities validates the medicinal properties of these species and confirms the confidence users have in the remedies.
The low citation of Prunus africana against prostate cancer reflects a lack of awareness of the symptoms of the disease, the fact that it is specific to men of a particular age category, the fact that not all men get prostate cancer, and the absence of prostate cancer diagnosis. It also indicates limited sharing of knowledge about the disease in the study area.
According to [100], plant species with high fidelity level values are considered potential candidates for further pharmacological investigation and deserve priority attention.
The results of the Fic and FL computations do not corroborate each other, since they measure different quantities; moreover, the diseases treated were grouped into categories, and no single disease was considered alone in the Fic calculations. This follows from the different formulae used to calculate the two values: FL was calculated from the use reports of a plant species for a single ailment, whereas Fic was calculated from the consensus among informants on the plant species used to treat the different diseases within an ailment category. However, the FL values corresponded well with the ranking of the preferred species.
The study shows that Mabira CFR harbours a wide diversity of plant species used as remedies for several ailments. Such plants are especially useful to people who cannot afford modern medical care and in cases where access to modern health facilities is difficult. Knowledge and use of herbal medicine for the treatment of various ailments among the local people is still part of their life and culture, and this calls for preservation of the integrity of the forest and of indigenous knowledge of herbal medicine use. The documented plants have potential for use in drug development.
Abbreviations
CFR: Central Forest Reserve
FL: Fidelity Level
Fic: Informant Consensus Factor
NFA: National Forestry Authority
RRA: Rapid Rural Appraisal
UNCST: Uganda National Council of Science and Technology
Asiimwe S, Namutebi A, Borg-Karlsson A, Kamatenesi-Mugisha M, Oryem-Origa H. Documentation and Consensus of Indigenous knowledge on medicinal plants used by the local communities in Western Uganda. J Nat Prod Pl Res. 2014;4(1):34–42.
Joshi AR, Joshi K. Indigenous knowledge and uses of medicinal plants by local communities of the Kali Gandaki Watershed Area, Nepal. J Ethnopharmacol. 2000;73:119–29.
Kamatenesi-Mugisha M, Oryem-Origa H. Traditional herbal remedies used in the management of sexual impotence and erectile dysfunction in Western Uganda. Afr Health Sci. 2005;5(1):40–9.
WHO. Mental Health Global Action Programme (mhGAP). Geneva, Switzerland: World Health Organisation; 2002.
Senthilkumar K, Aravindhan V, Rajendran A. Ethnobotanical survey of medicinal plants used by Malayali tribes in Yercaud Hills of the Eastern Ghats, India. J Nat Remedies. 2013;13:119–32.
Ngezahayo J, Haryarimana F, Hari L, Stevigny C, Duez P. Medicinal plants used by Burundian traditional healers for the treatment of microbial diseases. J Ethnopharmacol. 2015;173:338–51.
Kitula RA. Use of medicinal plants for human health in Udzungwa mountain forests: a case study of New Dabaga Ulongambi Forest Reserve, Tanzania. J Ethnobiol Ethnomed. 2007;3:7. doi:10.1186/1746-4269-3-7.
Kamatenesi MK, Acipa A, Oryem-Origa H. Medicinal plants of Otwal and Ngai sub counties in Oyam District, Northern Uganda. J Ethnobiol Ethnomed. 2011;7.
Uganda Gazette. Indigenous and Complementary Medicine Bill 2015, Vol. CVIII: Bill No. 7.
WHO. World Health Organisation strategy on traditional medicine 2014–2023. Geneva, Switzerland: World Health Organisation; 2013.
Nezhadali A, Zarrabi S. Separation, identification and determination of volatile compounds of Ziziphora persica Bunge using HS-SPME/GC-MS. Int J Environ Sci Dev. 2010;1:23.
Oreagba IA, Oshikoya KA, Amachree M. Herbal medicine use among urban residents in Lagos, Nigeria. BMC Complement Altern Med. 2011;11:117–25.
Van Andel T, Carvalheiro LG. Why urban citizens in developing countries use traditional medicines: the case of Suriname. Evid-Based Complement Altern Med. 2013; Article ID 687197.
Sharafzadeh S, Alizadeh O. Some medicinal plants cultivated in Iran. J Appl Pharm Sci. 2012;2(1):134–7.
Hamilton AC. Medicinal plants, conservation and livelihoods. Biodivers Conserv. 2004;13:1477–517.
Ssegawa P, Kasenene JM, Kiremire BT, Byamukama R, Kamatenesi-Mugisha M, Krief S, et al. Medicinal plant diversity and uses in Sango bay area, Southern Uganda. J Ethnopharmacol. 2007;113:521–40.
Kamatenesi-Mugisha M, Oryem-Origa H. Medicinal plants used to induce labour during childbirth in Western Uganda. J Ethnopharmacol. 2007;109:1–9.
Balunas MJ, Kinghorn AD. Drug Discovery from medicinal plants. Life Sci. 2005;78(5):431–41.
Tabuti JRS, Dhillion SS, Lye KA. Traditional medicine in Bulamogi county, Uganda: Its practitioners, users and viability. J Ethnopharmacol. 2003;85:119–29.
Soelberg J, Asase A, Akwetey G, Jager AK. Historical versus contemporary medicinal plant uses in Ghana. J Ethnopharmacol. 2015;160:109–32.
Tabuti JRS, Kukunda CB, Kaweesi D, Kasilo OMJ. Herbal medicine use in the districts of Nakapiripirit, Pallisa, Kanungu and Mukono in Uganda. J Ethnobiol Ethnomed. 2012;8:35.
Ahrends A, Rahbek C, Bulling MT, Burgess ND, Platts PJ, Lovett JC, et al. Conservation and the Botanist effect. Biol Conserv. 2011;144:131–40.
Muthu C, Ayyanar M, Raja N, Ignacimuthu S. Medicinal plants used by traditional healers in Kancheepuram District of Tamil Nadu, India. J Ethnobiol Ethnomed. 2006;2:43.
Bagai Y. Ethnobotanical features of Aladaglar (Yahyali, Kayseri) and its vicinity. Herb J Syst Botany. 2000;7:89–94.
Muramira TE. Valuing the losses caused to Mabira Forest by hydropower development in Uganda. Innovations. 2001;8(2):28–30.
Bahati JB, Banana AY, Gombya-Ssembajjwe W. Assessing the implications of decentralisation on livelihood, biodiversity and ecological sustainability in Uganda: a preliminary analysis of the pilot SANREM/IFRI site. Paper presented at the Workshop in Political Theory and Policy Analysis, 29 February 2008, Indiana University; 2008.
Howard PC. Nature conservation in Uganda's tropical forest reserves. Gland, Switzerland: IUCN; 2001.
Langdale-Brown I, Osmaston HA, Wilson JG. The vegetation of Uganda and its bearing on land use. Entebbe: Government Printer; 1964.
Mrema M, Wafula D, Agaba H. Livelihood strategies and the use of forest and tree products in the Mabira buffer zone. Kabale: Agroforestry Programme, FORRI/ICRAF Collaborative Project; 2001.
Meredith WD. Three communities, two corporations, one forest: forest resource use and conflict, Mabira Forest, Uganda. Agroforestry in Landscape Mosaics Working Paper Series. World Agroforestry Centre, Yale University Tropical Resources Institute, and The University of Georgia; 2004.
Martin GJ. Ethnobotany: A methods manual. London: Chapman & Hall; 1995.
De Caluwe E. Market chain analysis of baobab (Adansonia digitata L.) and tamarind (Tamarindus indica L.) products in Mali and Benin. PhD thesis. Ghent University, Faculty of Bioscience Engineering; 2011.
Giuliana A, Padulosi S. Enhancing the value chain for markets for traditional producers of aromatic vegetable and fruit species in the Near East: a pilot study in Syria. In: Amri A, Damania A, editors. Proceedings of the International Conference on Promoting Community-driven Conservation and Sustainable Use of Dryland Agrobiodiversity, 18–25 April 2005, Aleppo, Syria. Aleppo, Syria: International Centre for Agricultural Research in the Dry Areas (ICARDA); 2005.
Anokbonggo WW. The role of African traditional medicine in health care delivery alongside modern medicine. In: Edwards S, Asfaw Z, editors. Plants used in African traditional medicine as practiced in Ethiopia and Uganda. Addis Ababa: Addis Ababa University, NAPRECA; 1992.
Oyebola DDO. National medical policies in Nigeria. In: Last M, Chavunduka GL, editors. The professionalisation of African Medicine. Manchester: Manchester University Press; 1986.
Schoeman JB. Psigopatologie by tradisionele swart Suid-Afrikaners (Psychopathology among traditional black South Africans). In: Louw DA, editor. Suid-Afrikaanse handboek van abnormale gedrag (South African handbook of abnormal behaviour). Johannesburg, South Africa: Southern Boekuitgewers; 1989.
Katende AB, Birnie A, Tengnas B. Useful trees and shrubs of Uganda. Technical Handbook Series 10. Nairobi: Regional Soil Conservation Unit/SIDA; 1995.
Katende AB, Ssegawa P, Birnie A. Wild food plant species and mushrooms of Uganda. Technical Handbook No. 19. Nairobi: Regional Land Management Unit (RELMA)/SIDA; 1999.
Iwu MM. Handbook of African medicinal plants. USA: CRC Press LLC; 1993.
Trotter RJ, Logan MH. Informant consensus: a new approach for identifying potentially effective medicinal plants. In: Etkin NL, editor. Plants in indigenous medicine and diet. Bedford Hills, New York: Redgrave; 1986. p. 91–112.
Cakilcioglu U, Khatun SL, Turkoglu I, Haytad S. Ethnopharmacological survey of medicinal plants in Maden (Elazig-Turkey). J Ethnopharmacol. 2011;137(1):469–86.
Friedman J, Yaniv Z, Dafni A, Palewitch D. A preliminary classification of the healing potential of medicinal plants, based on rational analysis of an ethnopharmacological field survey among Bedouins in Negev Desert, Israel. J Ethnopharmacol. 1986;16:275–87.
Bekalo TH, Woodmatas SD, Woldemariam ZA. An ethnobotanical study of medicinal plants used by local people in the lowlands of Konta Special Woreda, Southern Nations, Nationalities and Peoples Regional State, Ethiopia. J Ethnobiol Ethnomed. 2009;5:26–40.
Cheikhyoussef A, Shapi M, Matengu K, Mu Ashekele H. Ethnobotanical study of Indigenous Knowledge on medicinal plant use by traditional healers in Oshikoto region, Namibia. J Ethnobiol Ethnomed. 2011;7:10.
Okello J, Ssegawa P. Medicinal plants used by communities of Ngai Subcounty, Apac District, Northern Uganda. Afr J Ecol. 2007;45(s1):76–83.
Kamagaju L, Bizuru E, Minani V, Morandini R, Stevigny C, Ghanem G, et al. An ethnobotanical survey of medicinal plants used in Rwanda for voluntary depigmentation. J Ethnopharmacol. 2013;150(2):708–17.
Mukazayire MJ, Minani V, Ruffo CK, Bizuru E, Stevigny C, Duez P. Traditional phytotherapy remedies used in Southern Rwanda for the treatment of liver diseases. J Ethnopharmacol. 2011;138:415–31.
Awas T, Demissew S. Ethnobotanical study of medicinal plants in Kafficho people, South Western Ethiopia. Paper presented at the Proceedings of the 16th International Conference of Ethiopia studies, Ethiopia, 2009.
Chekole G, Asfaw Z, Kelbessa E. Ethnobotanical study of medicinal plants in the environs of Tara-gedam and Amba District, northwest Ethiopia. J Ethnobiol Ethnomed. 2015;11.
Oryem-Origa H, Kakudidi EKZ, Katende AB, Bukenya-Ziraba R. Heirs to the land: mapping the future of the Makalu-Barun. Cultural Surviv Q. 1995;18(4):69–71.
Oryem-Origa H, Kakudidi EKZ, Katende AB, Bukenya-Ziraba R. Utilisation of medicinal plants in Bundibugyo District, Uganda. In: Kinyua AM, Kofi-Tsekpo WM, Dangana LB, editors. Conservation and utilisation of indigenous medicinal plants and wild relatives of food crops. Nairobi: UNESCO; 1997. p. 75–80.
Bunalema L, Obakiro S, Tabuti JRS, Waako P. Knowledge on plants used traditionally in the treatment of tuberculosis in Uganda. J Ethnopharmacol. 2014;151:999–1004.
Oryem-Origa H, Katende AB, Kakudidi EKZ. Some medicinal plants used in Mukono District. The Uganda J. 2003;40:56–65.
Hamill FA, Apio S, Mubira NK, Mosango M, Bukenya-Ziraba R, Maganyi OW, et al. Traditional herbal drugs of Southern Uganda. J Ethnopharmacol. 2000;70:281–300.
Leonti M, Ramirez F, Sticher O, Heinrich M. Medicinal flora of the Popoluca, Mexico: a botanical systematical perspective. Econ Bot. 2003;57:218–30.
Stangeland T, Alele PE, Katuura E, Lye KA. Plants used to treat malaria in Nyakayojo sub county, Western Uganda. J Ethnopharmacol. 2011;137:154–66.
Soladoye MO, Adetayo MO, Chukwuma CE, Adetunji NA. Ethnobotanical survey of plants used in the treatment of Haemorrhoids in South Western Nigeria. Ann Biol Res. 2010;73:175–85.
Maroyi A. Traditional use of medicinal plants in South Central Zimbabwe: review & perspectives. J Ethnobiol Ethnomed. 2013;9:31.
Nalumansi P, Kamatenesi-Mugisha M, Anywar G. Medicinal plants used in Paediatric Health Care in Namungalwe sub county, Iganga District, Uganda. Nov J Med Biol Sci. 2014;2(3):1–14.
Oryem-Origa H, Katende AB, Kakudidi EKZ. Ethnobotanical studies of the Mabira Forest area, Central Uganda. Discovery and Innovation (Special edition). Afr Acad Sci; 2001. p. 169–81.
Surh Y. Anti-tumour promoting potential of selected spice ingredients with antioxidant and anti-inflammatory activities: a short review. Food Chem Toxicol. 2002;40:1091–7.
Bach SM, Marina EP, Ana MP, Marcial GE, Alfredo G, Rodgoun A, et al. Chemical constituents, anti-inflammatory and antioxidant activities of bark extracts from Prunus tucumanensis Lillo. Nat Prod Res. 2013;27:1–4.
Namukobe J, Kasenene JM, Kiremire BT, Byamukama R, Kamatenesi-Mugisha M, Krief S, et al. Traditional plants used for medicinal purposes by local communities around the Northern sector of Kibale National Park, Uganda. J Ethnopharmacol. 2011;136:236–45.
Njoroge GN, Kaibui IM, Njenga PK, Odhiambo PO. Utilisation of priority traditional medicinal plants and local people's knowledge on their conservation status in arid lands of Kenya (Mwingi District). J Ethnobiol Ethnomed. 2010;6:22.
Katema T, Etana D, Spiridoula A, Adugna T, Gebeyehu G, Jos GMH. Ethno-medical study of plants used for treatment of human and livestock ailments by traditional healers in South Omo, Southern Ethiopia. J Ethnobiol Ethnomed. 2013;9:32.
Thomas E, Vandebroek I, Sanca S, Van Damme P. Cultural significance of medicinal plant families and species among Quechua farmers in Apillapampa, Bolivia. J Ethnopharmacol. 2009;122:60–7.
Passalacqua NG, Guarrera PM, De Fine G. Contribution to the knowledge of folk plant medicine in Calabria region (Southern Italy). Fitoterapia. 2007;78:52–68.
Cunningham AB. Recommendations for multiple use zones and development alternatives around Bwindi Impenetrable National Park, Uganda. People & Plants Working paper 4. Paris: UNESCO; 1996.
Cunningham AB. Applied ethnobotany: people, wild plant use and conservation. London, UK: Earthscan; 2001.
Millogo-Kone H, Guissou IP, Nacoulma O, Traore SA. Comparative study of leaf and stem bark extracts of Parkia biglobosa against enterobacteria. Afr J Trad Complement Altern Med. 2008;5(3):238–43.
Ogbonna OJ, Udia PM, Onyekpe PI, Ogbeihe GO. Comparative studies of the phytochemical and proximate analysis, mineral and vitamin compositions of the root and leaf extracts of Tetracarpidium conophorum. Arch Appl Sci Res. 2013;5(4):55–9.
Searels JM, Keen KD, Horton JL, Clarke DH, Ward JR. Comparing Ginsenoside production in leaves and roots of wild American Ginseng (Panax quinquefolius). Am J Pl Sci. 2013;4:1252–9.
Lawal HO, Adewuyi GO, Fawehinmi AB, Etatuvie SO. Chemical evaluation of mosquito repellent formulation prepared from essential oils of plants. J Nat Prod. 2012;6:33–7.
Lulekal E, Kelbessa E, Bekele T, Yineger H. An ethnobotanical study of medicinal plants in Mana Angetu District, Southern Ethiopia. J Ethnobiol Ethnomed. 2008;4:10.
Ngarivhume T, Van't Klooster CIEA, de Jong JTVM, Westhuizen JHV. Medicinal plants used by traditional healers for the treatment of malaria in Chipinge district in Zimbabwe. J Ethnopharmacol. 2015;159:224–37.
Kayanja FIB, Byarugaba D. Disappearing forests of Uganda: the way forward. Special section: Science in the Third World. Curr Sci. 2001;81(8):936–47.
Moyo M, Aremu AO, Van Staden J. Medicinal plants: an invaluable, dwindling resource in Sub-Saharan Africa. J Ethnopharmacol. 2015;174:595–606.
Bhattarai S, Chaudhary P, Quave L, Taylor S. The use of medicinal plant species in the trans-Himalayan arid zone of Mustang District, Nepal. J Ethnobiol Ethnomed. 2010;6:14.
Kamatenesi MK, Oryem-Origa H. Medicinal plant species used to induce labour during childbirth in Western Uganda. J Ethnopharmacol. 2006;109:1–9.
Etana D. Ethnobotanical study of traditional medicinal plants of Goma Woreda, Jimma Zone of Oromia Region. MSc thesis. Addis Ababa University, Department of Biology; 2010.
Hirt HM, M'pia B. Natural medicine in the tropics. 3rd ed. Kisubi, Uganda: Marianum Press; 2008.
Kumar R, Bharati AK. Ethnomedicines of Tharu tribes of Dudhwa National Park, India. Ethnobot Res Appl. 2014;12.
Heinrich M, Ankli A, Frei B, Weimann C, Sticher O. Medicinal plants in Mexico: healers' consensus and cultural importance. Soc Sci Med. 1998;47:1859–71.
Nuwaha F, Musinguzi G. Pre-hypertension in Uganda: a cross-sectional study. BMC Cardiovasc Disord. 2013;13:101.
Rumbaoa RGO, Comago DF, Geronimo IM. Phenolic content and antioxidant capacity of Philippine potato (Solanum tuberosum) tubers. J Food Comp Anal. 2009;22:546–50.
Gazzaneo LRS, Lucena RFP, Albuquerque UP. Knowledge and use of medicinal plants by local specialists in a region of Atlantic Forest in the state of Pernambuco (Northeastern Brazil). J Ethnobiol Ethnomed. 2005;1:9.
Njan AA, Adzu B, Agaba AG, Byarugaba D, Diaz-Llera S, Bangsberg DR. The analgesic and antiplasmodial activities and toxicology of Vernonia amygdalina. J Med Food. 2008;11:574–81.
Tona L, Cimanga RK, Mesia K. In vitro antiplasmodial activity of extracts and fractions from seven medicinal plants used in the Democratic Republic of Congo. J Ethnopharmacol. 2004;93:27–32.
Masaba SC. The antimalarial activity of Vernonia amygdalina Del (Compositae). Trans R Soc Trop Med Hyg. 2000;94:694–5.
Koshimizu K, Ohigashi H, Huffman MA. Use of Vernonia amygdalina by Wild Chimpanzee: possible role of its bitter and related constituents. Physiol Behav. 1994;56:1209–16.
Ohigashi H, Huffman MA, Izutsu D. Towards the chemical ecology of medicinal plant use by wild chimpanzees, possibly for parasite-related diseases. J Chem Ecol. 1994;20:541–53.
Challand S, Willcox M. A clinical trial of the traditional medicine V. amygdalina in the treatment of uncomplicated malaria. J Altern Complement Med. 2009;15:1231–7.
Toyang NJ, Verpoorte R. A review of the medicinal potentials of plants of the genus Vernonia (Asteraceae). J Ethnopharmacol. 2013;146(3):681–723.
Heinrich M, Edwards S, Moerman DE, Leonti M. Ethnopharmacological field studies: a critical assessment of their conceptual basis and methods. J Ethnopharmacol. 2009;124:1–17.
Yenesew A, Induli M, Derese S, Midiwo JO, Heydenreich M, Peter MG, Waters NC, et al. Anti-plasmodial flavonoids from the stem bark of Erythrina abyssinica. Phytochemistry. 2004;65(22):3029–32.
Kipkore W, Wanjohi B, Rono H, Kigen G. A study of the medicinal plants used by the Marakwet Community in Kenya. J Ethnobiol Ethnomed. 2014;10:24.
Camejo-Rodrigues J, Ascensao L, Bonet M, Valles J. An ethnobotanical study of medicinal and aromatic plants in the Natural Park of Serra de Sao Mamede (Portugal). J Ethnopharmacol. 2003;89:199–209.
Asase A, Akwetey GA, Achel DG. Ethnopharmacological use of herbal remedies for the treatment of malaria in the Dangme West District of Ghana. J Ethnopharmacol. 2010;129:367–76.
Ugulu I, Baslar S, Yorek N, Dogan Y. The investigation and quantitative ethnobotanical evaluation of medicinal plants used around Izmir Province, Turkey. J Med Pl Res. 2009;3:345–67.
Hassan-Abdallah A, Merito A, Hassan S, Aboubaker D, Djama M, Asfaw Z, et al. Medicinal plants and their uses by the people in the region of Randa, Djibouti. J Ethnopharmacol. 2013;148(2):701–13.
Tabuti JRS. Herbal medicines used in the treatment of malaria in Budiope County, Uganda. J Ethnopharmacol. 2008;116:33–42.
Betti JL. An Ethnobotanical study of medicinal plants among the Baka pygmies in the Dja Biosphere Reserve Cameroon. Afr Study Monogr. 2004;25:1–27.
Longanga Otshudi A, Vercruysse A, Foriers A. Contribution to the ethnobotanical, phytochemical and pharmacological studies of traditionally used medicinal plants in the treatment of dysentery and diarrhoea in Lomela area, Democratic Republic of Congo (DRC). J Ethnopharmacol. 2000;71:411–23.
Cos P, Hermans N, De Bruyne T, Apers S, Sindambiwe JB, Vanden Berghe D, et al. Further evaluation of Rwandan medicinal plant extracts for their antimicrobial and antiviral activities. J Ethnopharmacol. 2002;79(2):155–63.
Adetutu A, Morgan AW, Corcoran O. Ethnopharmacological survey and in vitro evaluation of wound-healing plants used in Southern Nigeria. J Ethnopharmacol. 2011;137:50–6.
Kisangau DP, Lyaruu HM, Hosea KM, Joseph CC. Use of traditional medicines in the management of HIV/AIDS opportunistic infections in Tanzania: a case in Bukoba rural district. J Ethnobiol Ethnomed. 2007;3:29. doi:10.1186/1746-4269-3-29.
Van Wyk BE, Gericke N. People's plants: a guide to useful plants of Southern Africa. Traditional herbal remedies used by South African women for gynaecological complaints. J Ethnobiol Ethnomed. 2000;86:97–108.
Teklehaymanot T, Giday M. Ethnobotanical study of medicinal plants used by people in Zegie peninsula, Northern Ethiopia. J Ethnobiol Ethnomed. 2007;3:12.
Ochwang'i DO, Kimwele CN, Oduma JA, Gathumbi PK, Mbaria JM, Kiama SG. Medicinal plants used in treatment and management of cancer in Kakamega County, Kenya. J Ethnopharmacol. 2014;151(3):1040–55.
We are greatly indebted to the African Development Bank, which provided funds for the fieldwork. We thank the traditional healers and local people who provided information. We appreciate the Uganda National Council of Science and Technology (UNCST) for granting us permission to carry out this study and the National Forestry Authority for allowing us to collect samples from the forest. We thank our forest guides Mr. Abdu Kasozi, Mr. Sekabira Samuel and Mr. Kizito Isaac, and our research assistant Ms Catherine Twesiime. We also acknowledge the assistance rendered by the staff of Makerere University Herbarium in identifying the plant species.
Department of Biological Sciences, College of Natural Sciences, Makerere University, P.O Box 7062, Kampala, Uganda
Patience Tugume, Esezah K. Kakudidi, Patrick Mucunguzi & James Kalema
College of Agriculture and Environmental Sciences, Makerere University, P.O Box 7062, Kampala, Uganda
Mukadasi Buyinza & Justine Namaalwa
Bishop Stuart University, P.O Box 9, Mbarara, Uganda
Maud Kamatenesi
Patience Tugume
Esezah K. Kakudidi
Mukadasi Buyinza
Justine Namaalwa
Patrick Mucunguzi
James Kalema
Correspondence to Patience Tugume.
PT conceptualized the study, designed the methods, conducted the ethnobotanical survey, analysed the data and drafted the manuscript. EKK and MB conceptualized the idea of this manuscript and participated in reviewing it. JN, MK, PM and JK reviewed the manuscript. All authors read and approved the final manuscript.
Tugume, P., Kakudidi, E.K., Buyinza, M. et al. Ethnobotanical survey of medicinal plant species used by communities around Mabira Central Forest Reserve, Uganda. J Ethnobiology Ethnomedicine 12, 5 (2016). https://doi.org/10.1186/s13002-015-0077-4
Received: 01 September 2015
Ethnobotanical
Mabira CFR
Study on Reflectance Improvement of Al-Ti Based Oxide Thin Films for Semitransparent Solar Cell Applications
Lee, Eun Kyu;Jeong, So Un;Bang, Ki Su;Lee, Seung-Yun 437
This work reports the preparation of Al-Ti based oxide thin films and their optical properties. Although the transmittance of a $TiO_2/Al2O_3$ bilayer structure was as high as 90% at wavelengths of 600 nm or larger, the reflectance of the bilayer reached its minimum at wavelengths of around 360 nm. The transmittance of an 89-nm-thick $TiO_2$ thin film rapidly increased and then decreased at a critical wavelength because of destructive interference. The wavelength corresponding to the reflectance minimum increased after an increase in $TiO_2$ film thickness. The smooth surface morphology of the AlTiO thin film was retained up to a film thickness of 65 nm, and the transmittance of the film was inversely proportional to film thickness, in accordance with the general tendency for optical films. The reflectance of the AlTiO film at visible light wavelengths was lower than that of the $TiO_2$ film, which implies that the AlTiO film is suitable for applications as an optical thin film layer in semitransparent solar cells.
The Effect of Temperature on the Photoluminescence Properties of the InZnP/ZnSe/ZnS (Core/Multishell) Quantum Dots
Son, Min Ji;Jung, Hyunsung;Lee, Younki;Koo, Eunhae;Bang, Jiwon 443
We investigated the temperature-dependent photoluminescence spectroscopy of colloidal InZnP/ZnSe/ZnS (core/shell/shell) quantum dots with varying ZnSe and ZnS shell thickness in the 278~363 K temperature range. Temperature-dependent photoluminescence of the InZnP-based quantum dot samples reveal red-shifting of the photoluminescence peaks, thermal quenching of photoluminescence, and broadening of bandwidth with increasing temperature. The degree of band-gap shifting and line broadening as a function of temperature is affected little by shell composition and thickness. However, the thermal quenching of the photoluminescence is strongly dependent on the shell components. The irreversible photoluminescence quenching behavior is dominant for thin-shell-deposited InZnP quantum dots, whereas thick-shelled InZnP quantum dots exhibit superior thermal stability of the photoluminescence intensity.
Defect Prediction Using Machine Learning Algorithm in Semiconductor Test Process
Jang, Suyeol;Jo, Mansik;Cho, Seulki;Moon, Byungmoo 450
Because of the rapidly changing environment and high uncertainties, the semiconductor industry needs appropriate forecasting technology. In particular, both the cost and the time of the test process are increasing because the process is becoming more complicated and there are more factors to consider. In this paper, we propose a model that predicts a final "good" or "bad" result on the basis of preconditioning test data generated in the semiconductor test process. The proposed model addresses the classification and regression problems that are often dealt with in the semiconductor process and constructs a reliable predictor. We implemented prediction models using various machine learning algorithms and compared their performance. Actual data from the semiconductor test process were used for accurate model construction and effective test verification.
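The abstract does not name the specific algorithms or data used; purely as an illustrative sketch of the kind of pass/fail classifier it describes, one could proceed as follows (the features, labels and model choice here are hypothetical assumptions):

```python
# Hypothetical sketch of a "good"/"bad" classifier trained on pre-test data.
# The synthetic measurements, labels and model choice are assumptions; the
# paper's actual algorithms and data are not specified in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))                 # 8 hypothetical measurements
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # synthetic good/bad label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, model.predict(X_te)))
```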
Characterization and Synthesis of BN Fibers According to the Content of BN Nanopowder by Electrospinning Method
Lee, Jong Hyeok;Chun, Myoung Pyo;Hwang, Jin Ah;Jung, Young Geun;Chu, Jae Uk 455
Boron nitride (BN) nanofibers were fabricated using BN nanoparticles (70 nm) by electrospinning. Morphologies such as the diameter and density of the BN nanofibers are strongly influenced by the viscosity and dispersion state of the precursor solution. In this study, the precursor solution was prepared by ball milling BN nanoparticles and polyvinylpyrrolidone (PVP, Mw~1,300,000) in ethanol, which was electrospun and then calcined to produce BN fibers. High-quality BN nanofibers were well fabricated at a BN concentration of 15 wt% with their diameters in the range of 500 nm to 800 nm; the viscosity of the precursor solution was $400mPa{\cdot}S$. The calcination of the as-electrospun BN fibers seemed to be completed by holding them at $350^{\circ}C$ for 2 h considering the TGA data. The morphologies and phases of the BN fibers were investigated by scanning electron microscopy (SEM) and X-ray diffractometry (XRD), respectively; Fourier transform infrared (FT-IR) was also used for structure analysis.
Piezoelectric Energy Harvesting Characteristics of Trapezoidal PZT/Ag Laminate Cantilever Generator
Na, Yong-Hyeon;Lee, Min-Seon;Yun, Ji-Sun;Hong, Youn-Woo;Paik, Jong-Hoo;Cho, Jeong-Ho;Lee, Jung Woo;Jeong, Young-Hun 462
The piezoelectric energy harvesting characteristics of a trapezoidal cantilever generator with lead zirconate titanate (PZT) laminate were investigated with various Ag inner electrodes. The piezoelectric mode of operation was a transverse mode by using a planar electrode pattern. The piezoelectric cantilever generator was fabricated using trapezoidal cofired-PZT/Ag laminates by five specimens of 2, 3, 4, 7, and 13 layers of Ag. As the number of Ag electrodes increased, impedance and output voltage at resonant frequency significantly decreased, and capacitance and output current showed an increasing tendency. A maximum output power density of $7.60mW/cm^3$ was realized for the specimen with seven Ag layers in the optimal condition of acceleration (1.2 g) and resistive load ($600{\Omega}$), which corresponds to a normalized power factor of $5.28mW/g^2{\cdot}cm^3$.
A Study on Dielectric Properties of Polycarbonate Film Due to Variation of Degradation Time
Lee, Sung Ill 469
In this study, the capacitance and FTIR spectra of polycarbonate film degraded for 2, 4, and 8 h in a thermostat at $180^{\circ}C$ were measured. The results of this study are as follows. The capacitance was found to decrease with increasing degradation time and frequency. These findings suggest that the attraction between molecules of the amorphous polycarbonate increased, because it contains the ketone group (-C=O-) and the dioxide chain group (-O-R-O-). FTIR measurement found that the thermal degradation time has a small impact, because no transformation or variation of the material occurs. SEM observation at 1,000x magnification found that a longer thermal degradation time results in thermal decomposition of the specimen's particles.
CrC Interlayer Effect on Tribological Properties of Amorphous Carbon Deposited by UBMS Method
Kim, Phil Jung;Park, Yong Seob 475
We investigated the tribological properties of amorphous carbon (a-C) films deposited with CrC interlayers of various thicknesses as the adhesive layer. The a-C and CrC thin films were deposited using the unbalanced magnetron (UBM) sputtering method with graphite and chromium targets. CrC interlayers were fabricated under the a-C films, and the structural, surface, and tribological properties of a-C films deposited with various CrC interlayer thicknesses were investigated. With a CrC interlayer under the a-C films, the tribological properties of the CrC/a-C films improved; at a CrC interlayer thickness of 30 nm, the films exhibited a maximum hardness of over 27.5 GPa, an elastic modulus of over 242 GPa, a critical load of 31 N, a residual stress of 1.85 GPa, and a smooth surface with roughness below 0.09 nm.
Aluminum Based Oxide/Metal/Oxide Structures for the Application in Transparent Electrodes
Kim, Daekyun;Choi, Dooho 481
In this study, oxide/metal/oxide-type transparent electrodes based on Al and ZnO were investigated. Thin films of these materials were sputter-deposited at room temperature. To evaluate the thickness dependence of the oxide layers, the top and bottom ZnO layers were varied in the ranges of 5~80 nm and 2.5~20 nm, respectively. When the thicknesses of the top and bottom ZnO layers were fixed at 30 nm and 2.5 nm, a maximum transmittance of 66% and a sheet resistance of 16.5 Ω/□ were achieved, significantly improved over the Al layer without top and bottom ZnO layers, which showed a maximum transmittance of 44.3% and a sheet resistance of 44 Ω/□.
Fully Solution-Processed Green Organic Light-Emitting Diodes Using the Optimized Electron Transport Layers
Han, Joo Won;Kim, Yong Hyun 486
Solution-processed organic light-emitting diodes (OLEDs) have the advantages of low cost, fast fabrication, and large-area devices. However, most studies on solution-processed OLEDs have focused mainly on solution-processable hole-transporting or emissive materials. Here, we report fully solution-processed green OLEDs including hole/electron transport layers and emissive layers. The electrical and optical properties of OLEDs based on solution-processed TPBi (2,2',2"-(1,3,5-benzinetriyl)-tris(1-phenyl-1-H-benzimidazole)) as the electron transport layer were investigated with respect to the spin speed and the number of layers. OLEDs with solution-processed TPBi exhibited a power efficiency of 9.4 lm/W. We believe that solution-processed electron transport layers can contribute to the development of efficient, fully solution-processed multilayered OLEDs.
A Study on Variation of Single Color by Applied Voltage in Multi-Electrode Type Electronic Film
Lee, Sang-Il;Hong, Youn-Chan;Kim, Young-Cho 490
A multielectrode electronic paper film capable of expressing single-color images was fabricated by injecting color electronic ink into an electronic paper panel; on the basis of its reflective or transparent properties, it is possible to control the expression of six single-color images and their transmittance. In this study, single-color images were produced by driving the multielectrode electronic paper film, and their color coordinates were measured. The six achievable single colors were yellowish pink (0.444, 0.354), white (0.355, 0.352), black (0.241, 0.241), orange (0.514, 0.360), reddish orange (0.606, 0.338), and reddish purple (0.469, 0.145). The color particles used in this work were black and white, from which the six colors were obtained; more single-color images could be produced by using cyan, magenta, and yellow particles.
Electrical and Mechanical Strength Properties of Epoxy/Micro Silica and Alumina Composites for Power Equipment
Park, Joo-Eon;Park, Jae-Jun 496
In this study, we prepared epoxy composites filled with 40, 45, 50, 55, 60, 65, and 70 wt% of two types of micro silica and three types of micro alumina for use in GIS heavy electric equipment. Regarding electrical properties, the micro silica composites showed better AC breakdown strength than the micro alumina composites. The electrical breakdown strength of the micro silica composites increased with increasing filler content, whereas that of the micro alumina composites decreased. Regarding mechanical properties, the micro silica composites showed higher tensile and flexural strength than the micro alumina composites. In addition, the tensile and flexural strengths of both micro silica and micro alumina composites decreased with increasing filler content. This is probably because O-H groups are present on the surface of micro silica but not on the surface of micro alumina.
Epitaxial Growth of ZnO Nanowires on Sapphire (001) Substrates Using a Hydrothermal Process
Ham, Daseul;Jeong, Byeong Eon;Yang, Myeong Hun;Lee, Jong Kwan;Choi, Young Bin;Kang, Hyon Chol 502
Epitaxial ZnO nanowires (NWs) were synthesized on sapphire (001) substrates using a hydrothermal process. The effects of the pH value of the precursor solution on the structural and optical properties of the resulting NWs were studied. The epitaxial relationship and the domain-matching configuration between the sapphire (001) substrate and the as-grown ZnO NWs were determined using synchrotron X-ray diffraction measurements. The (002) plane of wurtzite ZnO NWs grows in the surface-normal direction, parallel to the sapphire (001) direction. However, three types of in-plane domain-matching configurations were observed: the on-position, the 30°-rotated position, and the ±8.5°-rotated position relative to the on-position, which might be attributed to inheritance of the in-plane domain configuration of the ZnO seed layer.
Structural Stability for Pt Line and Cross-Bar Sub-Micron Patterns
Park, Tae Wan;Park, Woon Ik 510
This study discusses and demonstrates the structural stability of highly ordered Pt patterns formed on a transparent and flexible substrate through nanotransfer printing (nTP). Bending tests of approximately 1,000 cycles were conducted on Pt line patterns with a width of 1 μm formed along the horizontal (x-axis) and vertical (y-axis) directions (15 mm × 15 mm), and adhesion tests were performed with an ultrasonicator for more than ten minutes to analyze the Pt crossbar patterns. The durability of both types of patterns was systematically analyzed using various microscopes. The results show that the Pt line and Pt crossbar patterns obtained through nTP are structurally stable and do not exhibit any cracks, breaks, or damage. These results corroborate that nTP is a promising nanotechnology applicable to flexible electronic devices. Furthermore, the multiple patterns obtained through nTP can improve the working performance of flexible devices by providing excellent structural stability.
Detection Algorithm and Extract of Deviation Parameters for Battery Pack Based on Internal Resistance Aging
Song, Jung-Yong;Huh, Chang-Su 515
A large number of lithium-ion batteries are arranged in series and parallel in battery packs, such as those in electric vehicles or energy storage systems. As battery packs age, their output power and energy density drop because of voltage deviation, constant and non-uniform exposure to abnormal environments, and increased contact resistance between batteries; this reduces application system efficiency. Despite the balancing circuit and logic of the battery management system, the output of the battery pack is concentrated in the most severely aged unit cell and the output is frequently limited by power derating. In this study, we implemented a cell imbalance detection algorithm and selected parameters to detect a sudden decrease in battery pack output. In addition, we propose a method to increase efficiency by applying the measured testing values considering the operating conditions and abnormal conditions of the battery pack.
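The paper does not spell out its detection rule, so the following is only a hedged sketch of one plausible imbalance flag based on internal-resistance deviation from the pack median; the threshold value and function name are assumptions:

```python
# Hypothetical cell-imbalance flag (not the paper's algorithm): flag cells
# whose internal resistance deviates from the pack median by more than a
# chosen relative threshold.
import numpy as np

def flag_imbalanced_cells(r_internal_mohm, rel_threshold=0.15):
    """Return indices of cells deviating from the pack median resistance
    by more than rel_threshold (0.15 = 15%)."""
    r = np.asarray(r_internal_mohm, dtype=float)
    median = np.median(r)
    return np.flatnonzero(np.abs(r - median) / median > rel_threshold)

pack = [1.02, 0.98, 1.01, 1.45, 0.99, 1.03]    # mOhm; cell 3 is aged
print(flag_imbalanced_cells(pack))              # -> [3]
```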
The Detection Characterization of NOX Gas Using the MWCNT/ZnO Composite Film Gas Sensors by Heat Treatment
Kim, Hyun-Soo;Jang, Kyung-Uk 521
Gas sensors require characteristics such as high speed, sensitivity, and selectivity. In this study, we fabricated a NOx gas sensor using a multi-walled carbon nanotube (MWCNT)/zinc oxide (ZnO) composite film. The fabricated MWCNT/ZnO gas sensor was then heat-treated at 450 °C to increase its detection sensitivity for NOx gas. We compared the detection characteristics of a ZnO film gas sensor, an MWCNT film gas sensor, and the MWCNT/ZnO composite film gas sensor with and without the heat treatment. The fabricated gas sensors were used to detect NOx gas at different concentrations. The sensors absorbed NOx gas molecules and exhibited increased sensitivity, which rose with increasing gas concentration. Additionally, by varying the temperature inside the chamber, we obtained the sensitivity of the MWCNT/ZnO composite film gas sensor for detecting NOx gas. Compared with the ZnO film sensor, the MWCNT film gas sensor is excellent for detecting NOx gas. From the experimental results, we confirmed the enhanced sensing mechanism: electronic interaction between the MWCNT and ZnO films contributes to the improved sensor performance.
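The abstract does not define its sensitivity metric; a common chemiresistive convention, shown below purely as an assumed illustration, is the relative resistance change between air and the target gas:

```python
# Assumed definition (not stated in the abstract): relative resistance change.
def sensitivity(r_air_ohm, r_gas_ohm):
    return abs(r_gas_ohm - r_air_ohm) / r_air_ohm

print(sensitivity(1.0e5, 1.6e5))   # -> 0.6, i.e. a 60% resistance change
```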
Quality Implementation of Technical and Vocational Education and Entrepreneurial Skill Acquisition for Technology And Economic Development In Nigeria
Ikutal Ajigo, Edet David Asuquo, Abeng Christiana Oliver
Department of Vocational Education University of Calabar, Nigeria
Article Date Published: 2 January 2018 | Page No.: EL-2018-01-11
DOI https://doi.org/10.18535/ijsrm/v6i1.el01
This study examined quality implementation of Technical and Vocational Education (TVE) and entrepreneurial skill acquisition for technology and economic development in Nigeria. It looked at the standard of admissions policy, the quality of personnel and the standard of facilities. Three research hypotheses guided the study. A survey research design was adopted. The population comprised heads of department, unit heads, senior non-academic staff, and 300- and 200-level students of the 2016/2017 academic session from the TVE departments of two institutions. A sample of 135 respondents out of a population of 562 was drawn from the University of Calabar (UNICAL) and Cross River University of Technology (CRUTECH); of this number, 125 copies were returned, a 92.59% return rate. The census technique was used to select the staff, while purposive sampling was adopted in choosing 300- and 200-level students of the 2016/2017 academic session, and systematic sampling was adopted in selecting the 300- and 200-level students who actually responded to the instrument. A validated researcher-made four-point rating-scale questionnaire captioned 'Quality Implementation of Technical and Vocational Education and Entrepreneurial Skill Acquisition for Technology and Economic Development Questionnaire' (QITVEESATEDQ) was used for data collection. A reliability estimate of 0.71 was achieved for the instrument using the Cronbach reliability coefficient after a pilot test. Data collected were analyzed using the linear regression statistical tool, and all hypotheses were tested at the .05 level of significance. Findings revealed that admissions policy, quality of personnel and standard of facilities in TVE departments significantly influence the acquisition of entrepreneurial skill for technology and economic development. It was therefore recommended, among others, that merit alone should be the basis for granting admission into TVE programs if they are to lead to the acquisition of adequate entrepreneurial skill for technology and economic development.
Nigeria has over the years prided itself as the largest black nation in the world, the most populous nation in Africa, and the continent's giant. It is equally reputed to have enviable fundamentals, including the 6th largest gas reserve and the 8th largest crude oil reserve in the world. The nation is endowed in commercial quantities with about 37 types of solid minerals [18]. Yet, its technological and economic development has been feeble by contrast.
Compared with emerging nations, especially Asian countries such as Indonesia, India, China, Malaysia and Thailand that trailed Nigeria in Gross Domestic Product (GDP) per capita as of 1970 [18], Nigeria has fallen far behind. These countries have advanced significantly: they are not only miles ahead of Nigeria but have emerged as major players in global technological and economic development. They have harnessed the power of new technologies and nurtured droves of knowledge workers willing and able to propel productivity and innovation. The literature shows that they have over the years made conscious efforts to invest in, improve and enhance the quality implementation of TVE programs in their respective countries, which has yielded myriad benefits including the acquisition of entrepreneurial skill, whereas Nigeria played herself into the doldrums [9].
Globally, the two classical criteria most frequently adopted in assessing the wellbeing of countries are the level of advancement in technology [13] and the pace of economic development [23], [3]. Essentially, these converging indicators remain the touchstones on which the comparative ranking of countries hinges. According to [29], the countries ranked in the first ten positions as technological giants, with evident solutions to the world's nagging challenges, are Japan, the United States, Finland, South Korea, Germany, China, France, the United Kingdom, Canada and Russia. The strides that led them to this enviable height are in areas such as, but not limited to, the nuclear industry, aerospace, space technology, defense technology, IT, advanced health innovations, exports, electrical, electronic and electromechanical fields, heavy-duty equipment building, and information and communication.
In appraising the state of economic development of countries for the year 2016, the International Monetary Fund (IMF), in its World Economic Outlook Database, used the real Gross Domestic Product (GDP) growth rate [32]. The report revealed that Iraq, the Turks and Caicos Islands and Nauru occupy the first, second and third positions with real GDP growth rates of 10.09 percent, 9.40 percent and 8.50 percent respectively. Nigeria, with all her abundant resources, occupies the 193rd position with a GDP growth rate of -1.50 percent out of 218 countries surveyed, lagging behind African countries like Ethiopia, ranked fourth with a real GDP growth rate of 7.96 percent; Cote d'Ivoire, ranked seventh with 7.52 percent; and Senegal and Tanzania, tied in fourteenth place with 6.60 percent each. Technology, the application of scientific knowledge for practical purposes, especially in industry, involves the deployment of the techniques, skills, methods and processes necessary for the production of goods and services, or for the accomplishment of objectives.
Economic development, on the other hand, is a construct long used by politicians, economists and others. It is a phenomenon that seeks to improve economic wellbeing, quality of life and general living standards. It is typically associated with improvement in varied aspects such as literacy level, life expectancy and the creation of wealth. But how did these countries find their niche among the 'who's who' of the world? Quality implementation of technical and vocational education proved to be the propellant, and this, when fittingly entrenched, can result in entrepreneurial skill acquisition, which is the antidote to rising youth unemployment, a significant cankerworm plaguing economies and societies [31]. In the words of [19], economic prosperity requires the possession of entrepreneurial skill to function optimally. The competencies of individuals in entrepreneurial skill acquisition, as they relate to TVE, are thus designed to lead the beneficiaries to self-employment, economic self-sufficiency and increased employment generation, all of which constitute contributory factors to the technology and economic development of a nation.
Entrepreneurial skill was viewed by [15] as the ability to create something new with value by devoting the necessary time and effort; assuming the accompanying financial, psychic and social risks; and receiving the resulting rewards of money, personal satisfaction and independence. [25] termed it the ability of an individual to exploit an idea and create an enterprise (big or small), not only for personal gain but also for social and developmental gain. This can chiefly be attained through the acquisition and application of entrepreneurial skill. The process of acquiring this much-desired skill, according to [28], proceeds in four stages, namely:
To objectively analyze and identify the current and foreseeable skill needs in terms of management, administrative and technical skills. This calls for a critical assessment of the nation's current and future position on the ladder of technology and economic development relative to other well-to-do nations.
To identify the entrepreneurs' own personal goals and objectives (in this case, those of TVE graduates) and accurately analyze and evaluate their own skills and resources. This helps in examining the extent to which the quality implementation of TVE has been achieved and, if unsatisfactory, in defining what else needs to be done to bring about access to affordable, quality TVE.
To provide a realistic personal (regional or national) development plan. For any advancement toward technology and economic development through TVE, the proposed plans must be realistic, timely, measurable and attainable.
To monitor the ongoing performance of the entrepreneur once the pursuit has been launched. This requires appropriate periodic review of activities, consolidating areas of progress while initiating improvement in areas where progress seems slow.
As a concept, TVE is described by the Federal Republic of Nigeria [12] as a comprehensive term referring to those aspects of the educational process involving, in addition to general education, the study of technologies and related sciences and the acquisition of practical skills, attitudes, understanding and knowledge relating to occupations in various sectors of economic and social life. The notion of 'quality' has over time become one of the most sought-after attributes in almost every human endeavor, education included. This perhaps underscores why [27] argued that quality in education may be viewed on the basis of how good and efficient the teachers are, how adequate and accessible the facilities and materials required for effective teaching and learning are, and how prepared the graduates are to meet the challenges of life and to address social needs. [7] asserted that quality implementation of technical and vocational education is increasingly recognized as the bedrock of every development and an indispensable process for achieving national goals. These scholars further posited that continuous enhancement of the quality implementation of technical and vocational education remains the prerequisite for any nation that yearns to harvest the enormous benefits of this all-important aspect of education. At present, however, TVE is perceived to be haphazardly implemented, a situation that has made attaining decent work an acute challenge, contributed to the persistence of poverty in many parts of the world including Nigeria, left graduates perceived as ill-prepared for the world of work, and subjected many to vulnerable employment [31]. Quality implementation of TVE begins with the formulation of, and adherence to, an admissions policy into the program at all levels.
The Joint Admissions and Matriculation Board (JAMB) is the parastatal of the Federal Ministry of Education empowered by law to oversee matters of admission into tertiary institutions [22]. It does this by administering qualifying examinations to candidates [24]. The granting of admissions by this body is based on four cardinal factors. The first is merit, which takes 45 percent of admission slots. Educationally disadvantaged states form the second factor, which takes 20 percent. The third factor is the catchment area, allotted 25 percent; the fourth, discretionary, factor takes 10 percent [21]. It is worth stressing that when this policy was conceived there were 25 federal, 13 state and 3 private universities in Nigeria, as against the present 40 federal, 44 state and 68 privately owned universities [11].
[26] reported that, with the quota system and catchment area policies, universities are under obligation to admit students not entirely on merit but according to state quotas stipulated by the government. This amounts to a sheer compromise on quality, in that merit is downplayed in preference for other, sentimental considerations. [1] stated that the 2014/2015 admission lists of a state university revealed that while some local government areas had candidates scoring 200 points and above, others had only a handful of candidates who managed to score up to 180 points, which was the cut-off mark. Candidates who scored 200 and above but came from so-called educationally advantaged local government areas whose quotas were already filled were denied admission, whereas candidates who scored 180 points from local government areas with fewer candidates were admitted. Since inputs determine outputs, graduates from such compromised admission processes can hardly advance Nigeria's quest for technology and economic development.
Closely linked to the formulation of and adherence to an admissions policy is the issue of quality of personnel. Two-thirds of the workforce steering the flagships of nations labeled 'economic and technological heavyweights' are employees who garnered the greater part of their occupational skills and knowledge through the support of quality teachers and instructors in the domain of technical and vocational education [10], [30]. A study by [16] on ensuring quality assurance in vocational education showed that the quality of teachers and trainers has a very serious impact on the assessment of quality in universities. The author stressed that quality cannot be guaranteed when the quality of personnel is inadequate to meet the desired expectations. This speaks to TVE personnel qualifications, numbers, experience, competences, capacities and their acquisition of the skills needed to impart knowledge to their learners. In truth, no nation can rise above the quality of its teachers [12], and since knowledge is becoming truly global, accessible and democratic, the need for quality TVE personnel at the present stage of Nigeria's development is unmistakably keen, especially as the nation aspires to become a technological powerhouse.
To point all stakeholders in the needed direction, [21] prescribed the minimum number and quality of personnel required in TVE departments. This covers academic and non-academic staff, with a student-teacher ratio of 1:30. For academic staff, the categories stem from the Graduate Assistant cadre to the professorial grade, with rank mixes and ratios of 20 percent in the professorial grade, 35 percent in the Senior Lecturer grade and 45 percent in the Lecturer I grade and below. For non-academic staff, a department running TVE and desirous of achieving and maintaining quality is expected to have at least one secretary (who must be computer literate), one clerical officer, two office attendants/cleaners, two typists, one laboratory attendant and one technician. The consequences of not meeting this minimum benchmark abound: [2] decried that the continuous shortage of TVE experts is the bane of Nigeria's underdevelopment, and [14] reiterated that serious shortfalls exist in the number of professionally qualified TVE teachers needed to implement TVE programs in schools.
Unfolding events lay bare the reality that the narrow and static paradigms of growth that sadly define Nigeria, amid her natural resource endowments, are directly associated with teaching TVE programs using crude, obsolete, malfunctioning or dilapidated facilities, and at times no facilities at all. The era when natural resources alone made trade saleable is long gone, ushering in an era in which such natural giftedness is converted into value through quality facilities. According to Prosser, cited in [8], effective training can only be given where the training jobs are carried out in the same way, and with the same machines, as in the occupation itself. This implies that the machines, facilities, tools and equipment used in teaching TVE programs should not differ significantly from the ones the learners will meet and use in employment; if they do, the transfer of knowledge to the world of work is directly inhibited.
In a study conducted by [17], it was concluded that inadequate training facilities are the major constraint to entrepreneurial skills development, which, if properly addressed, could help steer the nation toward technological greatness. [7], while indirectly acknowledging that a nation's development agenda may be homegrown, advocated the provision of adequate facilities, equipment, instructional materials and consumables, because these have the potential to enlist Nigeria among the comity of nations with economic and technological prowess. The facilities required for effective teaching of TVE programs, according to [21], include adequate classrooms, computer laboratories, internet access and a resource room to ensure proper execution and implementation of programs. These facilities should not only exist but be of the best quality, meaning that, regardless of age, they should retain the form and functionality they had when procured [6].
Evidence of the absence of these facilities abounds. [5] reports that during the 2006/2007 accreditation and admission exercises, the NUC discovered grossly inadequate human (personnel) and non-human resources as well as dilapidated, decaying and almost non-existent infrastructure in most of Nigeria's universities. According to the author, the accreditation exercise involved the assessment of 1,343 undergraduate degree programs (TVE programs included) in 48 universities, comprising 25 federal, 20 state and three private universities, among them five colleges of education. The memorandum released by the NUC further disclosed that 42.5 percent of the university programs earned full accreditation status, 40.9 percent earned interim accreditation, while 7.6 percent were outrightly denied accreditation for failing to meet the prescribed minimum academic standard. The factors considered in deciding the type of accreditation were quality of teaching, facilities, teacher-to-student ratio, level of research contributions to international journals and number of foreign students, among others [4].
Apparently, for Nigeria to speedily achieve Vision 20:2020 and the Sustainable Development Goals (SDGs), to which she is a signatory, before the 2030 deadline, the nation must move past the stop-start development patterns that hallmark the present mono, oil-based economy and launch itself into creating a balanced and healthy base for a 21st-century society anchored on quality implementation of TVE. In pursuit of this benefit-laden course, can improvement in admissions policy, enhancement of the quality of personnel and the use of standard facilities in teaching TVE programs bring about entrepreneurial skill acquisition and restore Nigeria to the path of technology and economic development? This forms the crux of the study.
The main purpose of the study was to examine the influence of quality implementation of TVE on entrepreneurial skill acquisition for technology and economic development in Nigeria. Specifically, the study sought to:
Examine the influence of standard admission policy in TVE programs on entrepreneurial skill acquisition for technology and economic development.
Examine the influence of quality personnel in TVE departments on entrepreneurial skill acquisition for technology and economic development.
Examine the influence of standard facilities in TVE departments on entrepreneurial skill acquisition for technology and economic development.
Research hypotheses
The following hypotheses guided the study:
Implementation of admission policy into TVE programs has no significant influence on entrepreneurial skill acquisition for technology and economic development.
The quality of personnel in TVE departments has no significant influence on entrepreneurial skill acquisition for technology and economic development.
The standard of facilities in TVE departments has no significant influence on entrepreneurial skill acquisition for technology and economic development.
The study adopted a survey research design involving the use of a questionnaire to gather information on the quality implementation of TVE in Nigerian universities and how it influences entrepreneurial skill acquisition for technology and economic development. The focus was on admissions policy, quality of personnel and standard of facilities. Three research hypotheses guided the study. The study area was Cross River State, one of the 36 states of Nigeria. 135 respondents from the University of Calabar (UNICAL) and Cross River University of Technology (CRUTECH) were sampled from a population of 562. The sampling techniques adopted were census sampling for the heads of department, unit heads and senior non-academic staff; purposive sampling for 300- and 200-level students of the 2016/2017 academic session, because they were considered relatively exposed to the items contained in the questionnaire; and systematic random sampling for selecting the 300- and 200-level students who actually responded to the questionnaire. This is shown in Table 1.
The structured questionnaire was validated by five experts: three in Vocational Education (Business Education, Agricultural Education and Home Economics) and two in Measurement and Evaluation. A reliability estimate of 0.71 was achieved for the instrument using the Cronbach reliability coefficient after a pilot test. The instrument was administered personally by the researchers and retrieved after completion, after relevant information about the problem being researched had been explained to the respondents. A coding key was designed to code all responses. Of the one hundred and thirty-five questionnaires distributed, one hundred and twenty-five copies (92.59%) were duly returned. Linear regression was used to test all the hypotheses at the .05 level of significance.
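For reference, here is a minimal sketch of the Cronbach alpha computation behind the 0.71 reliability estimate; the formula is standard, while the demo data are synthetic placeholders for the four-point-scale responses:

```python
# Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance(total score)).
import numpy as np

def cronbach_alpha(item_scores):
    """item_scores: rows = respondents, columns = questionnaire items."""
    X = np.asarray(item_scores, dtype=float)
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(1)
demo = rng.integers(1, 5, size=(30, 10))   # placeholder 4-point-scale answers
print(round(cronbach_alpha(demo), 2))      # the study reports 0.71 on its real data
```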
Table 1. Population and sample distribution of the study

Subjects                    Population                 Sample
                            UNICAL  CRUTECH  Total     UNICAL  CRUTECH  Total
Head of departments         1       1        2         1       1        2
Units' heads                3       3        6         3       3        6
Senior non-academic staff   10      12       22        10      12       22
300 level students          127     142      269       25      28       53
200 level students          132     131      263       26      26       52
Total                       273     289      562       65      70       135
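As an illustration of the systematic sampling step described above, here is a minimal sketch; the fixed-interval rule with a random start is the standard procedure, but the exact implementation used by the authors is not reported:

```python
# Hedged sketch of systematic random sampling: a random start, then every
# k-th student from the class list.
import random

def systematic_sample(population, n):
    k = len(population) // n          # sampling interval
    start = random.randrange(k)       # random start within the first interval
    return population[start::k][:n]

students = list(range(269))                    # e.g. the 269 300-level students
print(len(systematic_sample(students, 53)))    # -> 53, as in Table 1
```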
Hypothesis one
Test for significance was done using linear regression at the .05 level of significance. A summary of the result is presented in Table 2.
Table 2. Simple Linear Regression Analysis of implementation of admission policy into TVE programs on entrepreneurial skill acquisition for technology and economic development.
Model  R      R Square  Adjusted R Square  Std. Error of the Estimate
1      .640a  .521      .520               1.012
Source of variation  SS         df   MS        F-ratio  p-val
Regression           1242.380   1    1242.380  203.80*  .028
Residual             810.721    133  6.096
Total                2,053.101  134
*p<.05; df 1, 133; critical F = 3.91
From Table 2, the correlation between implementation of admission policy into TVE programs and entrepreneurial skill acquisition for technology and economic development was .640. This means that, as quality implementation of admission policy into TVE programs improves, so would entrepreneurial skill acquisition for technology and economic development improve. From the correlation coefficient, an R Square of .521 was obtained; this means that about 52.1% of the total variation in entrepreneurial skill acquisition for technology and economic development is accounted for by implementation of admission policy into TVE programs. The computed F-ratio of 203.80 is greater than the critical F-value of 3.91 with 1 and 133 degrees of freedom. Also, the table shows a p-value of .028, less than the .05 level of significance. Consequently, the null hypothesis was rejected; this means that implementation of admission policy into TVE programs has a significant influence on entrepreneurial skill acquisition for technology and economic development.
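For readers who want to reproduce this kind of test, a minimal sketch with statsmodels follows; the predictor and outcome arrays are synthetic stand-ins for the questionnaire scores, so only the mechanics (R Square, F-ratio, p-value at df 1 and 133) mirror Table 2:

```python
# Hedged sketch of the simple-linear-regression F-test (synthetic data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
x = rng.normal(size=135)              # placeholder admission-policy scores
y = 0.8 * x + rng.normal(size=135)    # placeholder skill-acquisition scores

model = sm.OLS(y, sm.add_constant(x)).fit()
print(model.rsquared)                 # R Square
print(model.fvalue, model.f_pvalue)   # F-ratio and p-value with df (1, 133)
```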
Hypothesis two
Table 3. Simple Linear Regression Analysis of the quality of personnel in TVE departments on entrepreneurial skill acquisition for technology and economic development.
Model R R Square Adjusted R Square Std. Error of the Estimate
1 .582a .498 .497 1.578
Source of variation SS df MS F-ratio p-val
Regression 1197.802 1 1197.802 178.38* .035
Residual 893.055 133 6.715
Total 2,090.857 134
*p<.05; df 1, 133; critical F = 3.91
From Table 3, the correlation between the quality of personnel in TVE departments and entrepreneurial skill acquisition for technology and economic development was .582. This means that, as the quality of personnel in TVE departments improves, so would entrepreneurial skill acquisition for technology and economic development improve. From the correlation coefficient, an R Square of .498 was obtained; this means that about 49.8% of the total variation in entrepreneurial skill acquisition for technology and economic development is accounted for by the quality of personnel in TVE departments. The computed F-ratio of 178.38 is greater than the critical F-value of 3.91 with 1 and 133 degrees of freedom. Also, the table shows a p-value of .035, less than the .05 level of significance. Consequently, the null hypothesis was rejected; this means that the quality of personnel in TVE departments has a significant influence on entrepreneurial skill acquisition for technology and economic development.
Hypothesis three
To test this hypothesis, linear regression was used at the .05 level of significance. A summary of the result is presented in Table 4.
Table 4. Simple Linear Regression Analysis of the standard of facilities in TVE departments on entrepreneurial skill acquisition for technology and economic development.
From Table 4, the correlation between the standard of facilities in TVE departments and entrepreneurial skill acquisition for technology and economic development was .810. This means that, as the standard of facilities in TVE departments improves, so does entrepreneurial skill acquisition for technology and economic development. From the correlation coefficient, an R Square of .785 was obtained; this means that about 78.5% of the total variation in entrepreneurial skill acquisition for technology and economic development is accounted for by the standard of facilities in TVE departments. The computed F-ratio of 208.75 is greater than the critical F-value of 3.91 with 1 and 133 degrees of freedom. Also, the table shows a p-value of .012, less than the .05 level of significance. Consequently, the null hypothesis was rejected; this means that the standard of facilities in TVE departments has a significant influence on entrepreneurial skill acquisition for technology and economic development.
Discussion of findings
The result of hypothesis one revealed that the implementation of admission policy into TVE programs significantly influences entrepreneurial skill acquisition for technology and economic development. Of the four factors on which admission into TVE programs is based, the merit system is viewed as paramount, with the greatest potential for entrepreneurial skill acquisition for technology and economic development. This finding corroborates [26], who found that when bases other than merit, such as the quota system and catchment area policies, are used, universities are obliged to admit students on grounds that clearly compromise quality and promote mediocrity. This is worrisome in view of the fact that the quality of inputs determines the quality of outputs.
The result for hypothesis two revealed that the quality of personnel in TVE departments significantly influences entrepreneurial skill acquisition for technology and economic development. This means that the [21] policy on the numbers and quality, as well as the rank mixes and ratios, of academic and non-academic staff in university TVE departments could, if implemented to the letter, launch the nation into technological and economic development stardom. This finding is in consonance with [10], [31], who opined that two-thirds of the workforce steering the flagships of nations labeled economic and technological heavyweights are employees who garnered the greater part of their entrepreneurial skill and knowledge through the support of quality teachers and instructors in the domain of TVE. Accepting the dictum that no nation can rise above the quality of her teachers [12], and following it up with galvanized efforts to recruit the right caliber of personnel to run TVE programs in universities, remains the lifeline.
The result for hypothesis three revealed that the standard of facilities in TVE departments significantly influences entrepreneurial skill acquisition for technology and economic development. This implies that facilities in TVE departments in Nigerian universities are in a state of decay, moribund or obsolete, and to a large extent non-existent. This makes graduates of the program, who should be the drivers of the technological and economic development agenda, theory majors and practice minors, which contravenes the creed of TVE. The finding is in line with [5], who reported on the outcome of the 2006/2007 accreditation and admission exercises involving 25 federal, 20 state and 3 private universities, with 5 colleges of education included, which revealed grossly inadequate human (personnel) and non-human resources.
The crawling pace of Nigeria's technology and its weak economic development are not accidental; they are the direct outcome of many years of neglect of quality TVE implementation, as revealed in the study. The formulation of lofty admissions policies, the recruitment of adequate numbers of qualified personnel and investment in facilities in university TVE departments exist merely on paper. The panacea for the nation's renaissance in technology and economic development is for governments to put their money and actions where their mouth is.
Based on the findings of the study, the following recommendations are made:
Only merit should be the basis for granting admissions into TVE programs in view of the fact that there is an equitable balance now in the establishment of universities across the nation, making the continuous adherence to other factors irrelevant.
A policy should be enacted by the government for an immediate recruitment of qualified personnel into TVE departments in universities.
A pool named the 'TVE facilities fund' should be established in all universities, into which contributions and donations from well-meaning individuals and corporate bodies are collected exclusively for the rehabilitation, development and procurement of the facilities needed in TVE departments.
References
Adeyemi, Kola. Higher Education. 2001: 307-332.
Constraints to Effective Implementation of Elements of Special Education Curriculum in Teacher Preparation Programme in Nigeria: A Case Study of Colleges of Education. IOSR Journal of Research & Method in Education (IOSR-JRME). 2013: 12-16.
A Short History of National Wellbeing and its Measurement. The Wellbeing of Nations. 2014: 35-82.
Støren, Liv Anne; Wiers-Jenssen, Jannecke; Arnesen, Clara Åse. Employability and Mobility of Norwegian Graduates Post Bologna. In: Employability and Mobility of Bachelor Graduates in Europe. 2011: 185-208.
Peruvian Education at a Crossroads. 2001.
Facilities Provision and Maintenance: Necessity for Effective Teaching and Learning in Technical Vocational Education. IOSR Journal of Research & Method in Education (IOSR-JRME). 2013: 28-32.
Ayonmike, Chinyere Shirley; Okwelle, P. Chijioke; Okeke, Benjamin Chukwumaijem. Towards Quality Technical Vocational Education and Training (TVET) Programmes in Nigeria: Challenges and Improvement Strategies. Journal of Education and Learning. 2015 Feb.
Abayomi, A. O.; Moses, M. O. Physical Activity and Health Risk Behaviours among Colleges of Education Students. Greener Journal of Educational Research. 2012 Jan: 020-027.
Olubiyi, Ebenezer Adesoji. Trade, Remittances and Economic Growth in Nigeria: Any Causal Relationship? African Development Review. 2014 Jun: 274-285.
Fourth Cedefop research report stresses importance of using applied research to underpin training policy. Education + Training. 2010 Jun.
Board of Directors: February 23, 2017: Approved Minutes.
Gerybadze, Alexander. R&D, Innovation and Growth: Performance of the World's Leading Technology Corporations. In: Innovation and International Corporate Growth. 2010: 11-30.
Anaele, Edmond O.; Okoro, Chinedu Eme. Innovations in Building Technology and Curriculum Revision Needs of Colleges of Education (Technical) in Nigeria. International Journal of Scientific Research. 2012 Jun: 101-105.
Grichnik, Dietmar; Hisrich, Robert D. International Entrepreneurship: The Case of the Unified Germany. Jahrbuch Entrepreneurship 2004/05: 77-100.
Idialu, Ethel E. Ensuring Quality Assurance in Vocational Education. Contemporary Issues in Education Research (CIER). 2013 Sep.
Rebranding Technical Vocational Education and Training; Youth Education for Vocational Careers, Kenya. International Journal of Science and Research (IJSR). 2016 Apr: 1861-1864.
Olayungbo, D. O. Insurance and Economic Growth Nexus in Nigeria: Asymmetric Non-Linear Relationship under Heterogeneous Agents. African Development Review. 2015 Sep: 248-261.
Adeoti, Ezekiel Oladele. The Role of the National Universities Commission (NUC) in the Development of University Education in Nigeria: Reflections and Projections. Advances in Social Sciences Research Journal. 2015 Mar.
Sharndama, Emmanuel C.; Ijem, Blessing. The Teaching of English for Academic Purposes in Nigerian Universities: An Appraisal of the Communication in English Programme in the National Universities Commission (NUC) New Benchmark Minimum Academic Standard 2014 Draft. Researchers World: Journal of Arts, Science and Commerce. 2017 Apr: 104-112.
Odigwe, Francisca Nonyelum. Pressure Groups and Governance of Secondary Education in Functional Democracy in Cross River State, Nigeria. International Journal of Scientific Research and Management. 2016 Aug.
Allin, Paul; Hand, David J. From a System of National Accounts to a Process of National Wellbeing Accounting. International Statistical Review. 2017 Mar: 355-370.
Salihu, Muftahu Jibirin; Jamil, Hazri; Ismail, Aziah. Exploring Institutional Policies towards Achieving Macro Policy of Equal University Admission: A Case of a Selected University in Northwest Nigeria. International Research in Higher Education. 2015 Nov.
Satta, Tadeo Andrew. An Assessment of the Business Environment for Micro and Small-Scale Enterprises in Tanzania. Journal of Small Business & Entrepreneurship. 2004 Feb: 205-220.
Omeje, Joachim Chinweike; Egwa, Ene Inyamu; Adikwu, Victoria Ogwa. Impact of Quota System and Catchment Area Policy on the University Admissions in North Central Nigeria. SAGE Open. 2016 Apr.
Alani, R. A.; Ilusanya, Gboyega. Accreditation Outcomes, Quality of and Access to University Education in Nigeria. Quality Assurance in Education. 2008 Jul: 301-312.
Las Vegas Sands Corp. v. Unknown Registrants of www.wn0000.com et al. Gaming Law Review and Economics. 2016 Dec: 859-868.
The top 15 destinations are high-income countries, and the top origin countries are SG countries. 2016 Dec.
Nielsen, Søren P. The Professional Situation and Training of Vocational Teachers in Denmark. UNESCO-UNEVOC Book Series: Technical and Vocational Education and Training: Issues, Concerns and Prospects: 77-96.
Boynak, Ferdi; Meral, Mustafa. TVET Teachers and Lecturers in Turkey. UNESCO-UNEVOC Book Series: Technical and Vocational Education and Training: Issues, Concerns and Prospects: 229-250.
6.5. Massive growth of Wikipedia.
Ajigo, I., Asuquo, E. D., & Oliver, A. C. (2018). Quality Implementation of Technical and Vocational Education and Entrepreneurial Skill Acquisition for Technology And Economic Development In Nigeria. International Journal of Scientific Research and Management, 6(01), EL-2018. https://doi.org/10.18535/ijsrm/v6i1.el01
Quarterly Reviews of Biophysics
Dynamics of proteins in solution
List of symbols
Importance of protein dynamics in the biological environment
Scope and outline of this review
Protein dynamics on hierarchical time- and length-scales
Diffusion of the entire protein
Fluctuations of protein domains
Localized and confined diffusive relaxations
Vibrational dynamics
Overview: techniques addressing protein dynamics
Scattering techniques
Neutron spectroscopy
Photon correlation spectroscopy: dynamic light scattering (DLS) and X-ray photon correlation spectroscopy (XPCS)
Time resolved X-ray solution scattering (TRXS)
Fluorescence techniques
Fluorescence recovery after photobleaching (FRAP)
Fluorescence correlation spectroscopy (FCS)
Förster resonance energy transfer (FRET)
Nuclear magnetic resonance (NMR) techniques
Dielectric and terahertz spectroscopy
Molecular dynamics simulations
Other techniques on protein dynamics
Overview: complementarity of techniques
Principles of neutron spectroscopy
Quasi-elastic neutron scattering theory
Relevance of QENS for protein dynamics
Experimental techniques
Time-of-flight (TOF) spectroscopy
Neutron backscattering (NBS) spectroscopy
Neutron spin-echo (NSE) spectroscopy
Modeling and analysis
Combination of translational and rotational diffusion
Effects of crowding on diffusion
Large-scale and domain motions
Localized internal dynamics
Confinement geometry of localized motions: elastic incoherent structure factor
Diffusion in an isotropic confinement
Jumps between three sites on a circle
Combinations of models and immobile fraction
Dynamical signature of local motions
Jump-diffusion model
Diffusion in a Gaussian well and the overdamped Brownian oscillator
Switching model between dynamical states
Analysis of mean-squared displacements
Dynamics of hydrated protein powders
General features of the dynamics of proteins in solution as seen by neutron scattering
From powder to solution: influence of solution conditions on protein dynamics
The dynamical transition in solution
Comparison of internal protein dynamics in native, molten and denatured states
Relations of protein dynamics to structure: from globular to intrinsically disordered proteins
Internal dynamics of proteins at high pressure
Adaptation of proteins to ambient and extreme temperatures
Collective internal motions in proteins
Combination of neutron spectroscopy techniques: alcohol dehydrogenase
In vivo neutron spectroscopy
In vitro studies on the effect of macromolecular crowding on protein dynamics
Global diffusion
Internal dynamics
Dynamics of protein clusters, aggregates and glasses
Quarterly Reviews of Biophysics, Volume 52
2019 , e7
Marco Grimaldo (a1) (a2), Felix Roosen-Runge (a1) (a3), Fajun Zhang (a2), Frank Schreiber (a2) and Tilo Seydel (a1)
1Institut Max von Laue - Paul Langevin, 71 avenue des Martyrs, 38042 Grenoble, France
2Institut für Angewandte Physik, Universität Tübingen, Auf der Morgenstelle 10, 72076 Tübingen, Germany
3Division for Physical Chemistry, Lund University, Naturvetarvägen 14, 22100 Lund, Sweden
Copyright: © The Author(s) 2019
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is unaltered and is properly cited. The written permission of Cambridge University Press must be obtained for commercial re-use or in order to create a derivative work.
Published online by Cambridge University Press: 13 June 2019
Fig. 1. Sketch of different types of protein dynamics. Left: The rotation and translation of the entire protein occurs on timescales of nanoseconds to seconds and lengthscales from nanometers to micrometers. Domain fluctuations occur on timescales of several nanoseconds to milliseconds with amplitudes from some Ångströms to about a nanometer. Right: Localized and confined diffusive relaxations occurring on a timescale of picoseconds to nanoseconds and a subnanometer length scale, as well as vibrations occurring on the femto- to pico-second timescale with amplitudes up to a few Ångströms are depicted. The IgG protein (Harris et al., 1997) was rendered using Mathematica (Wolfram Research, Inc.) and the figure was produced using Mathematica (Wolfram Research, Inc.) and Gimp (Spencer Kimball and the GIMP Development Team).
Fig. 2. Sketch of the diffusive MSD W(t) as a function of time. For very short times, W(t) ∼ t². For τB < t < τI, $W(t) \sim D_{\rm s}^{({\rm s})}\,t$, and for t ≫ τI, $W(t) \sim D_l^{({\rm s})}\,t$. τI is the typical interaction time, i.e. the time on which proteins collide.
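Collecting the three regimes of the caption into one expression (the ballistic prefactor is left implicit, since the caption only fixes the power laws):

$$
W(t) \;\sim\;
\begin{cases}
t^{2}, & t \to 0 \\
D_{\mathrm{s}}^{(\mathrm{s})}\, t, & \tau_B < t < \tau_I \\
D_l^{(\mathrm{s})}\, t, & t \gg \tau_I
\end{cases}
$$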
Table 1. Comparison between several techniques in the context of dynamics of proteins in solution
Fig. 3. Accessible length- and time-scales of typical scattering techniques.
Fig. 4. Schematic representation of a scattering event. An incoming neutron with initial wavevector $\mathbf{k}_i$ interacts with an atomic nucleus and is scattered at an angle 2θ. After the event, its wavevector is $\mathbf{k}_f$. The scattering vector $\mathbf{q}$ is defined as the difference between $\mathbf{k}_f$ and $\mathbf{k}_i$. Figure rendered using Mathematica (Wolfram Research, Inc.).
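For elastic scattering, where the incident and final wavevector magnitudes are equal ($|\mathbf{k}_i| = |\mathbf{k}_f| = 2\pi/\lambda$), the modulus of the scattering vector follows directly from the geometry of the sketch:

$$
q = |\mathbf{k}_f - \mathbf{k}_i| = \frac{4\pi}{\lambda}\,\sin\theta .
$$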
Table 2. Coherent (σcoh), incoherent (σinc) and absorption (σa) neutron cross-sections in barns of the elements comprising proteins and common salts in biological environments (Sears, 1992)
Fig. 5. Sketch of the scattering function of elastic, quasi-elastic and inelastic neutron scattering near room temperature, in the absence of so-called detailed-balance effects. Elastic scattering gives a very sharp peak centered at ω = 0. QENS yields a broader peak centered at ω = 0, while the scattering function of inelastic scattering is characterized by peaks centered at ω ≠ 0. Figure rendered using Mathematica (Wolfram Research, Inc.).
Table 3. Neutron spectrometers with characteristics suitable for protein dynamics
Fig. 7. Schematic representation of the backscattering spectrometer IN16B at the ILL. A polychromatic ('pink') neutron beam (dashed line) illuminates the so-called PST (disk in the figure marked by '1'), which reflects the beam toward the backscattering monochromator (far bottom left, 2). This single crystal sends the monochromatic neutrons back toward the PST (1), which lets pass this neutron bunch via an open segment in the disk toward the sample (illustrated by the small cylinder, 3). The scattered neutrons are analyzed by the large crystals mounted on the surface of a sphere with a radius of 2 m, and the sample at its center (right part of the image, 4). The analyzed neutrons are detected by the detector tubes mounted right behind the sample (5). Figure rendered using Mathematica (Wolfram Research, Inc.), adapted from Hennig (2011).
Fig. 8. Schematic of the principle of a backscattering spectrometer. In the example, a neutron with energy E0 + δE is delivered to the sample, where E0 is the energy in backscattering from the analyzer crystals. After scattering by the sample, if the energy transfer equals − δE the neutron is reflected by the analyzers and detected; if the energy transfer differs from − δE the neutron is not reflected and is usually absorbed by absorbing material placed behind the analyzers. The thickness of the sample is chosen such that the probability for a neutron to be scattered once is ~10% and that of being scattered twice is hence ~1%. The distance from the sample to the analyzers is typically 2 m, while the distance from the sample to the detectors amounts to <0.2 m. Figure rendered using Mathematica (Wolfram Research, Inc.).
Fig. 9. Simplified schematic representation of a spin echo spectrometer. A neutron beam with a typical wavelength spread of Δλ/λ ≈ 8% impinges from the left and first passes a polarizer and π/2- (i.e. 90°-)spin-flipper (marked by '1'). In a classical picture, this device 'flips' the neutron spin axis to be perpendicular to their flight axis. The beam subsequently enters a first, homogeneous magnetic field, illustrated by the schematic cylindrical coil (2). The neutron spins precess in this first magnetic field, as indicated by the arrows perpendicular to the optical axis, and illuminate the sample (3) illustrated by the square box. The scattered neutrons pass a π-spin flipper behind the sample (4) to invert their spin and enter a second, equivalent magnetic field indicated by the cylindrical coil on the right side of the figure (5), where they precess again and finally pass a π/2-flipper and polarization analyzer (6) and hit the detector (7). If the neutrons are elastically scattered by the sample, they will have the same polarization in (6) as they have had in (1) due to this symmetric setup. In contrast, any change in their velocity by the scattering in the sample will change their initial polarization. The scattering angle 2Θ is adjusted by rotating the arm (4-7) around the sample (3). Figure rendered using Mathematica (Wolfram Research, Inc.), adapted from Hennig (2011).
Fig. 10. Graphical representation of a normal mode of IgG (Harris et al., 1997) obtained through the anisotropic network model for CG normal mode analysis (Eyal et al., 2015).
Fig. 11. Comparison of the HWHM Γ as a function of q2 for Fickian diffusion and jump-diffusion. For Fickian diffusion Γ = Dq2: a straight line is obtained and the slope gives the diffusion coefficient D. For unrestricted jump-diffusion, the slope at low q gives the jump-diffusion coefficient D1, and the asymptote at high q gives the inverse of the residence time τ0. Figure rendered using Mathematica (Wolfram Research, Inc.).
Table 4. Effective force constants (Eq. (61)) of proteins in solution from different studies
Table 5. Parameters of protein internal dynamics obtained from TOF and NBS studies
Table 5b. Parameters of protein internal dynamics obtained from TOF and NBS studies (continued)
Table 6. Relaxation times and amplitudes of protein internal modes obtained from NSE studies
Fig. 12. Average mean-square displacements $ \langle u^2 \rangle $ of hydrogen atoms in Mb hydrated powder. Figure adapted and reproduced with permission from Doster et al. (1989). Copyright Nature Publishing Group.
Fig. 13. HWHM, Γ, of the internal motion Lorentzian L(Γ, ω), for Mb samples. The lines are guides to the eye. Except for the dry Mb sample, Γ increases with q2, which characterizes the presence of local diffusive motions as soon as the protein is hydrated. In the case of dry Mb, Γ is almost constant, as expected from a reorientational type of motion. The inverse of Γ gives the correlation time of the motions. In solution, the correlation time extrapolated to q = 0 is ~4.4 ps, less than half of that in powders. Figure adapted and reproduced with permission from Pérez et al. (1999). Copyright Elsevier.
Fig. 14. Sketch of the gradual dynamics activation from protein powders to proteins in solution. Numerous studies indicate that, generally, additional dynamics is present in proteins in solution compared with hydrated protein powders, which in turn are characterized by additional types of motions compared with dry protein powders. IgG (Harris et al., 1997) was rendered using PyMol (DeLano, 2002) and the figure was produced with Gimp (Spencer Kimball and the GIMP Development Team).
Fig. 15. (a) EISF of BLA (black squares) and MBLA (empty circles). Although a better fit to the EISF was achieved with a two-sphere model, the one-sphere model (Eq. (40)) was used in Bu et al. (2000) to describe the change in the effective radius of restricted motions. (b) q dependence of the HWHM Γ of the quasi-elastic Lorentzian peak of BLA (black squares) and MBLA (empty circles). Clear differences are visible between the two states of the protein. Figure adapted and reproduced with permission from Bu et al. (2000). Copyright Elsevier.
Fig. 16. HWHM Γ of the Lorentzian accounting for the internal motion of Hb from (a) platypus Hb, (b) chicken Hb, (c) crocodile Hb, as a function of the squared scattering vector q2. The solid lines are fits according to a jump-diffusion model in the range of 0.64 ⩽ q2 ⩽ 3.24 Å−2. The horizontal solid lines indicate the region of constant half-widths. Figure reproduced with permission from Stadler et al. (2014a). Copyright Elsevier.
Fig. 17. (a) Difference between the corrected diffusion coefficients $D_{eff}^{0}(q)$ and the calculated translational or rotational diffusion coefficient. (b) Diffusion form factor of the normal modes 7 and 11 for the protein configuration with and without the cofactor. (c) Top: motional pattern of mode 7; without the cofactor, the exterior domain (catalytic domain) tilts outward and opens the cleft, while the inner domain with connection points between the monomers remains stiff. Bottom: motional pattern of mode 11; with and without the bound cofactor, the monomers within a dimer exhibit torsional motion around the long dimer axis (in the image plane), which is more pronounced with the cofactor. Figure reproduced with permission from Biehl et al. (2008). Copyright American Physical Society.
Fig. 18. Translational self-diffusion coefficients Dt normalized by the dilute limit diffusion coefficient Dt(0) (circles) for two different temperatures (red and purple circles denote 280 and 300 K, respectively) after separation of the rotational contributions. The purple line superimposed on the data is a guide to the eye obtained from a polynomial fit indicating the temperature-independent master-curve. The top and bottom dashed purple lines indicate the upper and lower 96% prediction bounds, respectively. The blue lines denote the colloidal short-time self-diffusion for hard spheres (light blue, solid) and charged spheres (dark blue, dashed). The inset in the top right corner illustrates the flow field (light blue stream line plot) generated by the movement of three spheres (velocities are denoted by blue arrows) and therefore experiencing hydrodynamic forces (pink arrows). Figure reproduced with permission from Roosen-Runge et al. (2011). Copyright National Academy of Sciences of the United States of America.
Fig. 19. Comparison of the normalized long-time self-diffusion coefficient, $D_{\rm s,L}/D_0$, and the normalized short-time self-diffusion coefficient, $D_{\rm s}/D_0$, as a function of volume fraction. Figure reproduced with permission from Liu et al. (2010). Copyright American Chemical Society.
The dynamics of proteins in solution includes a variety of processes, such as backbone and side-chain fluctuations, interdomain motions, as well as global rotational and translational (i.e. center of mass) diffusion. Since protein dynamics is related to protein function and essential transport processes, a detailed mechanistic understanding and monitoring of protein dynamics in solution is highly desirable. The hierarchical character of protein dynamics requires experimental tools addressing a broad range of time- and length scales. We discuss how different techniques contribute to a comprehensive picture of protein dynamics, and focus in particular on results from neutron spectroscopy. We outline the underlying principles and review available instrumentation as well as related analysis frameworks.
List of symbols
〈Δr²〉 – mean-squared displacement (MSD)
〈u²〉 – apparent mean-squared displacement
2θ – scattering angle
b, b_i – scattering length
$b_i^{\rm coh}$, $b_\alpha^{\rm coh}$, $b_\beta^{\rm coh}$ – coherent scattering length
$b_i^{\rm inc}$, $b_\alpha^{\rm inc}$ – incoherent scattering length
β – stretching factor in a stretched exponential
c_s – salt concentration
D, D(t) – diffusion coefficient
${\bf D}$ – diffusion tensor
D_0 – dilute limit diffusion coefficient
D_s – short-time diffusion coefficient
$D^{({\rm s})}$ – self-diffusion coefficient
$D_{\rm s}^{({\rm s})}$ – short-time self-diffusion coefficient
$D_{\rm s}^{({\rm c})}$ – short-time collective diffusion coefficient
$D_{\rm app}^{({\rm c})}$ – apparent collective diffusion coefficient
$D_{\rm t}^{({\rm c})}$ – collective translational diffusion coefficient
D_t – translational diffusion coefficient
D_r – rotational diffusion coefficient
E_kin – kinetic energy
E_0 – analyzer energy
E_f – neutron energy after scattering by the sample
E_i – neutron energy before scattering by the sample
φ – volume fraction
G(r, t) – van Hove correlation function
G_s(r, t) – van Hove self-correlation function
γ, γ(q) – global tumbling relaxation rate
Γ, Γ(q) – internal relaxation rate
H(q) – hydrodynamic function
ħω – energy transfer
I(q, t) – autocorrelation function (intermediate scattering function)
I_r(q, t) – rotational autocorrelation function
I_t(q, t) – translational autocorrelation function
I(q, ω), I_s – scattering intensity
I_a – absorbed intensity
j_l(.) – spherical Bessel function of the first kind and l-th order
${\bf k}$, k – wavevector and its magnitude
${\bf k}_i$, k_i – neutron wavevector before scattering and its magnitude
${\bf k}_f$, k_f – neutron wavevector after scattering and its magnitude
L – length scale
ℒ(.) – Lorentzian function
N – total number of particles
Ω – solid angle
Ω_α – orientation of individual atoms
fraction of atoms immobile on the accessible timescale
q – scattering vector (momentum transfer)
R – radius of atomic confinement
${\bf R}_{i\alpha}$, ${\bf R}_{i\beta}$ – position of particle i of type α or β
R_eff – effective protein radius
R_h – hydrodynamic radius
R_p – protein radius
ρ(r) – radial distribution function
ρ(r, t) – microscopic particle density operator
S(q) – structure factor
S(q, ω) – scattering function
$S^{\alpha\beta}$(q, ω) – coherent scattering function
$S_{\rm inc}^{\alpha}$(q, ω) – incoherent scattering function
σ_s – scattering cross-section
σ_coh – coherent scattering cross-section
σ_inc – incoherent scattering cross-section
dynamical transition temperature
T_d – denaturation temperature
τ – residence time or relaxation time (depending on model)
τ_B – ballistic timescale
τ_D – diffusive timescale
τ_I – interaction timescale
θ – incidence angle
protein volume
W(t) – mean-squared displacement
ξ – correlation length
Y_lm(Ω) – spherical harmonic functions
List of abbreviations aIF6, initiation factor 6 from Methanocaldococcus jannaschii; ADH, alcohol dehydrogenase; AFM, atomic force microscopy; αSN, α-synuclein; BLA, bovine α-lactalbumin; BLG, bovine beta-lactoglobulin; BSA, bovine serum albumin; CD, circular dichroism; CG, coarse grained; CI2, chymotrypsin inhibitor 2; CYP101, cytochrome P450cam; deoxyHb, deoxyhemoglobin; DLS, dynamic light scattering; eIF6, initiation factor 6 from Saccharomyces cerevisiae; EINS, elastic incoherent neutron scattering; EISF, elastic incoherent structure factor; EPR, electron paramagnetic resonance; FCS, fluorescence correlation spectroscopy; FRAP, fluorescence recovery after photobleaching; FRET, Förster resonance energy transfer; GFP, green fluorescent protein; Hb, hemoglobin; HbCO, carbonmonoxyhemoglobin; HDX, hydrogen–deuterium exchange (mass spectrometry); hIgG, human immunoglobulin G; HWHM, half width at half maximum; IDP, intrinsically disordered protein; IF6, initiation factor 6; IgG, immunoglobulin G; IHP, inositol hexaphosphate; IR, infrared; Ip, 'intermediate' pepsin (partially unfolded at pH 8); IRO, intermediate range order; K247R-Tn-CD, troponin core domain, mutant TnT2; Lys, lysozyme; LOV, light, oxygen, voltage; MalDH, malate dehydrogenase; MBLA, molten globule bovine α-lactalbumin; NBS, neutron backscattering; NMR, nuclear magnetic resonance; N-LDL, normolipidemic low-density lipoprotein; Np, native pepsin; NpP, pepstatin-bound native pepsin; NSE, neutron spin-echo; PAN, proteasome-activating nucleotidase; PFG-NMR, pulsed-field gradient nuclear magnetic resonance; PGK, phosphoglycerate kinase; PGKsub, substrate-bound phosphoglycerate kinase; pIgG, pig immunoglobulin G; ProTα, prothymosin α; PST, phase-space transformer; PVP, poly(vinylpyrrolidone); QENS, quasielastic neutron scattering; RBC, red blood cell; Rp, refolded pepsin; rOPN, recombinant osteopontin; SANS, small angle neutron scattering; SAXS, small angle X-ray scattering; snase, staphylococcal nuclease; TG-LDL, triglyceride-rich low-density lipoprotein; TMAO, trimethylamine-N-oxide; TOF, time-of-flight; TRXS, time resolved X-ray solution scattering; wtTn-CD, wild type troponin core domain; XPCS, X-ray photon correlation spectroscopy.
Proteins are considered the machinery of life. They are an exciting subject of study for many branches of modern science and technology, from biology to medicine and pharmacy, but also in colloid science, chemical engineering and nanotechnology.
Obviously, proteins were first studied because of their biological relevance. They take part in a large variety of processes of vital importance for all biological cells, and, depending on their composition, they can serve for instance as enzymes, antibodies or carriers of smaller molecules or ions, as well as for structural purposes (Berg et al., 2002). When defective, proteins can cause serious disorders in the life cycle of a cell (Griffiths et al., 1999). Moreover, deficiencies in protein activity resulting e.g. from misfolding, denaturation, and aggregation have been associated with a variety of different diseases (Benedek, 1997; Bloemendal et al., 2004; Ross and Poirier, 2004; Gunton et al., 2007).
In addition to the obvious importance of the time-averaged structure, which is determined by the amino acid sequence and the folding state and typically leads to few-nanometer-sized objects in the case of globular proteins, the dynamics of proteins is key to fulfilling their function (Frauenfelder, 1998; Zaccai, 2000; Henzler-Wildman et al., 2007; Richter, 2012; Yang et al., 2014; Campbell et al., 2016; Hong et al., 2016). Here, different contributions have to be distinguished, namely internal dynamics as well as center-of-mass translational and rotational diffusion (details further below). A quantitative characterization of protein dynamics is essential for the understanding of living systems at a molecular level, and presumably also of the mechanisms leading to protein malfunction. Moreover, protein internal dynamics allowing structural flexibility can increase the affinity between a drug and its target, and is therefore fundamental to understanding the ways in which drugs exert biological effects (Teague, 2003).
A large fraction of proteins exists in the aqueous intra-cellular or extra-cellular environment. In the current review, we therefore focus particularly on proteins in aqueous solutions. These solutions may also contain additives such as salt ions and/or other macromolecules, both of which can have an important impact on the dynamics of the proteins: salt ions may for instance cause dynamic or static aggregation of the proteins, while other macromolecules induce so-called crowding through the volume they occupy.
Numerous studies have addressed protein diffusion in living cells (Lippincott-Schwartz et al., 2001), in the nucleoplasm (Phair and Misteli, 2000), in the mitochondrial lumen (Partikian et al., 1998), and in the cytoplasm (Wojcieszyn et al., 1981; Swaminathan et al., 1997; Arrio-Dupont et al., 2000; Verkman, 2002; Jasnin et al., 2008a). In the intracellular fluid of a living cell the macromolecular volume fraction amounts to 20–40%, which is roughly equivalent to a concentration of 200–400 mg ml−1 of a typical protein. Therefore, generally, the global protein diffusion in vivo is found to be significantly slower than in dilute protein solutions. In addition to this effect of crowding on the global motion, also the protein internal dynamics, and thus potentially protein function including reaction equilibria, is expected and indeed found to be affected by macromolecular crowding, i.e. by the high concentrations found in physiological environments (see e.g. (Ellis, 2001; Grimaldo et al., 2014)). It is therefore important to study the entire hierarchy of protein dynamics in solution with their range of length and timescales in order to ultimately better understand intracellular processes of life such as biomolecular self-assembly and dynamical function of enzymes.
The current review aims to provide a systematic and organized overview of protein dynamics in aqueous solutions at the molecular level. We will first explain the hierarchy of time and length scales involved, and then briefly illustrate the importance of understanding the impact of the biological environment on protein dynamics. Subsequently, within this introductory section, we will provide an overview of various experimental methods accessing protein dynamics.
In the 'Principles of neutron spectroscopy' section, we will particularly focus on neutron spectroscopic methods. We will explain the principles of quasi-elastic neutron scattering (QENS) and their implementation in different types of neutron spectrometers, including a list of existing instruments worldwide that are frequently used for protein dynamics. We will also review the necessary theoretical and analysis frameworks as well as the fundamentals of diffusion.
In the 'Results' section, we will provide an overview of published results regarding neutron spectroscopy on proteins in solution, and compare them with complementary results from other experimental techniques. This section will also review quantitative findings for observables of protein dynamics and compare them across different proteins.
The review will close with a summary drawing a few conclusions from the knowledge gained so far.
Given the size of the research field, reflected in the several hundred references, we emphasize that we cannot claim completeness, but aim for a balanced account centered around neutron spectroscopy. We apologize for inevitable distortions in the selection and relative weight of the experimental methods covered, as well as for omissions of publications; these are not intentional and are not meant to suggest that certain pieces of work are less relevant. In particular, this review does not comprise associated theoretical work and computer simulations on protein dynamics in depth, since these are beyond its scope. For further information on these aspects, we refer the reader to Okumura et al. (2017); Riest et al. (2018); Das et al. (2018); Riest and Nägele (2015); Liu et al. (2018); Mitsutake and Takano (2018); Zuckerman and Chong (2017); Feig et al. (2017); Perilla et al. (2015); Schöneberg et al. (2014); Karplus and McCammon (2002).
The dynamics of proteins in solution encompasses a hierarchy of dynamical processes on different length and timescales, and is linked to the hierarchical structure of proteins (McCammon, 1984). Proteins are heteropolymers made from a group of 20 amino acids, each of which consists of a backbone segment with an amino and a carboxylic group as well as a residue with further chemical and functional groups. During translation in the cell, the amino acids assemble with their backbone segments into a protein-specific sequence, the so-called primary structure. Parts of the sequence assemble into specific backbone configurations such as α-helix, β-sheet and random coil, the so-called secondary structure. Furthermore, this locally structured protein chain folds into rather compact domains, the tertiary structure, which potentially assemble to the quaternary structure of multi-domain proteins.
In the following, we outline four classes of dynamical processes occurring in proteins, from the largest supramolecular length scale to the smallest atomic length scale of chemical groups, all of which are linked and can contribute to the protein function (Henzler-Wildman et al., 2007). A sketch representing these types of processes is shown in Fig. 1. We emphasize already at this point that different techniques with different experimental resolution address potentially very different dynamical regimes, all of which are relevant for a complete picture of these complex systems at the interface of physics, biology and chemistry (Sakai et al., 2011; Khodadadi and Sokolov, 2017; Narayanan et al., 2017).
On the largest supramolecular scale, global diffusion occurs in two types: (1) gradients and fluctuations in the protein concentration are relaxed by collective diffusion, which depends on the protein–protein interactions and allows one to connect to thermodynamic quantities of protein solutions. (2) Self-diffusion, or synonymously tracer-diffusion, of the entire molecule depends on the surrounding medium with possible obstacles. Theoretically, these two types of diffusion are defined by their respective correlation functions (see Eqs. (15) and (16)). Experimentally, the two types of diffusion are determined by the specific methods that access them separately.
As for all diffusive processes without global confinement, time and length scales are directly related via the diffusion coefficient D: relaxations on a length scale L occur on the timescale τ_D = L²/(4π²D) = 1/(Dq²) with the wavevector q = 2π/L. In a real system with various environmental factors, such as other macromolecules serving as 'obstacles', diffusion becomes a scale-dependent process. The mean-square displacement (MSD)
(1) $$\langle \Delta r^2 \rangle = 6\,D(t)\,t^{\alpha}$$
typically changes from simple diffusive behavior (α = 1) at nanosecond timescales to a crossover regime with apparent anomalous subdiffusion (α < 1) at microsecond timescales, and may recover another simple diffusive regime at much longer timescales (Höfling and Franosch, 2013) (see Fig. 2).
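To make these scales concrete, the following minimal Python sketch (an added illustration, not part of the original analysis; the diffusion coefficient is an assumed, typical value) evaluates the diffusive relaxation time τ_D = 1/(Dq²) on a few length scales:

```python
# Hedged sketch: diffusive relaxation time tau_D = L^2/(4 pi^2 D) = 1/(D q^2),
# with q = 2 pi / L, for illustrative values of L and D.
import numpy as np

def tau_D(L, D):
    """Relaxation time (ns) on length scale L (nm) for diffusivity D (nm^2/ns)."""
    q = 2.0 * np.pi / L               # wavevector in nm^-1
    return 1.0 / (D * q**2)

D = 0.07                              # nm^2/ns, i.e. 7 A^2/ns (assumed typical value)
for L in (1.0, 2.5, 10.0):            # nm
    print(f"L = {L:4.1f} nm -> tau_D = {tau_D(L, D):6.2f} ns")
```

Nanometer length scales thus correspond to relaxation times of nanoseconds to tens of nanoseconds, which is precisely the window covered by neutron spectroscopy.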
Fig. 2. Sketch of the diffusive MSD W(t) as a function of time. For very short times, W(t) ~ t². For τ_B < t < τ_I, $W(t) \sim D_{\rm s}^{({\rm s})}\,t$, and for t ≫ τ_I, $W(t) \sim D_{\rm l}^{({\rm s})}\,t$. τ_I is the typical interaction time, i.e. the timescale on which proteins collide.
A possible and remarkably productive framework to describe the global center-of-mass protein diffusion in liquid solutions is provided by colloid physics (see section 'Diffusion of the entire protein'), which predicts a short-time regime on which only hydrodynamic interactions induce a simple diffusive behavior (see Fig. 2). Beyond a so-called interaction time τ I, often approximated with the time needed for a protein to diffuse to a distance equal to its own radius, collisions of the proteins with obstacles increasingly slow down the motion, giving rise to subdiffusive behavior. At very long times, the interparticle interactions average out, and a simple diffusive long-time regime is recovered. Indeed, a rough estimate for the interaction timescale for a globular protein under conditions of macromolecular crowding yields $\tau _{\rm I}\approx R_{\rm p}^2 /D_{\rm s}\approx 100$ ns (R p ~ 2.5 nm protein radius, D s ~ 7 Å2 ns−1 short-time diffusion coefficient), consistent with the overall observations.
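The order of magnitude quoted for τ_I can be verified directly from the numbers given in the text:

```python
# Back-of-the-envelope check of tau_I ~ R_p^2 / D_s using the values from the text:
# R_p = 2.5 nm and D_s = 7 A^2/ns = 0.07 nm^2/ns.
R_p = 2.5                   # nm
D_s = 0.07                  # nm^2/ns
print(f"tau_I ~ {R_p**2 / D_s:.0f} ns")   # ~89 ns, i.e. of order 100 ns
```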
On the length scale of the protein size, rotational diffusion significantly contributes to the entire-molecule motion. Depending on the experimental technique, rotational diffusion can contribute a constant offset in the observed relaxation coefficient (e.g. dynamic light scattering (DLS); Berne and Pecora, 2000), appear as an apparent simple diffusion accounting for rotations and translations together (e.g. QENS, see section 'Diffusion of the entire protein' for details), or also be accessed directly (e.g. relaxometric nuclear magnetic resonance (NMR); Korzhnev et al., 2001; d'Auvergne and Gooley, 2008; Bouchoux et al., 2012; Roos et al., 2016).
Despite numerous positive results, the small size and softness of globular proteins pose a challenge for the application of colloidal concepts and theories to describe their translational and rotational diffusion in aqueous solution. In this context, the class of intrinsically disordered proteins (IDPs) as well as unfolded protein chains provides an interesting test case for how far colloidal concepts can be merged with polymer descriptions to account for the complex dynamics of unfolded structures. Moreover, the inhomogeneous surface charge pattern of proteins and their tendency, under certain conditions, to form clusters constitute an additional challenge to colloid physics.
Recently, mutual inspiration from protein studies and colloid physics has led to the remarkably successful development and application of the theory of 'patchy colloids' for the interpretation of a number of experimental observations (Gögelein et al., 2008; Whitelam, 2010; Bianchi et al., 2011; Roosen-Runge et al., 2014). Thus, the study of proteins in solution under different conditions is ideal for testing and refining such theories, and is promising for smart engineering of self-assembling nano-particles and crystallization pathways.
The largest internal motions concern collective fluctuations of domains relative to each other. These motions, which also depend on the fluctuations of bulk solvent (Frauenfelder et al., 2009), can occur on rather long timescales from tens of nanoseconds to milliseconds (Henzler-Wildman et al., 2007; Biehl et al., 2011). Interdomain motions can be essential to protein function, e.g. in the case of cleft-opening around catalytic centers (Biehl et al., 2008). Furthermore, an understanding of these collective modes is important to understand un- and re-folding of proteins.
Conceptually, the interdomain motions have been linked back to overdamped low-frequency normal modes of the protein, and also resemble principal components of dynamics from computer simulations. The underlying idea is in these cases that the coordinate of the mode diffuses in an overall harmonic potential, corresponding to an Ornstein–Uhlenbeck process (Kneller, 2000). Finally, as one interesting experimental signature of different modes besides the relaxation constant Γ, the motional Fourier amplitude has been considered, and can indeed be used to describe experimental data (for details see section 'Large-scale and domain motions').
On smaller scales of several Ångströms within the protein, motions can be disentangled into local fluctuations of the backbone and strongly confined diffusion of the side-chains, which are fixed at the backbone anchor point and motionally restricted by neighboring side-chains. On the atomic scale of Ångströms, diffusive rotations and jump-like reorientations of chemical and functional groups such as methyl groups represent the fastest contributions to diffusive protein dynamics.
Since a disentanglement of these motions is experimentally challenging, information on the motion of individual atoms can be obtained e.g. using two effective quantities accessible in experiments. First, the relaxation constant, often modeled as
(2) $$\Gamma(q) = \frac{Dq^2}{1 + D\tau q^2},$$
provides insights into the overall diffusivity D of the atom as well as, via the q dependence, into the motional character, i.e. continuous (τ = 0) or jump-like (τ > 0) motion. Second, the degree of confinement on a length scale L = 2π/q results in a change of the amplitude of the relaxations Γ(q) as a function of q, and provides interesting insight into the local geometric confinement of the atomic motion (for details see section 'Localized internal dynamics').
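To make the distinction concrete, the following sketch (with purely illustrative parameter values) evaluates Eq. (2) for continuous and jump-like motion, reproducing the qualitative behavior sketched in Fig. 11:

```python
# Sketch of Eq. (2): Gamma(q) = D q^2 / (1 + D tau q^2).
# tau = 0 recovers Fickian behavior (Gamma = D q^2); tau > 0 saturates at 1/tau.
import numpy as np

def hwhm(q, D, tau=0.0):
    return D * q**2 / (1.0 + D * tau * q**2)

q = np.linspace(0.2, 2.0, 4)        # A^-1
D, tau = 2.0, 5.0                   # A^2/ps and ps (assumed illustrative values)
print("Fickian   :", np.round(hwhm(q, D), 3))       # grows linearly with q^2
print("jump-like :", np.round(hwhm(q, D, tau), 3))  # approaches 1/tau = 0.2 ps^-1
```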
On still faster timescales of femto- to pico-seconds and length scales of Ångströms and below, collective vibrational excitations of the protein occur as well as vibrations of individual chemical bonds in the protein. Employing various techniques, protein vibrations have been successfully studied to e.g. address redox-dependent properties (Chin et al., 2002), pressure response (Lerbret et al., 2013), determine cellular death (Verrier et al., 2004) and study motions of the heme complex (Zhu et al., 1994; Levantino et al., 2015). These phenomena in the true inelastic regime are related to the so-called Boson peak in proteins and hydration water (Kataoka et al., 1999b; Leyser et al., 1999; Tarek and Tobias, 2002; Roh et al., 2006; Lerbret et al., 2013; Khodadadi and Sokolov, 2017).
We do not further consider such vibrational motions in this review, because they appear less specific to aspects of protein dynamics in solution. For further details, we refer the reader to other reviews covering protein vibrations from the perspective of various specific techniques (Vogel and Siebert, 2000; Parak, 2003a; Nibbering et al., 2005; Vural et al., 2017).
The dynamical hierarchy of protein dynamics implies that a broad range of time and length scales has to be accessed to comprehensively describe the motions of proteins and their subunits. In this context, individual experimental techniques address specific windows of experimental scales, and contribute to an overall picture of protein dynamics (see Table 1 for a brief comparison). Ideally, such techniques should be non-invasive, efficient and should require the least possible interpretation. In this section, we outline common techniques for the study of protein dynamics and put them in a context regarding the accessed scales as well as other advantages and limitations.
XPCS, X-ray photon correlation spectroscopy; TRXS, time-resolved X-ray scattering; TOF, neutron time-of-flight spectroscopy; NBS, neutron backscattering spectroscopy; NSE, neutron spin echo spectroscopy; NMR, nuclear magnetic resonance (for advanced techniques see text); PFG-NMR, pulsed-field gradient NMR; Diel. and THz spectroscopy: dielectric and terahertz spectroscopy; FCS, fluorescence correlation spectroscopy; FRAP, fluorescence recovery after photobleaching; FRET, Förster resonance energy transfer.
Various probe particles may be used in scattering experiments, such as protons, electrons, He atoms, photons or neutrons. For biological matter including proteins, photons and neutrons constitute the most obvious choice, because they can be tuned to energy ranges where they cause little or no damage and access intermolecular length scales. Furthermore, they can penetrate bulk matter including solvents. In the case of photons, the hard X-ray regime causes sufficiently little damage on the required measurement times due to the weak interaction of high-energy photons with biological matter. In the case of neutrons, the cold and thermal energy range from ~2 to 25 meV is perfectly suitable, since these energies, well below those of chemical bonds, do not cause any damage at all. The simultaneous access to information on well-defined time- and length scales (Fig. 3) makes scattering techniques a very valuable tool to study dynamics.
Scattering techniques provide three different modes by which dynamics can be studied. Importantly, all of these include a clear notion of both the time and length scales on which dynamics occurs. First, dynamics can be studied through changes of the probe particle energy during the scattering event in the sample, as realized in neutron spectroscopy. In this context, cold or thermal neutrons allow for an unparalleled accuracy in defining the energy transfer due to their low absolute kinetic energies (on the order of 10 meV) compared with X-ray photons (on the order of 10 keV). Second, time-correlation spectroscopy can access the intensity fluctuations in the scattered wave field of a coherently illuminated sample, which are linked to underlying dynamical processes causing fluctuating phase shifts in the scattered radiation (Dierker et al., 1995; Thurn-Albrecht et al., 1996; Seydel et al., 2001; Grübel et al., 2008; Sun et al., 2017; Roseker et al., 2018). Third, scattering profiles recorded at a given time lag after a trigger signal in pump–probe time-resolved setups can be compared to obtain information on changes in the sample (Cho et al., 2010; Lima et al., 2011; Navirian et al., 2011; Kim et al., 2012b, 2015).
In most cases relevant for biological studies, neutrons with Ångström wavelengths can be considered as classical particles that exchange momentum and energy with the sample during the scattering process. By measuring these changes, conclusions on the dynamics and structure of the sample can be drawn. Depending on the specific realization (see sections 'Quasi-elastic neutron scattering theory' and 'Experimental techniques' for details), timescales ranging from picoseconds to hundreds of nanoseconds can be addressed, on length scales ranging from Ångströms to several nanometers.
The simultaneous collection of spatial and temporal information as well as the inherent property of neutron spectroscopy to record ensemble averages allows a robust modeling of the statistical and motional characteristics of the underlying dynamical processes.
Neutron spectroscopy is an established technique to study systems of soft and biological matter (Fitter et al., 2006; Sakai and Arbe, 2009; Hoffmann, 2014), and due to the accessible time and length scales in particular suitable for protein dynamics (see section 'Results' for a comprehensive review). Using neutron spectroscopy, the full hierarchy of protein dynamics can be accessed, including global diffusion, inter-domain motions and local diffusive dynamics.
Photon correlation spectroscopy is based on coherently illuminating a macroscopically large volume of the sample with a photon beam; this volume ranges from 100 to 1000 μm³ at synchrotron sources up to the entire sample volume (~mm³) at laser sources, for both visible light and X-ray photons.
In photon correlation spectroscopy, information on the collective dynamics inside the sample is accessed via the temporal fluctuations of the speckle pattern scattered from the coherently illuminated volume. While the timescale is thus set by the read-out frequency and the stability of the system, the length scale is related to the scattering vector at which the so-called speckle is observed.
DLS is probably the most frequently used lab-based technique to obtain information on diffusional properties in soft matter (Dhont, 1996; Murphy, 1997; Berne and Pecora, 2000; Gun'ko et al., 2003; Scheffold and Cerbino, 2007; Schmitz, 2012; Phillies, 2016). Given the long history of several decades and the ubiquitous use also in protein science, a complete coverage of DLS results is beyond the scope of this review, and we only briefly mention a few case studies. Based on the measured translational collective (or gradient) diffusion coefficients on timescales of typically microseconds and length scales of micrometers, inter alia protein interactions (Phillies et al., 1976; Muschol and Rosenberger, 1995; Kuehner et al., 1997; Heinen et al., 2012), protein assemblies (Schurtenberger and Augusteyn, 1991; Shen et al., 1993; Ferré-D'Amaré and Burley, 1994; Piazza, 2004; Soraruf et al., 2014) and unfolding and denaturation (Baussay et al., 2004; Jachimska et al., 2008) have been addressed. Using depolarized DLS, also rotational self-diffusion has been accessed (Dubin et al., 1971; Berne and Pecora, 2000).
XPCS is a synchrotron-based technique, accessing length scales from the atomic scale up to a few micrometers (Grübel et al., 2008; Leheny, 2012; Möller et al., 2016). So far, it has only been used tentatively for protein dynamics because of the prevailing challenge of radiation damage (Vodnala et al., 2018), although XPCS could provide a unique time and length scale window for collective dynamics, and there may be ways to circumvent the problems (Verwohlt et al., 2018). The advent of free-electron lasers provides further promising opportunities for X-ray based techniques, such as X-ray speckle visibility studies using a single bunch and a delay line with variable lag time.
When specific time-dependent processes in proteins can be triggered, e.g. by photoactivation or changes of temperature and other environmental factors, the subsequent kinetics of changes of the protein structure can be followed by collecting scattering profiles at multiple suitably chosen lag times (Ihee et al., 2010). For slower processes on timescales longer than several microseconds such as assembly of virus capsids, crystal nucleation, as well as protein folding, these profiles can be collected on one identical sample, or in stopped-flow or rapid fluid mixing setups (Pollack et al., 2001; Svergun and Koch, 2003; Kirby and Cowieson, 2014; Sauter et al., 2015).
Faster processes on timescales below a few microseconds require pump–probe setups, in which the trigger signal is followed by the X-ray pulse after a defined lag time. The duration of the X-ray pulse sets the minimum accessible timescales to roughly 100 ps at third-generation synchrotron sources and clearly below 1 ps for X-ray free-electron lasers (Ihee et al., 2010; Kirby and Cowieson, 2014).
By these techniques, photo-induced protein dynamics on picosecond timescales could be addressed e.g. in hemoglobin (Hb) (Cammarata et al., 2008; Kim et al., 2012a, 2015), myoglobin (Mb) (Ahn et al., 2009; Cho et al., 2010), photoactive yellow protein (Kim et al., 2015), proton pumps (Andersson et al., 2009) and a photosynthetic reaction center (Arnlund et al., 2014). Given the necessity of a trigger signal, the application of this ultra-fast TRXS is limited to specific cases, and usually requires additional molecular modeling to interpret the data.
Fluorescence-based techniques provide a well-established, lab-based access to several aspects of protein dynamics, and are frequently used to obtain information on in-vivo biological systems (Lippincott-Schwartz et al., 2001; Rivas and Minton, 2016).
The required insertion of a fluorescence marker can be both advantageous, since it provides opportunities to target the property in question, and disadvantageous, since the solution behavior of proteins might be altered significantly (Quinn et al., 2015; Rivas and Minton, 2016).
While a thorough overview of this vibrant research field is beyond the scope of this review, we briefly discuss the main techniques, and refer the reader to review articles for further information (Lippincott-Schwartz et al., 2001; Rivas and Minton, 2016).
The basic idea of FRAP is to bleach the fluorophores in a part of the sample, and then to record how the fluorescence signal recovers in time, the influx of unbleached fluorophores being the observable. The length scale of the accessed dynamics is set by the optical resolution and bleaching volume (typically around a few micrometers), whereas the timescale is limited by the scanning speed of the confocal microscope to several milliseconds. Multiple variants of FRAP differing e.g. in the geometry of the bleach volume have been exploited (Bancaud et al., 2010). The achieved results include protein mobility, topology of cellular compartments and protein reaction dynamics (Lippincott-Schwartz et al., 2001; Bancaud et al., 2010; Fritzsche and Charras, 2015).
In FCS, the correlation of the fluorescence intensity is used to obtain information on the diffusion time of molecules across the confocal volume (Lippincott-Schwartz et al., 2001; Krichevsky and Bonnet, 2002). The accessible timescales start at around several hundred nanoseconds, and are mainly determined by the dead time of the photon-counting detector and the subsequent hardware correlator. The length scales are directly given by the confocal volume, which usually is around a few μm³. Using the autocorrelation of single dyes, translational and rotational motions can be addressed (Lippincott-Schwartz et al., 2001; Krichevsky and Bonnet, 2002; Di Rienzo et al., 2014), while cross-correlations of two dyes allow for a dynamical picture of binding and protein interaction (Lippincott-Schwartz et al., 2001; Bacia et al., 2006).
The energy transfer between specific pairs of donor and acceptor dyes exhibits a strong distance dependence on the scale of a few nanometers, which allows for high-precision measurements of the distance between labeled sites by measuring the efficiency of the transfer (Deniz et al., 2001; Lippincott-Schwartz et al., 2001; Piston and Kremers, 2007; Roy et al., 2008). The fastest accessible timescales are set by the read-out frequency of the photon detector, while the longest timescales are limited by the time molecules stay in the confocal volume. Since this time is of the order of a few milliseconds for freely diffusing molecules, long-time processes can only be monitored when molecules are immobilized (Lippincott-Schwartz et al., 2001; Roy et al., 2008).
The choice of the labeling dyes allows one to fine-tune the sensitivity, and defines the type of possible experiment. Using two dyes on two molecules, intermolecular docking can be studied in time (Lippincott-Schwartz et al., 2001; Piston and Kremers, 2007; Roy et al., 2008), as well as protein localization in the cell (Sekar and Periasamy, 2003). Labeling two sites on a single molecule, folding dynamics under native and denaturing conditions can be studied (Nienhaus, 2006; Borgia et al., 2008; Schuler and Eaton, 2008; Ferreon and Deniz, 2011). While adding specificity to the obtained information, the attachment of dyes can also have severe drawbacks, since their effects on internal dynamics and intermolecular interactions might not be negligible (Sánchez-Rico et al., 2017).
A multitude of NMR techniques exists and has been exploited to address protein dynamics and folding. We refer the reader to reviews from the last decade for a more detailed overview (Dosset et al., 2000; Ishima and Torchia, 2000; Dyson and Wright, 2004; Blackledge, 2005; Kay, 2005; Boehr et al., 2006; Mittermaier and Kay, 2009; Kleckner and Foster, 2011; Krushelnitsky et al., 2013). Standard measurements of the spin relaxations in protein solutions are bound to timescales of pico- to nanoseconds due to the protein tumbling (Boehr et al., 2006; Kleckner and Foster, 2011; Krushelnitsky et al., 2013). A much broader range of timescales up to seconds can be accessed using more specialized NMR techniques, such as e.g. residual dipolar couplings, exchange spectroscopy and real-time NMR (Blackledge, 2005; Boehr et al., 2006; Kleckner and Foster, 2011). While a full hierarchy of timescales is accessible, spatial information on dynamical processes can only be inferred through modeling when using NMR, with few exceptions (see below).
For most of these techniques, labeling with specific isotopes such as 13C and 15N is used, and also allows for site-specific information on protein dynamics in a non-invasive way.
In pulsed-field gradient NMR (PFG-NMR) (Price, 1997) or variants such as diffusion-ordered spectroscopy (Johnson, 1999), the spin echo after at least two pulsed field gradients allows one to obtain information on the molecular translational mobility, since displacement of the protein results in a varied phase shift. Technically, the timescale is set by the pulse separation, usually on the order of several milliseconds. Practically, the spin–spin relaxation provides an upper limit for the accessible timescale. Importantly, the length scale can be set independently via the gradient (within certain technical limits), which allows one to obtain information on the diffusion coefficient and the confinement geometry (Price, 1997).
Using translational self-diffusion coefficients, PFG-NMR allows for systematic study of numerous aspects of protein dynamics (Price, 2000), such as aggregation behavior (Price et al., 1999), unfolding (Wilkins et al., 1999) or effects of protein concentration (Le Bon et al., 1999; Roos et al., 2015, 2016).
Dielectric spectroscopy has been used to obtain information on the dynamics of proteins in solution and, in particular, their hydration properties from the dielectric spectrum up to several gigahertz (Nandi et al., 2000; Oleinikova et al., 2004; Cerveny et al., 2008; Frauenfelder et al., 2009; Fenimore et al., 2013; Nakanishi and Sokolov, 2015). Usually, the dielectric spectrum in protein solutions displays three main features, denoted as β, γ and δ dispersion, representing dielectric relaxation processes at well-separated timescales. While the technique is well-established, no general consensus of the physical origins of the dispersions has been found, and computer simulations and comparison with other techniques are needed for a conclusive assignment (Nakanishi and Sokolov, 2015).
While the β dispersion with relaxation times around tens of nanoseconds can be assigned to protein tumbling, the γ dispersion on timescales of a few picoseconds is attributed to bulk water reorientations (Oleinikova et al., 2004). The origin of the bimodal δ dispersion at timescales of 100 ps to 1 ns is usually assigned to processes connected to the dynamics of hydration water (Nandi et al., 2000; Oleinikova et al., 2004).
Improvements in experimental techniques allowed the extension into the terahertz regime, corresponding to timescales down to around 1 ps (Markelz et al., 2000; Jepsen et al., 2011; Falconer and Markelz, 2012; Bellissent-Funel et al., 2016). In connection with theoretical modeling, the changes in absorbance have been linked to changes in the vibrational states of proteins (Castro-Camus and Johnston, 2008; Acbas et al., 2014), and solvation effects on proteins (Markelz et al., 2000; Xu et al., 2006; Ebbinghaus et al., 2008).
Molecular dynamics (MD) simulations complement experimental methods and modeling. The interpretation of experimental data with the help of MD simulations has evolved into a very large field (e.g. Smith, 1991; Daniel et al., 2003; Sakai and Arbe, 2009; Smith et al., 2018) beyond the scope of this review. The comparison of results from simulated atom trajectories with experimental scattering functions has been achieved using software packages such as nMOLDYN (Kneller et al., 1995; Róg et al., 2003) or MDANSE (Goret et al., 2017). To this end, these software packages calculate the simulated scattering functions from the computed trajectories, and the possibility to selectively investigate, for instance, only some molecular groups contributes to the advantages of the simulation approach. To some extent, the simulation approach can be an alternative to the use of the models outlined in the 'Modeling and analysis' section, especially for the interpretation of increasingly complex systems (Kneller, 2005; Sakai and Arbe, 2009).
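As a schematic illustration of what such packages compute (a toy sketch under simplified assumptions, not the actual nMOLDYN or MDANSE implementation), the incoherent intermediate scattering function can be estimated from simulated trajectories by averaging the phase factors that enter Eq. (16), here powder-averaged over random q directions:

```python
# Toy estimator of the incoherent intermediate scattering function I(q, t)
# from trajectories r[atom, frame, xyz], powder-averaged over q directions.
import numpy as np

def incoherent_isf(r, q, lags, n_dirs=50, rng=np.random.default_rng(0)):
    v = rng.standard_normal((n_dirs, 3))
    qvecs = q * v / np.linalg.norm(v, axis=1, keepdims=True)   # fixed |q|
    out = []
    for lag in lags:
        dr = r[:, lag:, :] - r[:, :r.shape[1] - lag, :]        # displacements
        phase = np.exp(1j * np.tensordot(dr, qvecs.T, axes=([2], [0])))
        out.append(np.real(phase.mean()))                      # ensemble average
    return np.array(out)

# Check with free Brownian 'atoms': I(q, t) should decay as exp(-q^2 D t).
rng = np.random.default_rng(1)
D, dt = 0.1, 1.0
r = np.cumsum(rng.normal(0.0, np.sqrt(2.0 * D * dt), (100, 400, 3)), axis=1)
print(incoherent_isf(r, q=1.0, lags=[1, 5, 20]))  # ~exp(-0.1), exp(-0.5), exp(-2)
```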
Multiple other techniques have been used to study different aspects of the dynamics of proteins; they are beyond the scope of this review and are therefore only mentioned briefly with key references.
Also exploiting magnetic resonance, but in contrast to NMR requiring an unpaired electron spin for detection, electron spin resonance has to be mentioned. It can also serve as a microscopic probe for the dynamics and kinetics in situ (Steinhoff et al., 1994; Klose et al., 2014; Beutel et al., 2015; Dunkel et al., 2015; Matthies et al., 2016).
As a classical tool to study vibrational dynamics we should mention infrared (IR) and Raman spectroscopy, which can also be performed in a time-dependent way to investigate various aspects of protein dynamics and kinetics of transformations (Arrondo and Goñi, 1999; Zanni and Hochstrasser, 2001; Schweitzer-Stenner, 2005; Garczarek and Gerwert, 2006; Kolano et al., 2006; Barth, 2007; Kong and Yu, 2007; Balakrishnan et al., 2008; Fayer, 2009; Kim and Hochstrasser, 2009; Kötting and Gerwert, 2015; Kuhne et al., 2015; López-Peña et al., 2015; Schröter et al., 2015).
High-speed atomic force microscopy (AFM) allows one to study the dynamics of proteins adhering to a surface on timescales of a few milliseconds with nanometer resolution (Ando et al., 2001; Casuso et al., 2011; Katan and Dekker, 2011).
Single-molecule force spectroscopy using AFM, optical or magnetic tweezers accesses the folding dynamics in response to severe mechanical stresses (Viani et al., 1999; Zhuang and Rief, 2003; Borgia et al., 2008; Neuman and Nagy, 2008; Ferreon and Deniz, 2011).
Very recent developments in super-resolution microscopy allow for the tracking of single fluorescence-labeled molecules on length scales of several nanometers down to timescales of several milliseconds, such as the dynamics of the cytoskeleton, chromatin-binding and freely diffusing proteins in the cell (Schneider et al., 2015; Balzarotti et al., 2016; Basu et al., 2016; Finkenstaedt-Quinn et al., 2016; Nienhaus and Nienhaus, 2016).
Time-resolved Laue X-ray crystallography achieved high-resolution information on photo-activated protein dynamics in the crystalline state (Wulff et al., 1997; Srajer et al., 2001; Schotte et al., 2003, 2012; Aquila et al., 2012). Temperature dependencies of protein crystallography have been used to address spatial information on dynamical flexibility in the protein (Frauenfelder et al., 1979).
In special cases of proteins containing Mössbauer-active isotopes such as 57Fe, Mössbauer spectroscopy has been used to provide information on the mean-square displacement in proteins (Frauenfelder et al., 1988, 2009; Parak, 2003a, b; Fenimore et al., 2013).
In the context of dynamics, we shall also mention rheology-related techniques, although they are slightly different in nature than some of the other techniques. For a background on rheology, we refer to Mewis and Wagner (2012) and Zhang and Liu (2017).
Protein dynamics occurs on broad and hierarchical timescales ranging from picoseconds up to seconds, and length scales between Ångströms and several nanometers (for internal dynamics) or even millimeters (for long-range diffusion). Revisiting the different characteristics of the experimental techniques (Table 1), the necessity for complementary studies on protein dynamics becomes clear. NMR, dielectric and THz spectroscopy provide a broad range of timescales, but cannot be interpreted without the knowledge of the spatial distribution of the underlying motions. A combination of scattering techniques, PFG-NMR and fluorescence techniques, in combination with computer simulations, can provide the missing information, although individual techniques are bound to smaller time windows.
While this review is centered around protein dynamics as seen by QENS, we aim to provide connections to the other techniques to put forward a more comprehensive and detailed picture of protein dynamics, and promote fruitful, mutual understanding in the scientific landscape studying it.
In the following, a short introduction to the theory of neutron scattering will be given. For a more complete treatment of neutron scattering, the reader is invited to consult, for instance, the article by Schober (2014) and the textbooks by Squires (2012) and Bee (1988), on which this section is based.
Neutrons are spin-1/2 subatomic particles having a mass m ≃ 1.67 × 10−27 kg, and carrying no net charge, but a magnetic dipole moment. Together with protons, they are usually found in atomic nuclei, where they are stable. Free neutrons, which can be produced by fission or spallation nuclear reactions, are unstable and decay to a proton, an electron and an antineutrino with a mean lifetime of about 15 min. Their energy after moderation is simply equal to their non-relativistic kinetic energy:
(3) $$E_{\rm kin} = \frac{1}{2}\,m v^2 = \frac{\hbar^2 k^2}{2m},$$
where the last equality follows from the wave-particle duality with the wavevector ${\bf k} = (m/\hbar\!){\bf v}$ (Bee, 1988).
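As a quick numerical consistency check of Eq. (3) (an added sketch; the wavelengths are assumed typical values), converting neutron wavelengths into kinetic energies reproduces the ~2–25 meV range of cold and thermal neutrons mentioned earlier:

```python
# Numerical check of Eq. (3): E_kin = hbar^2 k^2 / (2 m) with k = 2 pi / lambda.
import scipy.constants as const

def neutron_energy_meV(wavelength_angstrom):
    k = 2.0 * const.pi / (wavelength_angstrom * 1e-10)   # wavevector (m^-1)
    E = (const.hbar * k)**2 / (2.0 * const.m_n)          # kinetic energy (J)
    return E / const.e * 1e3                             # in meV

for lam in (1.8, 6.3):   # Angstrom; assumed typical thermal and cold wavelengths
    print(f"lambda = {lam:3.1f} A -> E_kin = {neutron_energy_meV(lam):5.2f} meV")
```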
A neutron interacts with the atomic nuclei in the sample and can either be absorbed or scattered. In the case of scattering, the neutron may change, in addition to its spin orientation, its energy and its wavevector. Two basic quantities can thus be measured in a scattering experiment, that is, the energy transfer
(4) $$\hbar\omega = E_f - E_i = \frac{\hbar^2}{2m}\left(k_f^2 - k_i^2\right)$$
between the energy of the neutron before, E i, and after, E f, the scattering event, and the scattering vector
(5) $${\bf q} = {\bf k}_i - {\bf k}_f,$$
in addition to the magnetic polarization state. A schematic representation of a scattering event is depicted in Fig. 4. It should be noted that, for inelastic scattering, $\hbar\!\omega \ne 0$ implies that |k i| ≠ |k f|.
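The geometry behind Eqs. (4) and (5) can be made explicit in a small sketch (assumed example values): the magnitude of q follows from the law of cosines and, for elastic scattering, reduces to the familiar q = 2k sin θ:

```python
# Magnitude of the scattering vector q = k_i - k_f (Eq. (5)) for a scattering
# angle 2*theta; for elastic scattering (k_i = k_f = k) this is 2 k sin(theta).
import numpy as np

def q_magnitude(k_i, k_f, two_theta_deg):
    tt = np.radians(two_theta_deg)
    return np.sqrt(k_i**2 + k_f**2 - 2.0 * k_i * k_f * np.cos(tt))

k = 2.0 * np.pi / 6.3                       # A^-1, for an assumed 6.3 A wavelength
print(q_magnitude(k, k, 90.0))              # general expression: ~1.41 A^-1
print(2.0 * k * np.sin(np.radians(45.0)))   # elastic shortcut, same value
```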
The neutron-nucleus scattering is characterized by the scattering length b tot = b + ib abs, which, for the thermal neutrons considered here, is independent of the neutron energy and can be a complex number. The imaginary part of b tot represents absorption, while the real part is related to scattering. For a repulsive neutron-nucleus potential, b is positive, while a negative and very large b indicates the imminent appearance of a bound state. Note that an attractive potential does not necessarily require a negative b (Squires, 2012; Schober, 2014).
Since b characterizes the interaction between a neutron and a nucleus, its value is different not only for different elements, but also for different isotopes and spin states. In general, a sample is composed of several atomic species i, all with a given b i. The coherent $b_i^{{\rm coh}} $ and the incoherent $b_i^{{\rm inc}} $ scattering lengths of the i species are defined as the average of b i over isotopes and spin states
(6) $$b_i^{{\rm coh}} = \langle{b_i} \rangle $$
and as the root-mean-square deviation of b_i from 〈b_i〉
(7) $$b_i^{{\rm inc}} = \left[ {\langle{b_i^2} \rangle -{\langle{b_i} \rangle}^2} \right] ^{1/2},$$
respectively (Bee, 1988).
The scattering length can be related to the probability that a neutron with incident energy E i leaves the sample in the solid angle element dΩ about the direction Ω and with an energy transfer between $\hbar\! \omega $ and $\hbar\! (\omega + {\rm d}\omega {\rm )}$, that is the double-differential cross-section
(8) $$\frac{\partial^2\sigma_{\rm s}}{\partial\Omega\,\partial E} = \frac{1}{\hbar}\,\frac{\partial^2\sigma_{\rm s}}{\partial\Omega\,\partial\omega}.$$
The integral over Ω and E of the double-differential cross-section is the scattering cross-section σ s. Let I 0 be the number of incoming neutrons per unit of time and area, then the number I s of scattering events per unit of time is given by
(9) $$I_{\rm s} = I_0\,\sigma_{\rm s} = I_0 \int {\rm d}E \int {\rm d}\Omega\, \frac{\partial^2\sigma_{\rm s}}{\partial\Omega\,\partial E},$$
with
(10) $$\sigma_{\rm s} = \sigma_{\rm inc} + \sigma_{\rm coh} = 4\pi\left(b_{\rm inc}^2 + b_{\rm coh}^2\right).$$
We note that, similar to the scattering cross-section in Eq. (9), an absorption cross-section can be defined such that
(11) $$I_{\rm a} = I_0\,b_{\rm abs}^2 = I_0\,\sigma_{\rm a} = I_0 \int {\rm d}E \int {\rm d}\Omega\, \frac{\partial^2\sigma_{\rm a}}{\partial\Omega\,\partial E},$$
where I a denotes the number of absorption events per unit of time (Bee, 1988).
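As an illustration of Eqs. (6), (7) and (10), the spin-state average for ¹H shows why hydrogen dominates the incoherent signal. This is a sketch; the triplet and singlet scattering lengths are standard tabulated values (cf. Sears, 1992) quoted here as an assumption:

```python
# Spin-state average for 1H: triplet (weight 3/4) and singlet (weight 1/4).
import numpy as np

b = np.array([10.82, -47.42])    # fm; assumed tabulated values (Sears, 1992)
w = np.array([0.75, 0.25])       # statistical weights of the two spin states

b_coh = np.sum(w * b)                          # Eq. (6):  <b>
b_inc = np.sqrt(np.sum(w * b**2) - b_coh**2)   # Eq. (7):  [<b^2> - <b>^2]^(1/2)

barn = 100.0                     # 1 barn = 100 fm^2
print(f"b_coh = {b_coh:6.2f} fm -> sigma_coh = {4*np.pi*b_coh**2/barn:5.2f} barn")
print(f"b_inc = {b_inc:6.2f} fm -> sigma_inc = {4*np.pi*b_inc**2/barn:5.2f} barn")
```

The resulting ~80 barn incoherent versus ~1.8 barn coherent cross-section of ¹H underlies the contrast arguments used throughout this review (cf. Table 2).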
In general, a sample can be composed of n different types of atoms such as H, D, and C. It can be shown that, in the absence of coupling between the values of the scattering lengths for different isotopes (i.e. independent averages), the double-differential scattering cross-section can be written as
(12) $$\frac{\partial^2\sigma_{\rm s}}{\partial\Omega\,\partial\omega} = \left(\frac{\partial^2\sigma_{\rm s}}{\partial\Omega\,\partial\omega}\right)_{\rm coh} + \left(\frac{\partial^2\sigma_{\rm s}}{\partial\Omega\,\partial\omega}\right)_{\rm inc},$$
(13) $$\left(\frac{\partial^2\sigma_{\rm s}}{\partial\Omega\,\partial\omega}\right)_{\rm coh} = \frac{1}{N}\,\frac{k}{k_0}\sum_{\alpha=1}^{n}\sum_{\beta=1}^{n} b_\alpha^{\rm coh}\,b_\beta^{\rm coh}\,\sqrt{N_\alpha N_\beta}\;S^{\alpha\beta}({\bf q},\omega),$$
(14) $$\left(\frac{\partial^2\sigma_{\rm s}}{\partial\Omega\,\partial\omega}\right)_{\rm inc} = \frac{1}{N}\,\frac{k}{k_0}\sum_{\alpha=1}^{n}\left(b_\alpha^{\rm inc}\right)^2 S_{\rm inc}^{\alpha}({\bf q},\omega),$$
where N α and N β denote the number of atoms of type α and β (Bee, 1988).
The function $S^{\alpha\beta}({\bf q},\omega)$ is the so-called coherent scattering function of the components α and β, while $S_{\rm inc}^{\alpha}({\bf q},\omega)$ is the incoherent scattering function. They are defined as:
(15) $$S^{\alpha\beta}({\bf q},\omega) = \frac{1}{2\pi\sqrt{N_\alpha N_\beta}} \int_{-\infty}^{+\infty} {\rm d}t\,\exp(-i\omega t) \sum_{i_\alpha=1}^{N_\alpha}\sum_{j_\beta=1}^{N_\beta} \left\langle \exp\left[i\,{\bf q}\cdot\left({\bf R}_{i_\alpha}(t) - {\bf R}_{j_\beta}(0)\right)\right] \right\rangle,$$
(16) $$S_{\rm inc}^{\alpha}({\bf q},\omega) = \frac{1}{2\pi N_\alpha} \int_{-\infty}^{+\infty} {\rm d}t\,\exp(-i\omega t) \sum_{i_\alpha=1}^{N_\alpha} \left\langle \exp\left[i\,{\bf q}\cdot\left({\bf R}_{i_\alpha}(t) - {\bf R}_{i_\alpha}(0)\right)\right] \right\rangle.$$
Importantly, by applying the inverse Fourier transform twice to $S^{\alpha\beta}({\bf q},\omega)$, from ω to t and from q to r, the time-dependent pair-correlation function, also known as the van Hove function (van Hove, 1954),
(17) $$G({\bf r},t) = \frac{1}{N}\int \left\langle \rho({\bf r}'-{\bf r},0)\,\rho({\bf r}',t) \right\rangle\,{\rm d}{\bf r}'$$
is obtained, with the microscopic particle density operator
(18) $$\rho({\bf r},t) = \sum_i \delta\left({\bf r}-{\bf R}_i(t)\right),$$
where ${\bf R}_i(t)$ is the position of particle i at time t. The same can be obtained from $S_{\rm inc}^{\alpha}({\bf q},\omega)$, but without cross-correlation terms between different atoms, yielding the van Hove self-correlation function:
(19) $$G_{\rm s}({\bf r},t) = \frac{1}{N}\int \left\langle \sum_i \delta\left({\bf r}'-{\bf r}-{\bf R}_i(0)\right)\,\delta\left({\bf r}'-{\bf R}_i(t)\right) \right\rangle\,{\rm d}{\bf r}'.$$
Consequently, S αβ(q, ω) provides information on the correlation between a particle at time t and another particle at time t + t′ (cross-correlation), while $S_{{\rm inc}}^\alpha \lpar {{\bf q},\omega} \rpar $ provides information on the correlation between the position of a particle at a time t and its position at time t + t′ (autocorrelation) (Bee, 1988).
To explain these two types of correlations in a more intuitive way, let us imagine many particles in suspension, where they can diffuse freely. At a given time t, the particle i is at the position R i(t). The presence of the particle at that location at that time may influence the behavior of the close-by particles at subsequent times, which may, in turn, influence other particles and so on. Let one of these be particle j. The cross-correlation
$$\langle{R_i(t)\,R_j(t + {t}^{\prime})} \rangle $$
between particles i and j gives information on how the position, R i(t), of particle i at time t influences particle j at a time t + t′. The same concept can be applied to a single particle, in which case the so-called self-correlation or autocorrelation is given by:
$$\langle{R_i(t)\,R_i(t + {t}^{\prime})\,}\! \rangle. $$
At time t, the particle i is at position R i(t). For sufficiently short times, the particle cannot move considerably, and therefore its position at time t + t′ will be related to its initial location, i.e. 〈R i(t) R i(t + t′)〉 ≠ 0. After a sufficiently long time, however, the position R i(t) of the particle could have been reached from several other locations. In other words, for sufficiently large times, one cannot tell anymore where the particle was in the beginning, and therefore the correlation between the position of the particle i at time t and that of the same particle at a later time will be lost, i.e. 〈R i(t) R i(t + t′ → ∞)〉 = 0.
The coherent scattering function provides information on collective motion, while the incoherent scattering function yields information on the particle's self-motion. A suitable combination of scattering techniques thus allows for the distinction of self- and collective diffusion.
In aqueous solutions of proteins, by far the highest neutron scattering cross-section is the incoherent scattering cross-section of 1H, as reported in Table 2. Since proteins are largely composed of hydrogen, their main contribution to the neutron scattering function is incoherent, at least at scattering vectors q sufficiently far from correlation peaks. Moreover, given that deuterium (D) atoms have a much smaller cross-section than H-atoms, one can dissolve proteins in D2O rather than in H2O, thereby significantly reducing the signal of the solvent. Therefore, although the impact of the replacement of H2O with D2O on the protein dynamics should be considered (see section 'From powder to solution: influence of solution conditions on protein dynamics'), neutron scattering is well-suited to access the self-dynamics of proteins in solution.
Table 2. Coherent (σ coh), incoherent (σ inc) and absorption (σ a) neutron cross-sections in barns of the elements comprising proteins and common salts in biological environments (Sears, 1992)
As an example, considering a colloidal particle undergoing free translational Fickian diffusion, the van Hove self-correlation function G s(r, t) is given by the probability density function (Vineyard, 1958):
(20) $$G_{\rm s}\lpar {{\bf r}, \!t} \rpar = \displaystyle{1 \over {{(4\pi \,D^{\lpar {\rm s} \rpar }\,t)}^{d/2}}}\exp \left( {-\displaystyle{{{({\bf r}-{\bf r}_0)}^2} \over {4\,D^{{\rm (s)}}t}}} \right),$$
with the self-diffusion coefficient D (s), and the dimension d. The double Fourier transform of G s(r, t) yields a Lorentzian function
(21) $$S(q,\omega ) = \displaystyle{1 \over \pi} \; \displaystyle{\gamma \over {\gamma ^2 + \omega ^2}}\,,$$
with half width at half maximum (HWHM) $\gamma = D^{(s)}q^2 $. For proteins, typical time-scales for short-time diffusion are on the order of nanoseconds, which correspond to γ ~ 1 μeV at q ~ 1 Å−1. These energy transfer and scattering vector ranges are directly accessible by quasi-elastic neutron backscattering (NBS).
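To make these orders of magnitude concrete, the following minimal Python sketch (not part of the original work) evaluates the Lorentzian of Eq. (21) with γ = D(s)q²; the unit conversion uses ħ ≈ 0.658 μeV ns, and the chosen diffusion coefficient is an assumed, merely illustrative value.

```python
import numpy as np

HBAR_UEV_NS = 0.6582  # hbar ~ 6.582e-16 eV s = 0.6582 microeV ns

def lorentzian(omega, gamma):
    """Normalized Lorentzian of Eq. (21); omega and gamma in microeV."""
    return gamma / (np.pi * (gamma**2 + omega**2))

def hwhm_fickian(q, D):
    """HWHM gamma = hbar * D * q^2 in microeV (q in 1/A, D in A^2/ns)."""
    return HBAR_UEV_NS * D * q**2

# Assumed, illustrative value: D ~ 1.5 A^2/ns for short-time protein diffusion
gamma = hwhm_fickian(q=1.0, D=1.5)
print(f"gamma ~ {gamma:.2f} microeV at q = 1.0 1/A")  # ~1 microeV, NBS range

omega = np.linspace(-10.0, 10.0, 501)  # microeV
spectrum = lorentzian(omega, gamma)    # S(omega) at this fixed q
```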
Depending on the energy transfer spectrum, three types of scattering can be identified, as shown in Fig. 5: (i) elastic scattering occurs when the spectrum is, ideally, a delta function centered at $\hbar \! \omega = 0$, implying no dynamics in the sample. Note that, in practice, the delta function is always convoluted with a resolution function specific to every instrument, which, in combination with a given statistics, also limits the longest observable correlation time (cf. section 'Experimental techniques'). (ii) QENS corresponds to a broadening of the elastic spectrum due to the movement of the scattering centers in the sample. Hence, the spectrum is broader than in the elastic case but, importantly, it still features a peak centered at zero energy transfer, hence the term quasi-elastic. For instance, in the case of diffusion, quasi-elastic scattering resulting in a scattering function with a Lorentzian profile centered at $\hbar\!\omega = 0$ is expected, as seen in Eq. (21). Typical broadenings γ due to protein diffusion in solution are on the order of μeV for $0.2\,$Å−1${\rm \lesssim} \;q\;{\rm \lesssim} 2\,$Å−1. (iii) If the spectrum presents peaks centered at $\hbar\!\omega \ne 0$, the scattering is inelastic. Inelastic scattering is due to e.g. vibrational modes, typically at energies of a few millielectronvolts.
In the case of elastic scattering, the magnitude of the wavevectors of the incoming (ki) and outgoing (kf) neutron is the same, and
(22) $$q\equiv \vert {\bf q} \vert = \displaystyle{{4\pi} \over \lambda} \sin \left( {\displaystyle{{2\theta} \over 2}} \right)\,,$$
where 2θ is the angle between ki and kf (see Fig. 4). Equation (22) also holds within the experimental accuracy for QENS.
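A one-line implementation of Eq. (22) can be useful when converting detector angles to scattering vectors; the wavelength and angle below are assumed, illustrative values only.

```python
import numpy as np

def momentum_transfer(wavelength, two_theta_deg):
    """Elastic momentum transfer, Eq. (22): q = (4 pi / lambda) sin(2theta / 2),
    with the wavelength in A and the scattering angle 2theta in degrees."""
    return 4.0 * np.pi / wavelength * np.sin(np.radians(two_theta_deg) / 2.0)

# Assumed, illustrative values: 6.27 A neutrons at a 90 degree scattering angle
print(f"q = {momentum_transfer(6.27, 90.0):.2f} 1/A")  # ~1.42 1/A
```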
The well-established theoretical framework for neutron scattering presented above involves some classical physics approximations. In a recent work by Kneller (2018), incoherent neutron scattering is interpreted in terms of quantum mechanical transitions rather than classical displacement probabilities. The approach results in expressions for the scattering functions enabling an energy landscape-oriented analysis of neutron scattering spectra as well as a physical interpretation of van Hove's space–time correlation functions in the quantum regime. Since this formalism is very recent, we are not yet aware of published applications in the context of biophysical research. Therefore, although it may be employed in future studies, it will not be further discussed here.
Neutron spectroscopy is a remarkable technique to study protein dynamics for a number of reasons. Cold neutrons interact only weakly with matter, which allows the measurement of fragile biological samples such as proteins with negligible radiation damage. Also, generally, high protein concentrations typical of cell environments and turbid solutions can be easily measured, as opposed to optical techniques. Moreover, the neutrons are scattered particularly well by the hydrogen atoms compared with the other elements in proteins (Table 2). Such a scattering is mainly incoherent, meaning ultimately that neutron spectroscopy can provide information on the self-correlations of the rather homogeneously distributed hydrogen atoms, which allows for a label-free measurement of the average protein dynamics. Furthermore, the coherent scattering can be exploited to provide information on collective motions e.g. of protein domains.
Since the hydrogen atoms in H2O scatter neutrons just as well as those in proteins, a common workaround is to use D2O as a solvent, due to the much lower scattering cross-section of deuterium atoms compared with hydrogen atoms (cf. Table 2). The same principle can be used within the macromolecules: proteins can be synthesized in such a way that the hydrogen atoms in specific domains and sub-domains are exchanged with deuterium atoms (D-labeling), thus information on selected parts can be obtained.
Neutron instruments used for the spectroscopy of protein samples generally operate with an incident neutron energy on the order of a few millielectronvolts. This energy, corresponding to the so-called cold or thermal neutrons, is commensurate with the diffusive motions as well as the low-energy vibrational excitations of the proteins at the molecular level. Moreover, these neutron energies permit access to scattering vectors corresponding to a biologically interesting range of length scales from several hundred nanometers down to around 0.1 nm. A trivial advantage of cold or thermal neutron beams is the absence of radiation damage in biological samples, since the neutron energy is too low for ionization or the destruction of chemical bonds. Furthermore, neutron spectroscopy carries the additional advantage of directly accessing the atom motions, and not being affected by selection rules. These features make neutron spectroscopy a unique tool to study geometrical and dynamical features of protein dynamics at different levels, while of course other techniques provide highly complementary information (cf. section 'Overview: techniques addressing protein dynamics').
Neutron beams for condensed matter research emerge from nuclear fission or spallation processes. Depending on the employed process of neutron production, neutron sources produce neutron beams that are either continuous in time or display a pulse pattern. As the first step after neutron production, the initial neutron energy on the order of 1–10 MeV has to be reduced to the meV energy range using cold or thermal neutron moderators at fixed temperatures. The minimum path length of a few centimeters necessary to moderate neutron energies ultimately defines the source size. After moderation, neutron guides deliver the neutrons to the instruments. Although focus optics reduce the initial beam size at most modern neutron spectrometers, the source size already sets the scale for the beam size at the sample, since the cost of the increasing divergence of a focused beam mostly sets the lower limit of the characteristic cross-section of a neutron beam at the sample to ~2.2 cm2.
Neutron spectrometers are subject to continuous progress in neutron optics to make optimal use of the available source flux from current and future neutron sources. Although neutron spectrometers can be generally classified based on their type of neutron detection, the given beam characteristics at different sources require specific adaptations and optimizations, rendering each instrument a unique neutron spectrometer with characteristics complementary to others.
In most cases, protein samples investigated by neutron spectroscopy are not in a crystalline but in a liquid solution, gel, glass or (hydrated) powder state. For this reason, the prevailing types of neutron spectrometers employed to study the dynamics of proteins comprise time-of-flight (TOF) and backscattering spectrometers as well as spin-echo spectrometers. These types of spectrometers will be reviewed in this section. Another class of neutron spectrometers, the triple-axis spectrometers, is less frequently used for protein samples. Triple-axis spectrometers define both ki and kf by Bragg reflections from single crystals. Employing different setups, triple-axis spectroscopy covers a q-range from 10−3 up to almost 102 Å−1 and a $\hbar\!\omega $-range from ~10 μeV to ~100 meV, with energy resolutions from some μeV to several millielectronvolts (Shirane et al., 2002). Their disadvantage for the study of powder samples is that they generally access only one pair (ki, kf) at once. However, a future advantage of triple-axis spectrometers may be the possibility to implement polarization analysis more easily than on other types of spectrometers. Moreover, triple-axis spectrometers can be combined with certain types of spin-echo techniques (Pynn, 1978) that are beyond the scope of this review.
Since energy-dispersive detectors do not exist for cold or thermal neutrons, a variety of different techniques has been developed to determine the energy of a scattered neutron. The TOF, backscattering and spin-echo techniques are named after three different ways to measure the energy of a cold or thermal neutron, namely by measuring its speed via its flight time over a known distance, by measuring its wavelength via a Bragg reflection, or by inferring the flight time from the number of its spin precessions while traveling through a magnetic field of known dimensions, respectively. The design of these spectrometers is mainly given by the specification of a desired resolution or coherence volume in reciprocal space and time and its optimal propagation through the primary (before the sample) and secondary (behind the sample) neutron optics. Optimal in this respect generally implies that the primary and secondary resolutions are identical, i.e. contribute equally to the Gaussian error propagation.
TOF spectrometers are the most common instruments for QENS. The basic idea is to measure changes in energy by the TOF of the neutron. For this purpose, the sample is illuminated by a pulsed monochromatic beam. Subsequent to a short illuminating pulse, the sample remains 'dark' while the detectors count, as a function of time, the neutrons having been scattered by the sample. Each detected neutron is sorted in an acquisition channel according to its arrival time. With the known fixed flight path of typically 2–3 m from the sample to the detector, the detected neutron velocity and thus energy E = (1/2)mv² can be easily obtained. The 'classical' TOF spectrometer such as TOFTOF at the MLZ (Unruh et al., 2007) is a disk chopper spectrometer where a series of rotating chopper disks with narrow slits on their circumference is mounted with their center axis parallel to the neutron beam. These choppers provide the monochromatic incident beam. The resulting time series of brief openings allows neutrons to pass in an elaborate time pattern through the subsequent disks that are typically mounted with a distance of a few meters relative to each other. Both the individual disk speeds and their relative phases can be freely adjusted up to technically limited maximum speeds. On a continuous neutron source, in this way the incident wavelength and frame spacing can be freely tuned, which also varies the energy resolution greatly. A typical elastic energy resolution with cold neutrons at 5 Å is 90 μeV full width at half maximum (FWHM). The accessible range in the neutron energy transfer is limited by the initial neutron energy on the neutron energy loss side of the spectrum. The spectral range is in principle unlimited on the neutron energy gain side. However, the energy resolution drops with increasing energy. The large detectable range in energy transfers on TOF spectrometers allows one to measure, besides QENS, also the low-energy inelastic scattering from molecular vibrations, from some 100 μeV up to on the order of 1 eV. TOF instruments are therefore also particularly well-suited for phonon spectroscopy. Typical cold neutron TOF spectrometers are, hence, also THz spectrometers. These neutron THz spectra can be compared e.g. to photon and dielectric THz spectra.
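The conversion from flight time to energy underlying any TOF spectrometer is easily sketched in Python; the flight path and wavelength below are assumed, illustrative values, not the specification of any particular instrument.

```python
M_N = 1.674927e-27   # neutron mass in kg
EV = 1.602177e-19    # J per eV

def neutron_energy_meV(flight_path, flight_time):
    """Kinetic energy E = (1/2) m v^2 in meV from the time of flight (s)
    over a known flight path (m)."""
    v = flight_path / flight_time
    return 0.5 * M_N * v**2 / EV * 1e3

def energy_transfer_meV(flight_path, arrival_time, E_incident_meV):
    """Energy transfer hbar*omega = E_i - E_f from the measured arrival time."""
    return E_incident_meV - neutron_energy_meV(flight_path, arrival_time)

# Assumed, illustrative values: 5 A neutrons (v = 3956 / lambda[A] ~ 791 m/s)
# over a 2.5 m sample-detector path; elastic neutrons arrive after ~3.2 ms
v = 3956.0 / 5.0
print(neutron_energy_meV(2.5, 2.5 / v))  # ~3.27 meV
```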
Variants of TOF spectrometers are, among others, the so-called Fermi-chopper spectrometers and hybrid spectrometers that use the so-called mosaic crystals as monochromators. The latter can employ a series of several such crystals to become the so-called time-focusing TOF spectrometers such as FOCUS at SINQ (Janßen et al., 1997).
Novel designs at pulsed neutron sources with a low pulse frequency (so-called long-pulse target stations) allow for TOF spectrometers such as LET at ISIS (Bewley et al., 2011) that can record QENS spectra at several incident energies in a quasi-instantaneous mode (at the expense of the inelastic information). A typical TOF spectrometer layout is depicted in Fig. 6. Several typical TOF spectrometers at neutron facilities worldwide are summarized in Table 3.
Fig. 6. Simplified schematic representation of a TOF spectrometer at a reactor neutron source: a continuous 'white' neutron beam enters from the far left, passing a series of chopper disks (marked by the numbers 1–3). The total of six choppers in the chosen example setup are, from left, a pair of counter-rotating pulse choppers (1) chopping the continuous neutron beam into short pulses, the so-called contamination order and frame overlap choppers (2), and the pair of counter-rotating monochromator choppers (3). Each chopper carries a slit with an opening on its circumference that has an open area equal to the neutron guide cross-section area. The remainder of the chopper disk area absorbs neutrons. The counter-rotating chopper pairs serve to minimize the opening and closing times, thus enhancing the energy resolution. The sample (small red cylinder, 4) is located close to the last downstream chopper (3), typically tens of meters away from the pulse choppers. The detectors (long cyan cylindrical tubes, 5) cover a large solid angle at a neutron flight distance of several meters from the sample to simultaneously detect neutrons at multiple scattering angles. Figure rendered using Mathematica (Wolfram Research, Inc.).
We categorize the instruments along their fundamental measurement principles: NSE, neutron spin echo spectroscopy; NBS, neutron backscattering spectroscopy; TOF-NBS, combined time-of-flight backscattering spectroscopy; SA-TOF, small angle time-of-flight spectroscopy; TOF, neutron time-of-flight spectroscopy. Due to limited space, not all instruments and all configurations could be listed. We apologize for any omissions, which are not intentional.
a Sources: Reactor: ANSTO, Australian Nuclear Science and Technology Organisation, Lucas Heights, Australia; FLNP, Frank Laboratory of Neutron Physics, Dubna, Russia; ILL, Institut Laue-Langevin, Grenoble, France; LLB, Laboratoire Léon Brillouin, Saclay, France; MLZ, Heinz Maier-Leibnitz Zentrum, Garching, Germany; NCNR, NIST Center for Neutron Research, Gaithersburg, MD, USA. Spallation: ESS, European Spallation Source, Lund, Sweden (under construction); ISIS, STFC Rutherford Appleton Laboratory, UK; J-PARC, Japan Proton Accelerator Complex, Tokai, Ibaraki, Japan; SINQ, Swiss Spallation Neutron Source, Paul Scherrer Institut, Villigen, Switzerland; SNS, Spallation Neutron Source, Oak Ridge National Laboratory, TN, USA; JCNS, Jülich Center for Neutron Sciences, Jülich, Germany.
The purpose of a backscattering spectrometer is to achieve a high energy resolution in the range of 0.5 μeV ⩽ Δ(ħω) ⩽ 10 μeV FWHM, corresponding to an access to nanosecond relaxation timescales. To this end, NBS employs Bragg reflections at nearly or exactly perpendicular incidence on single crystals to achieve the highest possible definition of the neutron energy, as becomes clear from the first derivative of Bragg's law,
(23) $$\displaystyle{{\delta E} \over E} = \displaystyle{{\delta d_{hkl}} \over {d_{hkl}}} + {\rm cot}({\rm \Theta} )\,\delta {\rm \Theta}, $$
where d hkl denotes the crystal lattice interplanar distance. Hence, the highest energy resolution is obtained for Θ = 90°, around which the contribution of the beam divergence is minimum.
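The following sketch evaluates Eq. (23) as printed for a few crystal angles; the lattice-spacing spread and beam divergence are assumed, illustrative numbers, chosen only to show how the cotangent term vanishes in backscattering.

```python
import numpy as np

def relative_energy_spread(dd_over_d, theta_deg, dtheta):
    """Eq. (23) as printed: delta_E/E = delta_d/d + cot(Theta) * delta_Theta,
    with Theta in degrees and the divergence delta_Theta in radians."""
    theta = np.radians(theta_deg)
    return dd_over_d + abs(np.cos(theta) / np.sin(theta)) * dtheta

# Assumed, illustrative values: delta_d/d ~ 2e-5, 0.2 degree beam divergence
for theta in (90.0, 89.0, 45.0):
    r = relative_energy_spread(2e-5, theta, np.radians(0.2))
    print(f"Theta = {theta:4.1f} deg  ->  delta_E/E ~ {r:.1e}")
```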
Backscattering spectrometers at continuous cold neutron sources generally use exact backscattering both to define the incident neutron energy and the analyzed neutron energy. They achieve energy resolutions of ~0.75–0.9 µeV using bent Si crystals, depending on the crystal thickness. This resolution can be further enhanced by using flat Si crystals (at the expense of flux and a deviation from a Gaussian resolution line shape) or possibly further in the future by using GaAs crystals as an option on IN16B. Moreover, IN16B allows for the use of Si311 crystals. IN16B (Frick et al., 2010), HFBS (Meyer et al., 2003) and SPHERES (Wuttke et al., 2012) in addition possess a so-called phase-space transformer (PST) (Hennig et al., 2011) for a flux enhancement at the sample. IN13 (Natali et al., 2008) at the ILL employs CaF2 analyzers, displaying a slight deviation from backscattering at the monochromator, and uses neutrons at higher energies in the thermal range.
Backscattering spectroscopy accesses the quasi-elastic spectrum of a sample typically in a range from ~1 to 100 μeV. A schematic representation of a standard backscattering spectrometer is shown in Fig. 7, and its principle is further illustrated in Fig. 8. NBS spectrometers are generally designed as so-called inverted geometry spectrometers, i.e. the incident energy is varied to record a spectrum, while the analyzed energy is kept fixed. This requirement results from the large analyzer surfaces usually employed (cf. Fig. 7). Depending on whether a pulsed or a continuous neutron source is employed, the incident energy is either scanned by detuning the effective monochromator lattice spacing relative to the analyzer lattice spacing, or by letting a very short source pulse spread in energy over a long distance on the order of 100 m (TOF method). For the former method, the effective monochromator lattice spacing can be detuned either by varying the crystal temperature (as employed on IN13; Natali et al., 2008), or by mechanically moving the monochromator along its optical axis in order to achieve an apparent Doppler shift of the incident energy (effectively being explained by a Bragg reflection in a moving reference frame).
Fig. 8. Schematic of the principle of a backscattering spectrometer. In the example, a neutron with energy E 0 + δE is delivered to the sample, where E 0 is the energy in backscattering from the analyzer crystals. After scattering by the sample, if the energy transfer equals − δE the neutron is reflected by the analyzers and detected; if the energy transfer differs from − δE the neutron is not reflected and is usually absorbed by absorbing material placed behind the analyzers. The thickness of the sample is chosen such that the probability for a neutron to be scattered once is $\sim 10{\rm \%} $ and that of being scattered twice is hence $\sim \!1{\rm \%} $. The distance from the sample to the analyzers is typically 2 m, while the distance from the sample to the detectors amounts to <0.2 m. Figure rendered using Mathematica (Wolfram Research, Inc.).
In the case of the latter method, the combined TOF backscattering spectrometers (TOF-NBS such as BASIS; Mamontov and Herwig, 2011), which use the flight time to define the incident neutron energy and backscattering from single crystals in the secondary spectrometer, in most cases allow for a slight deviation from exact backscattering. This deviation is matched to the incident resolution and results in an overall broader resolution function compared with continuous-source backscattering spectrometers. In most cases, these TOF-NBS instruments are located at pulsed neutron sources and are optimized for the time structure of these sources.
The rather involved backscattering setup requires that the neutrons scattered by the sample and having been backscattered by the analyzers travel backward in their original path toward the detectors (Fig. 8). The discrimination of outgoing and returning neutrons is therefore achieved by their flight time, and backscattering spectrometers at continuous sources use pulse choppers with an ~50% duty cycle for this purpose, while pulsed neutron sources are 'dark' for most of the time anyway. Note that the scattering probability by the sample is in general chosen to be <10% such that the probability for the same neutron to be scattered a second time upon return is negligible.
Some NBS spectrometers worldwide are listed in Table 3.
Following the TOF technique and the single-crystal technique, NSE spectroscopy (Mezei, 1972) provides the third option for the measurement of a neutron energy, or, more accurately, the change of the energy of a neutron. A fundamental idea of NSE is that, similar to interferometry, small changes in the energy of a neutron can be measured without knowing the absolute energy of this neutron precisely. The basic idea of an NSE spectrometer is to infer the difference in the total number of spin precessions of a neutron in two equal magnetic fields, one of the fields being located upstream and the other downstream from the sample. This difference in the spin precessions is obtained indirectly by measuring the change in the polarization of a given neutron in the scattering process. If the polarization of a neutron changes differently in the second, identical magnetic field, it follows that this given neutron must have undergone a different number of spin precessions in that second field. Since with this concept, in a first approximation, only the change in the number of spin precessions is relevant to determine the change in the energy of a neutron, a spin echo spectrometer does not require a very good monochromaticity of the incident radiation. Importantly therefore, a spin echo spectrometer does not measure absolute energies, but accesses a correlation function in time associating the polarizations of the incident and scattered neutrons. From this information, the intermediate scattering function I(q,t) of the sample is obtained (Mezei et al., 2002) (cf. section 'Modeling and analysis'). The monochromaticity of the incident beam is in this context still important for the definition of the scattering vector. Typically, a monochromaticity of Δλ/λ ≈ 0.08 is employed. This incident Δλ/λ is two orders of magnitude larger than on a typical backscattering spectrometer, implying a largely increased neutron flux. The technical implementation of spin echo spectrometers is quite complex, and a large variety of designs can be found. Reviewing these would be beyond the scope of this paper. The reader is referred to the book by Mezei et al. (2002). Novel instruments (Fouquet et al., 2007; Farago et al., 2015) benefit particularly from progress both in the strength and in the accurate geometry of magnetic fields.
In practical terms, the intermediate scattering function obtained from an NSE experiment is analogous to the scattering function obtained in a photon correlation spectroscopy experiment (i.e. DLS or XPCS). In NSE, only a small number of polarizations needs to be measured per data point in I(q,t). This requirement differs strongly from the necessary photon statistics that has to be obtained in a photon correlation spectroscopy experiment (be it using visible or X-ray photons) in order to build the photon intensity autocorrelation function for one point in I(q,t). NSE can therefore provide conceptually similar results compared with photon correlation spectroscopy despite the significantly weaker flux at neutron sources compared with visible or X-ray laser sources. Also, due to the polarization analysis inherent to NSE, it distinguishes coherent and incoherent scattering. A standard NSE experiment records the coherent scattering analogous to a photon correlation spectroscopy experiment. Incoherent scattering by the sample in this case is rather a nuisance, but NSE can also be used to specifically explore incoherent scattering.
NSE can access very long relaxation times (up to microseconds) and, thus, very slow motions, but requires long incident wavelengths implying relatively small scattering vectors. Nevertheless, the accessed scattering vectors are significantly larger than those obtained in DLS experiments, and NSE is obviously not limited to transparent samples. NSE does encounter difficulties when applied to magnetic samples that manipulate the neutron spin in the scattering process, but this issue is of little relevance for protein samples. Example NSE spectrometers worldwide are listed in Table 3, and a typical NSE layout is depicted in Fig. 9.
The full hierarchy of motions ranging from translational and rotational diffusion of the entire molecule over large-scale domain motions down to localized dynamics of backbone and side-chains (see section 'Protein dynamics on hierarchical time- and length-scales') is represented in the quasielastic spectrum S(q,ω) in a convoluted way. While qualitative dynamical changes can be already observed from the spectra alone, extracting more detailed and quantitative information on the underlying dynamics requires modeling and data fitting, and potentially comparison with simulation (Vural et al., 2017). In the following sections, we outline the basic modeling approaches to address the different hierarchical levels of protein dynamics.
Translations and rotations of the entire protein molecule are in a good approximation independent of the internal dynamics. They do, however, depend on the protein structure and size as well as the concentration of protein and other crowder molecules in solutions. Here, we will first outline two modeling approaches for the combined effect of translational and rotational diffusion. Second, we briefly revisit how interprotein interactions can be incorporated based on colloid theory.
The translational diffusion of the center of mass of the protein results in a Lorentzian scattering function (cf. Eqs. (20) and (21)). However, also the rotation of the protein is a diffusive process, and causes an additional broadening of the quasi-elastic spectrum, which is practically indistinguishable from an effectively enhanced translational diffusion (Pérez et al., 1999; Roosen-Runge et al., 2011), and modeling is key to extract more detailed information.
One approach to model these global contributions to protein dynamics is to calculate the diffusion tensor based on protein structures using software packages such as HYDROPRO (de la Torre et al., 2000). This approach accounts for the full anisotropy of the shape and diffusion, but depends on an available protein structure, which furthermore is assumed to be rigid. From the diffusion tensor D, the apparent collective diffusion coefficient $D_{{\rm app}}^{\lpar {\rm c} \rpar } (q)$ is given by (Biehl et al., 2011; Stingaciu et al., 2016)
(24) $$ D_{{\rm app}}^{\lpar {\rm c} \rpar } \lpar q \rpar = \displaystyle{1 \over {P\lpar q \rpar }}\left\langle {\mathop \sum \limits_{\alpha, \beta} b_\alpha^{{\rm coh}} b_\beta^{{\rm coh}} {\rm e}^{i{\bf q}\cdot \lpar {{\bf r}_\alpha -{\bf r}_\beta} \rpar }} \right.\left. {\left( {\matrix{ {\hat{{\bf q}}} \cr {{\hat{\bf q} \times} {\bf r}_\alpha} \cr}} \right){\bf D}\left( {\matrix{ {\hat{{\bf q}}} \cr {{\hat{\bf q} \times} {\bf r}_\beta} \cr}} \right)} \right\rangle $$
where $P\lpar q \rpar = \left\langle {\sum\nolimits_{\alpha, \beta} {b_\alpha^{{\rm coh}} b_\beta^{{\rm coh}} {\rm e}^{i{\bf q}\cdot \lpar {{\bf r}_\alpha -{\bf r}_\beta} \rpar }}} \right\rangle $ is the form factor of the particle, and b α and r α denote the scattering lengths and coordinates of the individual atoms. The related autocorrelation functions are represented as $I\lpar {q,t} \rpar = \exp {\rm \;} (-D_{{\rm app}}^{\lpar {\rm c} \rpar }q^2t)$. Note that a similar evaluation is in principle possible for the apparent self-diffusion coefficient by collapsing the double sum to a single sum with α = β.
A different coarse-grained (CG) approach has been to use effective (isotropic) translational and rotational diffusion coefficients D t and D r as an input, and study the resulting apparent diffusion coefficient. The explicit choice of the diffusion coefficients can be based on knowledge from other techniques, and allows to implicitly introduce effects of anisotropic shape (Harding, 1995; Ferrer et al., 2001), permeability (Abade et al., 2010; Riest et al., 2015) and softness (Protopapas et al., 1973; Senff and Richtering, 1999) into the effective hard sphere model (Jennings and Parslow, 1988). The translational and rotational autocorrelations are represented as (Bee, 1988; Pérez et al., 1999)
(25) $$I_{\rm t}\lpar {q, \!t} \rpar = \exp (-D_{\rm t}q^2t)$$
$$I_{\rm r}\lpar {q,t} \rpar = \mathop \sum \limits_{l = 0}^\infty \exp {\rm } \! (-l\lpar {l + 1} \rpar D_{\rm r}t)B_{\rm l}\lpar q \rpar $$
where, depending on coherent or incoherent scattering,
(26) $$B_l^{{\rm coh}} \lpar q \rpar = \mathop \sum \limits_m \left \vert {\mathop \sum \limits_\alpha b_\alpha^{{\rm coh}} j_l\lpar {{\bf q}\cdot {\bf r}_\alpha} \rpar Y_{lm}\lpar {{\rm \Omega}_\alpha} \rpar } \right \vert ^2\;, $$
$$B_l^{{\rm inc}} \lpar q \rpar = \int_0^\infty {{\rm d}r\,\rho \lpar r \rpar \lpar {2l + 1} \rpar j_l^2 \lpar {qr} \rpar } $$
with the orientation Ωα of the individual atoms, and the radial distribution function ρ(r) of the incoherent scatterers. j l(x) denotes the spherical Bessel function of the first kind, and Y lm(Ω) are spherical harmonic functions. We remark that ρ(r) is in practice dominated by the hydrogen atoms, since protons dominate the incoherent scattering; the focus on hydrogen in the case of self-dynamics is thus well justified.
Assuming a decoupling of translation and rotation, the total autocorrelation is multiplicative, I(q, t) = I t(q, t)I r(q, t). The resulting apparent diffusion coefficient D app can be calculated as the first cumulant via comparably expensive numerical integration (Pérez et al., 1999; Stadler et al., 2008) or the more efficient numerical solution of the analytical implicit expression (Roosen-Runge et al., 2011; Grimaldo et al., 2015c)
(27) $$\mathop \sum \limits_{l = 0}^{l_{{\rm max}}} B_l(q)\displaystyle{{D_{\rm r}l\lpar {l + 1} \rpar + \lpar {D_{\rm t}-D_{{\rm app}}} \rpar q^2} \over {D_{\rm r}l\lpar {l + 1} \rpar + \lpar {D_{\rm t} + D_{{\rm app}}} \rpar q^2}} = 0$$
where l max has to be large enough to ensure convergence, depending on the evaluated q range. The apparent diffusion coefficient D app(q) starts off at low q with the value of D t. Around q ≈ 1/R p with the effective protein radius R p, D app(q) increases to a second plateau value at high q, which includes the contribution of translational and rotational diffusion (Grimaldo et al., 2015c).
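In practice, Eq. (27) can be solved for D app(q) by standard numerical root finding. The sketch below is a minimal illustration assuming, for simplicity, that all incoherent scatterers sit on a spherical shell of radius R, so that B_l(q) = (2l + 1) j_l(qR)²; all parameter values are assumptions of protein-like magnitude, not fitted values.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import spherical_jn

def B_l_shell(ls, q, R):
    """Rotational amplitudes B_l(q) = (2l+1) j_l(qR)^2, i.e. Eq. (26) for
    incoherent scatterers placed on a spherical shell of radius R (assumption)."""
    return (2 * ls + 1) * spherical_jn(ls, q * R) ** 2

def D_apparent(q, D_t, D_r, R, l_max=None):
    """Solve the implicit first-cumulant relation of Eq. (27) for D_app(q)."""
    if l_max is None:
        l_max = int(q * R) + 20       # enough modes for convergence at this q
    ls = np.arange(l_max + 1)
    B = B_l_shell(ls, q, R)

    def f(D_app):
        num = D_r * ls * (ls + 1) + (D_t - D_app) * q**2
        den = D_r * ls * (ls + 1) + (D_t + D_app) * q**2
        return np.sum(B * num / den)

    # the root lies between D_t (low-q limit) and the high-q plateau
    return brentq(f, D_t, D_t + 10.0 * D_r * R**2)

# Assumed, illustrative protein-like parameters (A^2/ns, 1/ns, A)
D_t, D_r, R = 5.0, 1.3e-3, 25.0
for q in (0.2, 0.5, 1.0, 2.0):        # 1/A
    print(f"q = {q:.1f}  ->  D_app = {D_apparent(q, D_t, D_r, R):.2f} A^2/ns")
```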
Interactions between proteins and crowding molecules inevitably alter the diffusion behavior. While self-diffusion is generally slowed down by crowding effects, repulsive interactions can increase collective diffusion on selected wavelengths (Nägele, 1996). Colloid theory has been found to provide a suitable reference for protein solutions, even in concentrated and strongly interacting suspensions and on different length and timescales (see e.g. Longeville et al. (2003b); Doster and Longeville (2007); Heinen et al. (2012); Roosen-Runge et al. (2011) and references therein).
Importantly, colloid theory provides not only a qualitative understanding, but also quantitative predictions for self- and collective-diffusion coefficients. In particular, series expressions of the type
(28) $$D^{\lpar {\rm s} \rpar } = D_0\lpar {1 + a\varphi + \cdots} \rpar$$
have been derived for the dependence of translational and rotational self-diffusion on volume fraction φ for hard and charged spheres (Medina-Noyola, 1988; Tokuyama and Oppenheim, 1994; Banchio and Nägele, 2008).
For translational collective diffusion, the diffusion coefficient D(q) depends explicitly on the structure factor S(q) (Dhont, 1996; Nägele, 1996),
(29) $$D_{\rm t}^{\lpar {\rm c} \rpar } (q) = D_0\displaystyle{{H(q)} \over {S(q)}}$$
The hydrodynamic function H(q) can be calculated based on information on the interparticle interaction (Beenakker and Mazur, 1984; Dhont, 1996; Nägele, 1996; Heinen et al., 2012), while the self-diffusion coefficient in the dilute limit, D 0, has to be provided by other experimental or modeling methods. In the high-q limit, in the absence of attractive interactions, S(q) → 1 and H(q) → D (s)/D 0, and hence $D_{\rm t}^{\lpar {\rm c} \rpar } (q)\to D^{\lpar {\rm s} \rpar }$. Note that although the method has been developed for repulsive systems, it has also been applied successfully to mildly attractive systems (Riest and Nägele, 2015).
Given the relevance of large-scale motions such as cleft opening for protein function, experimental access to these motions is of great importance. The determination of prevailing large-scale motions in proteins is, however, no trivial task, and depends in general on the availability of protein structures.
The most conventional approach for the analysis of experimental data of large-scale protein motions is a normal mode analysis aiming at the low-frequency Brownian modes (Hinsen, 1998; Doruker et al., 2000; Hinsen et al., 2000; Cui and Bahar, 2006), as implemented in open-source software packages such as the molecular modeling toolkit MMTK (Hinsen, 2000) or ProDy (Bakan et al., 2011). The basic idea is to assume an elastic network of all atoms in the protein as a model of collective fluctuations around the equilibrium structure, which corresponds to the classical Ornstein–Uhlenbeck process. The force matrix ${\bf K}_{\alpha \beta} = \left. {{\textstyle{{\partial^2E_{{\rm pot}}} \over {\partial R_\alpha \partial R_\beta}}}} \right\vert_{{\rm eq}}$ represents harmonic springs between atom α and β, usually varying with interatomic distance. In addition, atom-wise frictions γ α are introduced. The input parameters can be chosen either as semiempirical constants, or based on MD simulations (Hinsen et al., 2000).
The resulting autocorrelations are represented as (Kneller, 2000):
(30) $$I_{{\rm coh}}\lpar {q,\!t} \rpar = \left\langle {\mathop \sum \limits_{\alpha, \beta} b_\alpha^{{\rm coh}} b_\beta^{{\rm coh}} \exp \lpar {-i{\bf q}\cdot \lpar {{\bf R}_\alpha^{{\rm eq}} -{\bf R}_\beta^{{\rm eq}}} \rpar } \rpar f_{\alpha \beta} \lpar {{\bf q},\!t} \rpar } \right\rangle $$
(31) $$I_{{\rm inc}}\lpar {q,\!t} \rpar = \left\langle {\mathop \sum \limits_\alpha {\lpar {b_\alpha^{{\rm inc}}} \rpar }^2f_{\alpha \alpha} \lpar {{\bf q}, \!t} \rpar } \right\rangle $$
where $R_\alpha ^{{\rm eq}} $ are the equilibrium positions of the atom α, and b α denote the scattering length (see section 'Quasi-elastic neutron scattering theory'). The dynamical form factors f αβ(q, t) can be rewritten in a static and a time-dependent part (Kneller, 2000; Stingaciu et al., 2016):
(32) $$f_{\alpha \beta} \lpar {{\bf q}, \! t} \rpar = f_{\alpha \beta} \lpar {{\bf q}, \!\infty} \rpar {\,f}^{\prime}_{\alpha \beta} \lpar {{\bf q}, \!t} \rpar $$
The static part as well as the initial equilibrium expression only depend on the harmonic potential, and are calculated using the eigenvalues ω j and orthonormal eigenvectors $\hat{{\bf u}}_j$ of the mass-weighted force matrix ${\bf K}_{\alpha \beta} ^{\lpar m \rpar } = {\bf K}_{\alpha \beta} /\sqrt {m_\alpha m_\beta} $ as (Kneller, 2000)
(33) $$f_{\alpha \beta} \lpar {{\bf q}, \! \infty} \rpar = \exp \left( {-\displaystyle{{k_{\rm B}T} \over 2}\mathop \sum \limits_{\,j = 1}^{3N} \displaystyle{{{({\bf q}\cdot {\hat{{\bf u}}}_{\,j\alpha} )}^2 + {({\bf q}\cdot {\hat{{\bf u}}}_{\,j\beta} )}^2} \over {\omega_j^2}}} \right)$$
(34) $$f_{\alpha \beta} \lpar {{\bf q}, \!0} \rpar = \exp \left( {-\displaystyle{{k_{\rm B}T} \over 2}\mathop \sum \limits_{\,j = 1}^{3N} \displaystyle{{{({\bf q}\cdot {\hat{{\bf u}}}_{\,j\alpha} -{\bf q}\cdot {\hat{{\bf u}}}_{\,j\beta} )}^2} \over {\omega_j^2}}} \right)$$
with the displacement $\hat{{\bf u}}_{j\alpha} $ of the atom α in the jth mode.
For the time-dependent part, the solution depends on the specific dynamics, see Kneller (2000) for details. Assuming overdamped dynamics, the Brownian modes are given by the eigenvalues λ k and orthonormal eigenvectors $\hat{{\bi v}}_k$ of the friction-weighted force matrix ${\bf K}_{\alpha \beta} ^{\lpar \gamma \rpar } = {\bf K}_{\alpha \beta} /\sqrt {\gamma _\alpha \gamma _\beta} $ and yield for the time-dependent part (Hinsen et al., 2000; Kneller, 2000)
(35) $${\,f}^{\prime}_{\alpha \beta} \lpar {q, \!t} \rpar = \exp \left( {\displaystyle{{k_{\rm B}T} \over {\sqrt {\gamma_\alpha \gamma_\beta}}}} \right.\left. {\mathop \sum \limits_{k = 1}^{3N} ({\bf q}\cdot {\hat{{\bf v}}}_{k\alpha} )\lpar {{\bf q}\cdot {\hat{{\bf v}}}_{k\beta}} \rpar \displaystyle{{{\rm exp}(-\lambda_kt)} \over {\lambda_k}}} \right)$$
with the displacement $\hat{{\bf v}}_{k\alpha} $ of the atom α in the kth mode. A graphic visualization of a normal mode calculated for immunoglobulin G (IgG) is shown in Fig. 10.
Considering the limited instrumental time window, not all relaxations will be resolved in the experiment. In particular, fast motions, i.e. higher order modes, will have relaxed already, and slower modes might be too slow to be resolved. Taking this into account, a simplified form suitable for data fitting can be derived when summarizing all modes in the accessible time window with an effective relaxation constant $\bar{\lambda} $,
(36) $$I\lpar {q,t} \rpar /I\lpar {q,0} \rpar = I_{\rm t}\lpar {q,t} \rpar I_{\rm r}\lpar {q,t} \rpar \lpar {C\lpar q \rpar + A\lpar q \rpar \exp (-\bar{\lambda} t)} \rpar.$$
In principle, I(q,0) equals the form factor P(q). However, I(q,0) is usually determined from the experimental data, and thus effectively corrects for fast relaxations outside the experimental window as well as effects of incoherent scattering. After this correction, the slower motions that appear fixed during the experiment time are represented by the amplitude C(q) = 1 − A(q). Finally, A(q) for a set S of Brownian modes is given by (Biehl et al., 2011)
(37) $$\!\!\!\!A(q) = \mathop \sum \limits_{k\in S} a_k\mathop \sum \limits_{\alpha, \beta} b_\alpha b_\beta \exp\!\lpar {i{\bf q}\cdot \lpar {{\bf r}_\alpha -{\bf r}_\beta} \rpar } \rpar \lpar {\hat{{\bf q}}\cdot {\hat{{\bf v}}}_{k\alpha}} \rpar \lpar {\hat{{\bf q}}\cdot {\hat{{\bf v}}}_{k\beta}} \rpar $$
where the mode amplitudes a k depend on the friction constants and the temperature. A(q) as the q signature of internal motions has been successfully used to determine motional patterns occurring in proteins (Biehl et al., 2008, 2011; Sill et al., 2016).
It has to be mentioned that the standard normal mode analysis does not account for anharmonic potentials and effects of secondary structure as well as details in the friction within the protein and with water (Smolin et al., 2012). Nevertheless, results for the low-frequency normal modes are reasonably robust against changes of the representation from full-atomic detail to only backbone carbons or even small rigid blocks (Hinsen, 1998; Tama et al., 2000). Furthermore, the results can be compared with much more costly computer simulations aiming for the so-called essential dynamics of proteins, i.e. a smaller set of motions that describe larger protein motions with good accuracy (Amadei et al., 1993). In particular, modes derived from singular value decomposition of simulated protein dynamics revealed signatures and cross-correlations similar to those of normal modes (Doruker et al., 2000). Dynamical models can also be constructed and improved ad hoc, based on complementary techniques or intuition (Yang et al., 2007; Stingaciu et al., 2016).
Comparing the obtained signatures for large scale motions in proteins with experimentally accessible data e.g. from NSE spectroscopy allows to understand collective internal protein dynamics, and thus the underlying dynamical mechanisms of protein function.
Localized internal dynamics such as fluctuations of backbone and side-chains are intrinsic to the physicochemical properties of proteins. In the following, we present several conventional models for these fast motions on small length scales. As the basic model and analogously to the case of large-scale motions, two contributions are considered (Bee, 1988):
(38) $$S_{{\rm int}}(q, \!\omega ) = A(q)\,\delta (\omega ) + (1-A(q))\,{\cal L} ({\rm \Gamma}, \!\omega )$$
The so-called elastic incoherent structure factor (EISF) A(q) (see section 'Localized internal dynamics') provides access to the confinement geometry of motions within the experimental time window. The effective relaxation constant Γ(q) (see section 'Localized internal dynamics') comprises all dynamical processes within the experimental time window, and thereby accesses the motional character of the underlying motions. Thus notably, neutron spectroscopy provides information on both geometrical and dynamical signatures of localized protein dynamics.
The formal expression for the EISF is represented as (Kneller, 2000)
(39) $$A(q) = \mathop \sum \limits_\alpha b_\alpha ^2 \vert \langle {\exp \! \lpar {-i{\bf q}\cdot {\bf r}_\alpha} \rpar } \rangle \vert ^2.$$
Given the dominant (incoherent) scattering contribution of hydrogen, the most common models focus on the motion of one representative atom in an effective confinement such as an impermeable sphere, a parabolic potential or a finite number of sites on a circle.
The motion in an isotropic confinement has been modeled in two variants. First, considering free diffusion in an impermeable sphere with radius R, the EISF is represented as (Volino and Dianoux, 1980; Press, 1981)
(40) $$A_{{\rm sph}}(q\,,R) = \left \vert {\displaystyle{{3j_1\lpar {qR} \rpar } \over {qR}}} \right \vert ^2$$
where j 1(x) = [sin(x) − x cos(x)]/x 2 denotes the first order spherical Bessel function of the first kind.
Second, assuming diffusion in a radial harmonic potential, a Gaussian density profile is obtained with a corresponding EISF of (Volino et al., 2006)
(41) $$A_{\rm G}(q) = \exp \! \lpar {-q^2\langle{u^2} \rangle} \rpar \,.$$
When the variance of the density profile is $\langle u^2 \rangle = R^2/5$, both expressions are almost superimposed for $(qR)^2\,{\rm \lesssim}\, 5$ (Volino et al., 2006). The radius R can be used to estimate the volume accessible to the hydrogen atoms in a protein, although there are no obvious reasons for this volume to be spherical. The radius obtained from the model is easily compared between different proteins and may be used as a rough indicator of the local structural flexibility and the ability of the protein to explore different conformations.
In order to describe the geometrical confinement of the H-atoms in a methyl group, a model describing an atom jumping between three equivalent sites on the vertices of an equilateral triangle is commonly used. The expression for the EISF is given by (Press, 1981; Bee, 1988; Bée, 1992; Pérez et al., 1999):
(42) $$A_{3-j}\lpar {q,a} \rpar = \displaystyle{1 \over 3}[ {1 + 2\,j_0\lpar {qa} \rpar } ],$$
where j 0(x) = sin(x)/x is the zeroth order spherical Bessel function of the first kind and a denotes the jump-distance between two sites.
The EISF is usually described simply by a linear combination of these and similar models, whose amplitudes represent the fraction of hydrogens in the respective confinement. In most cases the total EISF is modeled as
(43) $$A(q) = p + (1-p)\,A_{{\rm model}}(q)\,,$$
where p denotes the number of atoms which remain immobile on the timescale accessible by the instrument, and A model(q) is one or a combination of the models presented above. It should be emphasized that these models provide a very simplified description of the complexity of the local confinement of atoms in a protein. Nevertheless, these simple models were successfully applied in previous studies to obtain valuable information on the confinement of the internal dynamics, and its dependence on several parameters. Further information on the topic and other models can be found e.g. in Bee (1988).
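The EISF models of Eqs. (40)–(43) are straightforward to evaluate; the sketch below combines them in one illustrative way (equal weights of methyl-like jumps and sphere-confined diffusion), with an assumed immobile fraction and an assumed jump distance roughly corresponding to the H–H distance in a methyl group.

```python
import numpy as np

def eisf_sphere(q, R):
    """Free diffusion in an impermeable sphere, Eq. (40): |3 j1(qR)/(qR)|^2."""
    x = q * R
    j1 = (np.sin(x) - x * np.cos(x)) / x**2
    return (3.0 * j1 / x) ** 2

def eisf_gaussian(q, u2):
    """Diffusion in a radial harmonic potential, Eq. (41)."""
    return np.exp(-q**2 * u2)

def eisf_three_site(q, a):
    """Three-site jumps, Eq. (42); np.sinc(x) = sin(pi x)/(pi x) gives j0."""
    return (1.0 + 2.0 * np.sinc(q * a / np.pi)) / 3.0

def eisf_total(q, p, a, R):
    """Illustrative combination in the spirit of Eq. (43): immobile fraction p
    plus equal weights of methyl-like jumps and sphere-confined diffusion."""
    return p + (1.0 - p) * 0.5 * (eisf_three_site(q, a) + eisf_sphere(q, R))

q = np.linspace(0.2, 2.0, 10)                # 1/A
print(eisf_total(q, p=0.3, a=1.78, R=3.0))   # assumed p, jump distance, radius
```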
The internal self-dynamics as measured by QENS is usually modeled by one or more Lorentzian functions. As already seen in the 'Quasi-elastic neutron scattering theory' section (in particular Eq. (21)), simple Fickian diffusion of an unbound scatterer results in a HWHM of the quasi-elastic broadening Γ ∝ q 2.
In contrast, atoms in the protein are affected by a complex potential landscape with several minima, corresponding to preferred spatial configurations. Under such conditions, the functional form of Γ(q) changes significantly (Fig. 11). The resulting motion of atoms can be thought of as a sequence of effective jumps with rate 1/τ 0 between preferred configurations. Consequently, a behavior resembling free diffusion may be expected only on a length scale sufficiently large compared with the length of the jumps, i.e. at lower q. Furthermore, geometrical constraints due to e.g. steric interactions or chemical bond lengths limit the length scale on which dynamical processes occur. In the following, we present some of the models used for a description of the internal dynamics.
Fig. 11. Comparison of the HWHM Γ as a function of q 2 for Fickian diffusion and jump-diffusion. For Fickian diffusion Γ = Dq 2: a straight line is obtained and the slope gives the diffusion coefficient D. For unrestricted jump-diffusion, the slope at low q gives the jump-diffusion coefficient D 1, and the asymptote at high q gives the inverse of the residence time τ 0. Figure rendered using Mathematica (Wolfram Research, Inc.).
The jump-diffusion model by Singwi and Sjölander (1960) describes a particle switching between an oscillation around an equilibrium position for a time τ osc, and a diffusive motion with diffusion coefficient D diff.
In order for the model to describe jump-diffusion, the limiting case of a short diffusive period (τ osc ≫ τ diff) and a very small oscillation amplitude is taken, yielding a Lorentzian shape with HWHM (Singwi and Sjölander, 1960)
(44) $${\rm \Gamma} _{{\rm jump}}\lpar q \rpar = \displaystyle{{D_{{\rm diff}}q^2} \over {1 + D_{{\rm diff}}\tau _{{\rm osc}}\,q^2}}\;. $$
As shown in Fig. 11, a simple diffusive signature Γjump ≈ D diffq 2 is obtained for q → 0, whereas Γjump ≈ 1/τ osc (the inverse residence time, denoted by τ 0 in Fig. 11) for q → ∞. This characteristic is typical for random jump-diffusion models, such as the one presented here or e.g. that by Hall and Ross (1981).
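Extracting D diff and τ osc from measured line widths amounts to fitting Eq. (44) to Γ(q); the sketch below does this on synthetic data with assumed, illustrative parameter values (Γ is expressed here as a rate in ns⁻¹; multiplication by ħ converts it to μeV).

```python
import numpy as np
from scipy.optimize import curve_fit

def gamma_jump(q, D, tau):
    """HWHM of the jump-diffusion model, Eq. (44), as a rate in 1/ns
    (q in 1/A, D in A^2/ns, residence time tau in ns)."""
    return D * q**2 / (1.0 + D * tau * q**2)

# Synthetic data with assumed parameters D = 30 A^2/ns and tau = 0.1 ns
rng = np.random.default_rng(1)
q = np.linspace(0.3, 1.9, 12)   # 1/A
gamma_obs = gamma_jump(q, 30.0, 0.1) * (1.0 + 0.03 * rng.standard_normal(q.size))

(D_fit, tau_fit), _ = curve_fit(gamma_jump, q, gamma_obs, p0=(10.0, 0.3))
print(D_fit, tau_fit)   # low-q slope -> D_diff; high-q plateau -> 1/tau_osc
```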
To describe a localized motion in a region of space with a soft boundary, Volino, Perrin, and Lyonnard (Volino et al., 2006) have developed a model based on Gaussian statistics. Consider a particle moving isotropically and in three dimensions about the origin, under the assumption that the displacement u from the origin is a Gaussian random variable with variance $\langle u^2 \rangle $ characterizing the region in which the particle is confined. The normalized equilibrium probability density function is then given by (Volino et al., 2006):
(45) $$p(u) = \displaystyle{1 \over {{[ {2\pi \langle{u^2} \rangle} ] }^{1/2}}}\exp \left[ {-\displaystyle{{u^2} \over {2\langle{u^2}\rangle }}} \right]\,.$$
Then, the intermediate scattering function can be expressed as (Volino et al., 2006):
(46) $$I\lpar {q, \! t} \rpar = \exp [ {-q^2\langle{u^2} \rangle \lpar {1-\rho \lpar t \rpar } \rpar } ] \,$$
with the correlation function ρ(t). In the case of a particle diffusing with a diffusion coefficient D eff, ρ(t) = exp[ −t·D eff/〈u 2〉] (Volino et al., 2006; Stadler et al., 2014a). Hence, for an overdamped Brownian oscillator, the intermediate scattering function becomes (Volino et al., 2006; Stadler et al., 2014a):
(47) $$I(q, \!t) = \exp [ {-q^2\langle u^2\rangle (1-\exp {\rm}\! [-t\,\cdot D_{{\rm eff}}/\langle u^2\rangle ])} ] \,,$$
and the scattering function is represented as (Volino et al., 2006; Stadler et al., 2014a):
(48) $$\eqalign{S\lpar {q, \! \omega} \rpar = &\exp [{-q^2\langle{u^2} \rangle} ] \cr & \times \left(\delta \lpar \omega \rpar + \mathop \sum \limits_{n = 1}^{\infty} \displaystyle{(q^2\langle u^2\rangle )^n \over n!} \right. \left. \displaystyle{1 \over \pi} \displaystyle{n\, D_{{\rm eff}} /\langle u^2\rangle \over {(n\, D_{{\rm eff}} /\langle u^2 \rangle)}^2 + \omega^2} \right)}$$
D eff being an effective coefficient describing the diffusive motion of an atom within the protein, the model can be combined with the jump-diffusion model by setting D eff(q) = D diff/(1 + D diffτ osc q 2) (Volino et al., 2006). The scattering functions for the anisotropic case as well as for the two- and one-dimensional cases can be found in Volino et al. (2006).
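The quasi-elastic part of Eq. (48) converges quickly for q²⟨u²⟩ ≲ 1 and can be evaluated by truncating the sum; in the sketch below the elastic weight is returned separately from the Lorentzian sum, and all parameter values are assumed for illustration.

```python
import numpy as np
from math import factorial

def s_gaussian_confinement(q, omega, u2, D_eff, n_max=30):
    """Eq. (48), truncated: returns the elastic weight exp(-q^2 <u^2>) and the
    quasi-elastic sum of Lorentzians with HWHM n*D_eff/<u^2> (rate units)."""
    a = q**2 * u2
    elastic = np.exp(-a)
    qens = np.zeros_like(omega)
    for n in range(1, n_max + 1):
        g = n * D_eff / u2
        qens += a**n / factorial(n) * g / (np.pi * (g**2 + omega**2))
    return elastic, np.exp(-a) * qens

# Assumed, illustrative parameters: <u^2> = 0.5 A^2, D_eff = 20 A^2/ns
omega = np.linspace(-50.0, 50.0, 401)   # 1/ns
w_elastic, s_qens = s_gaussian_confinement(1.0, omega, 0.5, 20.0)
```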
The jump-diffusion model presented in the previous subsection concerns a specific situation. However, the idea of a motion switching between dynamical states can be exploited in a more general way, providing analytical correlation functions for a larger class of dynamical switching processes (Roosen-Runge et al., 2016).
As one practical example, an atom switching between two diffusive states with relaxation constants Γ1(q) and Γ2(q) is considered. With switching rates 1/τ 1 and 1/τ 2 constant in time, one obtains a function with two Lorentzian profiles (Roosen-Runge et al., 2016)
(49) $$S_{{\rm sw}}(q, \!\omega ) = \alpha (q)\,{\cal L} (\lambda _1(q), \! \omega ) + (1-\alpha (q))\,{\cal L} (\lambda _2(q), \!\omega )$$
with HWHM λ 1 and λ 2, respectively, and
(50) $$\alpha = \displaystyle{{\tau _1{\rm \Gamma} _2 + \tau _2{\rm \Gamma} _1 + \lpar {\tau _1 + \tau _2} \rpar \lpar {\tau _1^{-1} + \tau _2^{-1} -\lambda _1} \rpar } \over {\lpar {\tau _1 + \tau _2} \rpar \lpar {\lambda _2-\lambda _1} \rpar }}$$
(51) $$\lambda _{1,2} = \displaystyle{{{\rm \Gamma} _1 + {\rm \Gamma} _2 + \tau _1^{-1} + \tau _2^{-1} \pm {\rm \Lambda}} \over 2}$$
(52) $${\rm \Lambda} = \sqrt {{({\rm \Gamma} _1-{\rm \Gamma} _2 + \tau _1^{-1} -\tau _2^{-1} )}^2 + 4\; {(\tau _1\,\tau _2)}^{-1}} \,.$$
Although more complicated compared with the jump-diffusion model, this model allows one to test a specific picture of internal dynamics against experimental data. Depending on data statistics, simultaneous fitting of several q values can be used to achieve reliable results for the chosen model parameters, as e.g. demonstrated successfully by Grimaldo et al. (2015a). We remark that the flexibility to choose the specific form of Γ1,2(q) as well as the switching time distributions for τ 1,2 allows modeling of processes ranging from simple diffusion and Ornstein–Uhlenbeck processes to continuous time random walks (Roosen-Runge et al., 2016).
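The two-state switching spectrum of Eqs. (49)–(52) is fully determined by Γ1, Γ2, τ 1 and τ 2; the sketch below assembles α, λ 1 and λ 2 and the resulting two-Lorentzian spectrum, using assumed diffusive states Γi = D iq² with purely illustrative values.

```python
import numpy as np

def switching_parameters(G1, G2, tau1, tau2):
    """Two-state switching model, Eqs. (50)-(52): returns (alpha, lam1, lam2)."""
    s = G1 + G2 + 1.0 / tau1 + 1.0 / tau2
    Lam = np.sqrt((G1 - G2 + 1.0 / tau1 - 1.0 / tau2) ** 2 + 4.0 / (tau1 * tau2))
    lam1, lam2 = (s + Lam) / 2.0, (s - Lam) / 2.0
    alpha = (tau1 * G2 + tau2 * G1
             + (tau1 + tau2) * (1.0 / tau1 + 1.0 / tau2 - lam1)) / (
            (tau1 + tau2) * (lam2 - lam1))
    return alpha, lam1, lam2

def s_switching(omega, alpha, lam1, lam2):
    """Sum of two normalized Lorentzians, Eq. (49)."""
    L = lambda g, w: g / (np.pi * (g**2 + w**2))
    return alpha * L(lam1, omega) + (1.0 - alpha) * L(lam2, omega)

# Assumed diffusive states Gamma_i = D_i q^2 with illustrative D and tau values
q = 1.0
alpha, lam1, lam2 = switching_parameters(2.0 * q**2, 30.0 * q**2, 1.0, 0.3)
print(alpha, lam1, lam2)   # weights and widths (rates) of the two Lorentzians
```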
In the models presented above, we have only considered exponential relaxations. In some cases, non-exponential relaxations are observed. The non-exponentiality may either be an intrinsic characteristic of the relaxation in a complex system, or arise from the superposition of a distribution of single-exponential relaxations. In both cases, the phenomenological stretched exponential function, also known as Kohlrausch–Williams–Watts (KWW) function (Williams and Watts, 1970),
(53) $$I(q, \!t) = \exp [(-t/\tau (q))^\beta ]$$
is often used to model the intermediate scattering function, with the relaxation time τ(q) and the stretching coefficient β. A mean relaxation time can be calculated by
(54) $$\langle \tau \rangle \lpar q \rpar = \displaystyle{{\tau \lpar q \rpar } \over \beta} \cdot {\rm \Gamma} \left( {\displaystyle{1 \over \beta}} \right)\,,$$
where Γ(.) is the gamma function. An analytical form of the Fourier transform of Eq. (53) exists (without the resolution function term; Kneller and Hinsen, 2004), but in practice its calculation is often performed numerically (Toppozini et al., 2015; Ameseder et al., 2018b).
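Equations (53) and (54) translate directly into code; for β = 1/2 the mean relaxation time is exactly 2τ, which the sketch below uses as a check.

```python
import numpy as np
from scipy.special import gamma as gamma_fn

def kww(t, tau, beta):
    """Stretched-exponential (KWW) relaxation, Eq. (53)."""
    return np.exp(-(t / tau) ** beta)

def mean_relaxation_time(tau, beta):
    """Mean relaxation time <tau> = (tau / beta) * Gamma(1 / beta), Eq. (54)."""
    return tau / beta * gamma_fn(1.0 / beta)

print(mean_relaxation_time(1.0, 0.5))   # = 2.0, since Gamma(2) = 1
```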
To describe non-exponential relaxations, also the fractional Brownian dynamics (FBD) model can be employed. This model is derived as a solution of the fractional generalization of the Fokker–Planck equation, and is by construction characterized by non-exponential decays (Kneller, 2005). The solution can be derived for a fractional Ornstein–Uhlenbeck process describing non-Markovian diffusion of a Brownian particle under a harmonic potential (see the paragraph 'Brownian oscillator' and the review by Kneller (2005) and references therein). The intermediate scattering function for this dynamical model is represented as:
(55) $$I\lpar {q, \!t} \rpar = \exp (-q^2\langle u^2\rangle )\mathop \sum \limits_{n = 0}^\infty \displaystyle{{{(q^2\langle u^2\rangle )}^n} \over {n!}}E_\alpha (-nt^\alpha \tilde{\tau} ^{1-\alpha} /\tau )\,,$$
where u is the displacement of the atom under the harmonic potential, τ is the relaxation time, and $\tilde{\tau} $ is a scaling factor ensuring the correct dimension of the expression. The function
$$E_\alpha (z) = \mathop \sum \limits_{k = 0}^\infty \displaystyle{{z^k} \over {{\rm \Gamma} \lpar {1 + \alpha k} \rpar }}$$
is the Mittag–Leffler function, which can be seen as a generalized KWW function (Kneller, 2005). In the limit α → 1, Eq. (55) tends to Eq. (47), with τ = 〈u 2〉/D eff. The Mittag–Leffler function has the advantage, compared with the KWW function, of possessing an analytical Fourier transform, namely, for 0 < α ⩽ 1, the generalized Lorentzian function
(56) $${\cal L} _\alpha (\omega ; \!\tau ) = \displaystyle{{2\tau \,{\rm sin}(\alpha \pi /2)} \over {\vert {\omega \tau} \vert ( \vert \omega \tau \vert ^\alpha + 2\,{\rm cos}(\alpha \pi /2) + \vert \omega \tau \vert ^{-\alpha} )}}\;. $$
Hence, the scattering function can be expressed analytically as
(57) $$S(q,\omega) = \exp[-q^2\langle u^2\rangle]\left(\delta(\omega) + \sum_{n=1}^{\infty}\frac{(q^2\langle u^2\rangle)^n}{n!}\,\frac{1}{2\pi}\,{\cal L}_{\alpha}(\omega;\tau_{\alpha,n})\right),$$
with the relaxation times $\tau_{\alpha,n} = \tilde{\tau}/(n\tilde{\tau}/\tau)^{1/\alpha}$.
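For illustration, a minimal sketch (our own, not from Kneller (2005)) of how Eqs. (55) and (56) can be evaluated numerically is given below; the term-by-term truncation of the Mittag–Leffler series is adequate only for moderate arguments.

```python
import numpy as np
from scipy.special import gamma as gamma_fn

def mittag_leffler(z, alpha, k_max=100):
    """Naive truncation of E_alpha(z) = sum_k z^k / Gamma(1 + alpha*k).
    Adequate for moderate |z| only; the series loses accuracy for large |z|."""
    k = np.arange(k_max)
    return np.sum(z ** k / gamma_fn(1.0 + alpha * k))

def generalized_lorentzian(omega, tau, alpha):
    """Generalized Lorentzian of Eq. (56), defined for 0 < alpha <= 1.
    For alpha = 1 it reduces to the ordinary Lorentzian 2*tau/(1+(omega*tau)^2)."""
    x = np.abs(omega * tau)
    return (2.0 * tau * np.sin(alpha * np.pi / 2.0)
            / (x * (x ** alpha + 2.0 * np.cos(alpha * np.pi / 2.0) + x ** (-alpha))))

# alpha = 1 limit: E_1(z) = exp(z), so E_1(-1) should be ~0.3679
print(mittag_leffler(-1.0, 1.0))
print(generalized_lorentzian(0.5, 2.0, 0.8))
```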
Besides the model-driven approaches outlined in the last sections, a model-free approach provides a frequently used measure for the overall averaged mean-squared displacement (MSD) of hydrogens
(58) $$\langle\Delta R^2\rangle(t) = \frac{1}{N_{\alpha}}\sum_{\alpha=1}^{N_{\alpha}}\left({\bf R}_{\alpha}(t)-{\bf R}_{\alpha}(0)\right)^2\;.$$
The approach is based on the cumulant expansion of the intermediate scattering function
(59) $$I({\bf q},t) = \sum_{\alpha=1}^{N_{\alpha}}\left\langle\exp\left[i\,{\bf q}\cdot\left({\bf R}_{\alpha}(t)-{\bf R}_{\alpha}(0)\right)\right]\right\rangle = \exp\left[-\frac{q^2}{2}\langle\Delta R^2\rangle + {\cal O}(q^4)\right]\;.$$
Experimentally, and based only on the elastic incoherent scattering, one obtains the apparent mean-square displacement $\langle u^2\rangle$, which is linked to the real mean-square displacements (Magazù et al., 2004; Zorn, 2009; Roosen-Runge and Seydel, 2015):
(60) $$\langle u^2\rangle = -\lim_{q\to 0}\frac{3}{q^2}\log\left[S_{\rm exp}(q,\omega=0)\right] = \frac{3}{2}\int_{-\infty}^{\infty}{\rm d}t\,\langle\Delta R^2\rangle(t)\,{\cal R}(t)\;.$$
Here, ${\rm {\cal R}}(t)$ denotes the instrumental resolution function, and introduces an explicit resolution dependence.
The apparent MSD $\langle u^2\rangle$ thus corresponds to a measure of geometrical confinement within a timescale given by the instrumental resolution, and is not specific to the underlying dynamical processes. Although the apparent MSD does not allow for a detailed picture of protein dynamics, it provides a model-free way of monitoring changes in a comparative fashion. Historically, this approach was particularly important, since full quasi-elastic spectra could not be collected with sufficient statistics in reasonable times, whereas the elastic intensity was accessible. Even nowadays, scans at several fixed energies provide a promising approach for monitoring changes in confinement and dynamical processes in real time or during temperature ramps (Frick et al., 2012; Appel et al., 2015; Roosen-Runge and Seydel, 2015).
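As a minimal sketch of this model-free approach (assuming the Gaussian approximation and the prefactor convention of Eq. (60); other prefactor conventions exist in the literature), the apparent MSD can be estimated from a linear fit of the logarithm of the elastic intensity versus q²:

```python
import numpy as np

def apparent_msd(q, s_elastic):
    """Apparent MSD <u^2> from elastic intensities, Eq. (60): in the Gaussian
    approximation log S(q, omega=0) ~ -(q^2/3) <u^2>, so a linear fit of
    log(S) versus q^2 at low q yields the slope -<u^2>/3."""
    slope, _ = np.polyfit(q ** 2, np.log(s_elastic), 1)
    return -3.0 * slope

# Synthetic low-q data generated with <u^2> = 1.2 A^2 (illustrative values)
q = np.linspace(0.2, 1.0, 9)           # in A^-1
s_el = np.exp(-q ** 2 * 1.2 / 3.0)
print(apparent_msd(q, s_el))           # recovers ~1.2 A^2
```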
In this section, we report on studies of protein dynamics in solution, summarized into several subsections on specific topical areas. In each subsection, we will first review results from neutron scattering, which, as seen in the previous sections, is well suited for the study of protein dynamics (see also Zaccai, 2003; Gabel et al., 2003; Fitter et al., 2006). These results will then be compared with results obtained with different techniques on similar systems. Although we attempt to provide a list of references as complete as possible, we emphasize that the selection of results is inevitably, though non-intentionally, not fully representative.
Three types of neutron scattering can be used to obtain dynamical information on proteins: elastic, quasi-elastic and inelastic neutron scattering (cf. section 'Quasi-elastic neutron scattering theory'). In the following, elastic and QENS investigations will be extensively reviewed. Inelastic neutron scattering reveals atomic vibrations and the so-called protein boson peak, which is related to low-frequency vibrational modes observed at low temperatures for globular proteins, where it is coupled to the protein hydration water, but which is also found in several glassy materials (Cusack and Doster, 1990; Smith et al., 1990; Diehl et al., 1997; Joti et al., 2005; Ciliberti et al., 2006; Perticaroli et al., 2014). Its origin was found to be related to an energy landscape rich in local minima (Joti et al., 2005) and could be explained in terms of a mechanical instability of the system (Ciliberti et al., 2006). Inelastic scattering is, however, not the focus of this review, and related studies will therefore not be reviewed. For further information on this topic we refer the reader to Smith et al. (2018).
Parameters regarding internal dynamics of proteins in solution from several studies are reported in Table 4 (effective forces, see section 'Dynamics of hydrated protein powders'), Table 5 (parameters from analysis of NBS and TOF spectra) and Table 6 (relaxation times and amplitudes of domain motions, from NSE). We note that certain observations may be influenced by the resolution functions of the instruments (Magazù et al., 2017) as well as by the specific data treatment and models used to describe the data. The respective instrument resolution is also listed in the tables.
Footnotes to Tables 4–6:
a Intended as the FWHM of the resolution function; the resolution in time is reported when estimated in the respective reference.
b–d Calculated with the specific volumes $\vartheta = 0.735$ ml g−1 for BSA (from nominal concentrations of 200 and 500 mg ml−1), $\vartheta = 0.739$ ml g−1 for IgG (from nominal concentrations of 100–500 mg ml−1) and $\vartheta = 0.75$ ml g−1 for Hb (from a hydration level of 1.1 g D2O per 1 g protein).
e Estimated.
This section is organized as follows: first, a brief summary of the results obtained on the dynamics of hydrated protein powders will be given, and some recent results will be shortly reviewed in the 'Dynamics of hydrated protein powders' section. Second, the main studies pointing out differences of protein internal dynamics in hydrated powders compared with solutions will be reviewed in the 'From powder to solution: influence of solution conditions on protein dynamics' section. Subsequently, studies regarding a temperature-dependent dynamical transition in solution will be reviewed in the 'The dynamical transition in solution' section, followed by studies on the influence of the protein conformation on the internal macromolecular motions in the 'Comparison of internal protein dynamics in native, molten and denatured states' and 'Relations of protein dynamics to structure: from globular to intrinsically disordered proteins' sections. In the 'Internal dynamics of proteins at high pressure' section, pressure experiments on protein solutions are reported. In the 'Adaptation of proteins to ambient and extreme temperatures' section, investigations of the molecular basis of thermal adaptation, some of which were performed in vivo, will be reviewed. Studies on slow, collective dynamics of domains and subdomains are presented in the 'Collective internal motions in proteins' section, and in the 'Combination of neutron spectroscopy techniques: alcohol dehydrogenase' section, the characterization of the internal dynamics of alcohol dehydrogenase (ADH), from the fast self-dynamics to the slow collective dynamics, is reviewed. Finally, further in vivo studies are presented in the 'In vivo neutron spectroscopy' section, followed by a review of results regarding protein diffusion in the 'In vitro studies on the effect of macromolecular crowding on protein dynamics' and 'Dynamics of protein clusters, aggregates and glasses' sections.
Several neutron scattering experiments have focused on hydrated protein powders with the aim of suppressing the diffusive dynamics of the entire protein. As a first step toward protein dynamics in solution, in this section we review some of these studies, with the aim of giving an overview of the kind of information that can be obtained from such experiments.
Numerous studies have investigated protein internal dynamics on subnanosecond timescales by comparing dry and hydrated protein powders, i.e. with a layer of hydration on the protein surface (for extensive reviews also highlighting the fundamental connection between water and protein structure and dynamics see Daniel et al. (2003); Gabel et al. (2003); Halle (2004); Pieper and Renger (2009); Khodadadi and Sokolov (2015); Bellissent-Funel et al. (2016); Khodadadi and Sokolov (2017); Magazù et al. (2017)). Notably, neutron studies have shown that the presence of one hydration water layer activates specific internal motions above a transition temperature $T_0$ that are not visible in dry samples, thereby confirming Mössbauer spectroscopy results (Parak et al., 1982). Such an activation is indicated in elastic incoherent neutron scattering (EINS) experiments by the change of the slope of the MSD as a function of temperature, as shown in Fig. 12. This so-called dynamical transition, in some cases, appears correlated with the onset of the protein activity (Fitter et al., 1998; Lehnert et al., 1998; Réat et al., 1998; Zaccai, 2000, 2003). Moreover, it was found to be correlated with an increased mobility of water above $T_0$ (Tournier et al., 2003). Later, Frauenfelder et al. (2009) explained the experimental observations obtained with Mössbauer spectroscopy by proposing a model in which the localized internal dynamics is strictly entangled with the β-fluctuations of the protein hydration shell. The increased mobility above T ~ 200 K was explained as a result of the changes in the β-relaxations combined with the sensitivity of the experimental technique. Hence, the fact that the dynamical transition, as observed by neutron scattering, does not require a protein structure (He et al., 2008) (see also section 'The dynamical transition in solution'), and occurs even in hydrated amino acids that are not connected in a chain (Schiro et al., 2011), should possibly be interpreted in light of this model.
A further study by Nickels et al. (2012) employed NBS to access the picosecond-to-nanosecond dynamics of green fluorescent protein (GFP) and its hydration water, revealing that hydration water suppresses protein motions at $T \lesssim 200\,{\rm K}$, and facilitates protein dynamics at higher temperatures. Moreover, a decoupling of the dynamics of hydration water from that of the protein was reported at higher temperatures. Hence, the dynamical transition seen in hydrated powders seems to be slaved to the hydration water, but we anticipate that this is not always observed in solution (see section 'The dynamical transition in solution'). Finally, the authors found a reduced dynamics of GFP compared with that of other globular proteins from earlier studies and attributed it to the β-barrel structure of GFP.
Sakai et al. (2013) explored the influence of water, glycerol and trehalose on the picosecond–nanosecond dynamics of lysozyme using neutron scattering. The data suggested that at room temperature or above, trehalose forms hydrogen bonds with the protein surface, replacing the water molecules, and building a glassy layer suppressing protein dynamics and improving the protein stability (Sakai et al., 2013). At lower temperatures, instead, it was found that glycerol forms a strong glass integrated with protein residues, which results in suppressed fast motions in the glassy state. In contrast, trehalose interacts only weakly with the protein surface at low temperatures – consistent with arguments based on DSC glass temperature measurements (Olsson et al., 2016) – and has no stabilizing effect (Sakai et al., 2013). Hence, the authors concluded that glycerol is the most effective bioprotectant for low temperatures and trehalose for high temperatures.
In an attempt to obtain further insight into the physics behind the increase of the MSD with rising temperature, Zaccai (2000) proposed to model proteins in a simplified picture of atoms connected by effective springs. In this picture, the apparent vibrational mean square displacements $\langle u^2 \rangle $ measured by elastic neutron scattering could be interpreted in terms of the harmonic spring equation, yielding the mean force constant (Zaccai, 2000)
(61) $$\langle k\rangle = 2\,k_{\rm B}\left(\frac{{\rm d}\langle u^2\rangle}{{\rm d}T}\right)^{-1}$$
with the Boltzmann constant k B. The force constant was associated with the resilience of the protein, and was calculated in a number of studies to compare in a quantitative manner EINS results from different proteins in various conditions (Gabel et al., 2003).
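A minimal sketch (with synthetic numbers, not data from the cited studies) of how Eq. (61) is applied in practice: fit the temperature dependence of the apparent MSD linearly in the regime of interest and invert the slope.

```python
import numpy as np

K_B = 1.380649e-23   # Boltzmann constant in J/K

def mean_force_constant(T, msd_A2):
    """Effective force constant <k> = 2 k_B (d<u^2>/dT)^-1, Eq. (61).
    T in K, MSD in A^2; returns <k> in N/m. Real analyses typically fit
    separate slopes below and above the dynamical transition."""
    slope_A2_per_K, _ = np.polyfit(T, msd_A2, 1)
    return 2.0 * K_B / (slope_A2_per_K * 1e-20)   # convert A^2 -> m^2

# Illustrative: an MSD rising by 0.01 A^2/K corresponds to <k> ~ 0.28 N/m
T = np.array([280.0, 290.0, 300.0, 310.0])
u2 = 1.0 + 0.01 * (T - 280.0)
print(mean_force_constant(T, u2))
```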
Zaccai et al. (2016) investigated both water dynamics and conformational fluctuations in the 30 S and 50 S ribosomal subunits from Haloarcula marismortui, under high salt, stable conditions. Whereas no significant difference was observed for hydration water of the two subunits, the 30 S was found to have a softer force constant (0.016(1) and 0.018(2) N m−1 in 3 M NaCl and 3 M KCl, respectively) and larger MSD (17.9(9) and 16.3(8) Å2) than the 50 S ( $\langle k \rangle $ = 0.034(4) N m−1, $\langle u^2 \rangle $ = 12.1(6) Å2). The authors argued that the enhanced flexibility likely facilitates conformational adjustments required for messenger and transfer RNA binding (Zaccai et al., 2016).
Fabiani et al. (2009) combined circular dichroism (CD), neutron and X-ray scattering to study changes of the MD of apomyoglobin (apoMb) as a function of temperature. A dynamical transition at about 200 K for motions on the 50 ps timescale was observed also for a hydrated powder of heat-denatured, aggregated apoMb. Moreover, a significant change in MD indicating a more resilient structure was observed at about 328 K, above which the α-helix secondary structure of apoMb at pH 9 was replaced by β-sheet structures, as seen by CD. Such structural changes, confirmed by X-ray scattering, can generate amyloid deposits in humans (Fabiani et al., 2009). These results were later confirmed by Stadler et al. (2012b), who, in addition, found evidence for amyloid formation and noted that the dynamic changes observed in the α–β transition were more important on the nanosecond timescale than on the 0.1 ns timescale. Hence, it was suggested by the authors that the secondary structure has a stronger influence on the longer timescale (Stadler et al., 2012b).
With regard to Mb, Stadler et al. (2012c) found different dynamics of hydrated powders of holomyoglobin (holoMb) compared with apoMb: the resilience of holoMb was found to be significantly lower than that of apoMb, indicating entropic stabilization by a higher degree of conformational sampling. The experimental results were further corroborated by MD simulations indicating that, although the residues close to the heme group in the holoMb have a lower MSD, the binding of heme increases the MSD of the other residues, thus providing an entropic contribution to the protein stability.
Andersson et al. (2017) investigated the effect of two different inhibitors, namely TPCK and chymostatin, on the dynamics of the serine protease α-chymotrypsin. From the analysis of the EINS signal, the authors concluded that the inhibited enzymes underwent a dynamical transition at lower temperatures and in a more cooperative way, leading to larger amplitudes of motion at higher temperatures (up to 310 K). Andersson and collaborators deduced that the inhibitor either directly allows for larger amplitudes of the enzyme motions, or influences the water network around the enzymes in a way that permits more degrees of motional freedom, leading to a lowering of the potential energy barrier seen by the enzyme atoms (Andersson et al., 2017).
Dynamical differences between distinct states of a protein characterized by the presence or absence of a ligand were also investigated by Shrestha et al. (2016), who employed neutron scattering to study the activation of the membrane protein rhodopsin. They found a broadly distributed relaxation of the hydrogen-atom dynamics of rhodopsin on the pico- to nanosecond timescale, as previously observed only for globular proteins. Moreover, they found that the dynamics of opsin, obtained after removal of the ligand 11-cis-retinal through photon absorption, is significantly slower than that of the dark-state rhodopsin (before photoactivation), which, instead, is locked by the ligand. This, the authors suggested, may be crucial for functional interactions between opsin and the G-protein transducin leading to its catalytic activation.
Lenton et al. (2017) studied the effect of phosphorylation on a disordered protein, the recombinant human-like osteopontin (rOPN). While no significant structural changes could be detected by small-angle X-ray scattering (SAXS), NBS and TOF-NBS showed differences between the dynamics of the phosphorylated and unphosphorylated rOPN. In particular, it was concluded by the authors that phosphorylation of rOPN blocks some nanosecond side-chain motions while increasing the flexibility of other side-chains on faster timescales. Lenton and collaborators suggested that such a selective change of the dynamic behavior of even a highly disordered protein such as osteopontin could direct allosteric mechanisms, interactions with substrates, cofactors and amorphous or crystalline biominerals (Lenton et al., 2017).
Notably, Hong et al. (2016) performed NBS and NSE experiments on perdeuterated powder of the protein cytochrome P450cam (CYP101) and were able to obtain a description of even large-scale dynamic modes, thereby establishing a microscopic relationship between the structure, dynamics and function. In particular, both experiments and simulations indicated that in CYP101 three domains rotate against each other to grant access of the substrate to the catalytic site, with an amplitude of about 0.4 Å2, which is crucial for the enzymatic function.
In summary, while studies on proteins in hydrated powders generally fail to consider the full dynamical spectrum including center of mass and large scale motions, they highlight the presence of a temperature-activated dynamical transition in the sub-nanosecond internal dynamics. The transition was found to be coupled to the dynamics of the hydration water, and, in some cases, it was possible to associate the onset of the fast dynamics with the activation of the protein function. Recent studies on protein powders have also shown that distinct ribosomal subunits are characterized by different mean effective force constants. Experiments further demonstrated that binding of ligands and phosphorylation can induce changes in the protein dynamics and suggested that the secondary structure of proteins may have a stronger influence on nanosecond internal dynamics than on shorter timescales. Finally, a perdeuterated powder of CYP101 was successfully used to obtain a description of even large-scale dynamic modes.
As a general remark on the parameters measured by neutron scattering (cf. Table 4), we note that the effective force constants of proteins in solution in vitro span from some piconewtons per nanometer (10−3 N m−1), to some hundreds of piconewtons per nanometer (10−1 N m−1), with the exception of Methanococcus jannaschii malate dehydrogenase (Mj MalDH), having 〈k〉=1.5 N m−1. In particular, force constants measured on the 10 ps timescale are mostly on the order of 10−2 N m−1. Those measured on a 100 ps timescale are predominantly on the order of some 10−1 N m−1, while on the longer timescales of some nanoseconds and some hundreds of nanoseconds, effective force constants appear to decrease to some 10−2 N m−1 and even some 10−3 N m−1, as obtained from models of the full NBS and NSE signals, respectively.
QENS studies (cf. Table 5) suggest that the radius R of the effective sphere accessible by H-atoms within a protein close to room temperature varies from ~1.5 to ~7 Å, whereas the internal relaxation times τ are more heterogeneous, spanning from a few picoseconds to some hundreds of picoseconds, depending on the instrument resolution. These relaxation times can be compared with typical 'fast' rotational correlation times on the order of some hundreds of picoseconds in native proteins, as observed for specific residues by fluorescence spectroscopy (Lakshmikanth and Krishnamoorthy, 1999; Mukhopadhyay et al., 2006; Mondal et al., 2015), and with residue-specific internal relaxation times ranging from a few tens to some hundreds of picoseconds in the same protein, as generally observed by NMR (Stone et al., 1993; Wand et al., 1996; Constantine et al., 1998; Hill et al., 2000; Ishima et al., 2001; Skrynnikov et al., 2002). The former represent the local motional freedom of the covalently bound probe with respect to the polypeptide chain; the latter are due either to methyl reorientations and other motions of side-chains bearing methyl groups, or to backbone fluctuations.
The fraction of atoms seen as immobile by QENS on the 10–100 ps timescale is generally about 0.6, while it ranges from ~0.1 to ~0.2 on the nanosecond timescale. Finally, the internal collective dynamics as measured by NSE (cf. Table 6) is characterized by relaxation times ranging from several nanoseconds to several tens of nanoseconds and amplitudes of several Ångströms.
Motivated by the different states of powder samples and protein solutions, several studies focused on the subnanosecond internal dynamics of protein powders under different hydration conditions and in solution. After reviewing these studies, results on the influence of the specific solvent conditions (use of H2O versus D2O, presence of salts) on protein dynamics are summarized.
In 1999, Pérez and co-workers performed the first systematic neutron scattering experiments on the picosecond internal dynamics of proteins as a function of hydration, from dry powders to solutions (Pérez et al., 1999). Measurements of two proteins, Mb and lysozyme (Lys), demonstrated that, from dry powder to coverage by one water layer, the surface side-chains progressively participate in local diffusive motions. The half-widths at half-maxima, Γ, of the Lorentzian function $\cal L$(Γ, ω) accounting for the internal dynamics of the proteins in hydrated powders and solutions are shown in Fig. 13. When increasing the level of hydration, the rate of the local proton diffusion is enhanced. In a solution with ~60 mg ml−1 protein, motions were found to occur with an average amplitude larger than in the fully hydrated powder by about a factor of three. Also, the calculated average relaxation time decreased from ~9.4 ps in powders with one hydration layer to ~4.5 ps in solution.
Fig. 13. HWHM, Γ, of the internal motion Lorentzian $\cal L$(Γ, ω), for Mb samples. The lines are guides to the eye. Except for the dry Mb sample, Γ increases with $q^2$, which characterizes the presence of local diffusive motions as soon as the protein is hydrated. In the case of dry Mb, Γ is almost constant, as expected for a reorientational type of motion. The inverse of Γ gives the correlation time of the motions. In solutions, the correlation time extrapolated to q = 0 is ~4.4 ps, less than half of that in powders. Figure adapted and reproduced with permission from Pérez et al. (1999). Copyright Elsevier.
The authors also noticed that, in solution, a component of the total scattering characterized by a quasi-elastic broadening proportional to $q^2$ (typical of Fickian diffusion) could be attributed to the global diffusion of the entire proteins, including both a translational and a rotational contribution. Importantly, the contribution of rotational diffusion to the apparent diffusion coefficient measured by QENS was calculated, and the translational coefficient could be consistently extracted from the data (Pérez et al., 1999).
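The $q^2$-proportional broadening and its saturation at larger q are commonly described by the Singwi–Sjölander jump-diffusion form for the Lorentzian HWHM. The sketch below is a generic illustration (not the analysis code of Pérez et al. (1999)) and assumes that the HWHM values Γ(q) have already been extracted from Lorentzian fits of the measured spectra.

```python
import numpy as np
from scipy.optimize import curve_fit

def jump_diffusion_hwhm(q, D, tau0):
    """Jump-diffusion HWHM: Gamma(q) = D q^2 / (1 + D q^2 tau0).
    Low q: Gamma ~ D q^2 (Fickian); high q: Gamma saturates at 1/tau0."""
    return D * q ** 2 / (1.0 + D * q ** 2 * tau0)

# Synthetic HWHM data, q in A^-1, Gamma in ps^-1 (illustrative values)
q = np.linspace(0.3, 1.8, 12)
gamma = jump_diffusion_hwhm(q, 4.0, 5.0)       # D = 4 A^2/ps, tau0 = 5 ps
(D_fit, tau0_fit), _ = curve_fit(jump_diffusion_hwhm, q, gamma, p0=(1.0, 1.0))
print(D_fit, tau0_fit)                         # recovers 4.0 and 5.0
```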
Shortly after this first systematic study, differences between the dynamics of a hydrated powder (0.4 g D2O per g protein) and a solution (50 mg ml−1) of α-amylase from Bacillus licheniformis were investigated by Fitter (2000) with a similar energy resolution. Clear differences were observed between the two types of samples. In particular, the number of mobile atoms was found to increase significantly, whereas the volume accessible to the atoms was reported to decrease. We remark that here any contribution arising from global diffusion was neglected (Fitter, 2000).
In a comparative neutron scattering study of the dynamics of Lys in hydrated powders and in solutions at ~100 mg ml−1, a two-power-law characteristic of the quasi-elastic contribution to the spectra was identified, with a ballistic Gaussian decrease above ~2 meV (Marconi et al., 2008). The most significant difference between the powder and the solution samples was the much larger intensity of the quasi-elastic contribution in solution, which was attributed by the authors to the increase of both the number and the amplitudes of the confined diffusive processes related to protein side-chain motions at the protein surface (Marconi et al., 2008). The comparison thus confirmed that proteins in solution exhibit enhanced dynamics.
Further investigations focused on the effect of an increasing hydration level and on the difference between the dynamics in powders and solution (Jansson et al., 2009; Russo et al., 2009; Stadler et al., 2009). Jansson et al. (2009) employed NBS to study Mb in water:glycerol mixtures at hydration levels ranging from h = 0.5 to 2 g solvent per g protein, in a temperature range of 260–320 K. The data were fitted with a single KWW function (see section 'Localized internal dynamics'). The results suggested that the stretched nature of the correlation functions is due to a distribution of exponential relaxations with different relaxation times rather than to a homogeneous non-exponential relaxation, but were also consistent with the assumption that the protein dynamics is dominated by confined motions on the accessible timescale. Jansson et al. (2009) also investigated the dynamics of the solvent, which was well described by a jump-diffusion model, with the jump-diffusion coefficients being factors of 2.0 ± 0.3 and 2.6 ± 0.3 slower than for bulk water, for h = 1 and h = 0.5, respectively. Moreover, they found temperature dependences of the protein average relaxation times very similar to those of the average solvent relaxation times. The absolute values, however, were found to be significantly different, those of the protein atoms being in a range of ~1–10 ps, and those of the solvent in a range of ~5–30 ps. Hence, the data were consistent with the hypothesis of a slaving of the protein dynamics to the solvent, but at the same time the authors noted that the protein dynamics was strongly dependent on the hydration level, implying that also the amount of solvent plays an important role in the activation of protein internal relaxations (Jansson et al., 2009).
Russo et al. (2009, 2007) measured the internal dynamics of hydrophobic side-chains of small peptides as a function of the level of hydration and of temperature. Hydrated powders were measured from 50 to 300 K, whereas the solutions were measured between 275 and 310 K. An evolution of the dynamics was observed: at low levels of hydration, only rotational motions, mostly due to methyl group rotations, were observed, whereas at high hydration levels translational internal diffusive motions were also detected above 250 K. Importantly, the experiments revealed that only long side-chains could trigger the diffusive motion, while short side-chains were found to undergo only rotational motions. Therefore, both the interfacial water and the side-chain length play a major role in the dynamical transition (Russo et al., 2009). Surprisingly, the internal translational motions were not observed in highly concentrated solutions at room temperature (Russo et al., 2007), but were only measured in hydrated powders (Russo et al., 2009). This discrepancy was explained as a consequence of the structural and dynamical properties of the specific interfacial water network, since the hydration water network around hydrophobic side-chains in hydrated powders is less structured (Russo et al., 2009). Therefore, the outcome of the experiment corroborated the hypothesis of the authors that protein dynamics is strongly influenced by the structural and dynamical properties of interfacial water (Russo et al., 2009). No significant differences between different hydration levels were reported at low temperatures (T < 250 K) (Russo et al., 2009).
Stadler et al. (2009, 2012a) focused on the internal picosecond dynamics of human Hb, as well as platypus and chicken Hb, as a function of hydration and temperature. The rates of the diffusive motion were found to increase with increasing hydration up to highly concentrated solutions (~570 mg ml−1). Moreover, the data showed a substantial difference between powders and solutions: in solution, the volume accessible to the amino acid side-chains above body temperature was larger than that expected from a linear extrapolation from lower temperatures. The same was not observed in fully hydrated powders, suggesting that the investigation of fully hydrated protein powders is not sufficient to accurately describe all aspects of protein picosecond dynamics that might be relevant for biological function (Stadler et al., 2009).
Before concluding this section, we note that the amplitude of fast molecular motions depends on the properties of the solvent, such as whether the protein is dissolved in H2O or in D2O, and on the presence of different kinds of salts or cosolvents. This phenomenon was measured with elastic neutron scattering in solutions of halophilic MalDH and bovine serum albumin (BSA) at ~200 mg ml−1 (Tehei et al., 2001), as well as for full cells of Escherichia coli (Jasnin et al., 2008b). The values of resilience measured in the study can be found in Table 4; the overall cytoplasm shows a smaller motional amplitude and smaller resilience in D2O compared with H2O (Jasnin et al., 2008b). It was noted by Tehei et al. (2001) that BSA must be stabilized predominantly by entropic effects, since its resilience is higher in H2O, even though its thermostability is higher in D2O. In contrast, the higher resilience of MalDH in D2O, where the protein is more stable, suggests that enthalpic terms dominate its stability (Tehei et al., 2001). The ion-dependent changes of k in MalDH were interpreted as a consequence of a significant contribution of the protein–ion interactions in the hydration shell. Hence, although the use of D2O is common practice for neutron scattering, NMR and other spectroscopic techniques, it should be kept in mind that the solvent affects protein dynamics in a non-trivial way (Tehei et al., 2001) and that the quantitative determination of some parameters related to the protein structure and dynamics may differ depending on whether the protein is dissolved in H2O or in D2O (Gabel et al., 2003).
Al-Ayoubi et al. (2017) further found that two different cosolvents in D2O, namely trimethylamine-N-oxide (TMAO) and urea, affect the sub-nanosecond dynamics in different ways. At ambient pressure, the presence of 2 M TMAO in a solution of Lys at concentrations of 80 and 160 mg ml−1 in D2O results in the MSD being reduced by ~70% and ~45%, respectively, compared with Lys in pure D2O. Instead, 2 M urea was found to reduce the MSD by only ~35% in the most dilute sample and to have no effect on the most concentrated one. Finally, a 2 M urea–1 M TMAO mixture caused a reduction of the MSD by ~75% in the 80 mg ml−1 Lys sample. At higher pressures, the MSD values in the presence and absence of urea were found to be of similar magnitude at both protein concentrations, and in general the MSD remains rather constant over the whole pressure range probed in the presence of both cosolvents (Al-Ayoubi et al., 2017) (see also section 'Internal dynamics of proteins at high pressure'). Additional Fourier-transform IR spectroscopy indicated a stabilization effect of the osmolyte TMAO and a destabilization in the presence of urea. The authors hypothesized that the different influences of TMAO and urea are due to the fact that urea interacts weakly with water, but directly with the protein backbone, while TMAO is preferentially excluded from the protein surface and instead enhances the overall hydrogen-bonding network structure. The authors speculate that this might lead to a damping of conformational fluctuations of the protein's surface groups, which propagates into the protein interior, thereby affecting the flexibility of the whole protein molecule (Al-Ayoubi et al., 2017).
In conclusion, a general outcome of the reported neutron scattering experiments is that additional MD is found when proteins are in solution, compared with hydrated powders, as schematically depicted in Fig. 14. A similar result was obtained in a recent NMR study (Harpole et al., 2016) highlighting how the dynamics faster than ~3 ns of methyl-bearing side-chains in solution is clearly different from that in hydrated powders (~0.4 g water per g protein), for which in turn a change of dynamics was detected by NMR (Separovic et al., 1998; Diakova et al., 2007; Krushelnitsky et al., 2009) and dielectric relaxation spectroscopy (Mijovic et al., 2005; Khodadadi et al., 2008; Fomina et al., 2014) as a function of hydration. Further NMR studies, instead, indicated that the dynamics of both methyl groups (Reif et al., 2006; Agarwal et al., 2008) on the sub-nanosecond timescale and backbone atoms (Chevelkov et al., 2010) on a ~1–100 ns timescale is mostly unchanged, when comparing proteins in crystals and aqueous solution, consistent with the observation that, in hydrated powders, the millisecond-dynamics does not change significantly above ~0.3 g g−1, at least up to h ~0.7 g water per g protein (Krushelnitsky et al., 2009).
Nevertheless, the additional sub-nanosecond dynamics observed mostly by neutron scattering in solution can be relevant for biological function (for more general information on the hydration–function relation, see e.g. Ball (2008)), and hence a full picture of the dynamical properties of proteins does require studies in solution. In this context, it was established that, after appropriate sample preparation, information on the internal dynamics can be detected by neutron scattering experiments also in bulk solutions composed of membrane proteins in detergent microemulsions (Gall et al., 2002).
Finally, it should be kept in mind that marked differences in the amplitude of protein fluctuations can occur depending on the solvent. Although no general understanding could be presented so far, recent studies suggest that the hydrogen bonding network around the protein surface might play a role in this respect.
A dynamical transition at low temperatures, far below physiological temperatures, was observed in hydrated protein powders in numerous studies, as briefly mentioned in the 'Dynamics of hydrated protein powders' section (for a review see those by Gabel et al. (2003) and Daniel et al. (2003)). In aqueous solutions, similar measurements are limited by the crystallization of water. Nevertheless, a few experiments, mostly on proteins dissolved in antifreeze solvents (cryoprotectants), revealed a dynamical transition also in solution. Such studies are presented below.
Neutron scattering experiments probing two different timescales, namely below 100 ps, as seen by TOF, and below 5 ns, as seen by NBS, were performed on solutions of the enzyme glutamate dehydrogenase in 70% v/v CD3OD/D2O for 70 K < T < 320 K (Daniel et al., 1998, 1999). The temperature dependence of the MSD was found to be markedly dependent on the instrument resolution: on the nanosecond timescale, several inflections of the MSD were identified, at ~140, ~210 and ~280 K, while on the picosecond timescale only the well-known dynamical transition at ~220 K was observed. Moreover, none of these temperatures could be associated with an activity loss. It was therefore concluded that anharmonic fast motions are not necessarily coupled to the much slower motions required for the enzyme activity. However, as noted by the same authors, it cannot be excluded that functionally important fast motions may occur locally in the protein at the active site, even though these are not detectable in the average dynamics (Daniel et al., 1999).
Another investigation regarding the dynamical transition in protein solutions was carried out by Réat et al. (2000). The study focused on the solvent dependence of the picosecond dynamical transition of solutions of xylanase, a simple single-subunit enzyme. The elastic intensity of the protein in dry powder, in D2O, and in four two-component perdeuterated single-phase cryosolvents in which the protein is active and stable was measured with a resolution of 50 µeV. It was found that the dynamical transitions of the protein solutions are partially coupled to those of the respective solvents. In D2O a very sharp transition is observed at ~280 K, i.e. substantially above the transition temperature of hydrated powders (200–220 K), but very close to the melting temperature of D2O, $T_{\rm m}$ = 277 K. In the cryosolvents used in the experiment, instead, the transition is much more gradual and starts at ~230 K, independent of the cryosolvent composition. In particular, the transition temperature remains the same not only in the cryosolvent with a melting temperature of ≈230 K, but also in that with a melting temperature below 190 K (Réat et al., 2000).
An at least apparent decoupling of the dynamics of a protein (Lys) in solution from the dynamics of its solvent (7.6 M LiCl in D2O) was observed also in a more recent QENS study (Chu et al., 2012). Unlike the cryosolvent used in the aforementioned study, which was characterized by a melting point below 190 K (Réat et al., 2000), the LiCl solution undergoes a dynamical crossover at about 220 K. Nevertheless, no transition is visible in the protein dynamics at this temperature. The authors argued that there may be two ways to explain the observations: (i) there is a real decoupling of the dynamics of the solvent from that of the protein, or (ii) the transition observed for the solvent does not reflect a transition in its α-relaxation, which is believed to drive the dynamical transition in proteins (Chu et al., 2012).
The protein dynamical transition in solution was further observed by terahertz dielectric spectroscopy by Markelz et al. (2007) and He et al. (2008), in comparison with MD simulations. Such investigations, as opposed to the ones reported above, revealed that, on the sub-picosecond timescale, the dynamical transition of native and denatured proteins as well as of polypeptides dissolved in H2O occurs at about 200 K (the same temperature observed for hydrated powders), independent of the protein secondary and tertiary structure and concentration (He et al., 2008). Hence, this study revealed for the first time that the dynamical transition on the sub-picosecond timescale does not require a protein structure, but is probably rather due to the side-chain interaction with the solvent. He et al. (2008), however, provide evidence based on a phonon-assisted Mössbauer experiment on hydrated powders (Achterhold et al., 2002) that such a transition on the sub-picosecond timescale does not concern the protein core.
Summarizing, neutron as well as terahertz spectroscopy experiments show that the dynamical transition is not restricted to protein hydrated powders, but can be observed also in D2O, H2O and cryosolvents. Unlike for hydrated powders, the coupling of the protein dynamics and the dynamics of the solvent at low temperature (when using cryosolvents) does not seem to be obvious.
For proteins in (heavy) water, the combination of results from THz spectroscopy and neutron scattering suggests the following picture. The water-side-chain interaction suppresses water crystallization on the protein surface except for short chain lengths, as reflected by the transition at ~200 K of the sub-picosecond dynamics probed by THz spectroscopy (Markelz et al., 2007; He et al., 2008). The suppression of water crystallization, however, is probably still insufficient to trigger motions on the picosecond to nanosecond timescale, or it affects too few side-chains to be observed by neutron scattering. Instead, significant movements on these timescales seem to require the melting of a larger fraction of D2O (Réat et al., 2000).
Several studies have attempted to identify general differences in the internal dynamics of proteins in solution between well-folded conformations and molten (i.e. an intermediate equilibrium state between the native and fully denatured states) or unfolded states. In the following, we summarize the results of such investigations.
The comparison of TOF data of yeast phosphoglycerate kinase (PGK) in the native form and denatured in 1.5 M guanidinium chloride revealed a clear increase of the fraction of hydrogen atoms undergoing picosecond diffusive motions upon denaturation (Receveur et al., 1997). The same experiment evidenced that the H-atoms can access a larger volume in the denatured state, as reflected by the increase of the radius of the effective accessible sphere from 1.8 to 2.2 Å (cf. section 'Modeling and analysis').
Partial denaturation through C-terminal truncation of staphylococcal nuclease (SNase) was studied in solutions at a concentration $c_{\rm p}$ ~ 80 mg ml−1 by TOF spectroscopy by Kataoka et al. (1999a). An increase in the amplitude of the picosecond-timescale average local fluctuations upon truncation of the 13 C-terminal residues of SNase was detected, from 0.49 ± 0.02 Å in the native state to 0.60 ± 0.02 Å in the denatured form. MD simulations suggested that these differences are related to an increased solvent accessibility of the protein chain, accompanied by a decrease of the number of internal hydrogen bonds (Kataoka et al., 1999a). However, overall, NMR studies probing the sub-nanosecond dynamics of methyl-bearing side-chains in solution showed essentially no correlation between their amplitudes and their depth, their local packing density, or their solvent-accessible surface area (Igumenova et al., 2006). This suggests that the increased amplitude of motion observed by Kataoka et al. (1999a) is rather related to the decrease of the number of internal hydrogen bonds.
Picosecond dynamics differences were also measured between the native bovine α-lactalbumin (BLA) and its molten globules (MBLA) at ~75 mg ml−1 by Bu et al. (2000), as shown in Fig. 15. The authors observed that spatially restricted long-range diffusive motions and local jump motions (cf. section 'Modeling and analysis') of H-atoms within the proteins are less restricted in the molten globules than in the native BLA. At T = 303 K, it was found that H-atoms in BLA and MBLA diffuse in effective spheres of radii $R_{\rm BLA}$ = 4.1 ± 0.1 Å and $R_{\rm MBLA}$ = 5.4 ± 0.1 Å, as obtained from the fit of the EISF in Fig. 15a. A jump-diffusion was identified, with diffusion coefficients $D_{\rm BLA}$ = 42 ± 0.5 Å2 ns−1 and $D_{\rm MBLA}$ = 73 ± 0.5 Å2 ns−1, and a residence time $\tau_{\rm BLA}$ = 56 ± 7 ps in native BLA, reducing to $\tau_{\rm MBLA}$ = 23 ± 2 ps in MBLA (Fig. 15b). However, root mean square jump distances did not change significantly ($\sqrt{\langle r^2\rangle_{\rm BLA}} = 3.7 \pm 0.3$ Å versus $\sqrt{\langle r^2\rangle_{\rm MBLA}} = 3.2 \pm 0.3$ Å), which may indicate switching between different rotamers (Bu et al., 2000). In addition to the self-dynamics, Bu and co-workers extracted information on collective diffusive motions within the protein (Bu et al., 2000). Evaluating the coherent scattering suggested that atoms move in a correlated manner with correlation lengths $\xi_{\rm BLA}$ = 18 ± 4 Å and $\xi_{\rm MBLA}$ = 6.9 ± 1.2 Å, respectively. Therefore, the dynamics of the native protein is characterized by more localized motions of atoms correlated up to relatively long distances, as opposed to that of the molten globule, presenting less localized motions of less strongly correlated atoms.
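Radii such as $R_{\rm BLA}$ above are typically obtained by fitting the EISF with the Volino–Dianoux model of diffusion inside an impermeable sphere, including a fraction of apparently immobile hydrogens. The following sketch is a generic illustration of such a fit with synthetic values, not the original analysis of Bu et al. (2000).

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import spherical_jn

def sphere_eisf(q, R, p):
    """Volino-Dianoux EISF for diffusion in a sphere of radius R, plus an
    apparently immobile fraction p: EISF(q) = p + (1-p) [3 j1(qR)/(qR)]^2."""
    x = q * R
    return p + (1.0 - p) * (3.0 * spherical_jn(1, x) / x) ** 2

# Synthetic EISF for R = 4.1 A and 30% immobile hydrogens (illustrative)
q = np.linspace(0.3, 1.9, 15)                  # in A^-1
eisf = sphere_eisf(q, 4.1, 0.3)
(R_fit, p_fit), _ = curve_fit(sphere_eisf, q, eisf, p0=(2.0, 0.5))
print(R_fit, p_fit)                            # recovers 4.1 and 0.3
```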
In a later study, Bu et al. (2001) compared the nanosecond and picosecond dynamics of native, molten and denatured BLA in solution at 60 and ~15 to 20 mg ml−1, respectively. Two dynamical contributions (two Lorentzian functions) were used to describe the TOF data. The fast contribution was identified with the motion of the side-chains, whereas the other one was attributed to the center-of-mass motion of the protein and was not discussed. The picosecond dynamics showed a reduced potential barrier to side-chain torsional motion in the molten globule and in the denatured protein. Importantly, although faster internal dynamics may be expected also because of the lower concentration of the molten and denatured protein samples compared with that of the native protein solution, the urea-denatured BLA showed a less restricted long-range motion than both the native protein and the molten globule, the latter at a comparable concentration. Unlike the TOF data, the NBS data were interpreted in terms of one single dynamical contribution, under the assumption that the center-of-mass diffusion was too fast to be detected. Based on the coherent scattering intensity, the results suggested the presence of distinct dynamical regimes and strongly correlated density fluctuations within the native protein and the denatured states: an unusual dynamical behavior not observed for chain-like polymers was reported, which was suggested to be due to strongly non-local, attractive forces within the proteins. Finally, the analysis of the q-dependence of the scattering intensity suggested that both a residual secondary structure and long-range interactions on the scale of the tertiary structure fluctuate over several hundred picoseconds in both the native and the highly denatured states of α-lactalbumin (Bu et al., 2001).
BLA in the native and molten state was further studied combining QENS and MD simulations (Tarek et al., 2003). The protein concentrations were ~60 mg ml−1 for the native and ~20 mg ml−1 for the molten globule. The study confirmed the increase of the average internal motion of the molten state, and showed that such a motion is characterized by a high degree of heterogeneity, which is more pronounced in MBLA compared with BLA. The simulations showed differences of up to an order of magnitude between the amplitude of motions in highly structured parts of the protein, compared with unstructured regions (loops connecting secondary structural elements, termini, unfolded chain segments of the molten globule). Thereby, it was demonstrated that the increased average sub-nanosecond dynamics of the molten globule was mainly due to additional motions in the region of the protein that unfolds upon formation of the molten globule (Tarek et al., 2003).
The same conclusion was drawn earlier by Russo et al. (2002) in a study on the temperature dependence of the picosecond internal dynamics of an all-β protein, neocarzinostatin, in solutions at 58 and 42 mg ml−1. The authors observed that the number of protons undergoing detectable diffusive motions increased from 33% at 293 K to ~90% upon heat-induced partial unfolding at 344 K. Furthermore, a decrease of the average volume accessible to the atoms upon unfolding was reported. It was pointed out by the authors of this study that 33% is very close to the fraction of protons contained in the side-chains of random-coil structures, and it was suggested that, at room temperature, the only detectable diffusive movements are those involving the side-chains of random-coil structures. Thus, the increased fraction of mobile atoms at higher temperature could be explained by the onset of picosecond dynamics in the fraction of backbone and β-sheet side-chain hydrogen atoms, which were seen as immobile at room temperature. These atoms would still be in a very confined environment until the protein is fully unfolded, which would account for the decrease of the average accessible volume, as most of the additional motion would have to be very restricted (Russo et al., 2002).
A later study by Gibrat et al. (2008) on apo-calmodulin also reported the observation of a dynamical transition upon protein thermal denaturation. The transition was characterized by a decrease of the confinement of hydrogen atoms and a decrease of the fraction of immobile protons. Moreover, the data analysis revealed an increase of dynamical heterogeneity, but also a decrease of the 'most probable volume explored' (Gibrat et al., 2008). It was proposed that the distance between an atom and the protein backbone is more important for the dynamics than the solvent exposure of the residue, or a related classification of the residue as belonging to the surface or the interior of the protein. In fact, if the exposure of the residue to the solvent determined the dynamics, the latter should be less homogeneous in the folded state, where only a fraction of side-chains is solvent-exposed, than in the unfolded state, where all residues become exposed to the solvent (Gibrat et al., 2008). This hypothesis is consistent with the previously mentioned NMR observations indicating the absence of correlations between the amplitude of sub-nanosecond dynamics of side-chains containing methyl groups and their solvent accessibility, as well as their depth and their local packing density (Igumenova et al., 2006).
An increase of the radius as obtained from the diffusion in a sphere model (cf. section 'Modeling and analysis') was observed instead for α-amylase by Fitter (2003a) at 30 mg ml−1 from the folded (R = 1.2 Å) to a pH-induced unfolded conformation (R = 1.8 Å). In both states, at 303 K, an average correlation time of ~4.4 ps and a mean square displacement $\langle u^2 \rangle $ = 0.15 ± 0.03 Å2 were found (Fitter, 2003a).
At higher temperatures, while no variation of the correlation time within the experimental error was observed, an increase of R was detected (Fitter, 2003b). Such a change was more pronounced in the unfolded conformation, the radius of which increased from R ≃ 1.8 Å at 303 K to R ≃ 2.4 Å at 343 K, as opposed to that of the native protein, which increased from R ≃ 1.2 Å at 303 K to R ≃ 1.4 Å at 343 K. The conformational entropy change between the two states upon heating was also estimated on the basis of the neutron scattering data, and a significant increase upon heating was demonstrated. As a consequence, it was concluded that, since an increasingly larger part of the conformational space can be explored by confined motions with increasing temperature in the unfolded compared with the natively folded protein, the conformational entropy change contributes significantly to thermal unfolding (Fitter, 2003b). This result is consistent with NMR experiments on α-lactalbumin showing that also conformational fluctuations of the backbone on microsecond to millisecond timescales are more strongly enhanced in molten globules than in the native protein with rising temperature (Ramboarina and Redfield, 2008).
Jansson and Swenson (2008) investigated the dynamical behavior of both the protein Hb and its surrounding water during thermal denaturation, using EINS to complement modulated-temperature differential scanning calorimetry and frequency-dependent conductivity measurements. Their analysis of the elastic intensity as a function of temperature throughout thermal denaturation suggested that the unfolding of the secondary structure reduces the number of water molecules mobile on a 50–100 ps timescale – probably because of an increased number of water molecules interacting with the larger exposed protein surface – whereas the flexibility of the protein was found to be enhanced by denaturation. The important role of the solvent properties at the hydration interface in determining the region of the temperature–pressure thermodynamic plane in which proteins are stable is further demonstrated by multiple experimental and computational studies (Bellissent-Funel et al., 2016; Bianco et al., 2017).
A combined QENS and EINS study was performed on the thermal denaturation of BSA in highly concentrated aqueous solution (Hennig et al., 2012). The apparent mean square displacement $\langle u^2\rangle$ was decomposed into the global diffusive contribution $u_{\rm diff}^2$ and the internal part $\langle u^2\rangle - u_{\rm diff}^2$ comprising the vibrations and the subunit diffusive motions. Upon increasing T, $\langle u^2\rangle$ was characterized by a linear increase up to T = 325 K, followed by a sharp decrease in the range 331 K < T < 354 K and a second linear increase up to 370 K. This observation was interpreted as a result of the transition from a liquid solution to a cross-linked gel-like state: as long as the proteins are free, they diffuse increasingly fast with rising temperature. When denaturation starts, the diffusion is hindered by the formation of a growing network. When the formation of the cross-linked network of denatured proteins is complete, a further temperature increase enhances the dynamics of the subunits of the proteins, while their center-of-mass diffusion is restrained. The MSD of the internal dynamics $\langle u^2\rangle - u_{\rm diff}^2$ showed a slow linear increase up to the denaturation temperature T = 343 K, after which it increased linearly with a higher slope, indicating an enhanced flexibility after denaturation, consistent with a more marked increase of the average relaxation rate measured by fluorescence spectroscopy on rubredoxin mutant A51C (Santos et al., 2010) and on human serum albumin (Yadav et al., 2014) above ~343 K. In particular, Santos et al. (2010) observed an additional fast relaxation (~0.1 ns) above 343 K. The force constants were calculated by Hennig et al. (2012) in the two regimes: before denaturation, $\langle k_1\rangle$ = 4.1 × 10−2 N m−1, consistent with the force constant measured by Wood et al. (2008) in a solution of ribonuclease A, while after denaturation $\langle k_2\rangle$ = 0.1 × 10−2 N m−1.
Following this investigation, a quasi-elastic NBS study on BSA in solution by Grimaldo et al. (2015a) evidenced dynamical processes on three distinct timescales. The authors identified one component with the translation and rotation of the entire protein, after finding it quantitatively consistent with theories of effective colloidal hard spheres, for all temperatures where the protein is in its native conformational state. Above the denaturation temperature T d, the corresponding diffusion coefficient was found to drop, consistent with the result by Hennig et al. (2012), indicating that the motion of the entire macromolecule is strongly obstructed by cross-linking or entanglement (Grimaldo et al., 2015a). The two remaining processes were associated with internal dynamics and interpreted in terms of a model of two switching diffusive states. It was hypothesized that these two internal diffusive states could be assigned to a slow backbone fluctuation and a fast side-chain motion, respectively. In this picture the amplitude of backbone fluctuations was found to grow with rising T, and the associated diffusion coefficient increased more steeply around and above T d, which was attributed to the melting of secondary structures. An effective force constant of the backbone $\langle k \rangle $ = 0.09 ± 0.01 N m−1 was extracted from the data and found consistent with independent measurements (see also Table 4). Finally, the number of mobile side-chains was found to increase sharply at T d, while their average dynamics and accessible volume exhibited only little or no variations (Grimaldo et al., 2015a).
The effect of chemical denaturation on BSA at lower concentrations (~30 mg ml−1) induced by 6 M guanidinium hydrochloride (GndCl) at T = 295 K was studied with TOF and NBS by Ameseder et al. (2018a, 2018b). Moreover, the effect of the reduction of disulfide bridges in denatured BSA, induced by 6 M GndCl and 150 mM β-mercaptoethanol (β-met), was also investigated. The data were fitted with two different models for the internal dynamics, namely a KWW function (Williams and Watts, 1970) and a Brownian oscillator (Volino et al., 2006) (see section 'Localized internal dynamics'). From the former, clear differences in the stretching factor β between the native and the denatured proteins were observed both with TOF and NBS. In the unfolded state, β was found between 0.7 and 0.8, in qualitative agreement with a Zimm-like dynamics (β ~ 1/2 to 2/3) (Ameseder et al., 2018b). From further analysis of the q-dependence of the relaxation rates 1/τ it was concluded by the authors that, in the native protein, the dynamics is heterogeneous due to a fast anomalous diffusion ($D_{\rm int}^{\rm fast} = 47.7 \pm 6.1$ Å2 ns−1) and a slower normal diffusive motion ($D_{\rm int}^{\rm slow} = 26.1 \pm 1.6$ Å2 ns−1) (Ameseder et al., 2018b). Also in denatured BSA a heterogeneous dynamics was observed, but in this case it was found consistent with a distribution of relaxation times arising only from slow, normal diffusive processes with diffusion coefficients between 8 and 13 Å2 ns−1. The additional reduction of disulfide bridges in the denatured protein was found to lead to only a slightly increased flexibility. However, a later NSE study by Ameseder et al. (2018a) indicated that the presence of disulfide bridges leads to a suppression of low-frequency Zimm modes (see section 'Relations of protein dynamics to structure: from globular to intrinsically disordered proteins'). Although incapable of distinguishing heterogeneous dynamics on the same timescale, as noted by the authors (Ameseder et al., 2018b), the Brownian oscillator model yielded reasonably similar results, with $D_{\rm int}^{\rm fast} = 95.8 \pm 1.8$ Å2 ns−1 and $D_{\rm int}^{\rm slow} = 24.5 \pm 1.5$ Å2 ns−1 in the native protein and diffusion coefficients between 14 and 21 Å2 ns−1 in denatured BSA. As remarked by Ameseder and collaborators, $D_{\rm int}^{\rm fast}$ and $D_{\rm int}^{\rm slow}$ obtained with this model are on the same order of magnitude as the diffusion coefficients obtained from the switching model applied before on BSA at the same temperature (Grimaldo et al., 2015a), and $D_{\rm int}^{\rm fast}$ is in good agreement with a 'slow' dynamics of amino acid side-chains in folded proteins observed by Monkenbusch et al. (2015) and characterized by effective diffusion coefficients of around 70–80 Å2 ns−1 (Ameseder et al., 2018b) (see also section 'Combination of neutron spectroscopy techniques: alcohol dehydrogenase'). The apparent disappearance of the fast dynamics upon denaturation is to be interpreted together with the drastic increase of the fraction of atoms moving on both the timescales accessible by TOF and NSE, respectively, as obtained from the Brownian oscillator model. As suggested by Ameseder et al. (2018b), in the denatured protein the slow process may become the predominant contribution to the dynamics (e.g. if the additional mobile atoms were mostly slow), practically obscuring the faster one.
The authors further suggested that, in BSA unfolded by GndCl, the conformational dynamics relevant for the sampling of the conformational space is governed by diffusion of the entire protein backbone (Ameseder et al., 2018b). Finally, the root MSD was found to decrease when measured with TOF, but to increase slightly when measured with NBS, possibly also as an effect of the increased mobility of the slow atoms.
Stadler et al. combined TOF and NBS measurements to outline differences in the global and internal dynamics of Mb between folding states characterized by different α-helical contents (Stadler et al., 2016a). The global diffusion appeared to be slightly faster for the folded apo- and holo-Mb compared with the acid-denatured unfolded apo-Mb. Molten globule Mb showed a transition from a liquid-like to a gel-like behavior in the range of 3–5% volume fraction, reflected by a drop of the global diffusion coefficient. The internal dynamics seen by the high-resolution NBS spectrometer was found to be localized, with a relaxation time τ ≈ 200 ps independent of the folding state. On the lower-resolution spectrometers, a jump-diffusion signature was observed. While the jump-diffusion coefficients showed no clear trend across the folding states, the residence time τ_0 of the folded state was larger than that of the molten and unfolded states. This observation suggested that secondary structure confines and temporarily arrests side-chain motions, which is further supported by the geometric parameters from the EISF. Both the confinement radius and the fraction of immobile protons evidenced more flexibility in the molten and unfolded states, which can be connected to changes in conformational entropy relevant for protein folding (Stadler et al., 2016a).
Several additional studies with complementary non-neutron-based techniques were carried out in order to characterize the dynamics of proteins in native, molten and denatured states (Buck et al., 1996; Bai et al., 2000; Dilg et al., 2002; Kuzmenkina et al., 2005; Nienhaus, 2006; Ramboarina and Redfield, 2008; Santos et al., 2010; Dutta et al., 2014; Yadav et al., 2014; Ghosh et al., 2015; Mondal et al., 2015; Aznauryan et al., 2016). In general, FRET (Kuzmenkina et al., 2005; Nienhaus, 2006; Yadav et al., 2014; Mondal et al., 2015) as well as NMR (Buck et al., 1996; Bai et al., 2000; Ramboarina and Redfield, 2008; Dutta et al., 2014) and Mössbauer (Dilg et al., 2002) studies indicate a higher flexibility and dynamic heterogeneity of denatured proteins and molten globules compared with natively folded proteins on timescales ranging from nanoseconds (Buck et al., 1996; Ramboarina and Redfield, 2008; Dutta et al., 2014; Yadav et al., 2014; Mondal et al., 2015) to micro- to milli-seconds (Buck et al., 1996; Bai et al., 2000; Dutta et al., 2014), and even several seconds, as evidenced by a significant 'dynamic' heterogeneity of the structure of the unfolded proteins (Kuzmenkina et al., 2005; Nienhaus, 2006). By analyzing the average fluorescence lifetime of labeled HSA at different GndCl concentrations and temperatures, Yadav et al. (2014) also suggested that chemical denaturation induced by GndCl involves two intermediate states, which are not observed during thermal denaturation. These intermediates were not visible with CD or through monitoring changes in the hydrodynamic radius of the protein. Interestingly, Buck et al. (1996) found that, although denaturation leads to an overall more pronounced backbone dynamics, the most mobile residues in the native protein remain more mobile than the average also in the denatured protein, in the absence of secondary structure. In contrast to the aforementioned results, Ghosh et al. (2015) found that denaturation of HSA induced by cholesterol causes a slowing down of the side-chain dynamics on the microsecond timescale. Hence, the type of denaturation also seems to play an important role in the way the dynamics changes compared with the native protein. The characterization of these changes on different timescales is crucial for obtaining a complete picture of such processes.
In summary, most experiments performed with different techniques accessing protein internal dynamics on an extremely large range of timescales from picoseconds to several seconds indicate that molten globules and denatured proteins are characterized by an increased flexibility, a loss of local confinement and a larger fraction of mobile atoms (Buck et al., 1996; Receveur et al., 1997; Kataoka et al., 1999a; Bai et al., 2000; Bu et al., 2000, 2001; Dilg et al., 2002; Russo et al., 2002; Tarek et al., 2003; Fitter, 2003a, 2003b; Jansson and Swenson, 2008; Gibrat et al., 2008; Ramboarina and Redfield, 2008; Santos et al., 2010; Hennig et al., 2012; Dutta et al., 2014; Yadav et al., 2014; Mondal et al., 2015; Grimaldo et al., 2015a; Aznauryan et al., 2016; Stadler et al., 2016a; Ameseder et al., 2018b). Moreover, the dynamics of the mobile atoms is generally characterized by an increased dynamic heterogeneity in the molten and denatured structures compared with the native conformations (Bai et al., 2000; Tarek et al., 2003; Kuzmenkina et al., 2005; Nienhaus, 2006; Ramboarina and Redfield, 2008; Santos et al., 2010; Dutta et al., 2014; Mondal et al., 2015; Aznauryan et al., 2016). This, however, might not be true in all denaturing environments (Ghosh et al., 2015). The combination of simulations with neutron scattering and NMR experiments suggests that these changes are related to the decrease of internal hydrogen bonds (Kataoka et al., 1999a; Igumenova et al., 2006). Furthermore, evidence was found from both neutron scattering and NMR suggesting that the conformational entropy change contributes significantly to thermal unfolding (Fitter, 2003b; Ramboarina and Redfield, 2008). In addition, a neutron scattering study suggested that the distance of atoms from the protein backbone is more important than the solvent exposure or the distance from the surface in determining the average dynamics (Gibrat et al., 2008), consistent with NMR studies (Igumenova et al., 2006). Also in light of such observations, a model was proposed to interpret NBS data on the thermal denaturation of BSA in solution, suggesting that, while the number of mobile side-chains increases sharply upon denaturation, the acceleration of their motion with temperature is smoother and more gradual, whereas a diffusion coefficient associated with backbone fluctuations was found to increase more quickly in the denatured protein (Grimaldo et al., 2015a).
Finally, we note that FRET and NMR can also, in some cases, follow fast processes such as protein folding almost in real time, probing not only the dynamics of the native, unfolded and intermediate states, but also the protein folding kinetics (Dobson and Hore, 1998; van Nuland et al., 1998; Mok et al., 2003; Dyson and Wright, 2004; Schuler and Eaton, 2008; Goluguri and Udgaonkar, 2016; Takahashi et al., 2016).
In light of the results presented above, an obvious question is how the specific hierarchical structure of different proteins affects their subnanosecond dynamics.
In an attempt to answer this question, Gaspar et al. (2008) investigated the dynamics faster than 22 ps of well-folded proteins and IDPs, and found that the rigidity, as obtained from the fraction of mobile atoms, changes depending on the secondary structure. The most rigid protein was concanavalin A, composed of β-sheets. Mb, the structure of which is made of α-helices, was found to be less rigid than concanavalin A, but more rigid than the α/β-protein Lys. In contrast to this, Ramboarina and Redfield (2008) obtained from NMR that the α-domain of the α-lactalbumin molten globule has significantly more restricted pico- to nano-second backbone dynamics than the β-domain, whereas, for instance, Mandel et al. (1995) reported a more complex pattern for E. coli RNase H, involving enhanced mobility of a band of residues across three parallel β-sheets, alternating high and low picosecond mobility in another β-sheet, an intricate pattern in α-helices, and an enhanced mobility of the loops surrounding the active site. Differences between these studies suggest that further investigation is needed to understand the impact of the secondary structure on localized internal dynamics, but may be at least partially due to different sampling. While TOF spectroscopy predominantly measures the average picosecond dynamics of all the side-chains, 15N NMR relaxation measurements provide an order parameter at the residue level indicating how restricted the pico- to nano-second backbone dynamics is. Hence, differences may arise from the different types of dynamics the techniques are sensitive to. Alternatively, such seemingly contradictory trends may indicate that the primary sequence of the protein also, if not mostly, plays an important role in determining the pico- to nano-second dynamics, as suggested by Buck et al. (1996) and by a statistical study by Goodman et al. (2000). The latter study suggested that backbone dynamics on such a timescale is only weakly correlated with the secondary and tertiary structure, whereas amino acids with small side-chains tend to have greater backbone flexibility than those with large side-chains. Moreover, the analysis showed that the motions of a given NH group may be restricted by the presence of large amino acid side-chains in the two preceding or two following amino acid residues in the primary sequence. In addition, Cilia et al. (2013) have more recently demonstrated that the backbone dynamics as observed from NMR can be predicted remarkably well based solely on the amino acid sequence. On the virtually infinite timescale probed by X-ray crystallography, however, the flexibility as measured by the atomic mean square displacements was found to be inversely proportional to the number of noncovalent neighbors within a local region of ~1.5 nm3 (Halle, 2002). Taken together, these two results suggest that the primary structure may have a larger impact on shorter timescales, whereas on sufficiently long timescales all the space sterically available to the atoms is eventually explored. A further hint of the importance of the primary sequence is given by an NMR study by Tada et al. (2002), in which two segments with the same type of secondary structure (a distorted α-helix) were shown to have markedly different dynamics in two homologous proteins.
Nevertheless, the primary structure cannot be the only factor regulating the atomic motion, as demonstrated by the studies reviewed below and in the 'Collective internal motions in proteins' section, reporting changes of the dynamics for instance upon ligand binding, as well as by the studies in the 'Comparison of internal protein dynamics in native, molten and denatured states' section, where differences upon denaturation were observed. As a matter of fact, the sensitivity of dynamics to local structural changes even made it possible to address effects of photo-activation on the dynamics of a light, oxygen, voltage (LOV) photoreceptor from Pseudomonas putida using TOF and NBS (Stadler et al., 2016b). Upon photo-activation, the overall structure remains similar and compact, as reflected in an unchanged global diffusion. The internal dynamics displays a slower relaxation in the light state on the timescale of a few picoseconds accessible to TOF, whereas the slower internal dynamics around 100–200 ps accessed by NBS shows no significant trend. From the dynamical confinement based on the EISF, effective force constants were calculated. On the TOF timescale of a few picoseconds, 〈k〉 = 0.28 ± 0.07 and 〈k〉 = 0.16 ± 0.03 N m−1 were found for the light and dark states, respectively, while a reversed behavior was observed on the NBS timescale of hundreds of picoseconds, with 〈k〉 = 0.018 ± 0.003 and 〈k〉 = 0.10 ± 0.02 N m−1 for the light and dark states, respectively. This behavior was interpreted in a picture where local forces act strongly on the motions on picosecond timescales, but effectively become less determining when approaching nanosecond timescales, resulting in the significant loss of stiffness. The pronounced effect for the photo-activated state has been suggested to be related to the formation of a specific covalent bond that stabilizes the light state. Overall, the significant changes of dynamics and stiffness of the protein on subnanosecond timescales were suggested to be important for signaling in the LOV photoreceptor family (Stadler et al., 2016b). In any case, even if a complete picture of the effect of the local arrangement of amino acids on protein dynamics is still missing, NMR measurements generally indicate that at least interconnecting loops are more mobile than other secondary structures (Keeler et al., 2003; Shi et al., 2009; Fenwick et al., 2016). Consistent with these observations, the intrinsically disordered casein proteins appeared to undergo additional motions compared with well-folded proteins, and were associated by Gaspar and co-workers with the smallest rigidity (Gaspar et al., 2008).
Further information relating structural elements with protein internal dynamics was obtained by Ameseder et al. (2018a) in a combined NSE and small-angle neutron scattering (SANS) study of BSA denatured by GndCl. The dynamics of the chemically denatured BSA was interpreted in terms of the Zimm theory from polymer physics (Edwards et al., 1986) including an internal friction, which had previously been used to model the dynamics of IDPs observed by FRET (Soranno et al., 2012) and NSE (Stadler et al., 2014b) (see also section 'Collective internal motions in proteins'). As noted by Ameseder et al. (2018a), at the atomistic level the origin of internal friction in unfolded proteins had been investigated before by computer simulations (Echeverria et al., 2014) and was attributed to concerted dihedral rotations in the polypeptide chain. The NSE data by Ameseder et al. (2018a) indicated that the structural expansion induced by denaturation leads to a reduction of internal chain friction and a suppression of low-frequency Zimm modes acting on long length scales. A further comparison of denatured BSA in the absence and in the presence of β-met – which causes the rupture of the disulfide bonds – suggested that intact disulfide bridges within the proteins block longer-wavelength Zimm modes. Hence, the authors concluded, the structural expansion and other structural constraints affect both the internal friction and the low-frequency Zimm mode suppression (Ameseder et al., 2018a).
In order to understand the influence of the quaternary structure on the nanosecond and subnanosecond protein dynamics, neutron scattering experiments were performed on Hb solutions in two states, namely deoxyhemoglobin (deoxyHb) in the T-quaternary conformation and carbonmonoxyhemoglobin (HbCO) in the R-quaternary conformation (Caronna et al., 2005). As a solvent, 65% glycerol-d8/D2O was used. EINS showed no differences in the sub-nanosecond MSD over the entire temperature range from 20 to 300 K. Similarly, the MSD showed no difference up to ~250 K on the nanosecond timescale. Above that temperature, deoxyHb is characterized by an MSD ~15% smaller than that of HbCO. It was concluded that the quaternary structure does not affect the sub-nanosecond dynamics, but it does influence the nanosecond dynamics. Also, the q-averaged QENS spectra at room temperature showed a stronger quasi-elastic broadening for deoxyHb than for HbCO, indicating a faster dynamics of the former conformation (Caronna et al., 2005). In light of recent studies showing how the protein internal dynamics can be modulated by the binding of a ligand even without major structural changes (see e.g. Lal et al. (2017) and Matsuo et al. (2017) below, and section 'Collective internal motions in proteins'), it seems, however, not easy to disentangle the effect of quaternary structure changes from a possible allosteric effect.
Lal et al. (2017) employed NSE at scattering vectors 0.1 Å−1 < q < 1 Å−1 to study allosteric effects in Hb. They observed a change in the dynamics of Hb upon ligandation of the allosteric effector inositol hexaphosphate (IHP), which leads to a lowered oxygen affinity in both deoxy-Hb and HbCO. In particular, it was shown that binding of IHP to HbCO results in an increased rate of coordinated motions of Hb subunits relative to one another, even though little if any change in the protein quaternary structure could be observed by wide-angle X-ray scattering, suggesting that enhanced dynamic motions may be responsible for the lowered oxygen affinity triggered by IHP ligandation. In addition, rather surprisingly, the increase of large-scale dynamics seemed to be coupled with a decrease in the average magnitude of higher frequency modes of individual residues (Lal et al., 2017).
Four forms of pepsin – a kinetically stable (Np), a thermodynamically stable (Rp), a partially unfolded (Ip) and an inhibitor-bound (NpP) state – were investigated by neutron TOF and backscattering. The aim of the study was to determine whether different states of the same enzyme are characterized by different picosecond to nanosecond internal dynamics (Dee et al., 2011). By comparing solutions at 50 mg ml−1 of Np, Rp and Ip, and at 100 mg ml−1 of Np and NpP, differences between the states could indeed be identified. In particular, the authors found increasing flexibility in the order Rp < Np < Ip, and therefore concluded that kinetic stabilization does not necessarily correspond to a reduction in picosecond diffusive motions. The TOF measurements yielded, especially at high q, significant differences between the quasi-elastic broadenings of Rp and Np. The variations between Np and Ip on the picosecond timescale were more subtle. However, on the nanosecond timescale, Ip was characterized by faster dynamics, especially at short length scales. Instead, no significant variations were observed between Np and NpP (Dee et al., 2011).
Matsuo et al. (2017) used QENS to investigate the picosecond dynamics and its changes upon binding of Ca2+ in the troponin core domain (wtTn-CD), which regulates cardiac muscle contraction in a Ca2+-dependent manner, and in the mutant TnT2 (K247R-Tn-CD), characterized by a functional aberration. The protein solutions were prepared at 20 mg ml−1 in D2O and measured at T = 300 K. Both Ca2+-binding to the wtTn-CD and the mutation (in the absence of Ca2+) were found to decrease the residence time of H-atoms from 3.25 ± 0.07 to 2.91 ± 0.06 ps, as obtained by fitting the data with a jump-diffusion model. The mutant residence times in the absence and presence of Ca2+ were identical (2.88 ± 0.06 ps) and equal to that of wtTn-CD in the presence of the calcium ions. Instead, while for wtTn-CD the amplitudes of motion were essentially unchanged by the addition of calcium, those of K247R-Tn-CD showed a significant increase, indicating increased flexibility. It was therefore suggested by the authors that the short residence times are essential for the correct regulatory function of the protein, and that the functional aberration of this specific mutant may be due to an excessively high flexibility. Matsuo and collaborators further analyzed their observations in light of results from NMR (Blumenschein et al., 2005; Cordina et al., 2014), hydrogen/deuterium exchange mass spectrometry (HDX) (Kowlessur and Tobacman, 2010a, b, 2012) and electron paramagnetic resonance (EPR) (Nakamura et al., 2005; Aihara et al., 2006), and argued that, although Ca2+-binding causes a selective slowing down of certain amino acids and increases the rigidity around the binding site, the enhanced mobility seen elsewhere is sufficiently large to make the atomic motion on the picosecond timescale, averaged over all non-exchangeable H-atoms, faster (Matsuo et al., 2017). This study illustrates well the potential of combining different techniques. While incoherent neutron spectroscopy is well suited for determining the overall changes in residence times and amplitudes of localized motions, it cannot easily determine how such changes are distributed along the polypeptide chain. This information is more easily obtained by other techniques such as NMR, HDX and EPR, from which, on the other hand, it is not simple to obtain information on the dynamics averaged over the entire protein.
In summary, the experiments reported above give a clear indication that mean fast internal motions can be affected by changes in the protein state, which are often related to at least small structural changes (Blumenschein et al., 2005; Caronna et al., 2005; Nakamura et al., 2005; Aihara et al., 2006; Kowlessur and Tobacman, 2010a, b, 2012; Dee et al., 2011; Cordina et al., 2014; Stadler et al., 2016b; Matsuo et al., 2017). Several NMR studies reported a higher mobility of interconnecting loops compared with more structured parts of the proteins (Keeler et al., 2003; Shi et al., 2009; Fenwick et al., 2016). Some studies suggest that, rather than the secondary structure, the primary sequence and the neighboring amino acids are crucial in determining the dynamics of each residue (Buck et al., 1996; Goodman et al., 2000; Tada et al., 2002; Cilia et al., 2013), and recent MD simulations even suggested the presence, in protein kinases, of transient clusters of residues moving in a concerted manner, which follow neither the subdomain structure nor the secondary structure elements (Kornev and Taylor, 2015). Differences between the dynamics of proteins in different states as observed by neutron spectroscopy can occur on both the pico- and nano-second timescales (Dee et al., 2011), but also either predominantly on the nanosecond timescale (Caronna et al., 2005; Dee et al., 2011) or mostly on the picosecond timescale (Stadler et al., 2016b). Gaining a deep understanding of the physical reasons behind these differences, as well as of the relation between the hierarchical protein structure and dynamics, will require more studies spanning several timescales and observing different aspects of the dynamics of different protein components. In this context, the combined use of complementary techniques such as neutron scattering and NMR will be crucial.
Protein dynamics are affected by environmental conditions such as the prevailing temperature and pressure. The following two sections focus on the effects of these two control parameters on protein dynamics. Particular interest emerges from extremophile organisms, since their mechanisms of adaptation to high temperatures and pressures are of fundamental interest for understanding the essential parameters of protein function.
In this section, we focus on the effects of high pressure on protein dynamics, which, owing to the difficulty of high-pressure experiments, could be studied by neutron spectroscopy only relatively recently (Ortore et al., 2009; Appavou et al., 2011; Erlkamp et al., 2015; Marion et al., 2015; Shrestha et al., 2015; Martinez et al., 2016; Al-Ayoubi et al., 2017; Golub et al., 2017).
The investigation of the influence of pressure on the dynamics of human Hb at ~320 mg ml−1 by Appavou et al. (2011) demonstrated a subtle pressure-induced slowing down of the protein fluctuations, as evidenced by a slight increase of the proton relaxation time, from 3.36 ps at atmospheric pressure to 3.71 ps at 2 kbar. The authors hypothesized that the change may be attributed to a rearrangement of water molecules in the hydration shell of the protein, leading to stronger geometrical constraints on the motions of the side-chain residues. In addition, the global diffusion was observed to slow down with increasing pressure, which was tentatively explained by the authors as due to the increase of solvent viscosity and to the formation of Hb pentamers and hexamers (Appavou et al., 2011).
In a combined X-ray and neutron scattering experiment, the high-pressure-induced changes in the interactions, structure and dynamics of egg-white Lys in solution (10% w/w) were investigated by Ortore et al. (2009). The neutron scattering data indicated that the global and the local Lys dynamics change close to a threshold pressure at which the mass density of the protein hydration water undergoes a soft transition, suggesting that these effects are related (Ortore et al., 2009). In particular, the MSD was found to decrease relatively fast up to ~700 bar and then to decrease more slowly. A change in the q-dependence of the quasi-elastic broadening associated with internal dynamics was also observed above 1.5 kbar and attributed to a change of the type of dynamics of the protein side-chains. Moreover, the fraction of immobile atoms and the confinement radii were observed to change from p = 0.68 and R = 3 Å at 1 bar to p = 0.87 and R = 2 Å at 1.5 kbar, and p = 1 at 2 kbar. The center-of-mass diffusion, instead, was constant up to 1 kbar and then started decreasing linearly up to 2 kbar (Ortore et al., 2009).
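To make the reported parameters concrete, the following minimal Python sketch (assuming the standard Volino–Dianoux model of diffusion confined to a sphere, with an immobile fraction p) evaluates the EISF for the values quoted above; the q-grid is an arbitrary illustration:

    import numpy as np
    from scipy.special import spherical_jn

    def eisf_sphere(q, p, R):
        """Volino-Dianoux EISF for diffusion in a sphere of radius R (Å) plus an
        immobile fraction p: A0(q) = p + (1 - p) * [3 j1(qR) / (qR)]^2."""
        x = np.maximum(q * R, 1e-8)          # avoid 0/0; A0 -> 1 as q -> 0
        return p + (1 - p) * (3 * spherical_jn(1, x) / x) ** 2

    q = np.linspace(0.2, 2.0, 10)            # Å^-1, typical QENS range
    print(eisf_sphere(q, p=0.68, R=3.0))     # values reported at 1 bar
    print(eisf_sphere(q, p=0.87, R=2.0))     # values reported at 1.5 kbar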
EINS was employed to investigate the influence of high pressure (up to 4 kbar) on the internal subnanosecond dynamics of Lys in solution at 80 and 160 mg ml−1 (Erlkamp et al., 2015; Al-Ayoubi et al., 2017). At 80 mg ml−1, the MSD was found to decrease from ~1.4 to ~1.0 Å2 in the range 1–2000 bar, indicating that pressure induces a loss of protein mobility. No further change was observed from 2 to 4 kbar. At 160 mg ml−1, instead, no changes in the MSD were observed up to 1000 bar. Above this pressure, up to 4 kbar, a slow decrease of the MSD occurs, from ~0.9 to ~0.75 Å2. These results, further supported by Fourier-transform IR spectroscopy, indicate that (a) crowding reduces the protein sub-nanosecond dynamics, and (b) crowded conditions stabilize the protein against pressure changes (Erlkamp et al., 2015) (see also section 'In vitro studies on the effect of macromolecular crowding on protein dynamics' for the effect of crowding on protein dynamics). This result may explain the different qualitative trends of the results by Appavou et al. (2011) and Ortore et al. (2009), the former detecting only a subtle change in the dynamics of Hb at ~320 mg ml−1 up to 2 kbar, the latter observing a relatively fast damping of the dynamics of Lys at ~100 mg ml−1 up to 700 bar.
Recent experiments combined SANS and EINS to investigate the effects of pressure on ~400 mg ml−1 human acetylcholinesterase in D2O (Marion et al., 2015). A four-step model was proposed, based on different regimes of the 100 ps dynamics:
(i) From 1 bar to 1 kbar, only minor changes are visible in the measured MSD, even though a clear compression of the enzyme structure by about 11% was detected by SANS up to a pressure of 900 bar (Marion et al., 2015).
(ii) In the range of 1–3 kbar, a marked decrease of the MSD is observed, indicating that local degrees of freedom are strongly reduced. This was attributed to a reduction of the cavities inside the inner parts of the proteins as a consequence of pressure (Marion et al., 2015).
(iii) The MSD at 1750 bar clearly deviates from the trend defined by the points at lower and higher pressure. At these pressures, the formation of a molten globule was expected, which is associated with an increased protein flexibility and a larger MSD (Marion et al., 2015).
(iv) In the range 3–6 kbar, the slope of the MSD as a function of pressure was found to decrease again. This was interpreted as the result of a competition between two effects: on the one hand, the degrees of freedom at the atomic scale are reduced; on the other hand, large parts of the protein are exposed to water as a consequence of unfolding, and an increasing number of inner cavities are invaded by water, leading to an increase in the degrees of freedom (Marion et al., 2015).
A picture consistent with this interpretation was obtained from phosphorescence relaxation measurements accessing the microsecond–millisecond timescale (Cioni and Gabellieri, 2011). The pressure profile of the phosphorescence lifetime of Trp-48 in native azurin indicated an initial tightening of the protein core up to ~3 kbar, presumably due to the predominance of cavity reductions, followed by a progressive loosening when further increasing pressure, reflecting enhanced internal hydration (Cioni and Gabellieri, 2011). A comparison with the profile of two mutants with single-point cavity-forming mutations indicated that the more flexible the structure, the shorter the compaction phase. For the most flexible protein structure, there was even no sign of compaction, as the lifetime was found to decrease monotonically above 0.5 kbar (Cioni and Gabellieri, 2011).
Pressure-induced changes of protein dynamics have been used to address functional differences between low-density lipoprotein in its normal healthy (N-LDL) and triglyceride-rich form (TG-LDL) (Golub et al., 2017). While the N-LDL dynamics remained rather similar under pressure, QENS scans of TG-LDL evidenced a slowing down of at least some components of the protein, also reflected in a dramatic decrease of the MSD observed in EINS. The authors also observed small adaptations of the molecular shape of TG-LDL via SANS, whereas N-LDL was compressed with an overall constant shape (Golub et al., 2017).
A recent paper addressed the adaptive strategies of extremophile bacteria to deep-sea pressures (Martinez et al., 2016). Based on a detailed analysis of QENS spectra at ambient conditions and high pressure of piezophile and piezosensitive bacteria from the Thermococcus family, Martinez et al. (2016) suggested two main factors of adaptation. First, the internal relaxations in the piezophile proteome were reported to be faster than those in the piezosensitive proteome at both pressures. Interestingly, the relaxations appeared faster at higher pressure for the piezophile proteome, while the piezosensitive proteome displayed a slowing down with pressure. Second, the authors found evidence for a reduced level of hydration water in the piezophile proteome (Martinez et al., 2016).
In summary, pressure provides a well-controlled way to vary protein dynamics (Golub et al., 2017). While neutron scattering measurements evidence an attenuation of protein dynamics at high pressures for most proteins (Ortore et al., 2009; Appavou et al., 2011; Erlkamp et al., 2015; Marion et al., 2015), the proteome of the studied piezophile bacterium showed the reverse behavior (Martinez et al., 2016). Furthermore, pressure helped associate the formation of a molten globule and denaturation with changes in subnanosecond dynamics (Marion et al., 2015).
Numerous NMR experiments were performed using high pressure as a tool for tuning the distribution of different functionally important protein conformations. Several reviews were written on this topic (Akasaka, 2006; Li and Akasaka, 2006; Cioni and Gabellieri, 2011; Williamson, 2015). Generally, fluctuations on microsecond and longer timescales were found to be significantly affected by pressure, with the exchange rates between different conformational states being slowed down by a factor of 4–10 per kbar (Williamson, 2015). Instead, 15N relaxation measurements reported no evidence for significant pressure-induced changes in the backbone dynamics on the picosecond–nanosecond timescale. Hence, as already seen in the previous section, NMR and neutron measurements seem to provide inconsistent results, possibly due to slightly different sampling and sensitivity to different dynamics (e.g. backbone atoms versus side-chain H-atoms, per-residue signal versus average over the entire protein). Therefore, combining the two techniques may provide further, more complete insight into hierarchical protein dynamics.
Even though the pressures reached in the reviewed studies are often beyond deep-sea limits, their outcomes may help in understanding the molecular mechanisms of adaptation of organisms living under high pressure, such as deep-sea bacteria. Evidence for the adaptation of organisms to hostile conditions at a macromolecular level was found in the case of extreme temperatures, as presented in the next section.
For the adaptation of proteins to the prevailing temperature in their environment, protein dynamics is of particular interest. First, we will focus on adaptation to extreme temperatures. Second, we will report findings on the correlation of Hb dynamics with body temperature.
The first neutron scattering studies on thermal adaptation were performed on living bacteria adapted to low temperature (psychrophile), room temperature (mesophile), high (thermophile) and very high temperature (hyperthermophile) by Tehei and co-workers (Tehei et al., 2004; Tehei and Zaccai, 2007). Even though in such systems a very large number of different types of macromolecules contribute to the overall scattering, EINS was successfully employed to determine the root mean square atomic fluctuation amplitudes averaged over all these cellular constituents. Interestingly, it was found that, on the 100 ps timescale, $\sqrt {\langle u^2 \rangle} \simeq 1\,$Å for each organism at its respective physiological temperature. The authors could also calculate the effective force constants determining the mean macromolecular resilience, and observed that they increase with rising physiological temperature: for the measured psychrophiles $\langle k \rangle $ = (0.21 ± 0.03) N m−1, for the mesophiles $\langle k \rangle $ = (0.39 ± 0.01) N m−1, for the thermophiles $\langle k \rangle $ = (0.67 ± 0.11) N m−1, and for the hyperthermophiles $\langle k \rangle $ = (0.60 ± 0.01) N m−1. This result indicated that the increase in stabilization free energy is dominated by enthalpic rather than entropic terms, and it was suggested by the authors that a larger resilience allows for macromolecular stability at high temperatures, while maintaining flexibility within acceptable limits for biological activity (Tehei et al., 2004).
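Such resilience values follow from the temperature slope of the MSD. A minimal Python sketch, assuming the common convention $\langle k \rangle = 2k_B/({\rm d}\langle u^2 \rangle /{\rm d}T)$ for the effective force constant (conventions differing by constant prefactors exist in the literature) and purely illustrative MSD values:

    import numpy as np

    K_B = 1.380649e-23  # J/K

    def force_constant(T, msd):
        """Effective resilience <k> = 2 k_B / (d<u^2>/dT).
        T in K, <u^2> in Å^2; returns <k> in N/m."""
        slope = np.polyfit(T, msd, 1)[0]     # d<u^2>/dT in Å^2/K
        return 2 * K_B / (slope * 1e-20)     # convert Å^2 to m^2

    # hypothetical MSD values rising linearly with T (illustration only)
    T = np.array([280.0, 290.0, 300.0, 310.0])
    msd = 0.5 + 0.007 * (T - 280.0)          # Å^2
    print(f"<k> = {force_constant(T, msd):.2f} N/m")   # ~0.39 N/m, mesophile-like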
In a subsequent study, Tehei et al. (2005) measured $\langle u^2 \rangle $ and $\langle k \rangle $ of two homologous enzymes in solution at 200 mg ml−1. The enzymes were extracted from a mesophilic and a hyperthermophilic organism, and once again the root mean square fluctuations were approximately the same, $\sqrt { \langle u^2 \rangle} \simeq 1.5\,$Å for both enzymes at their optimal activity temperature. Furthermore, $\langle k \rangle $ ≃ 0.15 N m−1 for the enzyme of the mesophilic organism, while $\langle k \rangle $ ≃ 1.5 N m−1 for the enzyme of the hyperthermophilic organism, consistent with the earlier in vivo measurements (Tehei et al., 2004; Tehei and Zaccai, 2005). An enhancement of the overall protein rigidity with increasing physiological temperature had already been reported and suggested as an adaptation strategy several years earlier (Feller and Gerday, 1997; Feller, 2003), and was corroborated by studies performed with several experimental techniques such as H/D exchange (Závodszky et al., 1998), NMR (Leone et al., 2004; Wolf-Watz et al., 2004; Henzler-Wildman et al., 2007; Schrank et al., 2009; Lee et al., 2013), fluorescence T-jump spectroscopy (Peng et al., 2015) and even X-ray spectroscopy (Siglioccolo et al., 2010), as well as computational techniques (Radestock and Gohlke, 2011; Stafford et al., 2013; Papaleo et al., 2014). Most of these studies were performed on enzymes and also suggest that adaptation mechanisms tend to provide, at the adaptation temperature, optimal flexibility close to the active site while maintaining a good structural stability. Notably, Henzler-Wildman et al. (2007) not only reported remarkably similar fast, local fluctuations of atoms in mesophilic and hyperthermophilic adenylate kinase at temperatures at which enzymatic activity and free energy of folding are matched, but also showed that such fluctuations facilitate large-scale domain motions promoting the catalysis. The outcome of these investigations is not that all proteins in psychrophilic organisms are more flexible than all those in meso-, thermo- and hyper-thermophilic organisms: depending on their function, a protein from a mesophilic organism may be more flexible than one of a psychrophile, as suggested by an NMR study by Choi et al. (2015) showing that the backbone of an ice-binding protein of a psychrophilic organism is significantly less flexible than that of a human sialic-acid-binding protein at 5 °C. The reported neutron studies directly measured the average flexibility on a 100 ps timescale of entire enzymes or even entire cells, and hence strongly suggest that the adaptation of the dynamics is a very general mechanism, used not only by single proteins, but achieved by most macromolecules in the cells.
Following these findings, a specific correlation between dynamics and body temperature was discovered also for human Hb, both in solution (Stadler et al., 2009) and in red blood cells (RBCs) (Stadler et al., 2008). The investigation of the dynamics of human Hb in RBCs revealed a change in the geometry of confinement of the protein protons at 36.9 °C (Stadler et al., 2008): above that temperature, the volume accessible to the side-chain atoms was larger than expected from the normal temperature dependence. As mentioned in the 'From powder to solution: influence of solution conditions on protein dynamics' section, the same was observed for Hb in highly concentrated solution (Stadler et al., 2009). In addition to the internal dynamics, the global diffusion of Hb was found to be rather consistent with theoretical predictions for the short-time self-diffusion of effective hard-sphere suspensions (Stadler et al., 2008).
Finally, Hb from platypus and from chicken also exhibits a resilience correlated with the respective physiological temperatures (the higher the body temperature, the stronger the resilience) (Stadler et al., 2012a, 2014a), and a root MSD at the body temperature of ~1.2 Å (Stadler et al., 2012a). Hb from saltwater crocodile, instead, does not undergo any similar change in the dynamics, presumably because of the much larger body temperature range of reptiles (Stadler et al., 2012a, 2014a). The half-widths at half-maxima (HWHM) of the Lorentzian function accounting for internal dynamics in Hb of platypus, chicken and crocodile are shown in Fig. 16. The solid lines are fits according to a jump-diffusion model (Eq. (44)). Activation energies calculated from the temperature dependence of the residence times and diffusion coefficients associated with the side-chain motions were found to be similar for all species, namely ~4 and ~10 kJ mol−1, respectively (Stadler et al., 2014a).
Fig. 16. HWHM Γ of the Lorentzian accounting for the internal motion of Hb from (a) platypus, (b) chicken and (c) crocodile, as a function of the squared scattering vector $q^2$. The solid lines are fits according to a jump-diffusion model in the range 0.64 ⩽ $q^2$ ⩽ 3.24 Å−2. The horizontal solid lines indicate the region of constant half-widths. Figure reproduced with permission from Stadler et al. (2014a). Copyright Elsevier.
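A minimal Python sketch of the jump-diffusion fit underlying Fig. 16, using the HWHM form Γ(q) = D q²/(1 + D q² τ_0) (cf. Eq. (44)); the synthetic data, noise and starting values are purely illustrative, not the published ones:

    import numpy as np
    from scipy.optimize import curve_fit

    def hwhm_jump_diffusion(q2, D, tau0):
        """HWHM of the quasi-elastic Lorentzian for jump diffusion:
        Gamma(q) = D q^2 / (1 + D q^2 tau0).
        q2 in Å^-2, D in Å^2/ns, tau0 in ns; Gamma in ns^-1."""
        return D * q2 / (1.0 + D * q2 * tau0)

    # synthetic broadenings over the fit range of Fig. 16 (illustration only)
    q2 = np.linspace(0.64, 3.24, 8)                    # Å^-2
    gamma = hwhm_jump_diffusion(q2, 30.0, 0.004) * (1 + 0.03 * np.random.randn(q2.size))
    (D, tau0), _ = curve_fit(hwhm_jump_diffusion, q2, gamma, p0=(10.0, 0.001))
    print(f"D = {D:.1f} Å^2/ns, tau0 = {tau0 * 1e3:.1f} ps")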
Further insight into the molecular basis for thermal adaptation is provided by MD simulations: Calligari et al. (2015) combined MD simulations and QENS measurements of the protein Initiation Factor 6 (IF6) from Methanocaldococcus jannaschii (aIF6) at high temperature and pressure and of its eukaryotic homolog from Saccharomyces cerevisiae under ambient conditions. Results obtained by MD were consistent with the QENS data and showed once more that the two proteins share a similar flexibility at their respective physiological temperatures, which can be fine-tuned by pressure (Calligari et al., 2015). The analysis of the scattering functions using a FBD model suggested that such a similarity is mainly due to entropic contributions. Furthermore, structure-dependent analysis of the MD simulations showed that, in the extremophilic protein (aIF6), a suppression of the backbone flexibility with increasing pressure is compensated by an increased mobility of the amino acid side-chains, and that the most significant pressure- and temperature-induced flexibility changes occur in the bending regions between helices and β-strands. Finally, the differences between aIF6 and its mesophilic homolog (initiation factor 6 from S. cerevisiae (eIF6)) were found to be due to the presence, in the latter protein, of a 20 amino acid tail, ensuring the necessary flexibility of eIF6 at ambient temperature (Calligari et al., 2015).
Summarizing, the reviewed results provide solid evidence for the temperature adaptation of protein dynamics, ensuring comparable stability and function of proteins at their respective physiological temperatures (Tehei and Zaccai, 2007), with MD simulations helping to uncover the underlying molecular mechanisms.
As discussed in the Introduction, proteins exhibit a hierarchy of dynamics. So far, a picture of the local self-dynamics of atoms on timescales ranging from picoseconds to a few nanoseconds has been provided. On larger length scales, the motion of entire domains and subdomains takes place on timescales from nanoseconds to microseconds. Such movements can be directly observed at rather low protein concentrations by employing NSE spectroscopy combined with normal mode analysis of the protein crystal structure or a CG structural ensemble, as explained in detail in Callaway et al. (2013); Richter (2012); Monkenbusch and Richter (2007); Monkenbusch et al. (2010); Callaway and Bu (2016, 2017); Stadler (2018); Biehl and Richter (2014) and the modeling section 'Modeling and analysis'.
To the best of our knowledge, the first NSE study regarding the domain motion of a protein was carried out in 1982 by Alpert et al. (1982). The pig anti-Dnp-immunoglobulin (pIgG), an antibody protein, was modeled as two prolate ellipsoids of revolution connected at the end points of their longer axes (Fab arms) and an ellipsoid of revolution whose volume was equal to that of a Fab arm (Fc part) (Alpert et al., 1982). Samples at ~40 and ~80 mg ml−1 in the presence of ~200 mg ml−1 sucrose were measured at 14 °C. The comparison of the experimental scattering function with those expected for a rigid and two increasingly flexible pIgGs indicated that the molecule is subject to a rather large wobbling-type motion of the Fab arms around the so-called hinge region within an angle of 50° (Alpert et al., 1982). A few years later, Alpert et al. (1985) confirmed the result also in the absence of sucrose.
More recently, Stingaciu et al. (2016) studied IgG from human serum (hIgG) at 29 mg ml−1 and 25 °C, and observed that, on a timescale of 7 ns, hIgG fragments move with motional amplitudes of about 1 nm relative to each other. Notably, the observed dynamics could be well described by a rather simple model neglecting the details of the complex interaction at the residue level in the linker region, which was instead modeled by an effective spring with a force constant of ~0.02 N m−1, while the fragments undergo Brownian motion in a harmonic potential (Stingaciu et al., 2016).
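The order of magnitude of this picture can be checked with a back-of-the-envelope Python sketch: treating one fragment as an overdamped Brownian particle on a spring of stiffness k ~ 0.02 N m−1, with an assumed Fab hydrodynamic radius of ~3 nm and D2O viscosity (both illustrative inputs of ours, not values from the study), Stokes friction and equipartition give a relaxation time and amplitude of the same order as the reported 7 ns and ~1 nm:

    import numpy as np

    K_B, T = 1.380649e-23, 298.0      # J/K, K
    k = 0.02                          # N/m, effective linker spring constant
    eta, R = 1.2e-3, 3e-9             # Pa s (D2O), m (assumed Fab radius)

    zeta = 6 * np.pi * eta * R        # Stokes friction of one fragment
    tau = zeta / k                    # overdamped relaxation time of the spring mode
    amp = np.sqrt(K_B * T / k)        # equipartition rms amplitude

    print(f"tau ~ {tau * 1e9:.1f} ns, rms amplitude ~ {amp * 1e9:.2f} nm")
    # ~3 ns and ~0.5 nm: the same order as the reported 7 ns and ~1 nm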
In 2005, Bu and collaborators measured the coupled motion of domains separated by 70 Å in DNA polymerase I from Thermus aquaticus (Bu et al., 2005). This motion is essential for coordinating nucleotide synthesis and cleavage during DNA synthesis and repair. In particular, at low concentration, the deviation of the effective diffusion coefficients from the diffusion coefficient measured by DLS was attributed to large-scale internal dynamics. These experimentally determined deviations were compared with those calculated assuming domain motions based on normal mode analysis. Thereby, it was shown that the motion of DNA polymerase I can be well approximated by a few normal modes of three coupled domains.
In a similar study, NSE revealed that the catalytic activity of PGK is enabled by large domain fluctuations on the 50 ns timescale (Inoue et al., 2010). Small-angle scattering data revealed that the protein in solution has a more compact structure than in a crystal, but the structural analysis indicated that the distance between the residues taking part in the catalytic reaction would be too large if the protein were static. Correlation functions measured with NSE were characterized, above q ~ 0.08 Å−1, by a superposition of two decays. The slower of these was ascribed to the long-time translational and rotational diffusion, while the faster decay was attributed to internal dynamics. Normal mode analysis of the data indicated that domain movements facilitate a close encounter of the key residues in the active center to build the active configuration. Furthermore, the measurements showed that substrate binding induces faster domain motion, but with a simultaneous reduction of its amplitude. Hence, it was shown that the binding of a substrate leads to an increased rigidity of PGK. The results of this study were later compared with MD simulations, which confirmed that a significant component of the NSE signal arises from internal dynamics. The comparison also evidenced that the amplitudes of the motions derived by MD are smaller than those derived from the experimental analysis (Smolin et al., 2012).
The use of a normal mode approach was later justified by Hong et al. by analyzing neutron scattering data and performing MD simulations, as protein interdomain motion was shown to obey overdamped Langevin dynamics (Hong et al., 2014b). Moreover, the authors demonstrated that protein interdomain motion follows the principle of de Gennes narrowing, meaning that the wavevector dependence of the interdomain diffusion coefficient is inversely proportional to the interdomain structure factor. As noted by the authors (Hong et al., 2014b), this aspect can be understood as the domains moving slower with respect to each other when in favored spatial arrangements.
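Written out, and denoting by $S_{\rm inter}(q)$ the interdomain structure factor and by $D_0$ a q-independent mobility prefactor (our notation for illustration, not necessarily that of the original work), de Gennes narrowing takes the form $D_{\rm int}(q) \propto D_0/S_{\rm inter}(q)$: the effective interdomain diffusion coefficient is smallest, and the relaxation slowest, at the peak of $S_{\rm inter}(q)$, i.e. for the most favored spatial arrangements of the domains.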
As an interesting example of the joint use of complementary techniques, Hong et al. (2014a) investigated the structure and dynamics of the compact form of the multidomain protein mercuric ion reductase (MerA), an enzyme central to the mercury resistance pathway found in many aerobic bacteria (Hong et al., 2014a). By comparing the dynamics of full-length MerA and of the bare catalytic core domain without linkers or NmerA domains, SANS indicated that MerA adopts a compact structure in solution with the NmerA domains in direct contact with the core (Hong et al., 2014a). Moreover, NSE measurements and CG simulations indicated that the domain motion is of small amplitude and the linkers are relatively rigid (Hong et al., 2014a). Finally, all-atom MD simulations indicated that the NmerA domain interacts electrostatically with the core, hence remaining in close contact with it while undergoing a subdiffusive motion over its surface. Since it is believed that Hg2+ first binds to the NmerA domains and is then transferred to the catalytic site in the core, it was suggested by the authors that such an exploratory movement may facilitate the binding of Hg and its fast delivery to the core (Hong et al., 2014a).
In order to understand the domain motion of complex proteins, selective deuteration can be employed. This was the case for NHERF1, a multidomain protein assembling protein complexes after being allosterically triggered by the binding of another protein, ezrin, 11 nm away from the active domains (Farago et al., 2010). NSE measurements of selectively deuterated NHERF1 highlighted the activation of interdomain collective dynamics on nanometer length scales and submicrosecond timescales after the formation of a complex with ezrin. The results therefore demonstrated that allosteric regulation can involve changes in long-range submicrosecond domain motions (Farago et al., 2010). Subsequent to this study, Callaway et al. (2017) studied a phosphomimic mutation of NHERF1 at different NaCl concentrations, and hence ionic strengths, in order to investigate the role of electrostatics in allosteric regulation. The results showed that the phosphomimic mutation and the salt concentration alter the nanoscale dynamics and target-binding kinetics in the intrinsically disordered tail of NHERF1. The authors suggested that the electrostatic charges introduced by the phosphomimic mutation of NHERF1 cause the activation of specific internal dynamics, which can be reversed by increasing the salt concentration. Moreover, the kinetic association rate constant of the binding of the mutant to ezrin was also found to correlate with the excited nanoscale dynamics. In fact, an increased nanoscale dynamics was found to correspond to an improved binding ability (Callaway et al., 2017).
For additional details on the use of NSE for studying the implications of protein dynamics changes on several timescales due to allosteric signaling, we refer the reader to Bu and Callaway (2013) and Callaway and Bu (2015). Additional neutron scattering studies regarding the role of dynamics in allosteric regulation can be found in the 'Relations of protein dynamics to structure: from globular to intrinsically disordered proteins' section. Other techniques, such as NMR and fluorescence-based methods, as well as MD simulations, can also be employed to investigate this topic. Such studies have demonstrated that proteins respond to perturbations by redistributing their motions, even in the absence of detectable structural changes (Tzeng and Kalodimos, 2011). Moreover, there is also strong evidence for the crucial role of fluctuating conformational states and conformational entropy in the allosteric mechanism (Kern and Zuiderweg, 2003; Tzeng and Kalodimos, 2011). Emerging evidence indicates furthermore that connecting, poorly structured regions of the polypeptide chain play an important role in allosteric regulation (Papaleo et al., 2016). For further reading on this topic, we refer the reader to Kern and Zuiderweg (2003); Tzeng and Kalodimos (2011); Motlagh et al. (2014); Kornev and Taylor (2015); Papaleo et al. (2016).
Recently, NSE has also been successfully employed to determine the internal dynamics of the intrinsically disordered myelin basic protein (MBP) (Stadler et al., 2014b; Stadler, 2018) (further information on the dynamics of intrinsically disordered as well as denatured proteins and molten globules can be found in the 'Comparison of internal protein dynamics in native, molten and denatured states' and 'Relations of protein dynamics to structure: from globular to intrinsically disordered proteins' sections). First, small-angle scattering revealed that the compactness of the protein lies between that of a globular protein and that of a random coil polymer. Second, the large contribution of the internal motions of the peptide chain to the overall diffusion measured by NSE indicated a high structural flexibility with a relaxation time of 8.4 ns. Collective stretching and bending motions, especially pronounced at the termini, were identified by normal mode analysis as the prominent contribution to the internal dynamics. Moreover, the data were found to be inconsistent with the Zimm model with internal friction derived from polymer theory. The inconsistency was interpreted by the authors as a result of the presence of a compact core and of a secondary structure content of 44%. Relaxations on correlation times of several nanoseconds to ~100 ns were observed in IDPs also with fast field cycling relaxation NMR (Parigi et al., 2014) (probing proton relaxations, also at higher protein concentrations than in Stadler et al. (2014b)) and fluorescence spectroscopy (Mukhopadhyay et al., 2007; Müller-Späth et al., 2010; Liu et al., 2014) (dynamics of labeled residues), in the latter case depending also on the presence of salt. In addition, FRET studies detected conformational fluctuations on millisecond or longer timescales in IDPs or in disordered domains (Huang et al., 2009; Lamboy et al., 2011; Liu et al., 2014). Hence, the combination of neutron scattering and other techniques might be crucial also for a complete understanding of the internal collective dynamics of IDPs.
The NSE technique has the advantage of not only measuring internal relaxations, but also of explicitly relating such relaxations to the geometry of collective domain motions, providing essential information on the mechanisms of protein function. NMR and FRET can complement the information on protein collective dynamics and its correlation times for a broader range of volume fractions. In fact, slow (nanosecond–millisecond) collective dynamics along the backbone of proteins could also be observed by NMR (Eisenmesser et al., 2002; Wolf-Watz et al., 2004; Vögeli and Yao, 2009; Fenwick et al., 2016), usually at a residue level, although domain motion could also be measured by combining SAXS, reorientational eigenmode dynamics analysis and NMR (Bernadó et al., 2010). For a better investigation of this type of motion in proteins, encouraging results were obtained by computational means with the recently developed essential collective dynamics method (Stepanova, 2007; Barakat et al., 2011; Santo et al., 2011; Issack et al., 2012; Dorosh et al., 2013; Stueker et al., 2014), which was also successfully employed to interpret NMR data.
Particularly interesting for determining the protein domain motions in solution, on timescales of microseconds and milliseconds, is the recent development of a new methodology based on networks of distance distributions obtained from time-resolved single-molecule FRET (Hellenkamp et al., 2017). The approach was applied to the flexible multidomain heat-shock protein Hsp90. By combining the data from more than 100 pairs of FRET dyes across the entire Hsp90 dimer under various conditions in solution with MD simulations, Hellenkamp et al. (2017) were able to show how Hsp90 interdomain dynamics changes depending on the protein state (open versus closed).
As mentioned in the Introduction, structural rearrangements of proteins during the transition from one state to another can be followed, in specific cases, by time-resolved X-ray scattering (see section 'Scattering techniques'). Lately, time-resolved neutron scattering, profiting from deuterium labeling and contrast variation, and combined with fluorescence spectroscopy, has also been successfully employed to follow the coarse structural evolution of the protein complex PAN unfoldase (proteasome-activating nucleotidase in archaebacteria) on tens of seconds to minutes. In more detail, Ibrahim et al. (2017) observed that, while unfolding its substrate, the PAN complex undergoes a transition from a relaxed to a contracted conformation, with a characteristic time of 315 ± 25 s, followed by a slower expansion back to its initial state at the end of the reaction, with a characteristic time of 405 ± 30 s. The authors argued that this result supports a model in which these complexes unfold their substrates through a reversible power stroke mechanism involving several subunits (Ibrahim et al., 2017).
In summary, recent developments in different techniques enable the study of the concerted motion of protein domains on different, complementary timescales. Future studies combining these methods may considerably help in understanding the hierarchy of domain dynamics and lead to a more in-depth understanding of these nanomachines.
Both the internal and the center-of-mass dynamics of ADH, an enzyme responsible for the interconversion between alcohols and aldehydes or ketones, were investigated with NSE, TOF and NBS on a broad range of timescales (Biehl et al., 2008; Stadler et al., 2013a; Monkenbusch et al., 2015).
Biehl et al. (2008) determined the main domain motions of ADH by employing NSE. The difference ${\rm \Delta} D_{{\rm eff}}^0 $ between the measured diffusion coefficient $D_{{\rm eff}}^0 $ and that calculated accounting for translational and rotational diffusion is shown in Fig. 17a as a function of q. Such a difference is due to slow collective internal dynamics. The analysis of the q-dependence of ${\rm \Delta} D_{{\rm eff}}^0 $ and its comparison with the diffusion coefficients calculated based on normal mode analysis (Fig. 17b) revealed the occurrence of two main domain motions, shown in Fig. 17c, one of which corresponds to the opening and closing of a cleft in the protein structure between the binding and the catalytic domains (Biehl et al., 2008). This motion enables the binding and release of the cofactor required for the conversion of ethanol to acetaldehyde. Moreover, the analysis indicated that, when the cofactor is bound, the mode related to the opening and closing of the cleft is reduced, denoting a stiffening of the concerned domains.
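The quantity $D_{{\rm eff}}^0 $ in such an analysis is typically obtained from the initial slope (first cumulant) of the normalized NSE intermediate scattering function. A minimal Python sketch on synthetic single-exponential data (all numbers illustrative, not the published ones):

    import numpy as np

    def d_eff_from_nse(t, iqt, q):
        """Effective diffusion coefficient from the initial slope of the
        normalized intermediate scattering function:
        D_eff(q) = -(1/q^2) d ln[I(q,t)/I(q,0)]/dt for t -> 0."""
        slope = np.polyfit(t, np.log(iqt), 1)[0]    # fit over a short-time window
        return -slope / q**2

    # synthetic short-time data at a single q (illustration only)
    q, D_true = 0.1, 3.0                  # Å^-1, Å^2/ns
    t = np.linspace(0.1, 5.0, 20)         # ns
    iqt = np.exp(-D_true * q**2 * t)
    print(d_eff_from_nse(t, iqt, q))      # recovers ~3.0 Å^2/ns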
The diffusion coefficient and the slow internal relaxation obtained with NSE were fixed in a global fit of the TOF and NBS data (Monkenbusch et al., 2015). The fitted model considered atoms to belong to one of three classes of motion occurring on top of the rotation and translation of the entire protein: (i) atoms immobile with respect to the protein as a rigid body; (ii) atoms undergoing large-scale domain motions; and (iii) atoms participating in fast localized motions. In addition, atoms belonging to each of the three classes were free to distribute over 10 concentric shells spanning the protein. Approximately 34–37% of the atoms were found to undergo fast diffusive motions with a diffusion coefficient D_s ranging between 65 and 78 Å2 ns−1 in a confined volume with an effective radius of 7.1–7.5 Å, depending on whether class (ii) was taken into account or not. It was noted by the authors that the results are consistent with a simpler analysis combining NSE and TOF, but not NBS data on ADH, yielding a fraction of mobile atoms around 35% and a radius R ~ 8 Å (Stadler et al., 2013a), and with the results in Grimaldo et al. (2014), where R ~ 6.7 Å. A further outcome of the model is that most H-atoms undergoing fast movements seemed to be close to the surface, while the immobile ones seemed to be located mainly in the center. It was therefore hypothesized that the fast motions result from the direct interaction of amino acids at the surface with the surrounding water.
In conclusion, the analysis of NSE, TOF and NBS data proposed by Monkenbusch et al. (2015) to model the dynamics of ADH suggested the presence of three distinct dynamical processes: the first due to the translation and rotation of the protein, the second, occurring on tens of nanoseconds, due to domain motions and the third, occurring on hundreds of picoseconds, related to fast localized motions at the amino acid level. Moreover, the model suggested that the third component arises mainly from motions of amino acids in the outer shells, close to the solvent, whereas those in the inner parts appeared immobile.
In addition to the EINS measurements of living bacteria and RBCs presented in the 'Adaptation of proteins to ambient and extreme temperatures' section, other neutron spectroscopy studies have been performed in living E. coli and human RBCs, exploiting instruments accessing different timescales.
The first attempts to measure Hb diffusion at rather high concentrations and in RBCs, as well as the dynamics of an entire bromegrass mosaic virus, with neutron scattering date back, to the best of our knowledge, to 1980, when Alpert (1980) tested the applicability of NSE to biological systems. In this pioneering experiment, Alpert was able to measure the diffusion of Hb at 295 K and c p = 120 and 180 mg ml⁻¹, obtaining 3.7 and 3.0 Å² ns⁻¹, respectively, values significantly lower than the dilute-limit diffusion coefficient D 0 ≃ 5.6 Å² ns⁻¹ in D2O (Alpert, 1980). Furthermore, a much lower relaxation rate, with a different q-dependence, was observed in the RBCs compared with the Hb solutions. No significant dynamics could be observed in the bromegrass mosaic virus, likely because the signal was too weak and the dynamics too slow for the spectrometers available at that time.
Later, Hb diffusion in RBCs was studied again using NSE spectroscopy by Doster and Longeville (2007). The measured time and wavevector behavior suggested a crossover of self- and collective diffusion in the accessible time and q-range. The data revealed features characteristic of hydrodynamic interactions between the proteins, and the diffusion coefficients (in RBCs containing H2O, D = 1.1 Å² ns⁻¹ at 20 °C) agreed quantitatively with long-time self-diffusion coefficients from the theory of hard-sphere suspensions, after adjusting the volume fraction of Hb by adding the volume of the hydration layer to the dry protein volume. The results suggested therefore the applicability of concepts of colloid physics to proteins. It was concluded from these results that hydrodynamic interactions dominate long-range molecular transport at physiological concentrations.
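The essence of this comparison can be sketched as follows: the dry volume fraction is rescaled by the hydrated-to-dry volume ratio, and the measured diffusion coefficient is compared with a hard-sphere prediction. The factorized expression used below is one empirical parametrization of the hard-sphere long-time self-diffusion (vanishing near random close packing); the precise expressions and parameter values used in the original works may differ, and all numbers here are illustrative.

```python
# Minimal sketch, not the published analysis: hydration-layer
# renormalization of the volume fraction plus a hard-sphere estimate.

def phi_effective(phi_dry, r_protein, d_hydration=3.0):
    """Rescale the dry volume fraction by the hydrated/dry volume ratio;
    d_hydration ~ 3 A corresponds to roughly one water layer (assumption)."""
    return phi_dry * ((r_protein + d_hydration) / r_protein) ** 3

def d_long_hard_sphere(phi):
    """Empirical hard-sphere long-time self-diffusion, normalized to D_0;
    vanishes near random close packing (phi ~ 0.64)."""
    return (1.0 - 1.56 * phi) * (1.0 - 0.27 * phi)

phi_eff = phi_effective(phi_dry=0.25, r_protein=24.0)   # illustrative numbers
print(f"phi_eff = {phi_eff:.2f}, D_long/D_0 = {d_long_hard_sphere(phi_eff):.2f}")
```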
In a recent study by Longeville and Stingaciu (2017), the diffusion of Hb in RBCs was again measured with NSE, and quantitative agreement was found with that in an aqueous solution of Hb at a concentration similar to that within the cells. Moreover, diffusion was reported to be Brownian up to the accessed timescales (~50 to ~100 ns) and concentrations (~330 mg ml⁻¹). Remarkably, using a rather simple model for the kinetics of oxygen uptake by Hb in the lungs, the authors found that the diffusion of Hb facilitates the oxygen capture of the RBCs, and that the Hb concentration in the RBCs corresponds to an optimum of oxygen capture for an individual under physical activity (Longeville and Stingaciu, 2017).
An earlier study on Hb dynamics in RBCs was performed with two backscattering spectrometers with different energy resolutions (Stadler et al., 2010). The scattering function measured on each spectrometer was modeled with two dynamical contributions, one arising from the protein global diffusion (translation and rotation of the whole protein), the other due to internal motions. Different apparent global diffusion coefficients were obtained from each instrument. The translational diffusion coefficient was extracted from the apparent coefficient and compared with the values expected from the theory of colloidal suspensions. Similar to Doster and Longeville (2007), Stadler et al. (2010) added the volume of the hydration layer to that of the bare protein in calculating the volume fraction. After this correction, the two translational diffusion coefficients were found to agree quantitatively with the expected short- and long-time self-diffusion coefficients. Regarding the internal Hb dynamics, the faster contribution was attributed to localized jump-diffusion with D jump ~ 300 Å² ns⁻¹ and τ 0 ~ 4 ps at 20 °C. The slower internal contribution was rather q-independent, with a correlation time of ~100 ps at 27 °C. While the fast internal dynamics was faster than in fully hydrated Hb powder, the slow component was found to be rather similar to that of fully hydrated protein powders, solutions and E. coli cells reported in other studies (Stadler et al., 2010).
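For reference, the jump-diffusion model mentioned here predicts a quasi-elastic half-width that grows as Dq² at small q and saturates at the inverse residence time 1/τ 0 at large q. A short sketch using the parameter values quoted above:

```python
# Singwi-Sjolander jump-diffusion line width (HWHM), in 1/ns.
def gamma_jump(q, D, tau0):
    return D * q**2 / (1.0 + D * q**2 * tau0)

D_jump, tau0 = 300.0, 0.004        # A^2/ns and ns (~4 ps), values quoted above
for q in (0.5, 1.0, 2.0):          # 1/A
    print(f"q = {q:.1f} 1/A: Gamma = {gamma_jump(q, D_jump, tau0):6.1f} 1/ns, "
          f"free diffusion would give {D_jump * q**2:6.1f} 1/ns")
# at large q, Gamma saturates at 1/tau0 = 250 1/ns instead of growing as D*q^2
```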
Jasnin et al. (2008a) carried out measurements of E. coli cells on three different spectrometers providing access to dynamics on a wide range of timescales. Several types of motions were identified and associated with contributions from diverse dynamical classes. Three types of motions were associated with internal processes. A fast relaxation was attributed to confined jump-diffusion motions with correlation times of 4.7 and 3.6 ps at 280 and 300 K, respectively. A slower process characterized by a q-independent quasi-elastic broadening was assigned to rotational motions occurring on characteristic times of ~40 ps. It was noted by the authors that this process may arise from stochastic reorientations of large molecular subunits, such as polypeptide side-chains, fatty acid chains or other molecular subunits, as well as from rotational motions of smaller groups such as protons in methyl groups. The slowest dynamical contribution associated with internal dynamics was also characterized by a q-independent quasi-elastic broadening and was therefore ascribed to rotational motions, with correlation times of ~94 and ~90 ps at 284 and 303 K, respectively. The authors interpreted this contribution as due, for example, to librations of buried groups, relative displacements of globular domains, sugar conformational changes or RNA global bending. A comparison of the results of this study with those in hydrated powders led Jasnin and co-workers to the conclusion that the cellular environment induces a significant enhancement of internal dynamics. The in vivo dynamics appeared, instead, limited compared with that measured in solutions. It was therefore inferred that the macromolecular interactions and confinement typical of physiological environments are not mimicked accurately by protein solutions, and it was suggested that intracellular complexity may participate in functional dynamics necessary for biological activity (Jasnin et al., 2008a). Finally, an even slower dynamical contribution showing typical features of jump-diffusion was attributed to the average macromolecular self-diffusion, with diffusion coefficients of 0.85 ± 0.15 Å² ns⁻¹ (= 0.85 ± 0.15 × 10⁻⁷ cm² s⁻¹) and 1.06 ± 0.11 Å² ns⁻¹ at 284 and 303 K, respectively, consistent with the measurements of self-diffusion in RBCs (Doster and Longeville, 2007). The respective residence times were τ = 0.97 ± 0.08 ns and τ = 0.59 ± 0.04 ns.
Further in vivo studies were performed on extremophile bacteria in the context of the adaptation to extreme temperatures and pressures (Tehei et al., 2004; Tehei and Zaccai, 2005; Martinez et al., 2016), see sections 'Internal dynamics of proteins at high pressure' and 'Adaptation of proteins to ambient and extreme temperatures'. The solvent isotope effect on in vivo protein dynamics was studied in E. coli (Jasnin et al., 2008b), see section 'From powder to solution: influence of solution conditions on protein dynamics'.
Recently, even multicellular living organisms, namely planarian flatworms, have been studied by QENS (Mamontov, 2018). Mamontov (2018) found two remarkably well-defined pico- to nanosecond dynamical contributions to the scattering function, one of which was attributed to water diffusion and the other to the dynamics of the other cellular constituents. Since the quasi-elastic broadenings γ had no obvious q²-dependence, they were fitted both with a jump-diffusion model and with a Fickian diffusion model. In both cases, the diffusion coefficients in the temperature range 284.5–304.1 K did not increase according to the Stokes–Einstein relation, but were rather decoupled from the solvent diffusivity, as expected instead for the lateral diffusion of lipids in membranes. Moreover, the author noted that, independent of the model, the diffusion coefficients seemed to exhibit systematically higher values above 298 K compared with the lower measurement temperatures, reasonably consistent with the well-known phase transition in lipid assemblies around that temperature (Mamontov, 2018). Hence, it was suggested by the author that the measured component was at least mainly due to lipid diffusion. Finally, Mamontov speculated that the damage planarians suffer at temperatures above 294–296 K may be related to the increase in the diffusivity of the cell constituents above 298 K, and to a possibly diminished ability to tightly regulate the diffusivity of their cell constituents at elevated temperatures (Mamontov, 2018).
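The decoupling test underlying this argument is straightforward: for Stokes–Einstein behavior, D(T)η(T)/T should be constant. In the minimal sketch below, the water viscosities are standard handbook values, whereas the diffusion coefficients are purely illustrative placeholders, not the published planarian data.

```python
# If Stokes-Einstein holds, D*eta/T is constant across temperatures.
eta_water = {284.5: 1.26e-3, 294.0: 0.97e-3, 304.0: 0.78e-3}  # Pa*s, approx.
D_measured = {284.5: 0.80, 294.0: 0.85, 304.0: 1.10}  # A^2/ns, illustrative

for T in sorted(D_measured):
    ratio = D_measured[T] * eta_water[T] / T
    print(f"T = {T} K: D*eta/T = {ratio:.3e}  (constant if SE scaling holds)")
```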
While the studies discussed so far monitored the dynamics of all cellular components, Anunciado et al. reported a study on the global and internal dynamics of one particular protein in living bacterial cells (Anunciado et al., 2017). Using overexpression of protonated GroEL in deuterated E. coli, the global diffusion was found to be slowed down by a factor of 4 compared with dilute buffer conditions. Furthermore, internal motions were found to be slowed down by roughly a factor of 2, with correlation times increasing from ~39 ps under buffer conditions to ~65 ps in vivo, while the confinement geometry remained similar, with a confinement radius of about 1.3 Å (Anunciado et al., 2017).
Other techniques such as NMR and fluorescence spectroscopy are in principle suitable to study specific protein dynamics in living cells. NMR has the potential to detect the dynamics of a selected type of labeled protein. Such measurements are still a major challenge, mainly due to the limited signal produced by proteins at low concentration and their interactions with the crowded cellular environment (Hänsel et al., 2014). So far, only a few proteins have yielded sufficiently good NMR spectra in this context (Freedberg and Selenko, 2014; Hänsel et al., 2014). Fluorescence techniques such as FRAP and FCS provide further complementary information on the diffusion and binding of labeled macromolecules in living cells down to the microsecond timescale, with high spatial resolution (Lippincott-Schwartz et al., 2001; Diekmann and Hoischen, 2014; Wachsmuth, 2014). While several in vivo studies on diffusion as observed by FCS have reported a subdiffusive, anomalous behavior, other observations were consistent with normal Brownian diffusion (Höfling and Franosch, 2013), and even unobstructed Brownian diffusion was reported for GFP in the cell cytoplasm on the 1 µs timescale, explained as a consequence of the presence of rather immobile structures (the endoplasmic reticulum sheets, mitochondria, vesicles, Golgi apparatus, etc.), rather than of freely diffusing macromolecules (Di Rienzo et al., 2014). The question of the generality of anomalous diffusion is a topic of current discussion; whether or not it occurs, as well as how strong the deviation is from normal diffusion, may depend on the moment in the cell life cycle (Selhuber-Unkel et al., 2009) (related to changes in the cellular environment) and on the position in the cell, with the nucleus seemingly leading to the largest anomalies (Wachsmuth et al., 2000; Höfling and Franosch, 2013). Also, whether the obstacles are mobile or immobile, as well as the size of the probe, seems to play a crucial role in the appearance of anomalous behaviors, as demonstrated by experiments and simulations (Berry and Chaté, 2014; Sentjabrskaja et al., 2016). For further reading on the question of anomalous diffusion in biological cells, we refer the reader to Höfling and Franosch (2013) and Cardarelli and Gratton (2016). Usually, fluorescence studies reported diffusion coefficients 3–10 times smaller than in dilute aqueous solutions (Luby-Phelps et al., 1987; Arrio-Dupont et al., 1996; Elowitz et al., 1999; Schwille et al., 1999; Arrio-Dupont et al., 2000; Wachsmuth et al., 2000; Lippincott-Schwartz et al., 2001; Dauty and Verkman, 2005), although values smaller by a factor of 10⁴ were also reported for large proteins in muscle cells (Arrio-Dupont et al., 1996, 2000). Frequently, an increase of this factor with rising size of the probe was reported, often explained by the hindrance due to filamentous networks permeating the cells (Höfling and Franosch, 2013), such as the cytoskeleton, with typical mesh sizes of 20–250 nm (Charras et al., 2006; Morone et al., 2006; Salbreux et al., 2012), becoming particularly relevant at the long times probed by fluorescence spectroscopy.
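Operationally, the anomalous-diffusion exponent referred to in such FCS studies is obtained from the scaling MSD(t) ~ t^α, with α = 1 for Brownian and α < 1 for subdiffusive motion. A minimal sketch on synthetic data:

```python
# Estimate the anomalous-diffusion exponent alpha from a log-log fit.
import numpy as np

rng = np.random.default_rng(0)
t = np.logspace(-3, 0, 30)                             # s
msd = 0.5 * t**0.8 * rng.lognormal(0.0, 0.05, t.size)  # synthetic subdiffusion

alpha, log_prefactor = np.polyfit(np.log(t), np.log(msd), 1)
print(f"fitted alpha = {alpha:.2f} (data generated with alpha = 0.8)")
```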
The cellular environment and its implications for macromolecular dynamics have been the subject of an increasing number of computational studies in the past decade (Feig et al., 2017). Generally, such studies achieved a rather good agreement with experimental observations and gave valuable information on the heterogeneity of the internal dynamics of biomacromolecules under crowded conditions (Feig et al., 2017). Regarding translational diffusion in the cellular environment, simulations provided further insight into the crucial role of hydrodynamic interactions in slowing down the diffusion of highly concentrated spherical colloidal particles with a size distribution inspired by that of the components of the E. coli cytoplasm (Ando and Skolnick, 2010), as well as into the importance of the intrinsic polydispersity of the bacterial cytoplasm in suppressing the occurrence of glassy dynamics at high concentrations (Hwang et al., 2016).
In summary, evidence for a substantially decreased diffusion of proteins in living cells compared with dilute solutions has been gathered by several complementary techniques, covering timescales from nanoseconds to hundreds of seconds. In some cases, especially on shorter timescales, concepts of colloid physics were applied to understand the diffusive properties of the macromolecules. On longer timescales, anomalous diffusion was often, but not always, observed by FCS. Its occurrence seems ultimately related to the characteristics of the local environment and, although a complete physical picture is still missing, some studies indicate that it may be influenced by the presence of rather immobile structures versus freely diffusing macromolecules, and by the size of the probe relative to the typical lengths characterizing those structures.
One of the strengths of high-resolution neutron scattering spectroscopy is the possibility to investigate the dynamics of proteins up to very high solution concentrations. The study of proteins at high concentrations is motivated by the fact that the environment in which most proteins are found in living cells is crowded, i.e. it is filled with several types of macromolecules at volume fractions between 20 and 30% (Ellis, 2001). In the following, we review the results of in vitro neutron experiments under such conditions (see section 'In vivo neutron spectroscopy' for measurements of dynamics in vivo).
The most obvious effect of crowding is on the global translational and rotational diffusion of entire proteins, and several studies were carried out in order to understand and quantify this effect. Mb dynamics at high concentrations was investigated using NSE by Longeville and co-workers (Longeville et al., 2003a, 2003b) and by Le Coeur and Longeville (2008). At q ~ 0.3 Å⁻¹, collective diffusion approaches self-diffusion (cf. section 'Diffusion of the entire protein'). The self-diffusion coefficient D s was found to decrease with concentration (Longeville et al., 2003b). Compared with the dilute limit, D s for 32 mM (~530 mg ml⁻¹) Mb was reduced by a factor of 15 (Longeville et al., 2003a). After the volume of one hydration layer on the protein surface was added to that of the bare protein to calculate the volume fraction – as done by Doster and Longeville (2007) for Hb in RBCs – the measured diffusion coefficients were found to agree remarkably well with the long-time self-diffusion coefficient predicted by the theory of colloidal hard spheres (Le Coeur and Longeville, 2008). The same was observed in a subsequent NSE experiment by Longeville and Stingaciu (2017). The collective diffusion at low q, instead, was shown to increase with concentration as a result of increasing direct interactions (Longeville et al., 2003a). In a similar NSE experiment, Wood et al. (2008) reported a decrease of the diffusion of ribonuclease A with increasing concentration.
In an attempt to measure also the short-time limit of the diffusion coefficient by NSE, Hb was measured at 350 mg ml⁻¹ in H2O (Le Coeur and Longeville, 2008). Two relaxations were observed at 1 Å⁻¹, one of which was attributed to water diffusion. The second component, with a relaxation time τ = 67 ± 15 ps, was too fast to be due to short-time diffusion, and it was not possible to identify the underlying process.
Later, Lal et al. (2010) investigated Hb and Mb at two protein concentrations (20 and 150 mg ml⁻¹) and two temperatures (15 and 37 °C) employing NSE at rather high scattering vectors 0.1 Å⁻¹ < q < 1 Å⁻¹ (small length scales). For q < 0.26 Å⁻¹, the NSE intermediate scattering function was well fitted with a single exponential decay exp(−t/τ). The relaxation times τ of Hb and Mb as a function of q showed marked differences. While for Mb the q-dependence could be interpreted in terms of a relatively rigid, freely diffusing quasi-spherical particle, for Hb an increase of τ with q was attributed to both additional rotational diffusion and internal modes (Lal et al., 2010). For q > 0.26 Å⁻¹, where the signal was interpreted in terms of self-correlations due to the predominant contribution of incoherent scattering, the intermediate scattering function could not be fitted by a single-exponential function, and a KWW function (Williams and Watts, 1970) (see section 'Localized internal dynamics') was used instead. After extracting the MSD $\langle r^2(t)\rangle$ for q > 0.26 Å⁻¹ as a function of time from several picoseconds to a few nanoseconds, the authors found that $\langle r^2(t)\rangle \sim t^\beta$, with β = 0.4 ± 0.03, indicating subdiffusive motion, and that, for all timescales measured, $\langle r^2(t)\rangle$ was greater than expected if the proteins were rigid. Hence, the authors concluded that the additional dynamics must be due to internal motion. In addition, MSDs at 150 mg ml⁻¹ were found to be smaller than those at 20 mg ml⁻¹, consistent with several other studies (cf. section 'In vitro studies on the effect of macromolecular crowding on protein dynamics'). The KWW characteristic time was found to scale as $\tau_{\rm KWW} \sim q^{-2/\beta}$, i.e. with a steeper q-dependence than for simple diffusion. Such a behavior, as noted by the authors, is similar to that observed in polymer systems and in simulations of the dynamics of protein backbone atoms (Lal et al., 2010). It was suggested by Lal and co-workers that the observation of a stretched-exponential decay at high q-values results from the superposition of processes in at least three time regimes: an essentially harmonic, constrained fast motion at short times, concerted domain motions at intermediate times, and whole-body diffusion at longer times (Lal et al., 2010).
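The KWW analysis used at high q can be sketched as follows (synthetic data, illustrative parameters); the fitted stretching exponent β is the quantity entering the scaling relation $\tau_{\rm KWW} \sim q^{-2/\beta}$ discussed above.

```python
# Fit of a stretched-exponential (KWW) decay to an intermediate
# scattering function; synthetic data, illustrative parameters.
import numpy as np
from scipy.optimize import curve_fit

def kww(t, tau, beta):
    return np.exp(-(t / tau) ** beta)

t = np.linspace(0.01, 5.0, 100)                          # ns
rng = np.random.default_rng(1)
data = kww(t, tau=0.8, beta=0.4) + 0.01 * rng.normal(size=t.size)

(tau_fit, beta_fit), _ = curve_fit(kww, t, data, p0=(1.0, 0.5))
print(f"tau_KWW = {tau_fit:.2f} ns, beta = {beta_fit:.2f}")
```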
Mb diffusion up to a volume fraction of 40% was also measured using NBS spectrometers (Busch et al., 2006). Two components of the scattering function were identified: the slow relaxation was attributed to translational diffusion (neglecting rotational diffusion), whereas the fast process was assigned to internal dynamics. The narrow quasi-elastic broadening was found to follow a q-dependence deviating from the simple Dq² law, as typical of jump-diffusion processes. The data were fitted with the jump-diffusion model by Singwi and Sjölander (see section 'Localized internal dynamics'), and the diffusion coefficients were found to decrease with increasing protein concentration, whereas the residence times increased.
Similar results were observed in solutions of ferritin, a protein complex responsible for iron storage (Häußler, 2008). At low concentrations and high ionic strength, the diffusion measured by NSE approaches the dilute limit. Increasing the concentration, as expected, slows down the diffusion. The study also examined solutions with low salt content. In these samples, a structure factor peak appeared as a consequence of the ordering of the ferritin molecules. In the vicinity of the peak, only an approximate analysis was possible, because the q-averaged intermediate scattering function is affected by the slope of the structure factor. Nevertheless, the data indicated that in low-salt samples both direct electrostatic and indirect interactions influence the ferritin dynamics, especially close to the structure factor peak. These results essentially confirmed those by Häussler and Farago on apoferritin (ferritin without the iron core), suggesting that, at the structure factor peak, dynamics is slowed down by strong spatial correlations, while at lower scattering vectors (larger length scales), dynamics is hindered by hydrodynamic interactions (Häußler and Farago, 2003).
Recently, Gupta et al. (2016) mimicked a crowded environment by dispersing model globular proteins, such as α-lactalbumin and Hb, in aqueous solutions of poly(ethylene oxide). By employing NSE and SANS, the protein dynamics in such semidilute polymer solutions was measured. The authors could describe the protein dynamics in this crowded environment as analogous to diffusion in a periodic potential. A fast dynamic process was attributed to diffusion inside a trap built by the polymer mesh, whereas a slower process was interpreted in terms of long-time diffusion on macroscopic length scales, as also observed by other techniques. Moreover, for more concentrated polymer solutions, the onset of fractional diffusion was observed (Gupta et al., 2016). Previously, it had been observed by NMR that the effect of crowding by a polymer mesh on long-time diffusion is qualitatively different from that of crowding by proteins (Wang et al., 2010). The dynamics observed by Gupta et al. (2016) is instead likely analogous to that of a protein in nucleic acids such as DNA chains, or in similarly trapping local environments within the cytoplasm. In fact, the long-time translational diffusion of GFP and BSA as a function of the concentration of DNA, as investigated by FRAP (Wattenbarger et al., 1992; Busch et al., 2000) and NMR (Wang et al., 2010), is qualitatively more similar to that observed in the presence of rather large synthetic polymers (Furukawa et al., 1991; Wang et al., 2010) than in the presence of freely diffusing proteins or cell lysate (Wang et al., 2010). The different dynamical behaviors might be related to the strand-like, mesh-forming and more strongly trapping nature of DNA and polymers compared with proteins.
The effect of crowding and of the presence of NaCl on the diffusion of BSA was investigated using both NBS and NSE (Roosen-Runge et al., 2010, 2011). A crowding-induced decrease of the apparent self-diffusion coefficient D app was observed with both techniques. The addition of NaCl was found to affect the diffusion only at low protein concentrations (Roosen-Runge et al., 2010). Later, Heinen et al. (2012) investigated the static and dynamic properties of aqueous BSA solutions employing DLS, SLS, SAXS and rheometry integrated with analytical colloid theory. In the absence of salt, the long-time collective diffusion coefficient was found to rapidly increase from D 0 ~ 5 Å² ns⁻¹ (= 5 × 10⁻¹¹ m² s⁻¹) in the dilute limit to roughly 25 Å² ns⁻¹ at 10 mg ml⁻¹, followed by an approximately linear decrease up to 120 mg ml⁻¹. The increase, observed in the collective diffusion measured by DLS but not in the self-diffusion measured in another study by NBS (Roosen-Runge et al., 2011), is due to the repulsive nature of the interactions between BSA molecules, and became less and less marked with increasing screening induced by the addition of NaCl. Notably, while intentionally keeping the modeling rather simple, the measured static and dynamic features of the system were captured with semi-quantitative accuracy by such a colloid physics approach. Roosen-Runge et al. (2011) established an analytical framework for separating the rotational (D r) and translational (D t) contributions to the experimentally determined D app, which requires knowledge of D r (see also section 'Diffusion of the entire protein'). Using the short-time limit of D r from the theory of colloids, D t could be extracted from the data at 280 and 300 K, and its value as a function of the protein volume fraction φ, calculated with the hydrodynamic radius R h rather than the bare protein radius R p, was found to agree quantitatively with the corresponding theoretical short-time translational diffusion coefficients, as shown in Fig. 18. Therefore, hydrodynamic interactions arising from self-crowding at physiological volume fractions (between 20 and 30%) were shown to slow down short-time self-diffusion by a factor of 5 compared with the dilute limit (Roosen-Runge et al., 2011). A slowing down generally of the same order of magnitude, or more pronounced, was observed on longer timescales by fluorescence spectroscopy in vitro (Höfling and Franosch, 2013), as well as in vivo, as reported in the previous section.
Fig. 18. Translational self-diffusion coefficients D t normalized by the dilute limit diffusion coefficient D t(0) (circles) for two different temperatures (red and purple circles denote 280 and 300 K, respectively) after separation of the rotational contributions. The purple line superimposed on the data is a guide to the eye obtained from a polynomial fit indicating the temperature-independent master-curve. The top and bottom dashed purple lines indicate the upper and lower 96% prediction bounds, respectively. The blue lines denote the colloidal short-time self-diffusion for hard spheres (light blue, solid) and charged spheres (dark blue, dashed). The inset in the top right corner illustrates the flow field (light blue stream line plot) generated by the movement of three spheres (velocities are denoted by blue arrows) and therefore experiencing hydrodynamic forces (pink arrows). Figure reproduced with permission from Roosen-Runge et al. (2011). Copyright National Academy of Sciences of the United States of America.
In a subsequent NBS study, Grimaldo et al. (2014) measured the apparent self-diffusion of the distinctly non-spherical, Y-shaped γ-globulins in D2O as a function of φ. As for BSA, the short-time self-diffusion coefficient D app was found to decrease significantly with φ. The system was considered as a monodisperse solution of IgG monomers, and the framework established in Roosen-Runge et al. (2011) was employed to extract D t, after assuming D r(φ) from the theory of colloidal hard spheres. The obtained D t(φ) was found to agree quantitatively with the theory of colloidal hard-sphere suspensions if an empirical effective hydrodynamic radius $R_{\rm eff} = 1.4\,((3/(4\pi))V_{\rm p})^{1/3}$ is used, where V p is the protein volume. This radius effectively accounts for the anisotropy of the structure of IgG, which is not considered in the theory for hard spheres, and may also reflect the presence of a significant motion of the three branches of the protein, as suggested by an NSE study on the internal dynamics of IgG by Stingaciu et al. (2016) and by an earlier study based on a comparison between crystal structures (Saphire et al., 2002). Hence, this observation represents a challenge for the modeling of such a flexible and anisotropic protein within a consistent physical picture. Non-neutron-based complementary techniques and simulations may help tackle this challenge. For instance, Brownian dynamics simulations, neglecting however solvent-mediated hydrodynamic interactions, suggest that protein anisotropy significantly affects the rotational diffusion, especially at high volume fractions (Długosz and Antosiewicz, 2013). MD simulations predict stronger effects of crowding through glucose on the rotational diffusion than on the translational diffusion (Spiga et al., 2014). A similar trend was observed in all-atom simulations investigating self-crowding, in which such a slowing down was attributed to protein–protein interactions and the formation of increasingly large clusters with rising protein concentration (Nawrocki et al., 2017) (clustering due to increased protein concentration is also observed in other systems; see section 'Dynamics of protein clusters, aggregates and glasses'). Also a previous NMR study by Wang et al. (2010), probing diffusion on timescales of 0.01–1 s, suggested that weak protein–protein interactions were responsible for a stronger slowing down of rotational diffusion compared with translational diffusion. Interestingly, the same trend was observed when the crowders were mostly compact biomacromolecules (BSA, ovalbumin, Lys, cell lysate), but not with synthetic polymers (also forming a mesh), for which the opposite was observed: translational diffusion, although faster than expected by the modified Stokes–Einstein relation, was more strongly affected by the crowder concentration than rotational diffusion (Wang et al., 2010). Hence, it is possibly because of a lack of weak protein–protein interactions that Roos et al. (2015) found by NMR that the slowing down of the rotation of αB-crystallin with increasing concentration is decoupled from, and less pronounced than, that of the translation, which instead follows well the inverse change of the solution viscosity (Roos et al., 2015).
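As a quick numerical illustration of the empirical effective radius quoted above, $R_{\rm eff}$ can be evaluated for an IgG-like protein. Here the protein volume is estimated from a typical IgG molecular weight (~150 kDa) and a typical protein partial specific volume (~0.73 cm³ g⁻¹); both values are assumptions for illustration, not values taken from the study.

```python
# R_eff = 1.4 * ((3/(4*pi)) * V_p)**(1/3), with V_p estimated from Mw and
# the partial specific volume (both typical literature values, assumptions).
import math

NA = 6.02214076e23          # 1/mol
Mw = 150e3                  # g/mol, typical IgG
v_spec = 0.73               # cm^3/g, typical protein partial specific volume
Vp = Mw * v_spec / NA * 1e24    # A^3 (1 cm^3 = 1e24 A^3)

R_sphere = (3.0 / (4.0 * math.pi) * Vp) ** (1.0 / 3.0)
print(f"V_p = {Vp:.3g} A^3, equivalent-sphere radius = {R_sphere:.1f} A, "
      f"R_eff = {1.4 * R_sphere:.1f} A")
```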
As a matter of fact, SAXS, DLS and viscometry measurements, together with simulations and mode-coupling theory scaling relations (Foffi et al., 2014), as well as NSE measurements (Bucciarelli et al., 2016), of α-crystallin between 48 and 330 mg ml⁻¹ were also found to be consistent with a bare hard-sphere-like repulsion. NMR and FCS measurements by Roos et al. (2015, 2016) indicated essentially that short-time rotational diffusion can be coupled to or decoupled from the viscosity at high concentrations, and suggested that the strength of such a coupling may be due to anisotropic interactions originating from hydrodynamic shape effects combined with high charge and possibly a patchy charge distribution.
More generally, colloid physics and biophysics are mutually profiting from the development of concepts, theories and simulations of so-called patchy colloids on the one hand, and from the application and verification of such theories on protein solutions on the other hand (see also section 'Dynamics of protein clusters, aggregates and glasses'). An NSE study by Bucciarelli et al. (2016) on highly stable eye-lens proteins, bovine α-crystallin and γ B-crystallin, detected a slowing down of the diffusion of both proteins with increasing concentration over distances comparable with the nearest-neighbor distance, but with marked variations that are directly linked to subtle differences in their interaction potentials: when the proteins exhibit short-range attractions – a feature of γ B-crystallin common to many globular proteins – the reduction of the diffusion becomes particularly pronounced (Bucciarelli et al., 2016). Furthermore, by a comparison with computer simulations, it was shown that, at comparable effective pair potential strength, the presence of attractive patches on the protein surface, rather than an isotropic interaction potential, can have a tremendous effect on short-time diffusion, related also to the formation of large and open network-like clusters (Bucciarelli et al., 2016). Therefore, these results point out that, in numerous cases, further extending models for proteins in crowded environments by considering anisotropic interactions might be needed.
Finally, while all the studies reviewed so far in this section have focused on the effect of crowding on the diffusion of globular proteins, Li et al. (2008) and Cino et al. (2012) investigated by NMR and MD simulations the effect of macromolecular crowding on the dynamics of IDPs. Li et al. (2008), comparing the 15N NMR spectra of the IDP α-synuclein (αSN) with those of the globular protein chymotrypsin inhibitor 2 (CI2) in buffer and in 300 mg ml⁻¹ poly(vinylpyrrolidone) (PVP) at 4 °C, found that relaxations in αSN are less influenced by the presence of PVP than those in CI2. The authors argued that, while the relaxation rates of CI2 mostly reflect the rotation of the entire globular protein, which is quite sensitive to the level of crowding, the spectra of αSN rather provide information on the fluctuations of residues along the IDP chain, which remains remarkably flexible also in the presence of PVP (Li et al., 2008). Cino et al. (2012) observed a similar trend in solutions of prothymosin α (ProTα) with up to 400 mg ml⁻¹ Ficoll 70 as crowding agent. Combining NMR data and simulations, the authors concluded that, even though crowded environments can slow down the local segmental motions in ProTα, the protein retains a certain level of flexibility even at high concentrations of crowders, although a few regions become more structured. Cino and co-workers also reported that some of these regions overlap with or are close to known target-binding motifs of ProTα, and argued that this feature might be rather general and crucial for the biological function of IDPs in the crowded physiological environment (Cino et al., 2012).
A general outcome of the above studies is that both the translational and the rotational diffusion of proteins are slowed down by crowding. The application of concepts from the theory of colloids is rather common and indicates that hydrodynamic interactions play a major role in the damping of the dynamics on nanosecond timescales. In addition, it was found that, close to the structure factor peak appearing at high concentrations of charged proteins at low salt concentrations, collective dynamics is hindered by strong spatial correlations. Despite the substantial progress made in the past few years in understanding the effect of crowding on protein diffusion, further work is needed to eventually model complex systems such as crowded biological cells. In particular, the question of how anisotropic potentials, as well as different shapes and domain motions of globular proteins and IDPs, affect both the translation and rotation of proteins on different timescales in crowded environments will only be fully answered by systematic studies combining different experimental techniques.
Before concluding this section, two critical issues for the interpretation of the measured self-diffusion in terms of colloid physics deserve discussion. (i) In most cases, an agreement between experimental data and theories for effective hard spheres is observed only after an appropriate renormalization of the volume fraction of the bare proteins to an effective, larger volume fraction. This renormalization requires a calibration based on accurate information on the volume, the dilute-limit dynamics (e.g. from DLS or HYDROPRO; Ortega et al., 2011) and the structure of the proteins (e.g. from the specific volume, SAXS/SANS measurements or PDB files). In addition, assumptions on the physical origin of the volume fraction renormalization are necessary, in particular concerning the radius relevant for the hydrodynamic interactions. The sensitivity of the result to the precise calibration of the volumes occupied by the proteins and hydration layers will inevitably lead to different conclusions under different assumptions. In particular, it should be mentioned that non-sphericity in general leads to a larger effective volume, and it is thus in principle insufficient to increase the volume only by the geometric volume of the hydration layer. (ii) The application of the aforementioned colloid theories implies that the measurement must access either the long- or the short-time diffusion limit, which in practice is not always simple to ensure and depends on the experimental observation scales.
In the 'Internal dynamics of proteins at high pressure' section, it was mentioned that crowding was found to reduce the protein sub-nanosecond dynamics and to stabilize the protein against pressure changes (Erlkamp et al., 2015). A reduction of the internal protein dynamics in crowded environments was also observed at atmospheric pressure by quasi-elastic NBS. Grimaldo et al. (2014) measured the average internal self-dynamics of γ-globulin (IgG) in D2O on the sub-nanosecond timescale, which could be described by a jump-diffusion process (cf. Eq. (44)). With increasing volume fraction, the residence time between jumps increased, suggesting a crowding-induced thermal stabilization of the protein conformation. Interestingly, the geometrical constraints on the internal motion of hydrogen atoms did not change within the experimental uncertainty (Grimaldo et al., 2014). Similar results were obtained by Makowski et al. (2008) in a wide-angle X-ray scattering study.
Neutron studies focusing on the effect of external crowding (i.e. when the crowding agents are other than the target proteins) on protein internal dynamics are, to the best of our knowledge, still missing, but MD simulations, NMR and FRET were employed to investigate the effect of different types of external crowders. NMR studies suggest that different crowders can affect protein dynamics in different ways: while non-interacting crowding agents had little effect on protein dynamics, especially on the picosecond to nanosecond timescale, direct non-specific interactions seemed to have a larger impact, at least on the millisecond timescale (Latham and Kay, 2012; Santero et al., 2016). Moreover, while some FRET measurements indicated that crowding can induce a damping of the local protein dynamics on the sub-nanosecond timescale accompanied by a decrease in structural heterogeneity (Mondal et al., 2015), as well as a marked decrease of the rate of subunit exchange and a thermal stabilization (Ghahghaei et al., 2007), other results suggested that crowding can result both in the stabilization of several compact conformations and in an enhanced flexibility of some parts of the protein (Santero et al., 2016). MD simulations, instead, have shown that high glucose concentrations extensively dehydrate the protein surface and restrict the motion of the remaining water molecules. This effect leads to a slight damping of the fast internal dynamics and to a more significant limitation of the rate of exploration of the conformational space (Spiga et al., 2014). Further simulations indicated that crowding stabilizes the proteins (Cheung et al., 2005; Minh et al., 2006; Stagg et al., 2007). Overall, studies with other techniques suggest that the effect of crowding depends on the nature of the crowding agents. Hence, further neutron scattering studies with different types of crowders are desirable to understand how different crowders affect the average pico- to nanosecond internal dynamics of H-atoms.
In the previous sections, solutions of protein monomers were studied. Proteins can, under specific conditions, also form different types of clusters and aggregates, as well as dynamically arrest in gels and glasses. In this section, we review neutron scattering studies on these phenomena.
Porcar et al. (2009) investigated solutions of Lys at concentrations in the range ~50 to ~250 mg ml⁻¹. Prior to that study, small-angle scattering had not provided a definite picture on whether proteins were forming clusters or whether, rather, strongly repulsive, individual Lys proteins were present in solution at high concentrations (see Porcar et al. (2009) and references therein). The self-diffusion coefficients measured by NSE spectroscopy were found to decrease with increasing concentration more than predicted by colloid theories assuming that the proteins diffuse as monomers and dimers, as they do at low concentrations (Porcar et al., 2009). It was therefore inferred that increasingly large protein clusters form at high Lys concentrations, with a lifetime larger than the timescale accessible by the instrument (25 ns), since the data were consistent with static clusters on that timescale. Moreover, at a volume fraction φ = 0.2, the effective cluster radius was found to increase from 2.5 to 3.6 times that of a monomer when the temperature was decreased from 25 to 5 °C, in agreement with small-angle scattering (Porcar et al., 2009). In a subsequent study, the short- and long-time diffusion properties of Lys samples at different concentrations were obtained from NSE and NMR (Liu et al., 2010). As shown in Fig. 19, the comparison yielded, within the error bars, the same diffusion coefficients, even though the long-time diffusion coefficient is expected to be smaller than that in the short-time limit. Both diffusion coefficients were significantly slower than those expected for suspensions of both hard and charged spheres with radii calculated from the fit of SANS spectra. These findings were interpreted as a result of the diffusion of clusters with a finite lifetime, larger than the NSE timescale, but shorter than that probed by NMR. In other words, while on the timescale of NSE the clusters appear static, on the ~200 ms timescale probed by NMR proteins can escape from a cluster, which is thus a dynamic cluster (Liu et al., 2010).
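A common way to express such results is to convert measured diffusion coefficients into effective hydrodynamic radii via the Stokes–Einstein relation; at these concentrations this yields only a rough effective measure, and the numerical values below are illustrative rather than the published ones.

```python
# Effective hydrodynamic radius from a diffusion coefficient (Stokes-Einstein).
import math

kB = 1.380649e-23               # J/K

def r_hydro(D_A2_per_ns, T=298.0, eta=1.0e-3):
    """R_h in A from D in A^2/ns; eta is the solvent viscosity in Pa*s."""
    D_SI = D_A2_per_ns * 1e-20 / 1e-9               # m^2/s
    return kB * T / (6.0 * math.pi * eta * D_SI) * 1e10

D_monomer, D_cluster = 11.0, 3.5                    # A^2/ns, illustrative
print(f"R_cluster/R_monomer = {r_hydro(D_cluster) / r_hydro(D_monomer):.1f}")
```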
Fig. 19. Comparison of the normalized long-time self-diffusion coefficient, $D_{\rm s,L}/D_0$, and the normalized short-time self-diffusion coefficient, $D_{\rm s}/D_0$, as a function of volume fraction. Figure reproduced with permission from Liu et al. (2010). Copyright American Chemical Society.
The competition between several types of interactions can lead to the formation of different kinds of clusters in protein solutions, differing e.g. in their lifetime or stiffness. The variety of phenomena arising from such pair potentials is very broad, and in the same system more than one may occur, depending for instance on the time- and length scale of observation. As an example, Cardinaux et al. (2011) identified an arrest transition at volume fractions $\varphi \gtrsim 0.26$ in Lys solutions by combining SAXS, NSE and rheology experiments. In addition, the authors employed molecular and Brownian dynamics simulations using an effective pair potential between proteins based on the combination of a short-range attraction and a long-range repulsion. Such simulations suggested that the experimentally observed arrest is driven by the slowing down of the motion of clusters (Cardinaux et al., 2011). In particular, a transition from a suspension dominated by monomers to one dominated by transient clusters was obtained at volume fractions larger than φ ~ 0.05. However, at higher volume fractions, simulations still predicted transient clusters, even though NSE measurements indicated that the cluster lifetime was becoming increasingly large, as evinced by the fact that the dynamic structure factor became almost independent of q for q > q c, where q c is the wavevector at which a cluster–cluster correlation peak is observed in the S(q) from SAXS (Cardinaux et al., 2011).
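The kind of competing pair potential invoked in these studies can be illustrated with the generic two-Yukawa form (short-range attraction plus long-range screened repulsion) often used for cluster-forming protein solutions; the parameters below are generic placeholders, not the values fitted in the works discussed here.

```python
# Generic SALR (short-range attraction, long-range repulsion) pair potential.
import numpy as np

def u_salr(r, K_a=6.0, z_a=10.0, K_r=2.0, z_r=0.5, sigma=1.0):
    """U(r)/kT outside the hard core (r > sigma): two-Yukawa form."""
    x = r / sigma
    return (-K_a * np.exp(-z_a * (x - 1.0)) / x     # short-range attraction
            + K_r * np.exp(-z_r * (x - 1.0)) / x)   # long-range repulsion

r = np.linspace(1.0, 4.0, 7)
for ri, ui in zip(r, u_salr(r)):
    print(f"r/sigma = {ri:.1f}: U/kT = {ui:+.2f}")
```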
Godfrin et al. (2015) also noticed that, while viscosity measurements on Lys in aqueous solutions show a behavior typical of Newtonian liquids, at high concentration the short-time dynamics measured by NSE is characterized by features typical of glassy colloidal systems. Moreover, with increasing protein concentration, a correlation peak grows in the SANS data, the so-called intermediate-range order (IRO) peak (first observed by Stradner et al., 2004), also referred to as the cluster–cluster correlation peak in previous studies. This behavior was explained by Godfrin and co-workers as a consequence of localized heterogeneous density distributions occurring on the same length scale at which the IRO peak is detected, due in turn to competing short-range attractive and long-range repulsive interactions (Godfrin et al., 2015).
Recently, Riest et al. (2018) tested a semi-analytic theoretical method for predicting diffusion properties and viscosity in isotropic particle suspensions on low-salinity Lys protein solutions, using a short-range attractive plus long-range repulsive potential. Monte Carlo simulations representing seven Lys samples indicated that Lys in these systems is either in a dispersed-fluid or in a random percolated state. The theoretical predictions for the hydrodynamic function were in quantitative agreement with experimental NSE data up to φ ~ 0.04, also featuring an IRO peak. Significant differences at higher concentrations and low temperatures were suggested to be due to translational–rotational diffusion coupling induced by the shape and interaction anisotropy of particles and clusters, the patchiness of the Lys particle surfaces, and the intra-cluster dynamics, which were not included in the theoretical model (Riest et al., 2018). Nevertheless, such a simplified model may be of interest for predicting and identifying trends in the structure, short-time diffusion and rheology of globular protein solutions as a function of different interaction and system parameters (Riest et al., 2018).
The formation of increasingly large clusters as a function of protein concentration is observed rather often (although the opposite, that is, decreased aggregation at high concentrations, has also been observed; see e.g. Da Vela et al. (2017) for γ-globulin). For instance, clusters of bovine β-lactoglobulin (BLG) were investigated with SAXS as well as NSE and NBS spectroscopy by Braun et al. (2017), and a monotonic increase of the average hydrodynamic cluster radius was observed over a broad protein concentration range, corresponding to oligomeric structures of BLG ranging from the native dimers up to roughly four dimers. The combination of static and dynamic measurements suggested that the clusters are compact and have a lifetime that is larger than both the NSE and NBS observation timescales, that is, up to ~50 ns (Braun et al., 2017). The authors further reported that an SLS/DLS study by Piazza and Iacopini (2002) on a specific type of BLG (bovine β-lactoglobulin A) indicated the formation of oligomer-type 'transient' clusters with a limited lifetime on the microsecond observation timescale, consistent with PFG-NMR results on millisecond timescales by Le Bon et al. (1999) indicating that BLG self-diffusion in concentrated solutions is in agreement with that of dimers. A recent all-atom MD investigation of the dynamics of the chicken villin headpiece HP-36 at concentrations ranging from ~30 up to ~130 mg ml⁻¹ by Nawrocki et al. (2017) yielded similar results and provided further insight into the possible mechanism of formation of these clusters. After adjusting the force field to increase the protein–water interactions and reproduce the expected dilute-limit translational and rotational diffusion coefficients, the concentration of villin was increased and the formation of transient clusters was observed. The clusters were characterized by a size distribution shifting to larger sizes and by an increasing lifetime (up to some hundreds of nanoseconds) with rising protein concentration. Unlike in the approach by Braun et al. (2017), Nawrocki and collaborators found in their simulations that the formation of clusters alone almost completely accounts for the observed slowing down of the translational diffusion of villin (Nawrocki et al., 2017). For rotational diffusion, a somewhat more pronounced effect was observed. Finally, the authors also found that the residues involved in protein–protein binding were largely acidic and basic, which, it was argued, suggests that the assembly of nonspecific clusters may be driven by electrostatic interactions and salt-bridge formation (Nawrocki et al., 2017).
Yearley et al. (2014) observed by NSE the formation of protein clusters even in solutions of monoclonal antibodies (mAbs) at high concentrations. In their study, two model mAbs in solution were compared, one characterized by a steep increase of the solution viscosity at high concentration (mAb1), the other showing a less pronounced increase (mAb2). The combination of NSE and small-angle scattering demonstrated that in the solutions with high viscosity the mAb1 molecules formed dimers, whereas the mAb2 molecules retained their monomeric structure. The high viscosity, which is undesirable for pharmaceutical applications, was therefore related to the formation of such dimers at high concentrations (Yearley et al., 2014).
Understanding protein aggregation is also fundamental because aggregation often leads to the formation of so-called amyloid fibrils, which are related to numerous diseases. In this context, Erlkamp et al. (2014) performed a combined SAXS–NSE study of insulin under two different solvent conditions, one promoting and the other inhibiting amyloid fibril formation. In the former case, no collective diffusion (density fluctuations) was observed in the range of time and length scales experimentally accessible, and only self-diffusion could be measured. In the latter case, collective diffusion was visible, along with the appearance of a correlation peak in the SAXS profiles. The results suggested therefore that a lack of the repulsive interactions reducing collective effects promotes fibril formation (Erlkamp et al., 2014).
The formation of amyloid fibrils by an IDP, αSN, was shown to be involved in the pathogenesis of Parkinson's disease, which motivated an NBS study by Fujiwara et al. (2016). The authors were able to measure the dynamics of purified αSN at 9.5 mg ml⁻¹, and that of αSN in the fibril state at 46 mg ml⁻¹, in D2O. Such measurements are remarkable for NBS, since this technique usually requires rather high protein concentrations to obtain a reasonable signal-to-noise ratio; the detection of low quantities of protein is challenging and only possible with the latest, state-of-the-art spectrometers. Fujiwara et al. (2016) found that αSN in its monomeric state undergoes diffusive global motions, which are instead largely suppressed in the fibril state. In contrast, the amplitude of the side-chain motions was found to be larger in the fibril state than in the monomeric state (Fujiwara et al., 2016). It was concluded by the authors that, within the fibrils, significant space is left for the solvent, which allows for a large distribution of conformations of the αSN side-chains. Moreover, it was pointed out that the larger amplitude of the side-chain motions in the fibril state compared with the monomeric state implies that the fibril state is entropically favorable (Fujiwara et al., 2016).
Other non-neutron based techniques such as DLS (Bolañnos-García et al., 1998; Li et al., 2011; Arzenšek et al., 2012; Soraruf et al., 2014; Maes et al., 2015; Bauer et al., 2016), DLS combined with Raman spectroscopy (Lewis et al., 2014) or with NMR (Poznański et al., 2005), fluorescence spectroscopy (Nath et al., 2010; Roberti et al., 2011; Nath and Rhoades, 2013) and rheology (Dharmaraj et al., 2016), as well as simulations (Bratko et al., 2007) can be used to study the dynamics of systems with aggregating proteins. One of the strengths of the combination of such techniques with neutron scattering related to the different accessible times, is the potential of inferring information on the type and lifetime of clusters (as in Liu et al., 2010) and aggregates on wide concentration ranges, as well as on the kinetics of the process.
Several types of isotropic and anisotropic interactions between proteins exist, with different typical ranges and strengths. One such anisotropic interaction arises as a consequence of the highly heterogeneous surface charge pattern of proteins, such that, in some cases, proteins could be successfully modeled as hard spheres with attractive patches (Gögelein et al., 2008; Roosen-Runge et al., 2014). In an NBS study on the dynamics of BSA in the presence of the trivalent salt YCl3, Grimaldo et al. (2015b) found that, at several fixed protein concentrations c p and a series of salt concentrations c s, the apparent diffusion coefficient of BSA, D(c s, c p), normalized by D(c s = 0, c p), decreases as a function of the number c s/c p of Y³⁺ ions per protein in a remarkably universal manner with respect to c p. The authors interpreted this result in terms of a model of ion-activated patchy hard spheres (Roosen-Runge et al., 2014), and suggested that the observations could be explained semi-quantitatively by the formation of protein clusters with a given size distribution, mediated by Y³⁺ ions binding to specific sites on the protein surface (Grimaldo et al., 2015b). This result complemented a previous DLS study on the same system, but at smaller protein concentrations, and corroborated the hypothesis that the formation of clusters could be observed by light scattering when increasing c s/c p (Soraruf et al., 2014).
In many globular proteins, competing interactions can lead, in addition to cluster formation, to a liquid–liquid phase separation at low temperatures, as in the case of γ B-crystallin. The collective diffusion of proteins in such a system was investigated by Bucciarelli et al. (2015) with a combination of DLS and NBS. The authors found that the combination of critical slowing down and dynamical arrest results in a peculiar wavevector dependence of the dynamic structure factor I(q,t), even though static properties such as the osmotic compressibility and the static correlation length are in quantitative agreement with predictions for binary liquid mixtures (Bucciarelli et al., 2015). Later studies combining NSE experiments with CG simulations (Bucciarelli et al., 2016; Myung et al., 2018) indicated that a major role in determining S(q,t), and hence collective diffusion, might be played by the presence of attractive patches on the protein surface and by the nonspherical shape (see also section 'Global diffusion').
In summary, the combination of static and dynamic techniques was shown to provide important information on microscopic properties such as clustering and phase transitions, useful to understand the basis of the macroscopic properties of the system. From the perspective of colloid science, proteins represent a fascinating model system and an opportunity for the study of new phenomena related to the interplay of repulsive and attractive, potentially anisotropic, interactions (Riest and Nägele, 2015; Sentjabrskaja et al., 2016; Das et al., 2018; Myung et al., 2018).
From the reviewed work employing neutron spectroscopy and complementary techniques to explore proteins in liquid solutions, three aspects have become undoubtedly clear:
• Protein dynamics occurs on multiple hierarchical time and length scales.
• Different experimental techniques, each having specific advantages and disadvantages, access different types of dynamics and, hence, are complementary (see Table 1).
• Protein dynamics is related to protein function.
The review has focused on neutron spectroscopy while mentioning complementary methods in a non-exhaustive manner, to highlight where neutron spectroscopy, to its advantage or disadvantage, differs from these other methods. For instance, the access to short-time center-of-mass diffusive dynamics in protein solutions is one of the advantages of neutron spectroscopy, which provides a unique probe of global dynamics on a timescale before this dynamics is altered by protein–protein collisions. However, it is generally necessary to use D2O as a solvent to focus on the incoherent signal from the protonated proteins, and the D2O solvent may alter the dynamics of the proteins themselves beyond the sole effect of its different viscosity compared with H2O.
In liquid protein solutions, mainly the diffusive dynamics has been explored using QENS, accessing picosecond to nanosecond timescales and nanometer length scales. Moreover, diffusive dynamics on longer time- and length scales has been explored using NSE spectroscopy. Our review has attempted, but certainly not attained, a comprehensive overview of the studies addressing diffusive dynamics in liquid protein solutions using neutrons. Besides the above-mentioned complementary methods, deep inelastic neutron scattering, accessing high-frequency vibrational dynamics (in the THz range and beyond), could also not be addressed in any detail within the limited scope of this review.
Among the unique properties of neutron spectroscopy experiments, we highlight the possibility to benefit from the information contained in incoherent scattering to infer ensemble-averaged localized dynamics. This information on the geometrical confinement of a diffusive process indirectly provides knowledge of local order in proteins, which – outside crystals – generally do not display any long-range order. In liquid solutions of proteins, both the coherent and incoherent scattering can be of interest, and many NSE experiments on protein solutions are based on the coherent part of the signal. For instance, information on dynamic or transient protein cluster formation can be enhanced by combining both coherent (NSE, on longer length scales) and incoherent (NBS, on shorter length scales) scattering experiments.
Despite the substantial efforts and number of studies carried out in the past few decades on protein dynamics, several questions remain at least partially unanswered. The hierarchical nature of protein dynamics in relation to protein function; the roles of structure, ligand binding, and protein state in determining internal dynamics; and the impact of different local cellular environments, anisotropic pair potentials, and particle shapes on global protein dynamics are all topics still lacking a deep, fundamental understanding. As seen in this review, tackling these problems requires great flexibility regarding both the type of atoms or groups of atoms probed and the observation timescales. For this reason, we believe that, in the near future, studies combining different experimental techniques such as neutron scattering, NMR, fluorescence spectroscopy and light scattering, and profiting from MD simulations, will be essential to provide a substantial thrust to several fields of biological physics. Recent improvements as well as new neutron scattering instruments will also open new perspectives. Future topics of great interest include kinetic studies of dynamics during processes such as macromolecular assembly formation, in situ crystallization and cluster formation.
Protein dynamics is a key research area in biophysics with many facets in terms of time and length scales as well as implications for physical properties and biological function. Neutron scattering spectroscopy has been employed in a multitude of investigations of protein dynamics – from fast, localized motion with TOF and NBS to slow domain motion with NSE, as well as center-of-mass diffusion with NBS and NSE. In this context, neutron spectroscopy has benefited from recent progress in neutron instrumentation and data analysis. With its nondestructive and contact-free access to a wide range of time and length scales down to the molecular level, including in optically opaque samples, neutron scattering has proven complementary to other established biophysical techniques such as NMR, fluorescence spectroscopy and light scattering. By probing both the coherent and incoherent scattering, it can measure both collective and self-dynamics. Among the emerging topics of research, the center-of-mass and internal dynamics in crowded protein solutions, the formation of static or transient protein clusters, and more generally protein self-assembly in solution may be named. Experiments have shown that the internal motion of proteins in hydrated powders differs from that in solution, where additional dynamics was generally observed. Protein internal dynamics depends not only on the hydration level but also on the characteristics of the solvent, e.g. whether H2O or D2O is used. EINS studies revealed the existence of a dynamical transition in solution similar to that observed in powders. Unlike in powders, some results suggested an apparent decoupling of water and protein dynamics. Furthermore, numerous studies have demonstrated an effect of the structure and state of proteins, as well as an influence of pressure, temperature and crowding, on their dynamic behavior. Finally, in addition to studies on highly concentrated protein solutions, investigations of entire living cells demonstrated the occurrence of different dynamical processes in vivo, and provided evidence for a molecular mechanism of adaptation of organisms to the temperature at which they live.
1 Allosteric regulation is the process through which the activity of an enzyme is altered by means of a conformational and dynamical change induced by a different molecule.
Author ORCIDs
Marco Grimaldo: 0000-0002-3772-7137; Felix Roosen-Runge: 0000-0001-5106-4360; Fajun Zhang: 0000-0001-7639-8594; Frank Schreiber: 0000-0003-3659-6718; Tilo Seydel: 0000-0001-9630-1630
The authors are indebted to many collaborators, with whom it has been a pleasure to interact. These include in particular present and former members of the Tübingen-ILL collaboration. The authors would like to thank in particular H. Schober, B. Farago and B. Frick for discussions and support. Fruitful interactions with A. Stadler, R. Biehl, I. Hoffmann, V. Garcia Sakai, G. Kneller, J. Peters, K. Saalwächter, A. Stradner and P. Schurtenberger are also gratefully acknowledged. The authors would like to acknowledge enlightening discussions with numerous colleagues in the field, who cannot all be mentioned. We thank C. Beck, O. Matsarskaia, M. Braun, M. Oettel, R. Roth and E. Schäffer for stimulating discussions. The authors wish to thank the ILL (Grenoble), the JCNS (Jülich), FRM-II (Munich) and SNS (Oak Ridge) for making our own neutron-based work on proteins in solution possible. Finally, this work would not have been possible without the support (in part) of the Deutsche Forschungsgemeinschaft (DFG), the Agence Nationale de la Recherche (Project ID: ANR-16-CE92-0009 'ImmunoglobulinCrowding'), the ILL and the Studienstiftung des Deutschen Volkes.
Abade, GC, Cichocki, B, Ekiel-Jeyewska, ML, Nägele, G and Wajnryb, E (2010) Short-time dynamics of permeable particles in concentrated suspensions. Journal of Chemical Physics 132, 014503.
Acbas, G, Niessen, KA, Snell, EH and Markelz, A (2014) Optical measurements of long-range protein vibrations. Nature Communications 5, 3076.
Achterhold, K, Keppler, C, Ostermann, A, Van Bürck, U, Sturhahn, W, Alp, E and Parak, F (2002) Vibrational dynamics of myoglobin determined by the phonon-assisted Mössbauer effect. Physical Review E 65, 051916.
Agarwal, V, Xue, Y, Reif, B and Skrynnikov, NR (2008) Protein side-chain dynamics as observed by solution- and solid-state NMR spectroscopy: a similarity revealed. Journal of the American Chemical Society 130, 16611–16621.
Ahn, S, Kim, KH, Kim, Y, Kim, J and Ihee, H (2009) Protein tertiary structural changes visualized by time-resolved X-ray solution scattering. Journal of Physical Chemistry B 113, 13131–13133.
Aihara, T, Ueki, S, Nakamura, M and Arata, T (2006) Calcium-dependent movement of troponin I between troponin C and actin as revealed by spin-labeling EPR. Biochemical and Biophysical Research Communications 340, 462–468.
Aisa, D, Aisa, S, Babucci, E, Barocchi, F, Cunsolo, A, D'Anca, F, De Francesco, A, Formisano, F, Gahl, T, Guarini, E, Jahn, S, Laloni, A, Mutka, H, Orecchini, A, Petrillo, C, Pilgrim, W-C, Piluso, A, Sacchetti, F, Suck, J-B and Venturi, G (2006) The Brillouin spectrometer BRISP at the ILL. Physica B: Condensed Matter 385, 1092–1094.
Akasaka, K (2006) Probing conformational fluctuation of proteins by pressure perturbation. Chemical Reviews 106, 1814–1835.
Al-Ayoubi, S, Schummel, P, Golub, M, Peters, J and Winter, R (2017) Influence of cosolvents, self-crowding, temperature and pressure on the sub-nanosecond dynamics and folding stability of lysozyme. Physical Chemistry Chemical Physics 19, 14230–14237.
Alpert, Y (1980) Tentative use of NSE in biological studies. In Neutron Spin Echo, Budapest, Hungary: Springer, pp. 87–93.
Alpert, Y, Cser, L, Farago, B, Franek, F, Mezei | CommonCrawl |
Autophagy and apoptosis are regulated by stress on Bcl2 by AMBRA1 in the endoplasmic reticulum and mitochondria
Bojie Yang1, Quansheng Liu1 (ORCID: 0000-0002-2190-6106) & Yuanhong Bi2,3
Theoretical Biology and Medical Modelling volume 16, Article number: 18 (2019)
Background

Autophagy and apoptosis are two important physiological processes that determine cell survival or death in response to different stress signals. The regulatory mechanisms of these two processes share B-cell lymphoma-2 family proteins and AMBRA1, which are present in both the endoplasmic reticulum and mitochondria. B-cell lymphoma-2 family proteins sense different stresses and interact with AMBRA1 to regulate autophagy and apoptosis, which are respectively mediated by Beclin1 and Caspases. Therefore, we investigated how different levels of stress on B-cell lymphoma-2 family proteins that bind to AMBRA1 in the endoplasmic reticulum and mitochondria regulate the switch from autophagy to apoptosis.
Methods

In this paper, we considered the responses of B-cell lymphoma-2 family proteins, which bind to AMBRA1 in both the endoplasmic reticulum and mitochondria, to two different levels of stress in a model originally proposed by Kapuy et al. We investigated how these two stress levels affect the transition from autophagy to apoptosis and their effects on apoptosis activation over time. Additionally, we analyzed how the feedback regulation in this model affects the bifurcation diagrams of the two levels of stress and cell fate decisions between autophagy and apoptosis.
Results

Autophagy is activated for minor stress in mitochondria regardless of endoplasmic reticulum stress, while apoptosis is activated only for significant stress in mitochondria; apoptosis is thus sensitive only to mitochondrial stress. The time duration before apoptosis activation is longer in the presence of high AMBRA1 levels under high endoplasmic reticulum and mitochondrial stress. AMBRA1 can compete with B-cell lymphoma-2 family proteins to bind and activate Beclin1 and thus promote the autophagy process for a long time before apoptosis. Furthermore, apoptosis is more prone to occur with increasing rates of Caspases activation and Beclin1-A inactivation, and with an increasing Michaelis constant of Caspases.
Conclusions

A novel mathematical model has been developed to understand the complex regulatory mechanisms of autophagy and apoptosis. Our model may be applied to further autophagy-apoptosis dynamic modeling experiments and simulations.
Autophagy and apoptosis play crucial roles in deciding cellular survival and death in response to different stress signals, such as nutrient starvation and endoplasmic reticulum (ER) stress [1,2,3]. Autophagy, a cellular survival process, provides energy through degrading abnormal cytoplasmic components in the lysosomal pathway and can be activated by the Beclin1 protein in the ER [4,5,6,7,8,9]. However, excessive levels of autophagy can lead to apoptosis with increased stress levels [7, 10,11,12,13]. Apoptosis, a kind of programmed cell death, can be triggered by the proapoptotic protein Bax, which causes mitochondrial membrane permeabilization to release mitochondrial cytochrome c into the cytoplasm, further activating Caspases to induce apoptosis [14,15,16,17]. An increasing number of studies confirm that the autophagy and apoptosis networks are linked at various levels through common regulatory elements [18, 19].
B-cell lymphoma-2 (Bcl2) proteins in the mitochondria and ER are important regulators of autophagy and apoptosis [20,21,22,23]. Bcl2 in the ER (ER-Bcl2) and mitochondrial Bcl2 (mito-Bcl2) play different roles in activating the different responses of autophagy and apoptosis [24]. ER-Bcl2 negatively regulates the Beclin1-dependent autophagy program, while mito-Bcl2 has been shown to exert an antiapoptotic effect [25,26,27]. Several mathematical models of the autophagy-apoptosis network, including crosstalk between Bcl2 proteins, have been established [28, 29] to explore cell fate decisions [30,31,32]. Kapuy et al. presented a minimal model that contains interplay between crucial autophagy and apoptosis proteins; however, they did not compare simulations and experimental measurements. Tavassoly et al. addressed this disadvantage; however, the important AMBRA1 protein was not included in their assessment. The AMBRA1 protein translocates from mitochondria to the ER and regulates both Beclin1-dependent autophagy and apoptosis [33, 34]. AMBRA1 positively regulates the Beclin1-dependent autophagy program through functioning with ER-Bcl2 and mito-Bcl2 [35]. AMBRA1 binds preferentially to mito-Bcl2 under normal conditions; after autophagy initiation upon stress, AMBRA1 is released from mito-Bcl2 to ER-Bcl2, and binding to Beclin1 is increased to promote autophagy in the ER [36]. Therefore, AMBRA1, ER-Bcl2 and mito-Bcl2 should be included in the autophagy and apoptosis network in further analyses of cell fate decisions upon different stress levels.
Various stress signals in the ER and mitochondria can activate autophagy or apoptosis. For example, nutrient-induced stress of the ER activates autophagy to recycle damaged organelles [37, 38]. DNA damage can cause apoptosis through promoting the release of cytochrome c from the mitochondria to the cytosol [39,40,41]. Bcl2, a coregulator of autophagy and apoptosis in both the ER and mitochondria, can sense different stresses [42,43,44]. Therefore, it is important to explore how different levels of stress on ER-Bcl2 and mito-Bcl2 regulate the switch from autophagy to apoptosis. In this work, ER-Bcl2 and mito-Bcl2 protein binding to AMBRA1 is considered in a previously reported model [29], and different levels of stress are imposed in this model. Then, we focus on the effects of these two levels of stress on the switch from autophagy to apoptosis. The results are organized as follows. First, a new model is proposed. Second, typical time series of both active Beclin1 (Beclin-A) and Caspases and Caspases activation times are given for different stress levels, bifurcation analyses of these two stress levels are described, and the effect of feedback regulation intensity in this model on bifurcation diagrams is studied. Finally, we discuss the results and provide a conclusion.
Here, we consider two different levels of stress (such as transient nutrient starvation, DNA damage or growth factor withdrawal), denoted by S1 and S2, respectively, on two modules, the ER and mitochondria, as shown in the network diagram in Fig. 1. Under normal conditions, AMBRA1 binds preferentially to mito-Bcl2 over ER-Bcl2. However, after autophagy induction under stress conditions, some AMBRA1 proteins disassociate from mito-Bcl2 and translocate into the ER to promote Beclin1-A activity. Notably, AMBRA1 binds to Bcl2 and promotes the degradation of Bcl2, but its Bcl2-binding rates in the mitochondria and the ER are different. Additionally, Beclin1-A activity is promoted by AMBRA1. In the ER, Beclin1-A, an inducer of autophagy, can interconvert with an inactive form of Beclin1 (Beclin1-I). Beclin1-A and Beclin1-I deactivate and activate apoptosis-inducing Caspases, respectively, which in turn promote the production of Beclin1-I. Therefore, there is a positive feedback loop between Caspases and Beclin1-I but a double-negative feedback loop between Caspases and Beclin1-A. In the mitochondria, Bax promotes the release of cytochrome c to activate Caspases, and Caspases inhibit both ER-Bcl2 and mito-Bcl2. The two stresses, S1 and S2, are imposed on ER-Bcl2 and mito-Bcl2, respectively; ER-Bcl2 inhibits both Beclin1-A and Beclin1-I, while mito-Bcl2 inhibits Beclin1-A and Bax.
The autophagy-apoptosis network with two modules: the ER and mitochondria. The ER and mitochondria are represented by boxes outlined with green and blue dashed lines, respectively. State transitions are indicated by dotted lines with arrowheads, and promotion and inhibition are denoted by solid lines with arrowheads and dots, respectively
Dynamic equations
Based on their biochemical interactions shown in Fig. 1, the following eight components are considered: ER-Bcl2 ([Bcl2e]), mito-Bcl2 ([Bcl2m]), AMBRA1 ([AMBRA1]), Caspases ([Casp]), active Beclin1 ([Beca]), inactive Beclin1 ([Beci]), the Bcl2e-Beclin1 complex ([Becac]) and the Bcl2m-Bax complex ([Baxc]). The rate of change of every component is described by an ordinary differential equation (ODE) composed of production and consumption terms. A production term is a protein synthesis or activation term, while a consumption term is a protein degradation or inactivation term. Every term on the right-hand side of an ODE corresponds to a biochemical reaction, described using either the law of mass action or Michaelis-Menten kinetics; the Michaelis constant Jcp is the substrate concentration at which the rate equals half of the maximal rate [45]. The unit of time is h, while protein concentrations are in arbitrary units. The meanings and values of the parameters are given in Table 1. In this work, the time series and bifurcation curves were computed numerically with XPP-AUT. The rates of the components are described by Eqs. (1)–(10) as follows:
$$ \frac{d\left[Bcl2_e\right]}{dt} = k_1 - \left(k_2 + k_3\cdot S_1 + k_4\cdot Casp + k_5\cdot AMBRA1\right)\cdot Bcl2_e \tag{1} $$

$$ \frac{d\left[Bcl2_m\right]}{dt} = k_1 - \left(k_2 + k_3\cdot S_2 + k_4\cdot Casp + k_6\cdot AMBRA1\right)\cdot Bcl2_m \tag{2} $$

$$ \frac{d\left[AMBRA1\right]}{dt} = k_7 - \left(k_8\cdot Bcl2_e + k_9\cdot Bcl2_m + k_{10}\right)\cdot AMBRA1 \tag{3} $$

$$ \frac{d\left[Casp\right]}{dt} = \left(k_{12} + k_{13}\cdot Beci + k_{14}\cdot\left(Baxt - Baxc\right)\right)\cdot \frac{Caspt - Casp}{Jcp + Caspt - Casp} - \left(k_{15} + k_{16}\cdot Beca\right)\cdot \frac{Casp}{Jcp + Casp} \tag{4} $$

$$ \frac{d\left[Beca\right]}{dt} = k_{11}\cdot AMBRA1 - k_a\cdot\left(Bcl2_e - Becac - Becic\right)\cdot Beca - \left(k_{18} + k_{19}\cdot Casp\right)\cdot Beca + \left(k_b + k_2 + k_3\cdot S_1 + k_4\cdot Casp\right)\cdot Becac + k_{17}\cdot Beci \tag{5} $$

$$ \frac{d\left[Beci\right]}{dt} = -k_a\cdot\left(Bcl2_e - Becac - Becic\right)\cdot Beci + \left(k_{18} + k_{19}\cdot Casp\right)\cdot Beca + \left(k_b + k_2 + k_3\cdot S_1 + k_4\cdot Casp\right)\cdot Becic - k_{17}\cdot Beci \tag{6} $$

$$ \frac{d\left[Becac\right]}{dt} = k_a\cdot\left(Bcl2_e - Becac - Becic\right)\cdot Beca - \left(k_{18} + k_{19}\cdot Casp\right)\cdot Becac - \left(k_b + k_2 + k_3\cdot S_1 + k_4\cdot Casp\right)\cdot Becac + k_{17}\cdot Becic \tag{7} $$

$$ \frac{d\left[Baxc\right]}{dt} = k_c\cdot\left(Baxt - Baxc\right)\cdot\left(Bcl2_m - Baxc\right) - \left(k_d + k_2 + k_3\cdot S_2 + k_4\cdot Casp\right)\cdot Baxc \tag{8} $$

$$ Bcl2t = Bcl2_e + Bcl2_m \tag{9} $$

$$ Becic = Bect - Beca - Beci - Becac \tag{10} $$
Table 1 Parameters and their descriptions (protein concentrations are in arbitrary units, and the unit of time is h)
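To make the model concrete, Eqs. (1)–(10) can also be integrated numerically outside of XPP-AUT. The Python sketch below does so with SciPy; note that the parameter values and initial conditions used here are illustrative placeholders only (the values actually used in the paper are those of Table 1), so the fragment demonstrates the mechanics of the integration rather than reproducing the published trajectories.

```python
from scipy.integrate import solve_ivp

# Placeholder parameter values -- assumptions for illustration only;
# the values actually used in the paper are listed in Table 1.
P = dict(k1=0.1, k2=0.05, k3=0.1, k4=0.1, k5=0.1, k6=0.2, k7=0.1,
         k8=0.1, k9=0.2, k10=0.05, k11=0.1, k12=0.01, k13=0.1,
         k14=0.1, k15=0.1, k16=0.5, k17=0.1, k18=0.05, k19=0.5,
         ka=1.0, kb=0.1, kc=1.0, kd=0.1,
         Caspt=1.0, Baxt=1.0, Bect=1.0, Jcp=0.1)

def rhs(t, y, S1, S2):
    """Right-hand sides of Eqs. (1)-(8); Becic follows from Eq. (10)."""
    Bcl2e, Bcl2m, AMBRA1, Casp, Beca, Beci, Becac, Baxc = y
    Becic = P['Bect'] - Beca - Beci - Becac              # Eq. (10)
    free_e = Bcl2e - Becac - Becic                       # unbound ER-Bcl2
    rel = P['kb'] + P['k2'] + P['k3']*S1 + P['k4']*Casp  # complex release
    dBcl2e = P['k1'] - (P['k2'] + P['k3']*S1 + P['k4']*Casp + P['k5']*AMBRA1)*Bcl2e
    dBcl2m = P['k1'] - (P['k2'] + P['k3']*S2 + P['k4']*Casp + P['k6']*AMBRA1)*Bcl2m
    dAMBRA1 = P['k7'] - (P['k8']*Bcl2e + P['k9']*Bcl2m + P['k10'])*AMBRA1
    dCasp = ((P['k12'] + P['k13']*Beci + P['k14']*(P['Baxt'] - Baxc))
             * (P['Caspt'] - Casp) / (P['Jcp'] + P['Caspt'] - Casp)
             - (P['k15'] + P['k16']*Beca) * Casp / (P['Jcp'] + Casp))
    dBeca = (P['k11']*AMBRA1 - P['ka']*free_e*Beca
             - (P['k18'] + P['k19']*Casp)*Beca + rel*Becac + P['k17']*Beci)
    dBeci = (-P['ka']*free_e*Beci + (P['k18'] + P['k19']*Casp)*Beca
             + rel*Becic - P['k17']*Beci)
    dBecac = (P['ka']*free_e*Beca - (P['k18'] + P['k19']*Casp)*Becac
              - rel*Becac + P['k17']*Becic)
    dBaxc = (P['kc']*(P['Baxt'] - Baxc)*(Bcl2m - Baxc)
             - (P['kd'] + P['k2'] + P['k3']*S2 + P['k4']*Casp)*Baxc)
    return [dBcl2e, dBcl2m, dAMBRA1, dCasp, dBeca, dBeci, dBecac, dBaxc]

y0 = [1.0, 1.0, 0.1, 0.0, 0.1, 0.1, 0.0, 0.5]           # assumed initial state
sol = solve_ivp(rhs, (0.0, 50.0), y0, args=(0.1, 2.0))  # S1, S2 as in Fig. 2c
```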
We focused on exploring how different levels of stress on ER-Bcl2 and mito-Bcl2 affect the transition between autophagy and apoptosis. First, typical time series and Caspases activation times are presented for different stresses. Then, bifurcation analyses of the two stresses are carried out under different feedback regulation conditions.
Autophagy-apoptosis transition mediated by two different stresses on the ER and mitochondria
In general, autophagy is activated first by Beclin1-A in the ER, and apoptosis is then activated by Caspases in the mitochondria, depending on both the intensity and duration of stress on the ER and mitochondria. In this section, we focus on how the two stresses, S1 and S2, on ER-Bcl2 and mito-Bcl2, respectively, regulate the transition from autophagy to apoptosis. Without loss of generality, we explore the sensitivity of autophagy and apoptosis to the two stresses, S1 and S2, at different intensities, as shown in Fig. 2 by the time series of the concentrations of Bcl2e (black, solid curve), Bcl2m (short, red, dashed curve), AMBRA1 (gray, solid curve), Casp (long, blue, dashed curve) and Beca (green, dash-dot curve). First, at a low S2 (S2 = 0.2), we consider a low and a high value of S1 (S1 = 0.1 and 4.5), as shown in Fig. 2a and b, respectively. With either a low or a high S1, an abrupt increase in Beca, which facilitates the degradation of ER-Bcl2 and the dissociation of the Bcl2e-Beclin1 complex, can activate autophagy. Autophagy induction promotes the release of AMBRA1 from mito-Bcl2, which further promotes the Beclin1-A-dependent autophagy program. Furthermore, Beclin1-A and mito-Bcl2 deactivate Caspases to protect cells from death. Notably, although the high S1 value in Fig. 2b prompts ER-Bcl2 and mito-Bcl2 levels to first decrease rapidly and even remain very low, Caspases remain inactive because AMBRA1 promotes Beclin1-A activity in the ER, sustaining autophagy.
Time series of ER-Bcl2, mito-Bcl2, AMBRA1, Casp and Beca under different stress levels on the ER and mitochondria. a S1 = 0.1, S2 = 0.2; b S1 = 4.5, S2 = 0.2; c S1 = 0.1, S2 = 2; d S1 = 4.5, S2 = 2. The arrows indicate Casp activation times
In contrast, at a high S2 (S2 = 2), as shown in Fig. 2c and d, apoptosis is activated very quickly by increasing Casp levels, with either a low (S1 = 0.1 in Fig. 2c) or a high (S1 = 4.5 in Fig. 2d) value of S1. In fact, autophagy is first activated by low levels of mito-Bcl2 and high levels of AMBRA1 and S2, while sustained levels of these molecules activate Caspases-induced apoptosis. Additionally, the times before apoptosis activation, at which Casp reaches a high level (as labeled by arrows in Fig. 2), are 10 h and 20 h for a low S1 and a high S1, respectively. As shown in Fig. 2c and d, apoptosis can be activated at a higher S2, while the Caspases activation time depends on the value of S1: the activation time is shorter for a low S1 (Fig. 2c) than for a high S1 (Fig. 2d). When there is a large difference between a low stress level on the ER and a high stress level on the mitochondria, apoptosis can be easily activated. Otherwise, apoptosis activation is delayed when stress levels on both the ER and mitochondria are high. This delay occurs because weaker inhibition of AMBRA1, due to low ER-Bcl2 and mito-Bcl2 levels, maintains high AMBRA1 levels for a long time, sustaining autophagy before apoptosis. Furthermore, we determine the dependence of the time before apoptosis activation on the levels of the two stresses, S1 and S2, as shown in Fig. 3.
Time before apoptosis activation, at which Casp reached its highest level, for different values: a S1 at S2 = 0.2; b S2 at S1 = 0.1; c grayscale intensity showing the dependency of this time on both S1 and S2
At a low S2 = 0.2, as shown in Fig. 3a, apoptosis can never be activated with increasing S1. However, apoptosis will be activated when S2 is higher than 0.4 with a low S1 = 0.1, as shown in Fig. 3b. Additionally, the time before apoptosis activation shown in Fig. 3b decreases with increasing S2. All these results are displayed in the overall view of grayscale intensities of time as a function of both S1 and S2 in Fig. 3c. As shown in Fig. 3c, apoptosis can easily be activated in a short time with a high S2 and low S1.
As discussed above, a low level of stress on the mitochondria activates only autophagy at any stress level on the ER, while a high level of stress on the mitochondria can induce a transition from autophagy to apoptosis. Additionally, autophagy is first induced by a high Beca level accompanied by a low Casp level, and apoptosis is then activated when Casp gradually increases. Therefore, autophagy and apoptosis correspond to two steady states, and a further analysis of the stability of these steady states through bifurcation diagrams over different stress levels is presented in the next section.
Autophagy and apoptosis are determined by bifurcations for different stresses on the ER and mitochondria
Now, we show bifurcation diagrams of the steady state of Casp as well as Beca for two parameters, S1 and S2, in Fig. 4. As shown in Fig. 4a, codimension-one bifurcation curves of Beca and Casp with respect to the parameter S1 when S2 = 0.2 show only a stable steady state with a low Casp level and a high Beca level; this steady state corresponds to the autophagy process. However, the bifurcation curves of Beca and Casp with respect to the parameter S2 when S1 = 0.2 shown in Fig. 4b are bistable switch curves, in which upper and lower branches of the curves are composed of stable steady states separated by middle branches composed of unstable steady states. With a low value of S2, one of two stable steady states corresponds to a high Beca level but a low Casp level for autophagy, while the other corresponds to a low Beca level but a high Casp level for apoptosis. Furthermore, an increase in S2 leads to a transition from bistability to monostability via fold bifurcation point F, and Casp adopts a high steady state for apoptosis. Additionally, a high Casp steady state cannot return to a low steady state of autophagy with decreasing stresses because the other fold bifurcation point does not appear in the positive half-axis of the x-axis.
Bifurcation diagrams of both Beca (black lines) and Casp (gray lines). Codimension-one bifurcation curves with respect to a S1 at S2 = 0.2 and b S2 at S1 = 0.2. Stable steady states and unstable steady states are represented by solid and dotted lines, respectively. F is the equilibrium fold bifurcation point. c Codimension-two bifurcation diagram with respect to S1 and S2; the black line is the equilibrium fold bifurcation curve
Furthermore, the codimension-two bifurcation diagram of S1 and S2 in Fig. 4c is divided into two regions by the fold bifurcation curve f1: the monostable region on the right corresponds to apoptosis, and the bistable region on the left allows either autophagy or apoptosis. Additionally, the f1 curve is almost vertical as S2 increases and intersects the x-axis only at S2 = 0.16, which indicates that the system is sensitive only to S2. In addition, the codimension-two bifurcation diagram can be affected by feedback regulation, as discussed in the following section.
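The fold structure can also be probed without dedicated continuation software by a brute-force scan: for each value of S2, the system is integrated from both a low-Casp and a high-Casp initial state, and disagreement between the two endpoints signals bistability. The fragment below reuses rhs() and y0 from the integration sketch above (and therefore inherits its placeholder parameters); it is only a rough stand-in for the continuation analysis performed with XPP-AUT.

```python
import numpy as np
from scipy.integrate import solve_ivp

def final_casp(S1, S2, casp0, t_end=500.0):
    """Integrate to t_end from a given initial Casp level and return
    the Casp value the trajectory settles at."""
    y = list(y0)
    y[3] = casp0                      # index 3 = Casp
    sol = solve_ivp(rhs, (0.0, t_end), y, args=(S1, S2))
    return sol.y[3, -1]

for S2 in np.linspace(0.0, 2.0, 11):
    lo = final_casp(0.2, S2, casp0=0.0)   # start near the autophagy state
    hi = final_casp(0.2, S2, casp0=0.9)   # start near the apoptosis state
    regime = "bistable" if abs(hi - lo) > 0.2 else "monostable"
    print(f"S2 = {S2:.1f}: Casp -> {lo:.2f} / {hi:.2f} ({regime})")
```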
The effect of feedback regulation on autophagy and apoptosis
In this section, we explore the effect of all feedback regulation parameters on the codimension-two bifurcation diagrams of S1 and S2. The parameters are divided into three groups: the first two groups concern the activation and inhibition of Caspases and of Beclin1-A, respectively (Fig. 5a–d), and the third group is associated with Bcl2 (Fig. 5e–f). Because parameters within a group have similar effects, only one bifurcation diagram is shown for a representative parameter in each group. Among the Caspases activation rates k12, k13 and k14, codimension-two bifurcation diagrams of S1 and S2 for different k14 values are shown in Fig. 5a: the fold bifurcation curves move left and the monostable region is enlarged as the activation rate k14 increases. A higher activation rate increases the Casp level and thus activates apoptosis at a lower S2 value. In contrast, a high Caspases inactivation rate, k15, shifts the fold bifurcation curve right and reduces the monostable region in the bifurcation diagram in Fig. 5b, and the parameter k16 behaves similarly. Therefore, when the Caspases inactivation rates are high, apoptosis can be activated only at high S2 values.
Codimension-two bifurcation diagrams of S1 and S2 for different levels of feedback regulation. a k14; b k15; c k17; d k19; e k1; f Jcp. B and M denote bistability and monostability, respectively
In contrast, based on Fig. 5c and d, the fold bifurcation curve moves right and left with an increase in the activation (k17) and degradation (k18, k19) rates of Beclin1-A, respectively. A high Beclin1-A activation rate increases the level of Beca to activate autophagy for the large bistable region, while a high Beclin1-A inactivation rate decreases the Beca level to easily activate apoptosis for the large monostable region. All parameters related to Bcl2 in the third group have little effect on the codimension-two bifurcation diagrams, except k1 and Jcp. A high k1 value moves the fold bifurcation curve to the right for autophagy, while a high Jcp value moves the fold bifurcation curve to the left for apoptosis.
Autophagy and apoptosis play essential roles in making cell fate decisions between life and death under stress. Autophagy promotes cell survival through activating Beclin1-A in the ER and can switch to apoptosis when Caspases in the mitochondria are activated. The Bcl2 and AMBRA1 proteins in both the ER and mitochondria act as important regulators of autophagy and apoptosis.
In this study, we added Bcl2 and AMBRA1 in the ER and mitochondria to an original model proposed by Kapuy et al. and investigated how two different stresses on Bcl2 in the ER (S1) and mitochondria (S2) affect the transition from autophagy to apoptosis. Based on typical time series and bifurcation analyses, we concluded that autophagy is activated upon a low level of stress on the mitochondria regardless of the level of stress on the ER (Fig. 2a, b), while apoptosis is activated upon a high level of stress on the mitochondria (Fig. 2c, d). Normally, AMBRA1 partially localizes to the mitochondria and translocates to the ER to activate Beclin1 when autophagy is induced. In Fig. 2c, autophagy is maintained for a short time but then turns to apoptosis quickly because the AMBRA1 protein can compete with both ER-Bcl2 and mito-Bcl2 to bind and activate Beclin1. However, in Fig. 2d, autophagy is maintained for a long time before slowly switching to apoptosis. This occurs because higher AMBRA1 levels, due to less inhibition by lower mito-Bcl2 and ER-Bcl2 levels, play a positive role in maintaining autophagy for a much longer time (Fig. 2d). The delay in switching from autophagy to apoptosis shown in Fig. 2c and d was caused mainly by AMBRA1 under stress conditions. The delay for different stresses can also be seen clearly in Fig. 3c. Furthermore, apoptosis is sensitive only to S2, which is the key factor dividing the parameter plane (S1, S2) into regions of bistability and monostability (Fig. 4c). Apoptosis is prone to occur with an increased Caspases activation rate, Beclin1-A deactivation rate and Caspases Michaelis constant (Fig. 5). In summary, under two different levels of stress on mito-Bcl2 and ER-Bcl2, the process of autophagy is promoted and maintained by AMBRA1 in the ER, while apoptosis is decided mainly by the stress on the mitochondria.
Autophagy and apoptosis are important cellular responses to pharmacological interventions for diseases that are controlled by a dynamic network of interacting proteins. It is important to identify and target key components of this network when designing therapeutic regimens for diseases. In this work, Bcl2 and AMBRA1 in the ER and mitochondria are included in a previously described model, and we explored cellular responses to different levels of stress on the ER and mitochondria and feedback regulation in the network. However, the inclusion of more proteins in a more complete autophagy-apoptosis network is necessary, and cell fate in response to different conditions should be quantitatively analyzed.
A mathematical model of autophagy and apoptosis regulated by stress on the binding of Bcl2 with AMBRA1 in the ER and mitochondria has been established. This model links experimental evidence and theoretical biology for a more comprehensive understanding of the complex regulatory mechanisms of autophagy and apoptosis. Therefore, our work may provide an application for further experiments and simulations of dynamic autophagy-apoptosis models.
All data generated or analyzed during this study are included in this published article.
Baxc: Bcl2m-Bax complex
Bcl2: B-cell lymphoma-2
Bcl2e: B-cell lymphoma-2 in the endoplasmic reticulum
Bcl2m: B-cell lymphoma-2 in the mitochondria
Beca: Active Beclin1
Becac: Bcl2e-Beclin1 complex
Beci: Inactive Beclin1
Beclin1-I: Inactive Beclin1
Beclin-A: Active Beclin1
Casp: Caspases
ER: Endoplasmic reticulum
ER-Bcl2: B-cell lymphoma-2 in the endoplasmic reticulum
mito-Bcl2: B-cell lymphoma-2 in the mitochondria
ODE: Ordinary differential equation
Marycz K, Kornicka K, Szlapka-Kosarzewska J, Weiss C. Excessive endoplasmic reticulum stress correlates with impaired mitochondrial dynamics, Mitophagy and apoptosis, in liver and adipose tissue, but not in muscles in EMS horses. Int J Mol Sci. 2018;19:165.
Song S, Tan J, Miao Y, Li M, Zhang Q. Crosstalk of autophagy and apoptosis: involvement of the dual role of autophagy under ER stress. J Cell Physiol. 2017;232:2977–84.
Yu P, Wang HY, Tian M, Li AX, Chen XS, Wang XL, Zhang Y, Cheng Y. Eukaryotic elongation factor-2 kinase regulates the cross-talk between autophagy and pyroptosis in doxorubicin-treated human melanoma cells in vitro. Acta Pharmacol Sin. 2019;0:1–8.
Senft D, Ronai ZA. UPR, autophagy, and mitochondria crosstalk underlies the ER stress response. Trends Biochem Sci. 2015;40:141–8.
Iurlaro R, Munoz-Pinedo C. Cell death induced by endoplasmic reticulum stress. FEBS J. 2016;283:2640–52.
Jin H, Lei J. A hybrid model of molecular regulation and population dynamics for yeast autophagy. J Theor Biol. 2016;402:45–53.
Tang D, Kang R, Berghe TV, Vandenabeele P, Kroemer G. The molecular machinery of regulated cell death. Cell Res. 2019;0:1–18.
Hill SM, Wrobel L, Rubinsztein DC. Post-translational modifications of Beclin 1 provide multiple strategies for autophagy regulation. Cell Death Differ. 2019;26:617–29.
Maiuri MC, Criollo A, Kroemer G. Crosstalk between apoptosis and autophagy within the Beclin-1 interactome. EMBO J. 2010;29:515–6.
Oral O, Akkoc Y, Bayraktar O, Gozuacik D. Physiological and pathological significance of the molecular cross-talk between autophagy and apoptosis. Histol Histopathol. 2016;31:479–98.
Li M, Gao P, Zhang J. Crosstalk between autophagy and apoptosis: potential and emerging therapeutic targets for cardiac diseases. Int J Mol Sci. 2016;17:332.
Clarke AJ, Simon AK. Autophagy in the renewal, differentiation and homeostasis of immune cells. Nat Rev Immunol. 2019;19:170–83.
Doherty J, Baehrecke EH. Life, death and autophagy. Nat Cell Biol. 2018;20:1110–7.
Kapuy O, Liz'ak B, Stiller I, B'anhegyi G. A systems biological perspective of cellular stress-directed programmed cell death. Comput Mol Biosci. 2014;04:28–34.
Martinou JC, Youle RJ. Mitochondria in apoptosis: Bcl-2 family members and mitochondrial dynamics. Dev Cell. 2011;21:92–101.
Booth LA, Tavallai S, Hamed HA, Cruickshanks N, Dent P. The role of cell signalling in the crosstalk between autophagy and apoptosis. Cell Signal. 2014;26:549–55.
Santos LC, Vogel R, Chipuk JE, Birtwistle MR, Stolovitzky G, Meyer P. Mitochondrial origins of fractional control in regulated cell death. Nat Commun. 2019;10:1313.
Gump JM, Thorburn A. Autophagy and apoptosis: what is the connection? Trends Cell Biol. 2011;21:387–92.
Gordy C, He YW. The crosstalk between autophagy and apoptosis: where does this lead? Protein Cell. 2012;3:17–27.
Chen X, He Y, Lu F. Autophagy in stem cell biology: a perspective on stem cell self-renewal and differentiation. Stem Cells Int. 2018;2018:9131397.
Liu B, Oltvai ZN, Bayir H, Silverman GA, Pak SC, Perlmutter DH, Bahar I. Quantitative assessment of cell fate decision between autophagy and apoptosis. Sci Rep. 2017;7:17605.
Cooper KF. Till death do us part: the marriage of autophagy and apoptosis. Oxidative Med Cell Longev. 2018;2018:4701275.
Singh R, Letai A, Sarosiek K. Regulation of apoptosis in health and disease: the balancing act of BCL-2 family proteins. Nat Rev Mol Cell Biol. 2019;20:175–93.
Akl H, Vervloessem T, Kiviluoto S, Bittremieux M, Parys JB, De Smedt H, Bultynck G. A dual role for the anti-apoptotic Bcl-2 protein in cancer: mitochondria versus endoplasmic reticulum. Biochim Biophys Acta. 2014;1843:2240–52.
Erlich S, Mizrachy L, Segev O, Lindenboim L, Zmira O, Adi-Harel S, Hirsch JA, Stein R, Pinkas-Kramarski R. Differential interactions between Beclin 1 and Bcl-2 family members. Autophagy. 2014;3:561–8.
Sohn EJ, Park HT. Natural agents mediated autophagic signal networks in cancer. Cancer Cell Int. 2017;17:110.
Kang R, Zeh HJ, Lotze MT, Tang D. The Beclin 1 network regulates autophagy and apoptosis. Cell Death Differ. 2011;18:571–80.
Tavassoly I, Parmar J, Shajahan-Haq AN, Clarke R, Baumann WT, Tyson JJ. Dynamic modeling of the interaction between autophagy and apoptosis in mammalian cells. CPT Pharmacometrics Syst Pharmacol. 2015;4:263–72.
Kapuy O, Vinod PK, Mandl J, Banhegyi G. A cellular stress-directed bistable switch controls the crosstalk between autophagy and apoptosis. Mol BioSyst. 2013;9:296–306.
Fernandez AF, et al. Disruption of the beclin 1-BCL2 autophagy regulatory complex promotes longevity in mice. Nature. 2018;558:136–40.
Heath-Engel HM, Chang NC, Shore GC. The endoplasmic reticulum in apoptosis and autophagy: role of the BCL-2 protein family. Oncogene. 2008;27:6419–33.
Vogler M, Weber K, Dinsdale D, Schmitz I, Schulze-Osthoff K, Dyer MJ, Cohen GM. Different forms of cell death induced by putative BCL2 inhibitors. Cell Death Differ. 2009;16:1030–9.
Yazdankhah M, Farioli-Vecchioli S, Tonchev AB, Stoykova A, Cecconi F. The autophagy regulators Ambra1 and Beclin 1 are required for adult neurogenesis in the brain subventricular zone. Cell Death Dis. 2014;5:e1403.
Fimia GM, Corazzari M, Antonioli M, Piacentini M. Ambra1 at the crossroad between autophagy and cell death. Oncogene. 2013;32:3311–8.
Fimia GM, Stoykova A, Romagnoli A, Giunta L, Di Bartolomeo S, Nardacci R, Corazzari M, Fuoco C, Ucar A, Schwartz P, et al. Ambra1 regulates autophagy and development of the nervous system. Nature. 2007;447:1121–5.
Di Bartolomeo S, Corazzari M, Nazio F, Oliverio S, Lisi G, Antonioli M, Pagliarini V, Matteoni S, Fuoco C, Giunta L, et al. The dynamic interaction of AMBRA1 with the dynein motor complex regulates mammalian autophagy. J Cell Biol. 2010;191:155–68.
Sotthibundhu A, Promjuntuek W, Liu M, Shen S, Noisa P. Roles of autophagy in controlling stem cell identity: a perspective of self-renewal and differentiation. Cell Tissue Res. 2018;374:205–16.
Chinchwadkar S, Padmanabhan S, Mishra P, Singh S, Suresh SN, Vats S, Barve G, Ammanathan V, Manjithaya R. Multifaceted housekeeping functions of autophagy. J Indian Inst Sci. 2017;97:79–94.
Kaczanowski S. Apoptosis: its origin, history, maintenance and the medical implications for cancer and aging. Phys Biol. 2016;13:031001.
Ashkenazi A, Fairbrother WJ, Leverson JD, Souers AJ. From basic apoptosis discoveries to advanced selective BCL-2 family inhibitors. Nat Rev Drug Discov. 2017;16:273–84.
Zheng P, Chen Q, Tian X, Qian N, Chai P, Liu B, Hu J, Blackstone C, Zhu D, Teng J, et al. DNA damage triggers tubular endoplasmic reticulum extension to promote apoptosis by facilitating ER-mitochondria signaling. Cell Res. 2018;28:833–54.
Ciechomska IA, Goemans GC, Skepper JN, Tolkovsky AM. Bcl-2 complexed with Beclin-1 maintains full anti-apoptotic function. Oncogene. 2009;28:2128–41.
Zheng JH, Viacava Follis A, Kriwacki RW, Moldoveanu T. Discoveries and controversies in BCL-2 protein-mediated apoptosis. FEBS J. 2016;283:2690–700.
Siddiqui WA, Ahad A, Ahsan H. The mystery of BCL2 family: BCL-2 proteins and apoptosis: an update. Arch Toxicol. 2015;89:289–317.
Hermann F. Statistical estimations in enzyme kinetics. Eur J Biochem. 1974;43:377–8.
This work was supported by the National Natural Science Foundation of China (Grants 11562014 and 11702149) and the Natural Science Foundation of the Inner Mongolia Autonomous Region (Grants 2017MS0105 and 2017MS0108).
School of Mathematical Sciences, Inner Mongolia University, Hohhot, 010021, China
Bojie Yang & Quansheng Liu
School of Statistics and Mathematics, Inner Mongolia University of Finance and Economics, Hohhot, 010070, China
Yuanhong Bi
Inner Mongolia Key Laboratory of Economic Data Analysis and Mining, Hohhot, 010070, China
BJY designed the mathematical model, performed the simulations, analyzed the data, and wrote the manuscript. QSL conceived the study, participated in its design and coordination, and analyzed the data. YHB supervised the research, wrote the manuscript, designed the mathematical model, and analyzed the data. All authors read and approved the final manuscript.
Correspondence to Quansheng Liu.
Yang, B., Liu, Q. & Bi, Y. Autophagy and apoptosis are regulated by stress on Bcl2 by AMBRA1 in the endoplasmic reticulum and mitochondria. Theor Biol Med Model 16, 18 (2019). https://doi.org/10.1186/s12976-019-0113-5
AMBRA1 | CommonCrawl |
multinomial coefficient proof
How can one prove the product formula for the multinomial coefficient,

$$ {n \choose n_1}{n-n_1\choose n_2}\cdots{n-n_1-n_2-\cdots-n_{k-2} \choose n_{k-1}}{n-n_1-n_2-\cdots-n_{k-2}-n_{k-1} \choose n_{k}}=\frac{n!}{n_1!\,n_2!\cdots n_k!}, $$

where $n=\sum_{i=1}^{k}n_i$?

The cleanest route is to expand the binomial coefficients and let the intermediate factorials telescope. Starting with the first two factors,

$$ {n \choose n_1}{n-n_1\choose n_2}=\frac{n!}{n_1!(n-n_1)!}\cdot\frac{(n-n_1)!}{n_2!(n-n_1-n_2)!}=\frac{n!}{n_1!\,n_2!\,(n-n_1-n_2)!}, $$

and then with the next term,

$$ {n \choose n_1}{n-n_1\choose n_2}{n-n_1-n_2\choose n_3}=\frac{n!}{n_1!\,n_2!\,(n-n_1-n_2)!}\cdot\frac{(n-n_1-n_2)!}{n_3!(n-n_1-n_2-n_3)!}=\frac{n!}{n_1!\,n_2!\,n_3!\,(n-n_1-n_2-n_3)!}. $$

Continuing through all $k$ factors gives

$$ {n \choose n_1}\cdots{n-n_1-n_2-\cdots-n_{k-1} \choose n_{k}}=\frac{n!}{n_1!\,n_2!\cdots n_k!}\cdot\frac{1}{(n-n_1-n_2-\cdots-n_k)!}, $$

and since $n=\sum_{i=1}^{k}n_i$, the last factor is $\frac{1}{0!}=1$, which proves the identity.

Alternatively, one can proceed by induction on $k$: when $k=1$ the result is trivially true, and when $k=2$ the result is the binomial theorem; applying the binomial theorem to the last factor then completes the induction. (In the context of the multinomial theorem itself, the same generalized FOIL argument used in the binomial and trinomial cases, as in expanding the third power of the trinomial $a+b+c$, carries over, and the theorem can be used to generalize Pascal's triangle, or Pascal's pyramid, to Pascal's simplex. Substituting $x_i=1$ for all $i$ into the multinomial theorem shows, for instance, that the multinomial coefficients for fixed $n$ sum to $m^n$.)

There is also a direct combinatorial interpretation: the multinomial coefficient counts the distinct permutations of a multiset. For example, the number of distinct permutations of the letters of the word MISSISSIPPI, which has 1 M, 4 Is, 4 Ss, and 2 Ps, is $\frac{11!}{1!\,4!\,4!\,2!}=34650$. (This is just like saying that there are $11!$ permutations of the letters; however, we created duplicate permutations, because some letters are the same, and must divide to correct our answer.) As further background, the multinomial coefficients are the terms in the multinomial series expansion, and the largest power of a prime $p$ that divides a multinomial coefficient may be computed using a generalization of Kummer's theorem.
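The identity is easy to sanity-check numerically. The short Python fragment below evaluates both sides for the MISSISSIPPI example, the telescoping product of binomial coefficients on the one hand and the factorial formula on the other, and confirms that they agree at 34650:

```python
from math import comb, factorial

counts = {'M': 1, 'I': 4, 'S': 4, 'P': 2}   # letter multiplicities
n = sum(counts.values())                     # 11 letters in total

# Left-hand side: telescoping product of binomial coefficients.
remaining, product = n, 1
for k in counts.values():
    product *= comb(remaining, k)
    remaining -= k

# Right-hand side: n! divided by the factorials of the multiplicities.
multinomial = factorial(n)
for k in counts.values():
    multinomial //= factorial(k)

assert product == multinomial == 34650
```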
Zero-Sum Games
Samir Khan, Christopher Williams, Edwin Huang, and Jimin Khim
A zero-sum game is a game in which it is impossible for any player to help themselves without hurting another player. The name comes from the fact that in such a situation, the gains and losses of all the players sum to zero. For example, if players A and B are playing a zero-sum game, and player A chooses a strategy that wins him $1 more, then this strategy must cause player B to lose $1 more.
Many simple real-world situations can be modeled as zero-sum games: for example, dividing up a limited supply of resources among neighboring countries is zero-sum, because any extra resources one country takes leave another country with fewer resources.
Equilibria in Zero-Sum Games
Two-player zero-sum games are particularly well behaved, because they always have at least one Nash equilibrium as long as mixed strategies are allowed.
Suppose that players A and B play a game where they each write down 0 or 1 on a piece of paper, and receive payoffs according to the following matrix:
$$\begin{array}{c|cc} (A/B) & A \text{ plays } 0 & A \text{ plays } 1 \\ \hline B \text{ plays } 0 & 3/{-3} & -4/4 \\ B \text{ plays } 1 & -2/2 & 3/{-3} \end{array}$$

where each entry gives player A's payoff followed by player B's.
Effectively, player A wins when they play the same numbers and player B wins when they play different numbers. Note that this is a zero-sum game, because in any situation, the gains and losses of A and B sum to zero.
Now, if player A plays a mixed strategy where he plays 0 with probability $p$ and 1 with probability $1-p$, his expected payoff if player B plays a 0 is $3p-4(1-p)=7p-4$. If player B plays a 1, his expected payoff is $-2p+3(1-p)=-5p+3$. At a Nash equilibrium, these two will be equal, so we find $7p-4=-5p+3 \implies p=\frac{7}{12}$. So player A should play 0 $\frac{7}{12}$ of the time and 1 $\frac{5}{12}$ of the time.
In general, when there are more than two options available to each player, the Nash equilibrium for the zero-sum game can be found by solving an optimization problem. If $M$ is the payoff matrix, the problem is to find a vector $u$ that minimizes $\sum_i u_i$ subject to $u \geq 0$ and $Mu \geq 1$. Then, rescaling $u$ to make it a probability vector gives the Nash equilibrium for the zero-sum game, which is guaranteed to exist.
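As a concrete sketch of that optimization, the fragment below solves the 0/1 game above with SciPy's linear-programming routine. A constant is first added to every payoff; this leaves the equilibrium strategies unchanged but guarantees that the game value is positive, which the substitution $u = x/v$ behind this LP formulation requires. The specific shift used here is just a convenient choice, not part of the method described above.

```python
import numpy as np
from scipy.optimize import linprog

# Player A's payoffs: rows are A's actions (0, 1), columns are B's.
M = np.array([[ 3.0, -2.0],
              [-4.0,  3.0]])

shift = 1.0 - M.min()                 # make every payoff positive
Mp = M + shift

# minimize sum(u) subject to Mp^T u >= 1, u >= 0
res = linprog(c=np.ones(2), A_ub=-Mp.T, b_ub=-np.ones(2))
u = res.x

strategy = u / u.sum()                # rescale to a probability vector
value = 1.0 / u.sum() - shift         # undo the payoff shift

print(strategy)   # -> [0.5833..., 0.4167...], i.e. (7/12, 5/12)
print(value)      # -> 0.0833...,              i.e. 1/12
```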
Real-World Examples

A good example to see how zero-sum games can be used to model real-world situations, but fail to account for all complexities, is a simple election. If there are candidates $A$, $B$, and $C$, where each receives some number of votes, and the candidate with the highest number of votes wins, then this situation is a zero-sum game. If candidate $A$ wishes to gain more votes, they must be taken from candidate $B$ or $C$. However, this is only true under the assumption that every single person in the population is voting for one of the three candidates: if some voters abstain, then a candidate can increase his vote total by attracting abstaining voters, without decreasing any of the other candidates' totals.
Other common real-life examples of zero-sum games include games like chess and poker, and financial instruments like options and futures (excluding transaction costs). In each of these cases, an increase in one player's payoff corresponds to a decrease in another player's payoff.
Cite as: Zero-Sum Games. Brilliant.org. Retrieved from https://brilliant.org/wiki/zero-sum-games/
Methodology article | Open Access | Published: 12 January 2016
GBS-SNP-CROP: a reference-optional pipeline for SNP discovery and plant germplasm characterization using variable length, paired-end genotyping-by-sequencing data
Arthur T. O. Melo1, Radhika Bartaula2 & Iago Hale1
BMC Bioinformatics volume 17, Article number: 29 (2016)
Background

With its simple library preparation and robust approach to genome reduction, genotyping-by-sequencing (GBS) is a flexible and cost-effective strategy for SNP discovery and genotyping, provided an appropriate reference genome is available. For resource-limited curation, research, and breeding programs of underutilized plant genetic resources, however, even low-depth references may not be within reach, despite declining sequencing costs. Such programs would find value in an open-source bioinformatics pipeline that can maximize GBS data usage and perform high-density SNP genotyping in the absence of a reference.
Results

The GBS SNP-Calling Reference Optional Pipeline (GBS-SNP-CROP) developed and presented here adopts a clustering strategy to build a population-tailored "Mock Reference" from the same GBS data used for downstream SNP calling and genotyping. Designed for libraries of paired-end (PE) reads, GBS-SNP-CROP maximizes data usage by eliminating unnecessary data culling due to imposed read-length uniformity requirements. Using 150 bp PE reads from a GBS library of 48 accessions of tetraploid kiwiberry (Actinidia arguta), GBS-SNP-CROP yielded on average three times as many SNPs as TASSEL-GBS analyses (32 and 64 bp tag lengths) and over 18 times as many as TASSEL-UNEAK, with fewer genotyping errors in all cases, as evidenced by comparing the genotypic characterizations of biological replicates. Using the published reference genome of a related diploid species (A. chinensis), the reference-based version of GBS-SNP-CROP behaved similarly to TASSEL-GBS in terms of the number of SNPs called but had an improved read depth distribution and fewer genotyping errors. Our results also indicate that the sets of SNPs detected by the different pipelines above are largely orthogonal to one another; thus GBS-SNP-CROP may be used to augment the results of alternative analyses, whether or not a reference is available.
Conclusions

By achieving high-density SNP genotyping in populations for which no reference genome is available, GBS-SNP-CROP is worth consideration by curators, researchers, and breeders of under-researched plant genetic resources. In cases where a reference is available, especially if from a related species or when the target population is particularly diverse, GBS-SNP-CROP may complement other reference-based pipelines by extracting more information per sequencing dollar spent. The current version of GBS-SNP-CROP is available at https://github.com/halelab/GBS-SNP-CROP.git
The conservation and utilization of plant genetic diversity is regularly cited as a critical strategy in meeting the growing global food demand [1]. For the handful of truly global crops that provide the vast majority of the world's caloric and protein intake (e.g. wheat, rice, maize, soybean, palm) [2], extensive resources exist to facilitate such ongoing improvement, including well-characterized gene/seed banks, international communities of researchers, and vast collections of genetic and genomic resources. Rightly, the call for ongoing investment in such resources continues [3]. For more minor agricultural plant species, however, particularly those of unique or limited relevance to developing countries, relatively fewer resources exist, leading to the designation of such species as underutilized, neglected, or orphan crops [4]. In West Africa alone, examples of such species abound and include cereal grains (e.g. Digitaria exilis), leafy vegetables and seed crops (e.g. Telfairia occidentalis), legumes (e.g. Sphenostylis stenocarpa), tuber crops (e.g. Plectranthus rotundifolius), corm crops (e.g. Colocasia esculenta), fruit trees (e.g. Annona senegalensis), oil nut trees (e.g. Vitellaria paradoxa), and herbs (e.g. Hibiscus sabdariffa). Though historically under-researched, orphan crops are now recognized as germane to the issue of future global food security due to their potential to diversify the food supply [5], enhance the micronutrient content of people's daily diets [6], perform favorably under local and often extreme environmental conditions [7], and improve the overall environmental sustainability of smallholder agricultural systems [8].
Increasingly rapid and inexpensive genome-wide genotyping methods, enabled by ever improving next generation sequencing (NGS) platforms, have revolutionized trait development, breeding, and germplasm curation in the global crops [9]; and the potential for such genome-enabled improvement of orphan crops is clear. By virtue of its simple library preparation and robust approach to genome reduction, genotyping-by-sequencing (GBS) [10] in particular has emerged as a cost-effective strategy for genome-wide SNP discovery and population genotyping. The objective of GBS is not merely to discover SNPs for use in a fixed downstream assay (e.g. SNP-chip) but rather to simultaneously discover such polymorphisms and use them to genotype a population of interest. By combining the power of multiplexed NGS with enzyme-based genome complexity reduction, GBS is able to genotype large populations of individuals for many thousands of SNPs for well under $0.01 per datapoint [11, 12]. Shown to be robust and flexible across a range of species and populations, GBS has become an important tool for genomic studies in plants, yielding molecular markers for genetic mapping [12], genomic selection [13], genetic diversity studies [14, 15], germplasm characterization [16–18], cultivar identification [19–21], and conservation biology and evolutionary ecology studies [22].
To date, relatively little effort has been devoted to developing high-performing GBS pipelines in the absence of a reference genome [23], perhaps in part due to the assumption that a low-quality reference of any plant species is now affordable enough to be within the reach of interested programs [24, 25]. For severely under-resourced curation, research, and breeding programs for orphan crops, however, such an assumption may not hold. Although great effort is underway to muster the resources necessary to develop foundational genomics resources like annotated reference genomes for some orphan crop species (e.g. the African Orphan Crops Consortium) [26], such efforts are necessarily targeted and narrow in scope relative to the estimated 80,000 edible plant species around the world, of varying relevance to local diets [27–29]. For many orphan crop species, therefore, a reference-free GBS pipeline could be of great value, enabling access to the per-genotype cost-effectiveness of GBS without the up-front and often prohibitive cost of a reference genome.
Here, we describe an efficient pipeline for SNP discovery and genotyping using paired-end (PE) GBS data of arbitrary read lengths to facilitate genetic characterization, whether or not a reference genome is available. Executed via a sequence of Perl scripts, this GBS SNP-Calling Reference Optional Pipeline (GBS-SNP-CROP) integrates custom parsing and filtering procedures with well-known, vetted bioinformatic tools, giving users full access to all intermediate files.
In this section, we explain the GBS-SNP-CROP workflow in detail and discuss its strategies for maximizing data usage and distinguishing high-confidence SNPs from both sequencing and PCR errors. Finally, we present data on its favorable performance relative to the reference-based TASSEL-GBS [30] and network-based (i.e. reference-independent) TASSEL-UNEAK [15] pipelines for a sample dataset consisting of 150 bp PE GBS reads for a library of 48 diverse accessions of cold-hardy kiwiberry (Actinidia arguta), an underutilized tetraploid horticultural species.
The GBS-SNP-CROP workflow
The GBS-SNP-CROP workflow can be divided conceptually into four main stages: (1) Process the raw GBS data; (2) Build the Mock Reference, if a reference genome is unavailable; (3) Map the processed reads and generate standardized alignment files; and (4) Call SNPs and genotypes (Table 1; Fig. 1). In this section, we explain how these stages are accomplished within GBS-SNP-CROP, with particular emphasis on the rationale throughout. While the relevant Perl scripts are referenced in this discussion, please refer to the GBS-SNP-CROP User Manual for the details of pipeline execution (https://github.com/halelab/GBS-SNP-CROP.git).
Table 1 Outline of the GBS-SNP-CROP workflow, featuring inputs and outputs of all seven steps (scripts)
Schematic of the four stages of the GBS-SNP-CROP workflow
Stage 1. Process the raw GBS data
As written, the code associated with Step 1 ("Parse the raw reads"; see Table 1) is compatible with Illumina 1.8+ sequencing data, where the input files are assumed to be CASAVA-processed, paired-end (i.e. R1 and R2), and compressed FASTQ files (*.fastq.gz). As per the protocol developed by Poland et al. [11], these FASTQ files are assumed to contain multiplexed reads from a barcoded library of genotypes, where the R1 read begins with a 6–10 bp barcode followed by the restriction site of the less-frequent cutter (e.g. PstI); and the R2 read begins with the restriction site of the more-frequent cutter (e.g. MspI). To execute this stage of the pipeline, an auxiliary text file is required that associates each barcode with its corresponding genotype ID (see example "Barcode-ID" file in Appendix A of the GBS-SNP-CROP User Manual).
The script for Step 1 processes the raw reads in a relatively standard manner, beginning by searching the R1 read for a high-confidence barcode sequence (i.e. no more than one mismatch, relative to the provided list of barcodes) immediately preceding the expected cut site remnant of the less frequent cutter. If both barcode and cut site are found, they are trimmed from the read, the barcode is appended to the headers of both the R1 and R2 reads, and the pair is retained for further processing. This first parsing script then searches for the 3′-ends of each GBS fragment, indicated by the in-line presence of the Illumina common adapter coupled with the appropriate cut site residue. If found, the reads are truncated appropriately. Finally, all reads consisting of a majority of uncalled bases (i.e. N's) are discarded.
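To make the parsing step concrete, the following is a minimal Python sketch of the barcode/cut-site check (the pipeline itself is implemented in Perl; the barcode list and the use of "TGCAG" as the PstI cut-site remnant are illustrative assumptions):

```python
# Illustrative sketch: accept an R1 read only if it begins with a known barcode
# (at most one mismatch) immediately followed by the expected cut-site remnant.

def hamming(a: str, b: str) -> int:
    """Count mismatched positions between two equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def match_barcode(r1_seq, barcodes, cut_remnant="TGCAG"):
    """Return the matching barcode, or None if no barcode/cut-site pair is found."""
    for bc in barcodes:
        prefix = r1_seq[:len(bc)]
        after = r1_seq[len(bc):len(bc) + len(cut_remnant)]
        if len(prefix) == len(bc) and hamming(prefix, bc) <= 1 and after == cut_remnant:
            return bc  # the caller would then trim the read as the pipeline does
    return None

# A read carrying barcode "ACGTAC" with one sequencing error in the barcode:
print(match_barcode("ACGTATTGCAGGGATCCA", ["ACGTAC", "TTAGGC"]))  # -> "ACGTAC"
```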
Further read trimming based on user-specified minimums for both Phred quality score and read length is done in Step 2, using the bioinformatics tool Trimmomatic [31]. Finally, in Step 3, all parsed and quality-filtered reads are processed according to their barcodes; and genotype-specific FASTQ files are produced for all genotypes. The final output of Stage 1 is a pair (R1 and R2) of FASTQ files for each genotype, containing all parsed and quality-filtered reads for downstream analysis.
Stage 2. Build the Mock Reference
If a suitable reference genome is available for the target population, one may move directly to Stage 3 of the pipeline. If such a reference is unavailable, however, the parsed and quality-filtered reads from Stage 1 are used to build a GBS-specific, reduced-representation reference (hereafter "Mock Reference") to enable GBS read mapping and facilitate SNP discovery. This stage of the pipeline relies upon a similarity-based clustering strategy to group the GBS reads, first within- and subsequently (if desired) across-genotypes, in order to generate representative reference sequences for the full set of GBS fragments.
To begin, the pipeline calls upon the PEAR software package [32] to merge the processed paired-end reads into single reads spanning the complete GBS fragment lengths, wherever sequence overlap for a pair is sufficient (≥10 bp) to justify merging. For each genotype selected to contribute to the Mock Reference (see "GBS-SNP-CROP Performance"), this step generates three different FASTQ files: an "assembled" file, containing successfully merged reads, and two "unassembled" files (R1 and R2), comprised of sequentially-paired R1 and R2 reads that could not be merged, typically because long GBS fragments leave insufficient overlap between the paired reads. Next, the pipeline stitches together all unmerged reads by joining pairs of sufficiently long "unassembled" R1 and R2 sequences together with an intermediate run of 20 high-quality A's, thus producing a FASTQ file of "stitched" R1 + R2 reads, as sketched below. Representing the reduced genomic space targeted by the GBS restriction protocol, these PEAR-assembled and manually-stitched reads are then concatenated into a single FASTQ file per genotype for use in building the Mock Reference.
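A minimal Python sketch of the stitching operation (the pipeline itself is implemented in Perl; FASTQ I/O and any re-orientation of the R2 read are omitted, and the 32 bp minimum mirrors the length filter used elsewhere in the pipeline):

```python
# Illustrative sketch: join an unmerged R1/R2 pair with a 20-base spacer of
# high-quality A's, producing one pseudo-fragment for downstream clustering.
SPACER = "A" * 20
SPACER_QUAL = "I" * 20  # 'I' = Phred 40 in Illumina 1.8+ encoding

def stitch(r1_seq, r1_qual, r2_seq, r2_qual, min_len=32):
    """Return (sequence, quality) for the stitched read, or None if either
    mate is too short to be informative."""
    if len(r1_seq) < min_len or len(r2_seq) < min_len:
        return None
    return r1_seq + SPACER + r2_seq, r1_qual + SPACER_QUAL + r2_qual

seq, qual = stitch("ACGT" * 10, "I" * 40, "TTGCA" * 8, "I" * 40)
print(seq)  # 40 bp of R1, 20 A's, 40 bp of R2
```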
Next, GBS-SNP-CROP calls upon the USEARCH software package [33] to cluster these "assembled" and "stitched" reads based on a user-specified similarity threshold, thereby producing a reduced list of non-redundant consensus sequences (centroids) that span the GBS fragment space. To accomplish this, the USEARCH clustering procedure is executed first within each selected genotype (i.e. USEARCH clusters "assembled" and "stitched" reads into sets of genotype-specific centroids) and subsequently, if more than one genotype is selected to build the Mock Reference, across all selected genotypes (i.e. USEARCH clusters all genotype-specific centroids into a master set of centroids for the population). Representing the sampled GBS data space for the population, it is this resultant set of non-redundant consensus sequences that comprises the Mock Reference genome for subsequent mapping. Depending on the intended use of the resultant genotypic data (e.g. diversity characterization, linkage map construction, trait association, etc.), the similarity threshold specified for USEARCH may be adjusted to collapse homologous regions or maximize their discrimination, an issue of particular relevance in polyploid species.
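The two-level clustering can be expressed as two rounds of USEARCH's cluster_fast command; below is a hedged sketch using Python's subprocess module (flag names follow the USEARCH documentation, but the genotype IDs, file names, and 0.93 identity threshold are illustrative):

```python
# Illustrative two-level clustering: within-genotype first, then across genotypes.
import subprocess

genotypes = ["G01", "G02", "G03"]

# Level 1: cluster each genotype's "assembled" + "stitched" reads into centroids
for gid in genotypes:
    subprocess.run(["usearch", "-cluster_fast", f"{gid}_reads.fasta",
                    "-id", "0.93", "-centroids", f"{gid}_centroids.fasta"], check=True)

# Level 2 (only if >1 genotype contributes): cluster the concatenated
# genotype-specific centroids into the master set that becomes the Mock Reference
subprocess.run(["usearch", "-cluster_fast", "all_centroids.fasta",
                "-id", "0.93", "-centroids", "MockRef_centroids.fasta"], check=True)
```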
In the end, Stage 2 produces two different Mock Reference FASTA files. The first ("MockRef_Genome.fasta") consists of a single, long FASTA read comprised of all the centroids identified above, linked together into one contiguous sequence. The second ("MockRef_Clusters.fasta") contains the same centroids in the same order, but in this case the centroid boundaries are preserved because each centroid exists as a separate FASTA entry. While the former file is used as the Mock Reference for read alignment (see next section), the latter is useful for optional downstream SNP filtering and analysis.
Stage 3. Map the processed reads and generate standardized alignment files
To align the processed reads from Stage 1 to the reference, whether a true reference genome or a Mock Reference built in Stage 2, GBS-SNP-CROP again relies upon familiar bioinformatics tools, in this case BWA [34] for alignment and SAMtools [35] for manipulating and processing the alignment output. Specifically, the BWA-mem algorithm is used to align the processed reads, genotype-by-genotype, to the reference. SAMtools is then called upon to accomplish the following steps: 1) Filter the mapped reads via SAMtools flags, retaining only those which map appropriately as pairs without potentially confounding secondary or supplementary alignments (see the GBS-SNP-CROP User Manual for more detail); 2) Convert the filtered SAM files to BAM files; 3) Index and sort the BAM files; 4) Index the FASTA reference sequence; and 5) Produce a base call alignment summary (mpileup file) for each genotype. These six steps (BWA-mem alignment and the five SAMtools procedures) are carried out individually for each genotype, with the Step 5 script automating the process.
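For orientation, the per-genotype command sequence might look roughly as follows (a sketch: the flags shown are common BWA/SAMtools usage and not necessarily the pipeline's exact invocations, which are documented in the User Manual; file names are hypothetical):

```python
# Hedged sketch of the Stage 3 commands for one genotype.
import subprocess

ref, gid = "MockRef_Genome.fasta", "G01"

subprocess.run(["bwa", "index", ref], check=True)      # one-time reference indexing
with open(f"{gid}.sam", "w") as sam:                   # BWA-mem paired-end alignment
    subprocess.run(["bwa", "mem", ref, f"{gid}_R1.fq", f"{gid}_R2.fq"],
                   stdout=sam, check=True)

cmds = [
    # keep properly-paired reads (-f 2); drop secondary/supplementary (-F 0x900)
    ["samtools", "view", "-b", "-f", "2", "-F", "0x900", "-o", f"{gid}.bam", f"{gid}.sam"],
    ["samtools", "sort", "-T", "tmp", "-o", f"{gid}.sorted.bam", f"{gid}.bam"],
    ["samtools", "index", f"{gid}.sorted.bam"],
    ["samtools", "faidx", ref],                        # index the FASTA reference
    ["samtools", "mpileup", "-f", ref, "-o", f"{gid}.mpileup", f"{gid}.sorted.bam"],
]
for cmd in cmds:
    subprocess.run(cmd, check=True)
```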
In Step 6, the genotype-specific mpileup files are distilled into "count" text files containing four essential tab-delimited columns: (1) Reference genome/chromosome identifier; (2) Base position; (3) Reference base at that position; and (4) A comma-delimited string containing aggregated alignment information at that position (i.e. depths of A, C, G, and T reads). Each count file is then parsed, with only those rows containing reads polymorphic to the reference sequence kept, thereby generating liberal genotype-specific lists of potential SNP positions, with full read depth information retained. It is during this mpileup parsing that all putative indels are rigorously detected and excluded from downstream variant calling, thus making GBS-SNP-CROP a SNP-exclusive pipeline.
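The heart of the mpileup-to-count conversion can be sketched as follows (a simplification: real pileup strings also encode indels, read starts/ends, and mapping marks, which the pipeline's parser strips and, in the case of indels, uses to exclude positions):

```python
# Illustrative sketch: distill one mpileup row into per-base read depths.

def count_bases(ref_base: str, pileup: str) -> dict:
    """Tally A/C/G/T depths from a simplified mpileup base string."""
    counts = {"A": 0, "C": 0, "G": 0, "T": 0}
    for ch in pileup:
        if ch in ".,":                 # '.' / ',' denote a match to the reference
            counts[ref_base.upper()] += 1
        elif ch.upper() in counts:     # explicit mismatch bases
            counts[ch.upper()] += 1
    return counts

# e.g. chrom, pos, ref, depth, bases, quals = line.split('\t')
print(count_bases("G", "..,,TT.t"))  # -> {'A': 0, 'C': 0, 'G': 5, 'T': 3}
```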
Once the mpileup parsing is completed for each genotype separately, Step 6 proceeds by mining the full set of resultant genotype-specific count files to generate a single, non-redundant master list of all potential SNP positions throughout the target population. Alignment information is then extracted from the original count files for each genotype for all potential SNP positions in the master list and the data organized into a SNP discovery "master matrix" for the entire population. By capturing both genotype-specific (columns) and population-level (rows) alignment data in one table, the master matrix is a powerful and streamlined summary of the GBS data that contains the essential information to not only distinguish high-confidence SNPs from likely sequencing and PCR errors but also to make subsequent genotype calls using stringent depth criteria, as explained in the next section.
Stage 4. Call SNPs and genotypes
Once generated, the master matrix is systematically pared down via a series of SNP-culling filters to arrive at a final "SNP genotyping matrix" containing only high-confidence SNPs and genotypes. To begin, the master list of potential SNPs is filtered according to a flat criterion of independence, namely that a SNP is retained for further consideration if and only if independent instances of the putative secondary allele occur, at a specified minimum depth (e.g. 3), across at least three genotypes. This simple requirement for independent occurrences of the less-frequent allele is an essential strategy for minimizing false SNP declarations due to random sequencing and PCR errors, including strand bias errors [36].
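In code terms, the independence check reduces to the following (a sketch; parameter names are illustrative):

```python
# Illustrative independence filter: retain a putative SNP only if the secondary
# allele appears at >= min_depth in at least min_genos genotypes.

def passes_independence(secondary_depths, min_depth=3, min_genos=3):
    """secondary_depths: per-genotype read depths of the putative secondary allele."""
    independent = sum(1 for d in secondary_depths if d >= min_depth)
    return independent >= min_genos

print(passes_independence([0, 4, 0, 7, 3, 1]))  # True: three genotypes at depth >= 3
print(passes_independence([9, 2, 1, 0, 0, 0]))  # False: only one such genotype
```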
Next, GBS-SNP-CROP advances only potential bi-allelic SNPs (i.e. it excludes multi-allelic variants) by imposing a population-level allele frequency filter via a user-defined Alternative Allele Strength parameter (-altStrength, Step 7). For each potential SNP position, this parameter considers the total read depth, across the whole population, of all four bases, from primary (the allele with the highest depth at that position) to quaternary (the allele with the lowest depth). A potential SNP is retained for further downstream analysis if and only if it is strongly bi-allelic, that is if:
$$ \frac{2^{\circ}\,\text{Allele Depth}}{2^{\circ}\,\text{Depth} + 3^{\circ}\,\text{Depth} + 4^{\circ}\,\text{Depth}} > \text{altStrength} $$
For a tetraploid species, we suggest a minimum value of 0.90 for this parameter, though higher values may be imposed in the interest of stricter error control (see Additional file 1).
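A small worked example of this filter, assuming the suggested altStrength = 0.90 and hypothetical population-wide depths:

```python
# Illustrative check of the Alternative Allele Strength criterion.
def is_strongly_biallelic(depths, alt_strength=0.90):
    """depths: population-wide read depths of A, C, G, T at one position."""
    d = sorted(depths, reverse=True)  # primary >= secondary >= tertiary >= quaternary
    denom = d[1] + d[2] + d[3]
    return denom > 0 and d[1] / denom > alt_strength

print(is_strongly_biallelic([950, 410, 12, 3]))   # 410/425 ~ 0.965 -> True
print(is_strongly_biallelic([950, 180, 40, 25]))  # 180/245 ~ 0.735 -> False
```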
After these initial basic population-level culling procedures, genotypic states (primary homozygote, heterozygote, or secondary homozygote) are assigned for all remaining SNP-accession combinations. To call a heterozygote, a given genotype must have a user-specified minimum read depth for each allele (e.g. 3); and the read depth ratio of the lower-coverage to higher-coverage allele must exceed a user-specified, ploidy-appropriate threshold (e.g. 0.1; see Additional file 1). If the ratio falls below this minimum threshold, GBS-SNP-CROP refrains from making a genotypic assignment (i.e. the genotype is designated as missing data). The GBS-SNP-CROP genotyping criterion for homozygotes is more stringent, requiring a relatively high, user-specified minimum depth (e.g. ≥11 when the secondary allele count is zero and ≥48 when the secondary allele count is one; see Additional file 1) in an effort to reduce the rate of erroneous calls (i.e. true heterozygotes called as homozygous due to sampling bias). Finally, in an effort to retain only broadly informative SNPs, the matrix is further reduced such that all SNPs (i.e. rows) are discarded for which more than some user-specified maximum of genotypes are without genotypic calls, either because read depth = 0 or genotypic states were unassignable due to the criteria discussed above.
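A hedged sketch of these genotyping rules, using the example thresholds quoted above (the corresponding Script 7 parameters are -mnHoDepth0, -mnHoDepth1, and -mnAlleleRatio; the handling of cases not spelled out in the text, such as a secondary allele count of exactly 2, is my assumption):

```python
# Illustrative genotype caller for one SNP-accession combination.
def call_genotype(d1, d2, min_het=3, ratio=0.1, ho_depth0=11, ho_depth1=48):
    """d1/d2: read depths of the primary/secondary allele for one genotype.
    Returns 'homo1', 'het', 'homo2', or None (missing data)."""
    lo, hi = min(d1, d2), max(d1, d2)
    if hi == 0:
        return None                      # no reads at all
    if lo >= min_het:                    # both alleles observed at depth
        return "het" if lo / hi >= ratio else None
    if lo == 0 and hi >= ho_depth0:      # second allele never observed
        return "homo1" if d1 >= d2 else "homo2"
    if lo == 1 and hi >= ho_depth1:      # one stray read of the other allele
        return "homo1" if d1 >= d2 else "homo2"
    return None                          # insufficient evidence; leave missing

print(call_genotype(20, 5))  # 'het'   (5/20 = 0.25 >= 0.1)
print(call_genotype(12, 0))  # 'homo1' (depth 12 >= 11, no alternative reads)
print(call_genotype(8, 0))   # None    (too shallow to trust a homozygous call)
```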
To facilitate the downstream characterization of the high-confidence SNPs that pass all the above filters, the final SNP genotyping matrix contains both summary statistics as well as complete genotype-specific alignment data for each retained SNP. As shown in Fig. 2, the first ten columns of the matrix feature the following information: 1) Genome/chromosome identifier; 2) SNP position; 3) Reference base; 4) Average read depth at that SNP position across the population; 5) Primary allele (i.e. the most frequent allele at that position, based on read depth across the population); 6) Secondary allele (i.e. the less frequent, or alternative, allele at that position); 7) Percentage of individuals from the population genotyped for that SNP; 8) Total number of homozygotes for the primary allele; 9) Total number of heterozygotes; and 10) Total number of homozygotes for the secondary allele. Columns 11 and higher contain the complete alignment data for each individual genotype for all possible SNP positions. The ability of GBS-SNP-CROP to consider both genotype-specific and population-level alignment data simultaneously through the master matrix during the processes of SNP filtering and genotyping is an essential feature of the pipeline and motivates its decision to forgo filtering on Minor Allele Frequency (MAF), a parameter that becomes problematic when characterizing broadly diverse germplasm collections, as opposed to more closely-related breeding populations.
Structure of the final SNP genotyping matrix. As shown here, the GBS-SNP-CROP final genotyping matrix contains summary statistics as well as complete genotype-specific alignment data for each SNP called. The cells in red represent instances in which a genotypic state could not be assigned, either due to insufficient read depth (-|0/4) or a read depth ratio outside of the user-specified acceptable range (-|132/5)
Other downstream tools
In addition to the scripts associated with the core GBS-SNP-CROP workflow described above, one additional script ("GBS-SNP-CROP-8.pl") is provided to facilitate downstream management of the final SNP genotyping matrix by enabling users to convert the matrix into formats compatible with the familiar statistical analysis software packages R [37], Tassel GUI [30], and PLINK [38]. Specifically, the script produces a genotype matrix appropriate for diversity analyses within R (e.g. calculating distance metrics, generating cladograms, etc.) by replacing primary homozygotes with 0, heterozygotes with 0.5, secondary homozygotes with 1, and unassigned genotypes with "NA". It can also transform the final SNP genotyping matrix into a HapMap file for use as input into Tassel GUI, allowing users to easily access the functionality of that software package for forward analysis, or create the transposed .PED file required by the whole-genome association analysis toolset PLINK.
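The R-oriented recoding amounts to a simple substitution over each row of the matrix, sketched here (the internal state labels are illustrative):

```python
# Illustrative recoding for R-based diversity analyses: primary homozygote -> 0,
# heterozygote -> 0.5, secondary homozygote -> 1, unassigned -> "NA".
RECODE = {"homo1": "0", "het": "0.5", "homo2": "1", None: "NA"}

row = ["homo1", "het", None, "homo2"]
print("\t".join(RECODE[s] for s in row))  # 0   0.5   NA   1
```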
Avoiding false SNP calls
One well-recognized challenge posed by NGS data is the rate of erroneous base calls produced, rates which vary across both platforms and base position within reads. For instance, the error rate of current Illumina sequencing platforms ranges from 1 to 10 erroneous calls per kilobase sequenced, with errors concentrated at the beginnings and ends of reads (i.e. tail distance bias) [39, 40]. With typical sequencing runs producing billions of base calls (e.g. a single HiSeq 2500 Illumina flow cell can produce as much as 400 Gb of data [41]), there is real potential for millions of errors that can confound analysis [42]. Del Fabbro et al. [43] discuss the importance of quality trimming for increasing the reliability of downstream analysis, with simultaneous gains in terms of both computational resources and time. And because quality scores alone may not be perfectly reliable indicators of true nucleotide quality [44, 45], GBS-SNP-CROP begins with a stringent recognition of barcodes (Hamming distance ≤1) and cut sites (no mismatches), followed by trimming based on Phred score.
In addition to this basic quality filtering of the raw reads, the pipeline seeks to minimize false SNP calls through its approach to SNP discovery and filtering. First, only those reads that map as paired-ends without secondary or supplementary alignments to the reference are retained. Additional parameters are called upon within the SAMtools mpileup algorithm to avoid false SNPs due to misalignment and excessive mismatches (see the GBS-SNP-CROP User Manual). SNPs that pass the above filters must then also satisfy the aforementioned requirement of independence, assessable by virtue of the unique format of the GBS-SNP-CROP master matrix. By leveraging both genotype-specific and population-level depth information, this requirement effectively reduces the probability of calling false SNPs due to both sequencing and PCR errors, including strand bias errors, since the exact same errors must arise independently, at depth, across multiple genotypes. GBS-SNP-CROP also makes use of stringent genotyping criteria to further reduce the probability of calling false SNPs and assigning incorrect genotypic states. Such genotyping criteria are based on relatively high depth requirements, information again accessible for evaluation via the master matrix.
Through its strict initial parsing and filtering of the raw reads as well as its rigorous approach to alignment, SNP filtering, and genotyping, GBS-SNP-CROP takes a very conservative approach to SNP calling. Nevertheless, as shown in the next section, the number of identified SNPs compares favorably to more permissive pipelines, in part because of GBS-SNP-CROP's ability to make use of all available data, regardless of read length.
Finally, in addition to the embedded strategies for minimizing false SNP calls discussed here, users can easily impose additional desired filters because the outputs of all GBS-SNP-CROP steps, like the master matrix, are human-readable text files. For example, for the purpose of mapping studies as opposed to diversity analyses, which are the primary focus here, the elimination of markers in particularly SNP-dense regions may be an important quality control, as such high SNP density may be an artifact of promiscuous alignment, particularly in polyploids. In a reference-based approach, such culling is straightforward given the set of unique SNP coordinates across the linkage groups. In a reference-independent pipeline, a similar filter can be applied; but users will need to consider SNP densities within each cluster (centroid) used to build the Mock Reference. Locating SNPs within centroids requires knowing the centroid boundaries, which is precisely why the pipeline generates the second, cluster-delimited Mock Reference file (see Stage 2).
GBS-SNP-CROP performance
We assessed the performance of GBS-SNP-CROP in genotyping a population of 48 diverse accessions of the perennial dioecious tetraploid Actinidia arguta. Specifically, its performance using both a reference from a related diploid species (A. chinensis) and a Mock Reference was compared to that of TASSEL-GBS [30], a widely-used reference-based pipeline, and TASSEL-UNEAK [15], its reference-independent version.
Sampling strategy to build a Mock Reference
Three different GBS-SNP-CROP Mock Reference assembly strategies were investigated, differing only in the numbers of genotypes from the target population used to construct the Mock Reference. Contrary to our original expectations, we found that the number of genotypes used to build the Mock Reference is inversely related to the number of mapped reads retained by the pipeline and thus the number of SNPs called (Table 2). For example, using all reads from the full set of 48 unique genotypes, the pipeline called 14,712 SNPs (average depth = 70.7) that passed all population-level filters. Because more than 4 h were needed to assemble the Mock Reference in this case (see Table 1), we investigated the relative performance of the pipeline under scenarios where fewer genotypes were used to construct the Mock Reference, first using only the top five genotypes (ranked simply by the number of parsed reads) and then again using only the top genotype. Using the five genotypes with the highest numbers of parsed reads, the pipeline assembled the Mock Reference in less than an hour and identified 20,226 potential SNPs (average depth = 71.0). Using only the single most read-abundant genotype, the pipeline assembled the Mock Reference in 14 min and called 21,318 SNPs (average depth = 69.3). Based on these results, all subsequent pipeline evaluation was conducted using the results from GBS-SNP-CROP-MR01 (i.e. Mock Reference constructed from one genotype). The pipeline itself is flexible, however, able to integrate centroids from multiple genotypes into a Mock Reference, a feature of potential use for genotyping particularly diverse populations (e.g. multiple closely-related species).
Table 2 Performance of GBS-SNP-CROP under three different sampling strategies for building the Mock Reference: Using all 48 individuals in the population (MR48), using only the 5 individuals with the highest number of parsed reads (MR05), and using only the single most read-abundant genotype (MR01)
Data usage
One of the most noteworthy differences between the GBS-SNP-CROP and TASSEL pipelines is the ability of GBS-SNP-CROP to access and make use of a greater amount of sequence data (Table 3). In the TASSEL-GBS pipeline, due to its tag-based alignment strategy, a uniform tag length (mxTagL) must be specified that effectively limits the number of reads used for analysis. According to the TASSEL 5.2.11 manual, "the mxTagL value must be chosen such that the longest barcode + mxTagL < read length" [30]; thus all reads that violate this condition are discarded. Further, all reads that meet this requirement are subsequently truncated to a uniform length based on this parameter; thus not only are shorter reads discarded outright, but the retained reads are also truncated, sacrificing all sequence beyond the specified tag length. While this tag length requirement is adjustable within TASSEL-GBS (here, we ran the pipeline with tag lengths of both 32 bp as well as the default 64 bp), it is fixed at 64 bp for TASSEL-UNEAK. In contrast, aside from a user-specified minimum read length, GBS-SNP-CROP imposes no requirement for read length uniformity, even within read pairs.
Table 3 Comparative data usage and computation times for five different analyses of 150 bp paired-end GBS data from 48 accessions of Actinidia arguta
Following initial parsing and quality trimming (Stage 1), a total of 16.82 Gb of sequence data was found to be usable for analysis (alignment, SNP discovery, etc.) within GBS-SNP-CROP (Table 3). In contrast, due mainly to tag length requirements and the usability of only R1 (single-end) reads, a much reduced 3.85 Gb, 6.77 Gb and 8.60 Gb were used, respectively, by the TASSEL-GBS (mxTagL = 32), TASSEL-GBS (mxTagL = 64) and TASSEL-UNEAK pipelines. In terms of data usage, therefore, GBS-SNP-CROP performs quite favorably, with approximately 2.0–4.4 times more high-quality sequence data available to it for SNP discovery.
In theory, one should be able to make more reads available to TASSEL-GBS by reducing the mxTagL threshold. Such a reduction (in this case, from 64 to 32 bp) leads, however, to a significant reduction in overall data usage (from 6.77 Gb to 3.85 Gb; Table 3) and a concomitant reduction in identified SNPs (from 8,907 to 5,593; Table 4B). For TASSEL-GBS, therefore, it appears preferable to specify a larger mxTagL value, even though more reads are discarded for failing to meet that length requirement, than to retain a greater number of shorter reads via a lower mxTagL value.
Table 4 Comparative pipeline performances before (4A) and after (4B) depth-based genotyping criteria and population-level SNP calling filters for 150 bp paired-end GBS data from 48 accessions of Actinidia arguta
The low average proportion (10.8 %) of shared SNPs discovered by TASSEL-GBS under both the 32 and 64 bp mxTagL scenarios (Fig. 3; Additional file 2) indicates that essentially different datasets are made available to the TASSEL-GBS pipeline, depending on the chosen value of this one parameter. Such a comparison suggests that the requirement within the TASSEL pipelines for uniform read lengths (i.e. TASSEL's tag-based mapping strategy) is fundamentally limiting, in terms of data usage. By taking a read-based rather than a tag-based approach to alignment and SNP discovery, GBS-SNP-CROP leverages all available data in a single analysis, thereby avoiding undue fractionation of the dataset.
Bar plot showing the extent of marker overlap among the five evaluated pipelines. The sets of SNPs called by the five pipelines are largely orthogonal to one another, as shown by the fact that both the reference-based and reference-independent pipelines call high proportions of SNPs called by no other pipeline (grey bars). Shared SNPs among pipelines are indicated by color-coordinated bars. Whereas only 0.6 and 0.4 % of the 8,907 and 5,593 SNPs called by TASSEL-GBS-64 and TASSEL-32, respectively, were identified by TASSEL-UNEAK, 33.7 % of the SNPs called by GBS-SNP-CROP-RG were called by GBS-SNP-CROP-MR01
Numbers of SNPs
Analyses by the different pipelines lead to widely varying numbers of identified SNPs (Table 4). Using only the single most read-abundant genotype to build the Mock Reference, GBS-SNP-CROP called 56,598 potential SNPs (average depth = 44.5; Table 4A), of which 21,318 were retained after applying all SNP calling and genotyping filters (Table 4B), a reduction of 62.3 %. In comparison, the reference-free TASSEL-UNEAK pipeline called 12,905 potential SNPs (average read depth = 7.0), of which only 1,160 SNPs passed these same filters, a striking reduction of 91.0 %.
Using the published A. chinensis diploid genome as a reference and a liberal pipeline (i.e. no imposed SNP culling or genotyping filters), GBS-SNP-CROP-RG called 23,564 potential SNPs (average depth = 47.4), of which 5,471 were retained after filtering, a reduction of 76.8 %. In comparison, the 32 and 64 bp reference-based TASSEL-GBS analyses called 19,095 and 25,005 potential SNPs (average depths of 134.2 and 34.7, respectively), of which 5,593 (70.7 % reduction) and 8,907 (64.4 % reduction) passed the imposed filters (Table 4). Unlike the reference-independent analyses, therefore, TASSEL-GBS was found to outperform the reference-based GBS-SNP-CROP in terms of numbers of identified SNPs.
Using only those SNPs that passed the stringent genotyping criteria and population-level filters described earlier, we compared the set of SNPs called by GBS-SNP-CROP (using the A. chinensis reference) with those called by the TASSEL-GBS analyses (Table 4B). There is strikingly little congruence among these analyses, with many unshared markers (on average 96.3 %) between them (Fig. 3; Additional file 2). Interestingly, a high proportion of unshared markers (on average 89.2 %) also exists between the two different TASSEL-GBS analyses themselves, even though they differ only in their specified mxTagL thresholds. Because the initial dataset is the same for both TASSEL analyses, we expected roughly half of the SNPs called under mxTagL = 64 to also be called under mxTagL = 32 (i.e. that SNPs located within the first 32 bases of the mxTagL = 64 SNPs should comprise a proportional subset of the mxTagL = 32 SNPs); but such is not the case (see Fig. 3).
One stated reason for TASSEL's approach to SNP calling based on tags is decreased computational time spent for pipeline execution, with the added rationale that sequencing errors increase after the first 64 bp of a read [11, 30]. While this may be the case, TASSEL's SNP discovery method appears to be highly sensitive to this tag length parameter, a result that suggests there may be some benefit in aggregating the results (i.e. lists of SNPs) of multiple TASSEL-GBS analyses under various mxTagL values. Similarly, the largely non-overlapping results of the reference-based GBS-SNP-CROP analysis may also have value as a complement to the TASSEL-GBS approach.
To investigate the overlap among the sets of SNPs called between the reference-based and reference-independent pipelines, we mapped all SNPs discovered using both GBS-SNP-CROP (Mock Reference centroids) and TASSEL-UNEAK (tags) to the A. chinensis reference. In so doing, we found that 33.7 % of the SNPs called by the reference-based GBS-SNP-CROP (A. chinensis) were also called by the reference-independent GBS-SNP-CROP (Mock Reference based on the single most read-abundant genotype). In contrast, only 0.6 and 0.4 % of the SNPs called by TASSEL-GBS (64 and 32 bp, respectively) were identified by the reference-independent TASSEL-UNEAK pipeline (Fig. 3; Additional file 2).
Average depth
One of the most efficient means of distinguishing sequencing error from true nucleotide polymorphism is to increase read depth thresholds, because polymorphisms called on the basis of more reads mapped to the same locus can be declared with greater reliability than those based on fewer reads [46]. Nielsen et al. [47] discussed many studies using NGS data with medium-to-low coverage (<20×) and showed that genotype calls based on such data exhibit statistical uncertainty. According to the authors, there are two reasons for this: (1) In heterozygotes, both alleles may not be sampled, thus leading to incorrect homozygote calls; and (2) In the case of high sequencing error technologies, a significant number of homozygotes may be incorrectly declared heterozygotes if genotype calling is based simply on allelic presence/absence. According to Illumina's technical notes [41], the probability of making a correct genotyping call is roughly 95 % for 20× coverage. While 99.9 % of the 21,318 SNPs identified by the GBS-SNP-CROP Mock Reference pipeline have an average read depth higher than 20×, this is true of only 83.6 % of the 1,160 SNPs called by TASSEL-UNEAK (Table 4). In comparison, 92.5 % of the 5,593 SNPs (-mxTagL32) and 78.1 % of the 8,907 SNPs (-mxTagL64) called by the reference-based TASSEL-GBS pipelines have an average read depth higher than 20×, compared to 99.9 % of the 5,471 SNPs called by the reference-based GBS-SNP-CROP. In terms of average read depth, therefore, GBS-SNP-CROP performs favorably compared to both TASSEL-GBS and TASSEL-UNEAK (see Additional file 3).
Recognizing biological replicates
The primary motivation for developing GBS-SNP-CROP was the need for a tool to accurately characterize the genetic diversity of understudied germplasm collections, including identifying redundant accessions as a means of boosting the resource efficiency of curation efforts. Given this goal, a relevant performance criterion is the ability of the pipeline to identify biological replicates in a population, as indicated by the observed genetic distance between those replicates. To quantify such distance, we employed a modified Gower's Coefficient of Similarity [48], ranging from 0 to 1, to quantify identity-by-state based on bi-allelic SNPs:
$$ S_{\text{Gower}}(x,y) = \frac{\sum_{i=1}^{m} s_i w_i}{\sum_{i=1}^{m} w_i} $$
where si = 1 if the genotypes are the same, 0.5 if the genotypes differ by one allele (i.e. heterozygote vs. homozygote), and 0 if the genotypes differ by both alleles (i.e. primary homozygote vs. secondary homozygote); and wi = 1 if both replicates are genotyped for the SNP in question and 0 if either replicate lacks an assigned genotypic state.
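A minimal sketch of this similarity computation, reusing the 0/0.5/1 genotype coding described under "Other downstream tools" (None marks a missing call, i.e. wi = 0):

```python
# Illustrative modified Gower similarity between two replicate genotype vectors.
def gower(x, y):
    num = den = 0.0
    for a, b in zip(x, y):
        if a is None or b is None:   # w_i = 0: skip SNPs missing in either replicate
            continue
        den += 1                     # w_i = 1
        diff = abs(a - b)            # 0 = identical, 0.5 = one allele, 1 = both alleles
        num += 1.0 if diff == 0 else (0.5 if diff == 0.5 else 0.0)
    return num / den if den else float("nan")

rep1 = [0, 0.5, 1, 0, None, 0.5]
rep2 = [0, 0.5, 1, 0.5, 1, 0.5]
print(round(gower(rep1, rep2), 3))  # 0.9: four identical, one half-match, one skipped
```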
Using the SNPs called by the GBS-SNP-CROP-MR01 analysis (Table 4B), the Gower genetic similarity calculated between two biological replicates of A. arguta accession 'Opitz Male' was found to be 0.999, with a Pearson correlation of 0.998, results similar to those obtained with the reference-based GBS-SNP-CROP pipeline (Table 5). In comparison, the reference-based TASSEL-GBS-32 bp and -64 bp analyses yielded lower Gower genetic similarities of 0.967, as well as reduced Pearson correlations (≤0.92). Under TASSEL-UNEAK, these same replicates of 'Opitz Male' showed a Gower similarity of only 0.948, an apparent genotyping error rate more than 60 times that of GBS-SNP-CROP (Mock Reference), despite TASSEL-UNEAK calling 18 times fewer SNPs (1,160 vs. 21,318; Table 4B). This same basic pattern of results was found when analyzing biological replicates of A. arguta accession 'Dumbarton Oaks' (Table 5), suggesting that genotyping via GBS-SNP-CROP is relatively robust, prone to fewer genotyping errors while maintaining high numbers of SNPs, whether or not a reference is available.
Table 5 Comparative pipeline performances, in terms of consistency in genotyping biological replicates
Computation time
Compared to TASSEL-UNEAK, the GBS-SNP-CROP Mock Reference workflow processed over twice as much data, generated over 18 times more SNPs, called SNPs with a higher average depth (69.3 vs. 44.7), and better detected the similarity between biological replicates; but this improved performance comes at the price of approximately 25 times longer computation time. Using a dedicated Unix workstation with a 2.6 GHz Dual Intel processor and 16 GB RAM, the Mock Reference GBS-SNP-CROP pipeline, using only the most read-abundant genotype to assemble the Mock Reference, required approximately 11 h for this dataset, compared to only 27 min for the TASSEL-UNEAK analysis (Table 2). Similarly, due to its consideration of 3–4 times the amount of sequence data and its strategy of mapping reads rather than tags, the reference-based GBS-SNP-CROP analysis (~8.5 h) also requires significantly more computational time than either of the TASSEL-GBS analyses (35–70 min). Table 1 presents the computational times required for each of the steps within the reference-free GBS-SNP-CROP-MR01 workflow.
GBS-SNP-CROP is a complete bioinformatics pipeline developed to support curation, research, and breeding programs wishing to utilize GBS for the cost-effective genome-wide characterization of plant genetic resources in the absence of a reference genome. Although the pipeline was created primarily with orphan crop characterization in mind, its underlying strategy is sufficiently general to suggest its potential utility in any situation (plant, animal, or micro-organismal) where reduced-representation genomic data (e.g. GBS) is analyzed for SNPs, such as studies in population genetics, evolutionary ecology, conservation biology, and genetic linkage analysis.
As indicated by the example analysis presented here, the pipeline performs quite favorably compared to TASSEL-UNEAK, not only in terms of a significantly higher number of identified SNPs but also in terms of an increased average read depth and a greatly reduced genotyping error rate. Remarkably, the reference-independent version of GBS-SNP-CROP was also shown to outperform the reference-based TASSEL-GBS pipeline in terms of these same metrics. In contrast, the reference-based version of GBS-SNP-CROP appears outperformed by TASSEL-GBS in terms of the number of called SNPs, though again its genotyping error rate is lower. Given the low proportion of shared SNPs among these reference-based analyses, however, GBS-SNP-CROP may be useful even in this case, able to detect large numbers of additional high-quality SNPs missed by the tag-based and read length-restricted approach of TASSEL-GBS. Indeed, with the capacity to make full use of variable length, paired-end GBS data for high-density SNP genotyping of plant populations, whether or not a reference genome is available, GBS-SNP-CROP is a flexible and easily modifiable tool worthy of consideration by interested programs.
Plant material, GBS data, and genotypes sampled
A collection of 48 tetraploid kiwiberry (Actinidia arguta) genotypes, each carrying four sets of 29 chromosomes (2n = 4× = 116) with an estimated total genome size of 1C = 1.5 Gbp [49], was sampled from the USDA National Clonal Germplasm Repository (Davis, CA) for this study. Genomic DNA was extracted from ~1 g of fresh young leaves from each accession using a modified CTAB protocol, and a multiplexed GBS library was prepared according to the two-enzyme (PstI-MspI) protocol described by Poland et al. [11]. Using the first 96 6–10 bp barcodes from that protocol, the 48 accessions were multiplexed along with 2 biological replicates (accessions "Opitz Male" and "Dumbarton Oaks") and 46 breeding lines, resulting in a 96-plex library which was sequenced on two lanes (i.e. one complete flowcell) of an Illumina HiSeq 2500 machine at the Hubbard Center for Genome Studies, University of New Hampshire (http://hcgs.unh.edu/). FASTQ files of the sequence data were generated using CASAVA 1.8.3 [50]; and these raw sequences have been deposited in the NCBI Sequence Read Archive (SRA Accession number SRR2296676). A table of the 48 genotypes used in this analysis, along with their assigned barcodes, can be found in Additional file 4.
Pipeline evaluation and testing
To evaluate the performance of GBS-SNP-CROP, we analyzed the GBS data from the 48 accessions described above (plus 2 biological replicates) using seven different analyses. First, we executed three variations of GBS-SNP-CROP without a reference genome (Table 1, Stage 2). In the first Mock Reference analysis (GBS-SNP-CROP-MR48), we assembled the Mock Reference from centroids identified by clustering first within each genotype and then across all 48 genotypes in the population. In the second Mock Reference analysis (GBS-SNP-CROP-MR05), clustering was done across only the five most read-abundant genotypes (accessions "ORUS 2–16", "DACT 213", "40537C", "ORUS 1–6", and "Chang Bai Mountain 3"). In the third analysis (GBS-SNP-CROP-MR01), the Mock Reference was built using the within-genotype centroids from only the single most read-abundant line (accession "ORUS 2–16"). These three different approaches were followed to examine the effects of reducing the number of genotypes used to build the Mock Reference on both computational time and the number and quality of identified SNPs. For all Mock Reference analyses, we used PEAR v.0.96 [32] to merge reads using mainly default parameters, except for specifying a minimum assembled read length of 32 bp. For clustering, we used USEARCH v.8.0.162 [33], specifying the "cluster_fast" algorithm with a nucleotide similarity threshold of 93 % to allow up to two mis-matches within the shortest assembled reads (32 bp).
For comparison with the Mock Reference analyses described above, we ran GBS-SNP-CROP using a published reference genome from the closely related diploid (2n = 2× = 58) species A. chinensis [51], with an estimated genome size of 1C = 758 Mbp [49]. The only difference between this reference-based analysis (GBS-SNP-CROP-RG) and the Mock Reference analyses above is that in the former we skipped Stage 2 ("Build the Mock Reference") of the GBS-SNP-CROP workflow (see Table 1).
For both GBS-SNP-CROP analyses, the CASAVA-processed sequence data were subjected to basic quality filtering. Specifically, leading and trailing bases with Phred scores below Q30 were removed, reads were truncated wherever a 4-base sliding window fell below an average Phred score of Q30, and trimmed reads shorter than 32 bp were culled. These procedures were performed using Trimmomatic v.0.33 [31] with the following parameters: LEADING:30 SLIDINGWINDOW:4:30 TRAILING:30 MINLEN:32. Also for both analyses, alignment was carried out using BWA v.0.7.12 [34]; and the resultant alignment files were processed with SAMtools v.1.2 [35].
For the next analysis, we used the Network-Based SNP Discovery Protocol with no reference genome (TASSEL-UNEAK v.3.0). The TASSEL-UNEAK pipeline was run using mainly its default parameters, with two changes: (1) In the "UMergeTaxaTagCountPlugin" step, the "-c" flag was increased from 5 to 10; and (2) The error tolerance rate ("-e" flag on "UTagCountToTagPairPlugin") was decreased from 0.03 to 0.01. These modifications were made in an effort to match the default parameters of the TASSEL-GBS analyses, thereby facilitating comparison.
Finally, we used TASSEL-GBS v.5.2.11 to carry out two more reference-based analyses, one with "Maximum Tag Length" (mxTagL) = 32 bp and the other with mxTagL = 64 bp. For all TASSEL analyses (TASSEL-GBS-32 bp, TASSEL-GBS-64 bp, and TASSEL-UNEAK), we set the minimum minor allele frequency to 5 % and accepted only those markers for which genotypes were called for at least 75 % of the population.
Comparing called SNPs among pipelines
Identifying shared and non-shared SNPs called by the reference-based pipelines (GBS-SNP-CROP-RG and the TASSEL-GBS pipelines) is straightforward due to the unique coordinate positions of the SNPs within the common A. chinensis reference genome. Comparing called SNPs between the reference-independent pipelines (TASSEL-UNEAK and GBS-SNP-CROP-MR01) and reference-based pipelines is less simple due to the fact that no common reference (and thus coordinate system) exists. To enable such important comparisons, we first located the positions of all called SNPs (Table 4B) within the individual centroids used to construct the Mock Reference (GBS-SNP-CROP-MR01) and within the unique 64 bp tags used within the TASSEL-UNEAK pipeline. We then mapped all the putative SNP-containing centroids/tags to the A. chinensis reference genome and located the corresponding A. chinensis coordinate position of each called SNP. Finally, the allele compositions of any supposedly common SNPs were verified before such SNPs were declared as shared between pipelines.
The data set supporting the results of this article is available in the NCBI Sequence Read Archive [SRA Accession number SRR2296676].
McCouch S, Baute GJ, Bradeen J, Bramel P, Bretting PK, Buckler E, et al. Agriculture: Feeding the future. Nature. 2013;499:23–4.
Tester M, Langridge P. Breeding technologies to increase crop production in a changing world. Science. 2010;327:818–22.
Godfray HCJ, Beddington JR, Crute IR, Haddad L, Lawrence D, Muir JF. Food Security: The Challenge of Feeding 9 Billion People. Science. 2010;327:812–8.
Naylor RL, Falcon WP, Goodman RM, Jahn MM, Sengooba T, Tefera H, et al. Biotechnology in the developing world: a case for increased investments in orphan crops. Food Policy. 2004;29(1):15–44.
Mayes S, Massawe FJ, Alderson PG, Roberts JA, Azam-Ali SN, Hermann M. The potential for underutilized crops to improve security of food production. J Exp Bot. 2011;63(3):1075–9. doi:10.1093/jxb/err396.
Kennedy G, Nantel G, Shetty P. The scourge of hidden hunger: global dimensions of micronutrient deficiencies. Food Nutrition and Agriculture. 2003;32:8–16.
Tadele Z. Role of orphan crops in enhancing and diversifying food production in Africa. African Technology Development Forum Journal. 2009;6(3):9–15.
Altieri MA, Funes-Monzote FR, Petersen P. Agroecologically efficient agricultural systems for smallholder farmers: contributions to food sovereignty. Agron Sustain Dev. 2012;32(1):1–13.
Pérez-de-Castro AM, Vilanova S, Cañizares J, Pascual L, Blanca JM, Díez MJ, et al. Application of Genomic Tools in Plant Breeding. Curr Genomics. 2012;13(3):179–95.
Elshire RJ, Glaubitz JC, Sun Q, Poland JA, Kawamoto K, Buckler ES, et al. A robust, simple Genotyping-by-Sequencing (GBS) approach for high diversity species. PLoS One. 2011;6(5):e19379. doi:10.1371/journal.pone.0019379.
Poland JA, Brown PJ, Sorrells ME, Jannink JL. Development of high-density genetic maps for barley and wheat using a novel two-enzyme Genotyping-by- Sequencing approach. PLoS One. 2012;7(2):e32253. doi:10.1371/journal.pone.0032253.
Poland JA, Rife TW. Genotyping-by-Sequencing for Plant Breeding and Genetics. Plant Genome. 2012;5:92–102.
Poland JA, Endelman J, Dawson J, Rutkoski J, Wu S, Manes Y, et al. Genomic Selection in Wheat Breeding using Genotyping-by-Sequencing. The Plant Genome. 2012;5:103–13.
Peterson GW, Dong Y, Horbach C, Fu YB. Genotyping-By-Sequencing for Plant Genetic Diversity Analysis: A Lab Guide for SNP Genotyping. Diversity. 2014;6(4):665–80.
Lu F, Lipka AE, Glaubitz J, Elshire R, Cherney JH, et al. Switchgrass genomic diversity, ploidy, and evolution: novel insights from a network-based SNP discovery protocol. PLoS Genet. 2013;9(1):e1003215. doi:10.1371/journal.pgen.1003215.
Fu YB, Cheng B, Peterson GW. Genetic diversity analysis of yellow mustard (Sinapis alba L.) germplasm based on genotyping by sequencing. Genetic Resource Crop Evolution. 2014;61:579–94.
Lombardi M, Materne M, Cogan NOI, Rodda M, Daetwyler HD, Slater AT, et al. Assessment of genetic variation within a global collection of lentil (Lens culinaris Medik.) cultivars and landraces using SNP markers. BMC Genet. 2014;15:150. doi:10.1186/s12863-014-0150-3.
Wang B, Tan HW, Fang W, Meinhardt LW, Mischke S, Matsumoto T, et al. Developing single nucleotide polymorphism (SNP) markers from transcriptome sequences for identification of longan (Dimocarpus longan) Germplasm. Horticulture Research. 2015;2:14065. doi:10.1038/hortres.2014.65.
Cabezas JA, Ibanez I, Lijavetzky D, Velez D, Bravo G, Rodriguez V, et al. A 48 SNP set for grapevine cultivar identification. BMC Plant Biology. 2011;11:153.
Wu B, Zhong GY, Yue JQ, Yang RT, Li C, Li YJ, et al. Identification of Pummelo Cultivars by Using a Panel of 25 Selected SNPs and 12 DNA Segments. PLoS One. 2014;9(4):e94506. doi:10.1371/journal.pone.0094506.
Wong MML, Verma NG, Ramsay L, Yuan HY, Caron C, Diapari M, et al. Classification and Characterization of Species within the Genus Lens Using Genotyping-by-Sequencing (GBS). PLoS One. 2015;10(3):e0122025. doi:10.1371/journal.pone.0122025.
Narum SR, Buerkle CA, Davey JW, Miller MR, Hohenlohe PA. Genotyping-by-sequencing in ecological and conservation genomics. Mol Ecol. 2013;22(11):2841–7.
Leggett RM, MacLean D. Reference-free SNP detection: dealing with the data deluge. BMC Genomics. 2014;15(4):S10.
Kumar S, Banks TW, Cloutier S. SNP Discovery through Next-Generation Sequencing and Its Applications. International Journal of Plant Genomics. 2012;2012:831460. doi:10.1155/2012/831460.
Varshney RK, Ribaut JM, Buckler ES, Tuberosa R, Rafalski JA, Langridge P. Can genomics boost productivity of orphan crops? Nat Biotechnol. 2012;30:1172–6.
African Orphan Crops Consortium (AOCC). http://africanorphancrops.org (2015). Accessed 30 Aug 2015.
Maranz S, Kpikpi W, Wiesman Z, Sauveur ADS, Chapagain B. Nutritional values and indigenous preferences for Shea Fruits (Vitellaria paradoxa C.F. Gaertn. F.) in African Agroforestry Parklands. Econ Bot. 2004;58(4):588–600.
Maranz S, Niang A, Kalinganire A, Konaté D, Kaya B. Potential to harness superior nutritional qualities of exotic baobabs if local adaptation can be conferred through grafting. Agrofor Syst. 2008;72(3):231–9.
Weerahewa J, Rajapakse C, Pushpakumara G. An analysis of consumer demand for fruits in Sri Lanka 1981–2010. Appetite. 2013;60:252–8.
Glaubitz JC, Casstevens TM, Lu F, Harriman J, Elshire RJ, Sun Q, et al. TASSEL-GBS: A High Capacity Genotyping by Sequencing Analysis Pipeline. PLoS One. 2014;9(2):e90346. doi:10.1371/journal.pone.0090346.
Bolger AM, Lohse M, Usadel B. Trimmomatic: A flexible trimmer for Illumina Sequence Data. Bioinformatics. 2014;30(15):2114–20.
Zhang J, Kobert K, Flouri T, Stamatakis A. PEAR: a fast and accurate Illumina Paired-End reAd mergeR. Bioinformatics. 2014;30(5):614–20.
Edgar RC. Search and clustering orders of magnitude faster than BLAST. Bioinformatics. 2010;26(19):2460–1.
Li H, Durbin R. Fast and accurate short read alignment with Burrows-Wheeler Transform. Bioinformatics. 2009;25:1754–60.
Li H, Handsaker B, Wysoker A, Fennell T, Ruan J, Homer N, et al. The Sequence Alignment/Map format and SAMtools. Bioinformatics. 2009;25(16):2078–9.
Guo Y, Li J, Li CI, Long J, Samuels DC, Shyr Y. The effect of strand bias in Illumina short-read sequencing data. BMC Genomics. 2012;13:666.
R Development Core Team. R: a language and environment for statistical computing. R Foundation for Statistical Computing. 2015.
Purcell S, Neale B, Todd-Brown K, Thomas L, Ferreira MAR, Bender D, et al. PLINK: a toolset for whole-genome association and population-based linkage analysis. Am J Hum Genet. 2007;81(3):559–75.
Lou DI, Hussmann JA, McBee RM, Acevedo A, Andino R, Press WH, et al. High-throughput DNA sequencing errors are reduced by orders of magnitude using circle sequencing. Proc Natl Acad Sci U S A. 2013;110(49):19872–7.
Fox EJ, Reid-Bayliss KS, Emond MJ, Loeb LA. Accuracy of Next Generation Sequencing Platforms. Next Generation Sequencing & Application. 2014: doi:10.4172/jngsa.1000106.
Illumina. Calling sequencing SNPs: a SNP caller in the CASAVA software for identifying SNPs in RNA or DNA sequencing experiments [technical note]. San Diego: Illumina; 2010. http://www.illumina.com. Accessed 22 Jul 2015.
Li R, Li Y, Fang X, Yang H, Wang J, Kristiansen K, et al. SNP detection for massively parallel whole-genome resequencing. Genome Res. 2009;19(6):1124–32.
Del Fabbro C, Scalabrin S, Morgante M, Giorgi FM. An Extensive Evaluation of Read Trimming Effects on Illumina NGS Data Analysis. PLoS One. 2013;8(12):e85024. doi:10.1371/journal.pone.0085024.
Dohm JC, Lottaz C, Borodina T, Himmelbauer H. Substantial biases in ultra-short read data sets from high-throughput DNA sequencing. Nucleic Acids Res. 2008;36(16):e105.
Eren AM, Vineis JH, Morrison HG, Sogin ML. A filtering method to generate high quality short reads using illumina paired-end technology. PLoS One. 2013;8(6):e66643. doi:10.1371/journal.pone.0066.
Wall JD, Tang LF, Zerbe B, Kvale MN, Kwok PY, Schaefer C, et al. Estimating genotype error rates from high-coverage next-generation sequence data. Genome Res. 2014;24(11):1734–9. doi:10.1101/gr.168393.113.
Nielsen R, Korneliussen T, Albrechtsen A, Li Y, Wang J. SNP Calling, Genotype Calling, and Sample Allele Frequency Estimation from New-Generation Sequencing Data. PLoS One. 2012;7(7):e37558. doi:10.1371/journal.pone.0037558.
Gower JC. A general coefficient of similarity and some of its function properties. Biometrics. 1971;27:857–74.
Hopping ME. Flow cytometric analysis of Actinidia species. N Z J Bot. 1994;32:85–93.
Casava 1.8.2. Quick reference guide. San Diego: Illumina; 2011. http://www.illumina.com. Accessed 22 Jul 2015.
Huang S, Ding J, Deng D, Tang W, Sun H, Liu D, et al. Draft genome of the kiwifruit Actinidia chinensis. Nat Commun. 2013;4:2640. doi:10.1038/ncomms364.
We thank D. Cantu for his critical feedback on an earlier version of this manuscript. The authors also wish to thank the journal's anonymous reviewers for their critical feedback. Partial funding for this work was provided by the New Hampshire Agricultural Experiment Station (Scientific Contribution Number 2626). This work is supported by the USDA National Institute of Food and Agriculture Multi-State Hatch Project NH 06611-R.
College of Life Sciences and Agriculture, Department of Biological Sciences, University of New Hampshire, Durham, NH, USA
Arthur T. O. Melo & Iago Hale
College of Life Sciences and Agriculture, Genetics Graduate Program, University of New Hampshire, Durham, NH, USA
Radhika Bartaula
Correspondence to Iago Hale.
The authors declare they have no competing interests.
All authors contributed to the development, troubleshooting, and improvement of the pipeline. AM conducted the comparative pipeline analyses and performance evaluations. AM and IH developed the manuscript. IH coordinated and contributed to all aspects of the work. All authors have read and approved the final manuscript.
Suggested parameter values for GBS-SNP-CROP, based on ploidy scenarios and confidence considerations. AdditionalFile1.pdf presents rationale to guide user selection of ploidy-appropriate values for various parameters in Script 7 of GBS-SNP-CROP (-mnHoDepth0, -mnHoDepth1, -mnAlleleRatio, and –altStrength). (PDF 1231 kb)
Bubble plot showing the pair-wise percentage of shared markers called by the different pipelines. AdditionalFile2.pdf complements the visualization in Fig. 3, showing all pair-wise proportions of shared SNPs among the five evaluated pipelines. (PDF 732 kb)
The distribution of pre-filtered SNPs across three different depth classes: Low (<4), Acceptable (4–200), and Over-represented (>200). The bar plot in AdditionalFile3.pdf compares the distributions of average read depths for the pre-filtered SNPs called by the five evaluated pipelines. (PDF 43 kb)
List of the 48 A. arguta genotypes used for GBS-SNP-CROP development and analysis. AdditionalFile4.pdf presents the names of the genotypes from the USDA National Clonal Germplasm Repository used in this study, along with their barcodes and the number of parsed GBS reads obtained for each. (PDF 50 kb)
GBS-SNP-CROP
Orphan crops
Plant genetic resources
Sequence analysis (methods)
Group
2010 Mathematics Subject Classification: Primary: 20-XX
One of the main types of algebraic systems (cf. Algebraic system). The theory of groups studies in the most general form properties of algebraic operations which are often encountered in mathematics and their applications; examples of such operations are multiplication of numbers, addition of vectors, successive performance (composition) of transformations, etc. The concept of a group is historically one of the first examples of abstract algebraic systems and served, in many respects, as a model for the restructuring of other mathematical disciplines at the turn into the 20th century, as a result of which the concept of a mathematical system (a structure) has become a fundamental concept in mathematics.
Definition.
A group is a non-empty set $G$ with one binary operation that satisfies the following axioms (the operation being written as multiplication):
1) the operation is associative, i.e. $(ab)c = a(bc)$ for any $a$, $b$ and $c$ in $G$;
2) the operation admits a unit, i.e. $G$ has an element $e$, known as the unit element, such that $ae=ea=a$ for any $a$ in $G$;
3) the operation admits inverse elements, i.e. for any $a$ in $G$ there exists an element $x$ in $G$, said to be inverse to $a$, such that $ax=xa=e$.
The system of axioms 1)–3) is sometimes replaced by an equivalent system of two axioms: 1); and 4) the operation admits left and right quotients, i.e. for any two elements $a$, $b$ in $G$ there exist elements $x$, $y$ in $G$, the left quotient and the right quotient of division of $b$ by $a$, such that $ax=b$, and $ya=b$.
It follows from this definition that the unit element in any group is unique, that the element inverse to any given element in the group is unique and that for any elements $a$, $b$ of $G$ both fractions obtained by dividing $a$ by $b$ are unique.
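For instance, the uniqueness claims follow from standard one-line computations, reproduced here for completeness. If $e$ and $e'$ are both unit elements, then

$$ e' = e'e = e, $$

using first that $e$ is a right unit and then that $e'$ is a left unit; and if $x$ and $y$ are both inverse to $a$, then

$$ x = xe = x(ay) = (xa)y = ey = y. $$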
Historical remarks.
The origins of the idea of a group are encountered in a number of disciplines, the principal one being the theory of solving algebraic equations by radicals. Permutations were first employed to satisfy the needs of this theory by J.L. Lagrange (1771) in his Memoir on the algebraic solution of equations, and in a paper by A. Vandermonde (1771). It is the former paper which is of special importance in group theory, since it gives, in terms of polynomials, what is really a decomposition of a symmetric permutation group into (right) cosets with respect to subgroups. The deep connections between the properties of permutation groups and those of equations were pointed out by N.H. Abel (1824) and by E. Galois (1830). Galois must be credited with concrete advances in group theory: the discovery of the role played by normal subgroups (cf. Normal subgroup) in problems of solvability of equations by radicals, the discovery that the alternating groups (cf. Alternating group) of order $n \ge 5$ are simple, etc. C. Jordan's treatise (1870) on permutation groups played an important role in the systematization and development of this branch of algebra.
The idea of a group arose independently in geometry when, in the middle of the 19th century, the single antique geometry that had existed until then gave way to numerous other "geometries", and finding relations between them became an urgent problem. This question was solved by studies in projective geometry, which dealt with the behaviour of geometric figures under various transformations. The stress in these studies gradually shifted to the study of the transformations themselves and their classification. Such a "study of geometric mappings" was extensively conducted by A. Möbius, who investigated congruence, similarity, affinity, collineation, and, finally, "elementary types of mappings" of geometric figures, that is, actually, their topological equivalence. A.L. Cayley (1854 and later) and other representatives of the English school of the theory of invariants (cf. Invariants, theory of) gave a more systematic classification of geometries: Cayley explicitly used the term "group", made systematic use of the multiplication table which now carries his name (cf. Cayley table), proved that any finite group can be represented by permutations, and conceived of a group as a system defined by its generating elements and defining relations. The final stage in this development was the Erlangen program of F. Klein (1872), who based the classification of geometries on the concept of a transformation group.
Number theory is the third source of the concept of a group. As early as 1761, L. Euler, in his study of the residues of powers (cf. Euler function; Primitive root), actually used congruences (cf. Congruence) and their division into residue classes, which in group-theoretic language means the decomposition of groups into cosets of subgroups. C.F. Gauss, in his Disquisitiones arithmeticae, studied the cyclotomic equations (cf. Cyclotomic polynomials) and in fact determined subgroups of their Galois groups (cf. Galois group). He also studied the "composition of binary quadratic forms" in this context, and showed, in essence, that the classes of equivalent forms constitute a finite Abelian group with respect to composition.
Towards the end of the 19th century it was recognized that the group-theoretic ideas employed for a long time in various fields of mathematics were essentially the same, and the modern abstract concept of a group was finally formulated. Thus, as early as 1895, S. Lie defined a group as a set of transformations that is closed under a composition operation which is associative and admits a unit element and inverse elements. The study of groups without assuming them to be finite and without making any assumptions as to the nature of their elements was first formulated as an independent branch of mathematics with the appearance of the book Abstract group theory by O.Yu. Shmidt (1916).
Examples of groups.
The examples below illustrate the role played by groups in algebra, in other branches of mathematics and in natural sciences.
a) Galois groups. Let $K$ be a finite, separable and normal extension of a field $k$. The automorphisms of $K$ leaving the elements of $k$ fixed form a group $Gal(K/k)$ with respect to composition, called the Galois group of the extension $K/k$. The principal theorem in Galois theory states that the mapping which associates to every subgroup of $Gal(K/k)$ its fixed subfield (i.e. the subfield of $K$ whose elements are fixed under the subgroup of $Gal(K/k)$) is an anti-isomorphism of the lattice of subgroups of $Gal(K/k)$ onto the lattice of intermediate subfields between $k$ and $K$.
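As a standard worked illustration (added here; not part of the original article), let $k = \mathbf{Q}$ and $K = \mathbf{Q}(\sqrt{2}, \sqrt{3})$. Then $Gal(K/k) = \{1, \sigma, \tau, \sigma\tau\} \cong \mathbf{Z}/2 \times \mathbf{Z}/2$, where $\sigma$ changes the sign of $\sqrt{2}$ and fixes $\sqrt{3}$, while $\tau$ changes the sign of $\sqrt{3}$ and fixes $\sqrt{2}$. The three subgroups of order two, $\langle\sigma\rangle$, $\langle\tau\rangle$ and $\langle\sigma\tau\rangle$, correspond to the three intermediate fields $\mathbf{Q}(\sqrt{3})$, $\mathbf{Q}(\sqrt{2})$ and $\mathbf{Q}(\sqrt{6})$, respectively, in accordance with the anti-isomorphism just described.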
The application to the problem of the solvability of equations by radicals is as follows. Let $f$ be a polynomial in $x$ over $k$ and let $K$ be a splitting field (cf. Splitting field of a polynomial) of $f$. The group $Gal(K/k)$ is called the Galois group of $f$ over $k$ (its elements act naturally as permutations of the roots of the equation $f(x)=0$). The result is that the equation $f(x)=0$ is solvable in radicals if and only if the Galois group of $f$ is solvable (cf. Solvable group).
In this and other similar examples groups appear in the form of automorphism groups (cf. Automorphism) of mathematical structures. This is one of the most important ways in which groups arise, and it ensures groups a special place in algebra: automorphisms of arbitrary structures can always be "grouped", while a ring structure or any other useful structure on a set of automorphisms can be introduced successfully only in special cases.
b) Homology groups. The leading idea in homology theory is the application of the theory of (Abelian) groups to the study of a category of topological spaces. To each space $X$ is associated a family of Abelian groups $H_0(X), H_1(X), \ldots,$ while each continuous mapping $f: X \rightarrow Y$ defines a family of homomorphisms $f_n: H_n(X) \rightarrow H_n(Y)$, $n = 0, 1, \ldots$. The study of the homology groups $H_n(X)$ (cf. Homology group) and their homomorphisms by the tools of group theory often makes it possible to deal with a topological problem. A typical example is the extension problem: Is it possible to extend a mapping $g: A \rightarrow Y$, defined on a subspace $A$ of $X$, to all of $X$, i.e. is it possible to represent $g$ as the composite of the imbedding $h: A \rightarrow X$ and some continuous mapping $f: X \rightarrow Y$? If so, then in the homology groups one has $g_n = f_n h_n$, i.e. each homomorphism $g_n: H_n(A) \rightarrow H_n(Y)$ can be factored through $H_n(X)$ via the given homomorphism $h_n$. If this algebraic problem is unsolvable, then the initial topological problem is unsolvable as well. Important positive results can be obtained in this way.
Homology groups illustrate another typical manner of application of groups: the study of non-algebraic objects by means of algebraic systems which reflect their behaviour. This is in fact the fundamental method of algebraic topology. A similar method, in particular homology groups, is also used with success in the study of algebraic systems themselves — groups, rings, etc. (e.g. in the theory of group extensions).
c) Symmetry groups. The concept of a group makes it possible to describe the symmetries of a given geometrical figure. To any figure one associates the set of spatial transformations that map it onto itself. This set is a group under composition. It also characterizes the symmetry of the figure. This was in fact the approach of E.S. Fedorov (1890) to the problem of classification of regular spatial systems of points, which is one of the basic problems in crystallography (cf. Crystallography, mathematical). There are only 17 plane crystallographic groups (cf. Crystallographic group), which were found directly; there are 230 3-dimensional crystallographic groups, which could be exhaustively classified only by the use of group theory. This is historically the first example of the application of group theory to natural sciences.
Group theory plays a similar role in physics. Thus, the state of a physical system is represented in quantum mechanics by a point in an infinite-dimensional vector space. If the physical system passes from one state into another, its representative point undergoes some linear transformation. The ideas of symmetry and the theory of group representations (cf. Representation of a group) are of prime importance here.
These examples illustrate the contribution of group theory to all classifications where symmetry plays a role. The study of symmetry is actually equivalent to the study of automorphisms of (not necessarily mathematical) systems, and for this reason group theory is indispensable in solving such problems.
Important classes of groups.
The "final objective" of group theory is to describe all group operations or, in other words, all groups, up to isomorphism. Group theory comprises several parts, which are often distinguished by special conditions imposed on the group operation or by the introduction of additional structures into the group, related in some way with the group operation.
The oldest branch of group theory, which is still intensively studied, is the theory of finite groups (cf. Finite group). One of its important tasks is to determine the finite simple groups (cf. Simple finite group). These include many classical groups of matrices over finite fields, and also "sporadic" simple finite groups (Mathieu groups, cf. Mathieu group, etc.). At the other end there are finite solvable groups (cf. Solvable group) in which specific subgroup systems (Hall, Carter, etc., cf. Carter subgroup; Hall subgroup) are usually studied, since these largely determine the structure of the group itself. Finite groups often appear as permutation groups or as matrix groups over finite fields. A large independent branch of the theory of finite groups is the study of representations by matrices and permutations.
A typical method of study of infinite groups is to impose on them some finiteness condition (cf. Group with a finiteness condition). Here, the main interest is centred on periodic groups, locally finite groups, groups with the maximum condition for subgroups (Noetherian groups), groups with the minimum condition for subgroups (Artinian groups), residually-finite groups, groups of finite rank (cf. Rank of a group), and finitely-generated groups (cf. Periodic group; Noetherian group; Artinian group; Residually-finite group; Finitely-generated group).
In the study of Abelian groups (cf. Abelian group) important roles are played by complete Abelian groups, torsion-free Abelian groups and periodic Abelian groups, and inside these groups by pure subgroups and primary subgroups. The study of any given Abelian group is reduced to a large extent to the theories of the classes listed above with the aid of the theory of extensions of Abelian groups, which is mainly developed by homological methods (cf. Extension of a group).
Broader than the class of Abelian groups are the classes of nilpotent groups and of solvable groups (cf. Nilpotent group; Solvable group), the theory of which has also reached a fairly advanced stage. The most useful extensions of nilpotency and solvability are local nilpotency (cf. Locally nilpotent group), local solvability (cf. Locally solvable group) and the normalizer condition, as well as numerous properties determined by the presence of subnormal systems (cf. Subgroup system) of various types in a group. Of importance are special classes of solvable and nilpotent groups: supersolvable groups, polycyclic groups (cf. Supersolvable group; Polycyclic group).
An important branch of group theory is the theory of transformation groups, including permutation groups and the theory of linear groups (cf. Permutation group; Linear group). A number of important classes of groups are defined by the introduction of additional structures compatible with the group operation; these include topological groups, Lie groups, algebraic groups, and ordered groups (cf. Topological group; Lie group; Algebraic group; Ordered group). Of the other classes of groups, the following are worthy of mention: groups which are free in some variety (cf. Free group), complete groups (cf. Complete group), groups having some property residually (cf. Residually-finite group), groups defined by imposing conditions on their generating elements and defining relations, and groups distinguished by imposing certain conditions on the lattice of subgroups.
Similar remarks as to the (homotopy) extension problem apply to the (homotopy) lifting problem, in which the corresponding diagram is to be filled in with the vertical arrow reversed (a mapping into the base of a fibration is lifted to the total space). The diagram of the extension problem is
\begin{equation} \begin{array}{ccc} A & \rightarrow & Y \\ \downarrow & \nearrow & \\ X & & \end{array} \end{equation}
where the slanted arrow is the mapping to be found.
An important direction in group theory not mentioned in the article above is combinatorial group theory and the study of groups by means of generators and relations [a3], [a4].
Certainly, abstract group theory was considered long before 1916. Thus, W. Burnside, writing in 1897, quotes Cayley as saying that "a group is defined by means of the laws of combination of its symbols", and goes on to explain why, in his own book, he does not, on the whole, take that point of view; [a5], p. viii. L. Kronecker discussed axioms for abstract finite groups in 1870, cf. [a1] (see also [a10]), and the notion of abstract groups was introduced by Cayley in three papers starting in 1849, [a7]–[a9], though these papers received little attention at the time. This had certainly changed by the 1890s, and a discussion of the basic definitions and some basic properties of abstract groups can be found in H. Weber's influential treatise [a6] (1896).
For (a history of) crystallographic groups cf. Crystallographic group.
[a1] L. Kronecker, "Auseinandersetzung einiger Eigenschaften der Klassenzahl idealer complexer Zahlen" Monatsber. K. Preuss. Akad. Wissenschaft. Berlin (1870) pp. 881–889 (Also in: Werke, Vol. 1, p. 271) Zbl 02.0097.01
[a2] H. Weyl, "Symmetry" , Princeton Univ. Press (1952) (Translated from German) MR0048449 Zbl 0046.00406
[a3] W. Magnus, A. Karrass, B. Solitar, "Combinatorial group theory: presentations in terms of generators and relations" , Wiley (Interscience) (1966) pp. 412 Zbl 0138.25604
[a4] H.S.M. Coxeter, W.O.J. Moser, "Generators and relations for discrete groups" , Springer (1957) MR0088489 Zbl 0077.02801
[a5] W. Burnside, "Theory of groups of finite order", Dover, reprint (1955) MR0069818 Zbl 0064.25105
[a6] H. Weber, "Lehrbuch der Algebra" , II , Vieweg (1899) pp. Buch 1, Abschnitt 1 Zbl 30.0093.01
[a7] A. Cayley, "Note on the theory of permutations" Phil. Mag. (3), 34 (1849) pp. 527–529 (Also in: Collected mathematical papers, Vol. I, 432–424)
[a8] A. Cayley, "On the theory of groups as depending on the symbolical equation $\theta^n=1$" Phil. Mag. (4), 7 (1854) pp. 40–47 (Also in: Collected mathematical papers, Vol. II, 123–130)
[a9] A. Cayley, "On the theory of groups as depending on the symbolical equation $\theta^n=1$. Second part" Phil. Mag. (4), 7 (1854) pp. 408–409 (Also in: Collected mathematical papers, Vol. II, 131–132)
[a10] G.F. Frobenius, "Neuer Beweis des Sylowschen Satzes" J. Reine Angew. Math. , 100 (1887) pp. 179–181
Malaria Journal
Field durability of the same type of long-lasting insecticidal net varies between regions in Nigeria due to differences in household behaviour and living conditions
Albert Kilian1,2,
Hannah Koenker3,
Emmanuel Obi4,
Richmond A Selby5,
Megan Fotheringham6 &
Matthew Lynch3
Malaria Journal volume 14, Article number: 123 (2015)
Background

With the recent publication of WHO-recommended methods to estimate net survival, comparative analyses from different areas have now become possible. With this in mind, a study was undertaken in Nigeria to compare the performance of a specific long-lasting insecticidal net (LLIN) product in three socio-ecologically different areas. In addition, the objective was to assess the feasibility of a retrospective study design for durability.
Methods

In three states, Zamfara in the north, Nasarawa in the centre and Cross River in the south, four local government areas were selected one year after mass distribution of 100-denier polyester LLINs. From a representative sample of 300 households per site that had received campaign nets, an assessment of net survival was made based on the rate of loss of nets and the physical condition of surviving nets, measured by the proportionate hole index (pHI). Surveys were repeated after two and three years.
Results

Over the three-year period 98% of the targeted sample size of 3,720 households was obtained and 94% of the 5,669 campaign nets found were assessed for damage. With increasing time since distribution, recall of having received campaign nets dropped by 11-22% and only 31-87% of nets actually lost were reported. Using a recall bias adjustment, attrition rates were fairly similar in all three sites. The proportion of surviving nets in serviceable condition differed dramatically, however, resulting in an estimated median net survival of 3.0 years in Nasarawa, 4.5 years in Cross River and 4.7 years in Zamfara. Although repairs on damaged nets increased from around 10% at baseline to 21-38% after three years, the average pHI value for each of the four hole size categories did not differ between repaired and unrepaired nets.
Conclusions

First, the differences observed in net survival are driven by living conditions and household behaviours and not the LLIN material. Second, recall bias in a retrospective durability study can be significant and while adjustments can be made, enough uncertainty remains that prospective studies on durability are preferable wherever possible. Third, repair does not seem to measurably improve net condition and focus should, therefore, be on improving preventive behaviour.
Background

While further progress is made towards achieving universal coverage with insecticide-treated nets (ITNs) in Africa south of the Sahara [1], increasing focus is given to the question of how to sustain these successes, particularly through improvements in field durability in long-lasting insecticidal nets (LLINs). This is not only important to determine the optimal time for net replacement in sustaining universal coverage [2], but also to obtain optimal cost-effectiveness in LLIN procurement (cost per year of use) [3].
The World Health Organization (WHO) has, in recent years, developed clear definitions and methodologies to assess net durability and survival in field conditions [4-6]. The guidance has led to a significant increase in studies presenting comprehensive and comparable durability assessments in different regions [7-9] and for different products [10,11], which suggest considerable variation in net survival, ranging from less than two years to four or more years. In addition, quantitative [9,12,13] as well as qualitative [14-16] data are becoming available to better understand the determinants of field performance with respect to environmental conditions and user behaviour; these demonstrate that such factors have at least as strong an influence as the physical specifications of the nets [17].
While the standard design for durability assessment is a prospective study in which nets are identified at the time of distribution and then followed up over a given period or until they are lost [4], this is not always possible due to time or logistical constraints. The WHO guidelines therefore also envisage a retrospective design in which assessments are conducted using multiple cross-sectional surveys, while highlighting the potential for recall bias in this type of assessment [4]. The present study was undertaken in Nigeria to describe how the field durability of a very similar LLIN product (100-denier polyester net) compares between three very different ecological and sociocultural areas of the country and to define the main determinants of potential differences. In addition, the study intended to explore the feasibility of such a retrospective approach to durability assessment, the level of bias involved and how this could be accounted for in the analysis.
Methods

This study was a multi-site, retrospective assessment of net durability and survival with three rounds of annual, representative cluster-sampling household surveys. At each survey round a representative sample of households that had received nets from the preceding campaign were included and the attrition (nets received and lost) and physical integrity of surviving nets measured. While clusters (settlements) remained the same over time, households were re-sampled at each round so that annual samples can be considered independent. Nested within this observational study was an intervention study on the impact of behaviour change communication (BCC) on net care and repair behaviour, the design and results of which are reported separately [18].
Three states were purposively selected to represent three distinct ecological and climatic settings in Nigeria and in each one a rural local government area (LGA, equivalent to a district) was chosen as study site (Figure 1). Zamfara State is located in the dry savannah in the north (North West Zone) with an average annual rainfall of 600 mm between May and October (six months). The selected LGA was Shinkafe with an estimated 2012 population of 163,868 inhabitants. Nasarawa State is situated in the Guinea Savannah of central Nigeria (North Central Zone) with an average annual rainfall of 1,400 mm between March and November (nine months). The selected LGA was Toto (2012 population 142,184) with an additional LGA selected as the intervention site for the previously mentioned care and repair study (Kokona LGA, population 131,046). The third location was Cross River State in the southeast of the country (South East Zone) dominated by tropical primary and secondary forests of the Niger Delta and an average annual rainfall of 2,400 mm almost all year round (11 months). The selected site was Abi LGA with a 2012 population estimate of 171,896.
Map of Nigeria showing the three study states and four local government areas (LGAs). In Nasarawa solid shape = control site (Toto LGA), striped = intervention site (Kokona LGA).
The three sites also differed in their sociocultural context: Zamfara has a majority Hausa ethnic group who are predominantly Muslim, while Cross River's population is predominantly Efik ethnicity and Christian. Between these two states Nasarawa represents a mix of ethnic and cultural influences.
Campaigns and campaign nets
Nigeria's National Malaria Elimination Programme started the first round of mass campaigns for the distribution of LLINs to all households in 2009 in Kano State [19] and targeted two LLINs for every household registered by the mobilization teams. The three selected states had their campaign within three months of each other, starting with Nasarawa in December 2010, Cross River in January 2011 and Zamfara in March 2011. At all three sites the same or very similar LLIN product was used, a polyester LLIN with a 100-denier yarn strength which was Permanet 2.0® (Vestergaard) in Zamfara and Nasarawa, and DAWA Plus 2.0® (Tana Netting) in Cross River.
Sampling and sample size
Sampling for the cross-sectional household surveys was done in two major steps. First, clusters defined as settlements (villages) were selected with probability proportionate to population size. Because no sampling frame was available for the village population, the population lists of wards (administrative unit below LGA) were used to allocate clusters and then within each selected ward an up-to-date list of all settlements was obtained from the state authorities and one settlement selected using simple random sampling. These clusters were maintained throughout the three annual survey rounds.
For the selection of households within each cluster and survey all households were mapped on the survey day and the required number selected using random number lists. If a community had more than 200 households it was divided into approximately equal sections with the help of local leaders and one of the sections randomly selected for mapping. A household was defined as all people 'eating from the same pot'. Each selected household was then screened to assess whether they had participated in the mass campaign and received any nets. If they had not received any campaign nets, the household was dropped and a replacement household from the random list selected. Up to ten replacements were available per cluster.
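As a rough illustration of this within-cluster selection, the following Python sketch (a hypothetical helper, not the field teams' actual procedure) draws the 15 study households plus the pool of up to ten replacements from a mapped household list:

import random

def select_households(mapped_ids, n_main=15, n_replacements=10, seed=1):
    # draw the main sample and the replacement pool in one pass, without repeats
    rng = random.Random(seed)
    draw = rng.sample(mapped_ids, n_main + n_replacements)
    return draw[:n_main], draw[n_main:]

main, spares = select_households(list(range(1, 181)))  # e.g. 180 mapped households
print(len(main), len(spares))  # 15 10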
For the calculation of sample size, the following assumptions were made: alpha error of 0.05 (95% confidence intervals), beta error of 0.2 (power 80%), design effect of 1.75, non-response of 5% and an average of 1.8 nets received per household that participated in the campaign. Based on the calculations a sample of 20 clusters with 15 households (300 households per site and time point) was determined as sufficient, resulting in a sample of 597 campaign nets to be assessed per site in year one, 498 in year two and 332 in year three. Assuming a net survival of 50% after three years, the expected precision of the survival estimate was ±5.6 percentage-points, sufficient to detect an 11.2 percentage-point difference in survival between the sites at the end of the study, which was considered programmatically relevant. For the last survey round (year three) the number of clusters in Toto LGA, Nasarawa, was increased to 28 (420 households) in order to increase power for the nested care and repair study due to some contamination of intervention (radio) into the control area [18].
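For orientation, the precision side of this calculation can be sketched as follows (illustrative Python with assumed inputs; the study's own figure also folds in assumptions, such as non-response, that are not reproduced here):

from math import sqrt

def ci_half_width(p, n, deff=1.75, z=1.96):
    # half-width of the 95% CI for proportion p under cluster sampling,
    # i.e. with an effective sample size of n / deff
    return z * sqrt(deff * p * (1 - p) / n)

print(round(ci_half_width(p=0.5, n=500), 3))  # 0.058, about +/-5.8 points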
Field procedures
For each state, three teams of three interviewers and one supervisor were selected and trained, as well as one overall state coordinator. These teams largely stayed together throughout the three-year study period with very little fluctuation. Teams were trained for one week before each survey. The training consisted of two parts: first, training on the survey sampling and interviewing procedures with detailed work on the questionnaire and how it would be used in the local languages in a standardized fashion. This was followed by a theoretical and practical training on the assessment of the physical condition of nets following a training manual previously developed based on other durability studies and using a template for the determination of hole size categories and a tally sheet to assist in counting the number of holes on the net [20]. Campaign nets were identified using a visual aid of labels and packaging of all available LLIN brands.
A structured questionnaire was used to gather data on household characteristics, nets received from mass campaigns, any nets lost and the reasons for the loss, net care and repair behaviour and attitudes, exposure to care and repair messages, and characteristics as well as assessment of existing campaign nets. Holes in the nets were categorized into four distinct groups as recommended by WHO [4]: size one (0.5-2 cm in diameter), size two (2–10 cm), size three (10–25 cm) and size four (larger than 25 cm). The presence and number of repairs were also counted on each net. Respondents with nets that had any signs of damage were asked how any of the holes had occurred and five categories were recorded allowing multiple responses: "torn on an object", "pulled and torn", "seam came open" (summarized as mechanical damage), "damage from mice or rats" (rodent damage), and "burns from flame or sparks" (thermal damage). Some modifications of the questionnaire were made for the third survey round with more detailed questions being added with respect to attitudes and practices regarding net durability, care and repair.
All surveys were done between March and April in the three years following the campaign (2012–2014) with the exact time elapsed since the campaign varying between 1.1 and 1.2 years for the first round, 2.1 to 2.3 years for the second round and 3.1 to 3.3 years at the third round (see Additional file 1).
Data preparation and analysis
Data were collected on paper forms in the field and then entered by qualified staff into an EpiData 3.1 data base (EpiData Association, Odense, Denmark) using double entry and record validation. The cleaned versions of the datasets were then transferred for further processing and analysis to the Stata 13.1 software package (Stata Corp., College Station, TX, USA).
Assessment of the physical integrity of nets followed the most recent recommendations of WHO [4-6] and was based on a two-step approach. First, the proportionate hole index (pHI) was calculated for each net based on the number of holes in each size category and multiplying them with the recommended weights:
$$ \mathrm{pHI} = \left(\#\mathrm{size}\ 1\ \mathrm{holes}\right) + \left(\#\mathrm{size}\ 2\ \mathrm{holes} \times 23\right) + \left(\#\mathrm{size}\ 3\ \mathrm{holes} \times 196\right) + \left(\#\mathrm{size}\ 4\ \mathrm{holes} \times 578\right). $$
Based on the pHI value nets were then divided into three categories:
Good: total hole surface area ≤0.01 sq m (pHI ≤64)
Acceptable: total hole surface area >0.01 but ≤0.1 sq m (64 < pHI ≤642)
Torn: total hole surface area >0.1 sq m (pHI >642)
The first two categories were then combined as:
Serviceable: the net is either good or acceptable
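A minimal sketch of this two-step calculation (illustrative Python, not the study's analysis code; function names are hypothetical):

PHI_WEIGHTS = (1, 23, 196, 578)  # weights for hole size categories 1-4

def phi(holes):
    # holes = (n_size1, n_size2, n_size3, n_size4) counted on one net
    return sum(n * w for n, w in zip(holes, PHI_WEIGHTS))

def condition(phi_value):
    if phi_value <= 64:
        return "good"
    if phi_value <= 642:
        return "acceptable"
    return "torn"

def serviceable(phi_value):
    return condition(phi_value) in ("good", "acceptable")

p = phi((10, 3, 1, 0))                  # 10*1 + 3*23 + 1*196 = 275
print(p, condition(p), serviceable(p))  # 275 acceptable True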
The rate of attrition was calculated as the proportion of campaign nets lost among all nets originally received. Based on the reported reasons for net loss, attrition was further divided into: i) due to "wear and tear" defined as discarding of nets by throwing away, destroying or using them for other purposes, since previous data from a multi-country analysis has shown that these re-purposed nets are predominantly torn and considered no longer usable [21]; and, ii) due to giving them away for others to use [4]. The functional survival to time point x was calculated as the number of campaign nets still in serviceable condition at time x divided by the number originally received and not given away (i.e., surviving nets plus loss to "wear and tear") [5].
Based on the considerable recall bias observed in the data (Table 1) an adjustment was made for the estimation of campaign nets received and lost. Details of the calculations are shown in Additional file 2. In short, the number of campaign nets reported as received after two and three years, respectively, was inflated by the ratio between reported nets received per person in the household in that survey compared to the first one (one year after the campaign). The number of campaign nets lost was taken as the difference between campaign nets received and those actually seen in the survey and identified as campaign nets by LLIN brand. The number lost to "wear and tear" was obtained by applying the proportion from the data to the adjusted number of nets lost. This adjustment for lost nets was done for all three survey rounds.
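The adjustment and the resulting survival estimate can be sketched as follows (invented numbers; adjusted_received and functional_survival are hypothetical helpers, not the study's code):

def adjusted_received(reported_now, per_person_now, per_person_y1):
    # inflate currently reported campaign nets by the year-1 nets-per-person ratio
    return reported_now * (per_person_y1 / per_person_now)

def functional_survival(received_adj, observed, n_serviceable, frac_wear_tear):
    lost = received_adj - observed            # all-cause attrition
    lost_wear_tear = lost * frac_wear_tear    # discarded or re-purposed nets
    denominator = observed + lost_wear_tear   # excludes nets given away
    return n_serviceable / denominator

rec = adjusted_received(reported_now=400, per_person_now=0.20, per_person_y1=0.25)
print(rec)  # 500.0 nets estimated as actually received
print(round(functional_survival(rec, observed=350, n_serviceable=300,
                                frac_wear_tear=0.6), 2))  # 0.68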
Table 1 Magnitude of recall bias for campaign nets received and lost
Following the recommendations of WHO [5], the estimated net survival was plotted against hypothetical survival curves with defined median survival times [6] and details of these functions are found in Additional file 3.
Median estimated net survival was calculated from at least two time points, the lowest of which was below 85% using the following formula:
$$ t_m = t_1 + \frac{\left(t_2 - t_1\right) \times \left(p_1 - 50\right)}{\left(p_1 - p_2\right)} $$
where $t_m$ is the median survival time, $t_1$ and $t_2$ are the first and second time points in years, and $p_1$ and $p_2$ are the proportions surviving to the first and second time points, respectively, in per cent. The confidence interval of the median net survival was obtained by applying the formula to the lower and upper limits of $p_1$ and $p_2$, respectively.
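This interpolation is straightforward to transcribe (the example inputs below are hypothetical, not the study's estimates):

def median_survival(t1, t2, p1, p2):
    # linear interpolation of the time at which survival crosses 50%;
    # assumes p1 > 50 >= p2 so the median lies between the two surveys
    return t1 + (t2 - t1) * (p1 - 50) / (p1 - p2)

print(round(median_survival(t1=2.2, t2=3.2, p1=62.0, p2=42.0), 2))  # 2.8 years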
To capture care and repair attitudes of households, an attitude score was constructed based on responses to eight statements introduced at the third survey round. These statements used a four-level Likert scale, where 1 was "strongly disagree" and 4 was "strongly agree". These were recoded during analysis to have −2 be "strongly disagree" and +2 be "strongly agree". Two statements were negatively phrased, and therefore were inversely recoded to make a positive response +2. Attitude scores for each respondent were summed and divided by eight to calculate an overall attitude score. Scores were then categorized into four groups: equal or less than zero (negative attitude); 0.01-0.74 (somewhat positive attitude); 0.75-1.49 (positive attitude); and, 1.50-2.00 (very positive attitude).
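One plausible transcription of this scoring (the exact recode mapping of the 4-level scale and which two items were reverse-coded are assumptions for illustration):

RECODE = {1: -2, 2: -1, 3: 1, 4: 2}   # assumed -2..+2 recode of the 4-level scale
NEGATIVE_ITEMS = {2, 5}               # hypothetical indices of the two reversed items

def attitude_score(responses):
    # responses: the eight Likert answers, each coded 1..4
    vals = [RECODE[r] for r in responses]
    vals = [-v if i in NEGATIVE_ITEMS else v for i, v in enumerate(vals)]
    return sum(vals) / len(vals)

def attitude_band(score):
    if score <= 0:
        return "negative"
    if score < 0.75:
        return "somewhat positive"
    if score < 1.50:
        return "positive"
    return "very positive"

s = attitude_score([4, 4, 1, 3, 4, 2, 3, 4])
print(s, attitude_band(s))  # 1.625 very positive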
The wealth index was computed at the household level using principal component analysis (PCA). The variables for household amenities, assets, livestock, and other characteristics that are related to a household's socio-economic status were used for the computation. All variables were dichotomized except those of animal ownership where the total number owned was used. The first component of the PCA was used as the wealth index. Households were then classified according to their index value into quintiles within each study site and time point.
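A compact sketch of this construction with pandas and scikit-learn (column names and the standardization step are assumptions; the study's exact variable handling may differ):

import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def wealth_quintiles(df, asset_cols, site_col="site"):
    # first principal component of the (standardized) asset variables
    X = StandardScaler().fit_transform(df[asset_cols])
    out = df.copy()
    out["wealth_index"] = PCA(n_components=1).fit_transform(X)[:, 0]
    # quintiles computed within each study site: 1 = poorest ... 5 = richest
    out["wealth_quintile"] = out.groupby(site_col)["wealth_index"].transform(
        lambda s: pd.qcut(s, 5, labels=[1, 2, 3, 4, 5]))
    return out

# e.g.: df = wealth_quintiles(df, ["radio", "tv", "phone", "n_goats"])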
For all statistical analyses the cluster design was taken into account by applying the survey family of commands and thereby adjusting confidence intervals (CI) for the design effect. The CI for the adjusted net survival rates were obtained by calculating the exact binomial 95% CI from the adjusted numerator and denominator and then inflating the CI by the design effect obtained from the data for the proportion of campaign nets in serviceable conditions.
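The CI inflation can be sketched as follows (scipy-based; treating "inflating the CI by the design effect" as widening the exact interval about the point estimate by sqrt(DEFF), equivalent to an effective sample size of n/DEFF, which is one plausible reading of the description above):

from scipy.stats import beta

def exact_ci(k, n, alpha=0.05):
    # Clopper-Pearson (exact binomial) confidence interval for k/n
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

def deff_inflated_ci(k, n, deff):
    p = k / n
    lo, hi = exact_ci(k, n)
    w = deff ** 0.5
    return max(0.0, p - w * (p - lo)), min(1.0, p + w * (hi - p))

print(deff_inflated_ci(300, 440, 1.75))  # roughly (0.62, 0.74)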
In order to account for potential confounders in the analysis of key outcomes multivariate regression models were used, logistic regression for dichotomous outcomes and linear regression for continuous variables. Models were constructed using backwards elimination and Wald tests for significant parameters. Variables that define the data structure such as site and time point were included in all models irrelevant of significance level.
Because the data from the nested care and repair study did not show a significant difference between the control site (Toto LGA) and the intervention site (Kokona LGA) due to the radio contamination [18] both sites were included for Nasarawa in the durability analysis.
Ethical approval was obtained from the Johns Hopkins School of Public Health Institutional Review Board (IRB #4108) and from the National Health Research Ethics Committee, Federal Ministry of Health in Nigeria (NHREC/01/01/2007). Respondents were informed about the purpose of the study in the dominant local language (Hausa or Efik) using a written script and the interview proceeded when verbal consent was given. This consent form contained information on the objectives of the survey, the risks, benefits and freedom of the participation, as well as information on confidentiality plus respondent rights.
Results

The sample
Out of a total of 3,720 households to be sampled according to protocol, 3,649 (98.1%) valid interviews from households that had received campaign nets were obtained with a range by year and site between 92.0 and 100% (see Additional file 1). The number of campaign nets found in the sampled houses was 5,669 in total but decreased over time. In the first survey, the average number of campaign nets per sampled household was 1.79, very close to the 1.8 assumed for the sample size calculations. This rate decreased to 1.53 nets per household in the second survey round and 1.37 in the third. Field teams were able to assess 93.8% of all campaign nets for physical damage and the range across sites and surveys was between 82.6 and 99.2% (see Additional file 1).
As anticipated by the study design, house characteristics, assets and sociodemographic variables differed significantly between sites, with a north–south gradient for many of the variables (see Additional file 4). At the Zamfara site the vast majority of houses had thatch or grass roofs, mud walls and floors made from earth or clay. In contrast, at the Cross River site more than 70% of houses had roofs made from sheets (iron or aluminium), plastered or brick walls and tile floors. The Nasarawa site was mixed, with mostly sheet roofs but only 43% plastered walls, and 42% of houses had earth floors. Ownership of mobile phones and television sets increased from north to centre to south; radios were the exception, being less common in Cross River, where 65% of households owned a TV set. Education also showed a strong gradient, with only 17% of heads of household at the Zamfara site having had at least some secondary school, compared with 27% in Nasarawa and 45% in Cross River. The north had a higher proportion of polygamous households, more households with children under five and a higher child density (child to adult ratio). Mean household size was around eight persons in Zamfara and Nasarawa and between five and six in Cross River.
Attrition, integrity and survival
Table 1 presents the level of recall bias for campaign nets received and lost documented in the surveys. Compared to the first survey, one year after the campaign, respondents' recall of the number of nets received from the campaign at year three reduced by 21.5% in Zamfara, by 16.6% in Cross River and by 10.6% in Nasarawa. The recall bias for nets lost was even larger, with generally only half of the nets lost based on the difference between those received and found in the survey reported by the respondents. In Zamfara and Cross River the recall worsened from the first to the third survey while in Nasarawa, the site of the nested care and repair BCC impact study, recall was highest at the last survey with 72%.
Results from the attrition and net integrity analysis are summarized in Table 2. The loss of campaign nets for any reason over time was very similar at all three sites with attrition rates – adjusted for recall bias – increasing from between 6 and 22% in year one to between 29 and 34% in year three. Among the lost nets the proportion that were discarded due to "wear and tear" increased over time at all sites and was very similar in Nasarawa and Cross River increasing from 34% in year one to 55 and 64%, respectively, in year three. In comparison to these two states a larger proportion of lost nets was given away for others to use in Zamfara with only 5% of all lost nets reported to have been discarded in year one, 25% in year two and 40% in year three. Considering only the discarded or re-purposed nets, the attrition rate for campaign nets due to "wear and tear" after three years was 13% in Zamfara, 18% in Nasarawa and 19% in Cross River.
Table 2 Attrition and integrity of campaign nets up to three years after distribution
The physical condition of surviving nets (Table 2) deteriorated significantly faster in Nasarawa with only slightly more than half of the nets still in serviceable condition after three years compared to 88% in Cross River and 90% in Zamfara.
The resulting, estimated, functional survival of campaign nets is shown in Table 3 and Figure 2. There was a striking difference between the crude and recall-adjusted estimates of functional net survival especially in Zamfara and Cross River with 15.2 and 18.4 percentage-point differences, respectively, after three years. In Nasarawa the discrepancy was not quite as high with an 11.2 percentage-point difference but here the overall survival estimate was more than 30 percentage points lower than at the other two sites reaching only 42% at the third survey round.
Table 3 Functional survival up to three years and median survival estimates for campaign nets
Survival in functional condition of campaign nets (100-denier polyester LLIN) up to three years after distribution in comparison to hypothetical survival curves of defined median survival. Solid lines = recall adjusted estimates; dashed line = crude estimates; horizontal dotted line = median survival; vertical arrows indicate where the functional survival curves reach or are projected to reach the median.
Plotting the survival results against the hypothetical survival functions (Figure 2) shows that the adjusted rates follow quite closely the assumed net decay, with Zamfara consistently just below the five-year curve, Nasarawa at all time-points around the three-year curve, and Cross River around the four-year curve although with some fluctuation between the second and third data points. Calculating the median survival from two data points leads to similar results (Table 3) with a value of 4.7 years in Zamfara, 3.0 in Nasarawa and 4.5 in Cross River (based on year one and year two results).
Use of nets
Of the campaign nets found in the survey, 60.8% had been used the previous night to sleep under and 66.5% had been used at any time in the past week. In a multivariate logistic regression model, use of the campaign net the previous night was more likely in Cross River with an adjusted odds ratio (OR) of 1.9 (p <0.0001) compared to the other sites, if the net was used over a bed frame or foam mattress compared to mat or the ground (adjusted OR 2.7, p <0.0001), if the household belonged to the highest wealth quintile (adjusted OR 1.4, p=0.005), and if there were any children under five in the household (adjusted OR 1.3, p=0.02). Use was less likely if the net had never been washed (adjusted OR 0.62, p=0.001). Interestingly, use was only marginally influenced by the physical condition of the net with a slightly increased use for nets in good condition (adjusted OR 1.2, p=0.12), but no decrease in use with increasing number of holes.
Reasons given for not using the nets were mostly based on perceptions of no threat or discomfort or dislike (84.6%), i.e., that it was too hot under the net (60.5%), that there was no malaria (28.6%), or that the net was too dirty (3.8%). Reasons that are related to condition of the net or its availability and usefulness were given for 15.4% of nets not used and the most common were that the net was too old or torn (8.5%), was currently not needed by any household member (7.3%), that the usual user of the net was not around last night (2.2%), and that net was not available due to washing (1.5%). Multiple answers were allowed for these responses.
Washing of nets
At all three sites the proportion of nets ever washed increased over time (p <0.001) but the washing rate differed between sites, being significantly lower in Cross River with only 66.8% of campaign nets ever washed after three years compared to 93.9% in Zamfara and 90.5% in Nasarawa (p <0.0001). The wash frequency also differed between sites. Among nets that had been washed at all in the previous six months, 23.1% had been washed four or more times in Nasarawa, 12.3% in Zamfara, and only 4.8% in Cross River (p <0.0001). Slightly more than half of the campaign nets had been washed with bar soap (56.7%) while 42.2% used a detergent. Only 0.3% used bleach. There were no major differences in soap use between sites. Similarly, the pattern of drying the nets was similar with most nets (62.0%) dried on a washing line, 13.7% on the ground, 11.5% over bushes and 12.8% inside.
Causes of damage
If a net was found to have any holes, the respondents were asked whether they knew how the damage had occurred. The response rate was similar in Zamfara (84.0%) and Nasarawa (85.6%) but lower in Cross River (70.2%, p=0.0001). In Zamfara and Nasarawa the response rate increased over time from 77.9% at the first survey to 87.7% at the third survey, but remained constant over time in Cross River. The number of different damage mechanisms reported per net increased over time from 1.2 per damaged net in the first survey to 1.9 in the third and was consistently higher in Nasarawa, with 2.2 in the three-year survey compared to only 1.3 in both Zamfara and Cross River.
Damage patterns did not change over time but differed significantly between the three sites as shown in Figure 3. Overall mechanical damage was the dominating mechanism reported, at 56.5% in Zamfara, 74.8% in Nasarawa and 74.1% in Cross River. However, the net getting stuck on a sharp object was more often reported in Zamfara and Nasarawa, while damage by pulling on the net was more commonly reported in Cross River. Rodent damage was very high in Zamfara (51.5%) and Nasarawa (55.7%) but less common in Cross River (16.1%). Thermal damage from flames or sparks was low at all sites ranging from 5.8% in Zamfara to 9.9% in Cross River.
Reported causes of damage for campaign nets with holes for which any mechanism was reported (multiple responses possible).
One year after the campaign the proportion of campaign nets with holes that showed any sign of repair was similarly low at all three sites (p=0.6) ranging from 6.0% in Zamfara to 10.9% in Cross River and 10.4% in Nasarawa. At year two, i.e., after a first round of BCC interventions in Nasarawa in the nested care and repair impact study, the repair rate had significantly increased in Nasarawa to 23.5% compared to 12.4% in Zamfara and 12.2% in Cross River (p=0.005) and was highest in the Nasarawa BCC intervention site (Kokona LGA) with 28.1% compared to 19.8% in the control site (Toto LGA), but this difference was not significant (p=0.14). However, after three years repair rates had also further increased in Zamfara (22.7%) and even more in Cross River (38.1%) while rates in Nasarawa remained about the same (21.0% overall, 26.5% intervention and 17.8% control). Of all the nets with any repairs, 42% had full repairs meaning that the hole in question was completely closed, 32% had both fully and partially repaired holes and 26% had only partial repairs.
Four factors could be identified in a multivariate logistic regression analysis as independent drivers of the probability that any repairs were found on a damaged campaign net and these are shown in Table 4. Likelihood of repair increased continuously with increasing deterioration of the net and a net considered "torn" was almost three times as likely to have any repairs compared to nets in good condition. This observation was consistent across all three sites and surveys. The second factor was exposure to any messages on care and repair and this also showed a dose–response relationship, i.e., the more exposure, measured as number of care and repair messages recalled, the higher the likelihood that repairs were made. When the composite measure of the care and repair attitude score was used, which was only available for the third survey, the increase in repair behaviour with improving attitude becomes even more prominent with an adjusted OR of 4.1 (95% CI 2.0, 8.7) for households with a very positive attitude (Figure 4). Repair behaviour also improved with time since distribution, showing a doubling of the likelihood of repairs in the second year, but no further increase in the third. Finally, there was a difference in repair behaviour between the sites and, surprisingly, it was highest in Cross River although no explicit care and repair campaign had been implemented there beyond the general net-related BCC that did include some messages on "handling nets with care". Other factors such as wealth quintile, educational level of the head of household or presence of children in the family had no impact on repair behaviour.
Table 4 Determinants of repairing any holes in campaign nets with any damage across all three surveys and sites from multivariate logistic regression model (N=2,522)
Adjusted odds-ratio of campaign nets showing any signs of repair in relation to care and repair attitude of household respondent three years after distribution. Adjustment variables were site and physical condition of net.
When the approximate hole surface area estimated by the pHI was compared between damaged nets with and without repairs, the median pHI was much higher for nets with repairs (657) than for those without (199). This was driven by the fact that more heavily damaged nets were more likely to have repairs, as shown above. When the analysis was done by category of physical condition of the net, the damaged surface area was found to be exactly the same for nets with and without repairs (Table 5). A multivariate regression analysis also confirmed that this finding was consistent across surveys and sites.
Table 5 Mean and median proportionate hole index by hole size category and repair status for campaign nets with any damage across all surveys and sites (N=2,522)
Determinants of net integrity
A series of multivariate logistic regression models was run to explore the determinants of the physical condition of the campaign nets and how they varied between sites and over time. This showed that, in general, the strength of associations between physical condition and the determinants included in the models increased over time (i.e., from the first to the third survey) as nets deteriorated, and the findings were similar for the outcome of a net being in good compared to serviceable condition. Table 6 presents the result of the final model for year three, which included data on the care and repair attitude score. Across the study sites, a positive attitude was associated with a higher likelihood of a net being in serviceable condition (adjusted OR 2.9, p=0.003). In Nasarawa, the site of the BCC intervention, there was also a strong dose–response relationship, with increasing odds of a net being in serviceable condition as the attitude score increases (adjusted OR 2.3 for each unit of increase, p=0.002); this was absent or very weak in the other sites, so that overall the dose–response was not clearly visible (Table 6). However, the interaction term between attitude score and states was not significant (p=0.8), indicating that the relationship of condition and attitude did not principally differ between states.
Table 6 Determinants of surviving campaign nets being in serviceable condition three years after distribution based on multivariate logistic regression model (N=1,519)
Nets hanging and folded or tied up as well as those securely stored showed the best physical condition while the probability of being in serviceable condition was reduced by 37% if the net was hanging loose and by 66% if it was taken down but not stored. For the latter it was, however, not clear from the data whether this was because damaged nets were taken down and left lying around or whether they were damaged (e.g., by rodents) because they were lying around.
The presence of children aged under five in the household was associated with poorer condition of the nets and there was a statistically significant decrease in the probability of a net being serviceable with an increasing number of young children. The association with the care and repair attitude was driven by the Nasarawa data and although not different in principle (non-significant interaction term in the model) was much weaker in Zamfara and Cross River.
Nets also showed a significantly poorer physical condition if they were used over a mat (56% reduction of the probability of being in serviceable condition) or on the ground (73%), if the household was in the poorest wealth quintile (48%) and if the sleeping room was crowded (20% for each additional person). Nets that were reported to have been used every day of the previous week were more likely to be in serviceable condition, but here cause and effect are most likely reversed, i.e., they were used more regularly because they were in better condition. No significant association with net integrity was found for education of the head of household or the washing frequency of nets in the last six months. Finally, campaign nets from Nasarawa had a 90% lower probability of being in serviceable condition compared to those in Zamfara and – all other things being equal – were still 61% less likely to be serviceable compared to Cross River.
Determinants of reported rodent damage were assessed in a separate model which revealed four major factors: the net being used on the ground rather than a bed, mattress or mat (adjusted OR 3.6, p <0.0001); household being in the poorest wealth quintile (adjusted OR 1.7, p=0.005); food being stored in the room of the net (adjusted OR 1.6, p=0.030); and the net not hanging and not being stored (adjusted OR 1.6, p=0.047). In addition, a significant difference between sites was found with rodent damage being twice as likely in Nasarawa (adjusted OR 2.0, p=0.014) compared to Zamfara and significantly less likely in Cross River (adjusted OR 0.05, p <0.0001).
Based on the results from the models of determinants of damage, some of the factors at household and net levels that, in addition to climate and housing, were shown to impact on the net survival outcome were compared between sites and results are presented in Table 7. Somewhat surprisingly, the attitude towards care and repair three years after the campaign was similar or even slightly better in Zamfara and Cross River compared to Nasarawa even though the latter was the site for the BCC intervention and had significantly more exposure to messages on care and repair. This suggests that either households in Zamfara and Cross River had generally a better attitude towards taking care of their possessions or improvements in attitude were more easily induced by general messages on nets, or it was a combination of the two.
Table 7 Differences in uni-variate analysis between sites in factors that potentially impact on net survival (survey three only)
Generally the reported presence of rodents around the houses was very common, but least common in Cross River and highest in Nasarawa. In contrast, storing food or crops in rooms that were also used for sleeping was significantly more common in Zamfara. Cooking inside sleeping rooms was generally uncommon and differed only marginally between sites. The density of children under five as well as the crowding of people within one room was highest in Zamfara even though the mean number of persons per households was slightly lower than in Nasarawa (see Table 1).
The type of sleeping place over which the campaign nets had been used the previous night was more commonly a bed frame in Zamfara but at the same time this site had the highest proportion of nets found over mats or directly over the ground. In Zamfara, nets were also more often tied or folded up when they were hanging and securely stored when not. Nasarawa showed a low proportion of tying or folding the hanging nets and a high rate of nets not hanging and not properly stored in a box or cupboard, i.e., lying openly in the room.
Discussion

The primary objective of this study was to apply the most recent WHO-recommended methodology for estimating functional survival of LLINs in the field [4-6] in three different areas in Nigeria, using a retrospective study design, in order to assess the magnitude of local variation in net survival for the same type of net (100-denier polyester LLIN). After slightly more than three years, the functional survival of the campaign nets ranged from 41.9% in Nasarawa to 70.1% in Cross River and 74.7% in Zamfara. This corresponded to an estimated median net survival of 3.0, 4.5 and 4.7 years, respectively, i.e., a difference of more than one year between sites.
To date only one study has been published that applies the new WHO recommendations to estimate functional net survival and this involves two brands of LLIN (one 100-denier polyester, the other 118-denier polyethylene) that were tested in one site in Cambodia [11]. After three years functional survival was 61.2% for the polyester and 58.1% for the polyethylene LLIN corresponding to a median survival of around 3.5 years.
Two other recent studies have included measures of attrition and integrity, but did not combine them into a functional survival estimate. However, both studies provide evidence of significant variation in results between sites, similar to that observed in this study. In Benin, a 150-denier polyethylene LLIN was followed prospectively for 18 months in two sites in the north and two sites in the south of the country [9]. Attrition due to discarding and re-purposing (referred to as attrition due to "wear and tear" in this study) was 17% overall after 18 months, with a range between sites of 10 to 32%. Even larger was the between-site variation in the proportion of surviving nets in serviceable condition, which was 52% in the poorest-performing site and 82% in the best-performing one. Although not reported in the paper, the functional survival can be estimated from the provided information, based on the same formula applied in this study, as varying between 31.4 and 71.2% after 18 months, which, based on the hypothetical survival curves, would correspond to a median survival of only 1.0 to 2.2 years.
In Rwanda, LLINs were tested in three sites within the country; at each site two types of nets, one polyethylene and one polyester (no information on denier given), were sampled in neighbouring communities and prospectively followed for 24 months [10]. Only overall attrition rates were assessed, which varied between 16 and 36% after two years. While the proportion of nets still in serviceable condition was quite similar in two of the sites, at around 50% for both types of nets, the third site performed much more poorly, with only 10% in serviceable condition for the polyethylene LLIN and 37% for the polyester LLIN.
Two additional studies use at least a comparable methodology and both provide evidence that a median functional survival of four or more years is, indeed, possible. In rural western Kenya, a 150-denier polyethylene LLIN was studied in a cross-sectional survey five years after distribution but using actual distribution registers to verify the number of nets received [8]. The authors report an attrition rate (all causes) of 28% but this excluded households that were sampled but which no longer had any of their nets. The overall attrition after five years can, therefore, be estimated at 35 to 40%. At the same time, 61% of the surviving nets were still in serviceable condition which suggests a median survival in this environment of between 4.0 and 4.5 years. Finally, in Uganda a prospective study of a 75-denier polyester LLIN over 42 months [7] showed an all-cause attrition of 20% and nets in serviceable condition of 87% which would roughly correspond to a median functional survival of 4.5 years. In summary, these data suggest that the findings from Nigeria are plausible with respect to the between-site variation as well as the level of median functional survival of the LLIN.
The current WHO guidelines for field-testing net durability suggest prospective studies as the primary design, but also mention the possibility of retrospective studies [4]. The second objective of this study was to explore the feasibility of such a retrospective design, which depends on multiple, independent, cross-sectional surveys, and to assess the magnitude of the potential recall biases that could influence the estimation of net attrition. The largest discrepancies were found between the nets reported lost and the actual loss, defined as the difference between nets received and those seen during the survey (see Table 2), although with some fluctuation over time and between sites. The bias in recall of nets received was of lesser magnitude and declined systematically with increasing time since distribution, as would be expected. The overall magnitude of the recall bias, i.e., the difference between the crude and adjusted functional survival estimates, was substantial in two of the sites (see Figure 2 and Table 3), suggesting that without adjustment for recall bias, results would have been very misleading. Although it cannot be said with certainty that all possible biases have been captured by the adjustments, three arguments support the view that the adjusted results are realistic. First, the adjusted survival curves shown in Figure 2 are surprisingly well aligned with the hypothetical curves, showing that at each time point the projected median survival was very similar. Second, the major decline and between-site differences in functional survival were driven by net integrity, which was directly observed and not subject to recall bias. Third, the estimated all-cause attrition rates after adjustment (see Table 2) agree quite well with those reported in the literature. Bhatt et al. [22] report a 21% attrition rate from India after three years based on a prospective follow-up. Similarly, Fettene et al. [23] report a rate of 28% after two to three years in Ethiopia, and from a study by Hassan et al. [24] in Sudan, attrition of 19% after 18 months can be estimated. The previously mentioned prospective study from Rwanda [10] found between 16 and 36% after 24 months, and the one from Kenya [8] around 40% after five years. Other studies report higher rates relative to time in use: 20% attrition after only 12 months in Uganda [25] and Liberia [26], 43% after 18 months in Benin [9], and 45% after 24 months in Sudan [27], while another publication reports a low rate of only 20% after 42 months from a prospective study in Uganda [7]. However, while the results from the recall-adjusted retrospective estimates of durability from this study seem plausible, some uncertainty remains and a prospective approach would always be preferable.
The major cause of holes reported by survey respondents at all three sites was mechanical damage, affecting 57 to 75% of all damaged nets, followed by holes caused by rodents, which were frequent in two sites (Zamfara 51% and Nasarawa 56%) but much lower in the third (Cross River 18%). The third damage category, thermal damage from burns or sparks, was generally low, less than 10%, and similar at all sites. This ordering of the three major mechanisms of damage, primarily mechanical failures followed by animal and thermal damage, has been confirmed in the only laboratory-based textile analysis of over 500 damaged nets randomly sampled from seven sites in Africa and Asia, recently presented to a WHO consultation on net durability (Russell, pers comm). A sub-sample of year-two nets from all three sites of this study was part of that textile analysis of damage, and the Nigeria data from the laboratory mostly confirm the pattern between sites and the order of damage causes expressed as hole frequency by mechanism: mechanical damage accounted for 45% of all holes in Zamfara, 79% in Nasarawa and 66% in Cross River, while rodent damage was most common in Zamfara (51%) but lower in Cross River (25%) and Nasarawa (17%). Thermal damage was only between 3 and 5% of all holes (Wheldrake, pers comm). This suggests that determining damage causes by interview of net owners is not precise, but gives a reasonably accurate picture of the dominant mechanisms of damage in an area.
Three other studies have attempted to capture the causes of damage quantitatively from household interviews and found similar results. The previously mentioned study from Benin [9] reports 84% mechanical damage after 18 months, 11% thermal damage (but with significant variation between sites of 2 to 29%) and 2% from rodents. In a cross-sectional sample of nets from various sources and ages, Mutuku and colleagues report 63% mechanical damage (excluding "don't know" responses), 12% from animals and 12% from "fire" at the Kenyan coast near Mombasa [12]; and from a refugee camp in western Uganda, Spencer et al. [25] report, after 12 months, 46% of damage to be from rats, 24% from tears and 8% from burns. A more semi-quantitative assessment is reported by Picado et al. [28], who state that in India most damage was reportedly caused by animals (dogs, goats and rats), while in Nepal the most common cause was mechanical damage from nails and sticks. Other publications only mention the causes of damage reported by respondents without quantifying them, but generally agree that mechanical, rodent and thermal damage are the major causes [21,27,29-33].
The analysis of determinants of damage in this study revealed a number of factors that are associated with poverty (poor housing, crowding, absence of adequate sleeping places, poorest wealth quintile) as well as behavioural aspects such as letting the net hang loose during the day, not storing it properly when not in use (rodents), having food or crops stored in the same room (rodents), having young children with access to the bedroom, and the general attitude of the household towards net care and repair. Particularly for the Nasarawa site, these mechanisms were confirmed by qualitative research, where members of focus group discussions mentioned "children, rodent, everyday handling that is not gentle and characteristic of the sleeping place" as the main causes of damage [16]. Similar findings are also reported from Senegal [15], and the associations of poor housing [34] and storage of food [35] with increased rodent presence are also well documented. While the aspect of care and repair attitudes has been discussed in more detail in the context of the BCC impact study results [18], it is noteworthy here that attitude scores increased at all three sites even though no explicit BCC activities had been undertaken in Zamfara and Cross River. However, some level of exposure to net-related BCC messages is likely to have occurred in these states as well, as they are part of the USAID-funded Malaria Action Plan for States (MAPS). This, in combination with the higher level of net culture previously described for the north and southeast [36], might well explain the observed improvement in care and repair attitudes over time. Overall, the differences in living conditions and attitudes between the sites (see Table 7) explain the significant variation in estimated net survival between the sites, which would most likely have been even bigger had there not been an intensive BCC campaign in Nasarawa.
In the past, the discussion on improving net care and repair has focused largely on calls for repair of nets [37,38], based on generally low observed rates of repair, with repairs made to less than 20% of damaged nets [12,22,37-40], although in some cases rates between 30 and 64% have been found [41-44]. In addition, there is some evidence that repair behaviour can be induced by BCC [45]. This latter observation is confirmed by the present study, where rates of repair increased from around 10% one year after net distribution to between 21 and 38% after three years, driven by care and repair attitude and the level of damage on the net. However, this study also, for the first time, looked at the impact of repairs on the holed surface area and found no detectable difference. Although the use of the pHI can only be considered a rough approximation of damaged area due to its underlying assumptions and potential measurement errors, this does suggest that a strategy focusing on prevention of holes rather than attempts to fix damage may be more promising.
Differences of more than one year in estimated median survival of a 100-denier polyester LLIN between three areas of Nigeria were driven by living conditions and household behaviour and attitudes, providing evidence that where and how an LLIN is used is at least as important for durability as the textile design or structure of the net.
Recall bias in a retrospective durability study can be significant and while adjustments can be made, some uncertainty remains such that prospective studies on durability are preferable wherever possible.
Repair of damaged nets can be induced by improved attitude towards care and repair, but does not seem to measurably improve net condition. Focus should, therefore, be on preventive behaviour that protects the net from damage, such as folding or tying the net up every day, keeping children away, avoiding storing food or crops in the same room, and storing the net safely when not in use.
World Health Organization. World Malaria Report 2013. Geneva 2013 http://www.who.int/entity/malaria/publications/world_malaria_report_2013/wmr2013_no_profiles.pdf?ua=1 (accessed 31.10.2014).
WHO Malaria Policy Advisory Committee and Secretariat. Malaria policy advisory committee to the WHO: conclusions and recommendations of fifth biannual meeting (March 2014). Malar J. 2014;13:253.
WHO. Guidelines for procuring of public health pesticides, WHO/HTM/NTD/WHOPES/2012.4, Geneva 2012. http://whqlibdoc.who.int/publications/2012/9789241503426_eng.pdf (accessed 31.10.2014).
World Health Organization. Guidelines for Monitoring the Durability of Long-lasting Insecticidal Mosquito Nets Under Operational Conditions. Geneva: WHO/HTM/NTD/WHOPES/2011.5; 2011. http://whqlibdoc.who.int/publications/2011/9789241501705_eng.pdf?ua=1 (accessed 29.10.2014).
World Health Organization. WHO guidance note for estimating the longevity of long-lasting insecticidal nets in malaria control. Geneva: 2013. http://www.who.int/entity/malaria/publications/atoz/who_guidance_longevity_llins/en/index.html (accessed 31.10.2014).
World Health Organization. Estimating functional survival of long-lasting insecticidal nets from field data. Vector Control Technical Expert Group Report to MPAC September 2013. http://www.who.int/malaria/mpac/mpac_sep13_vcteg_llin_survival_report.pdf (accessed 31.10.2014).
Kilian A, Byamukama W, Pigeon O, Gimnig J, Atieli F, Koekemoer L, et al. Evidence for a useful life of more than three years for a polyester-based long-lasting insecticidal mosquito net in Western Uganda. Malar J. 2011;10:299.
Mejía P, Teklehaimanot HD, Tesfaye Y, Teklehaimanot A. Physical condition of Olyset nets after five years of utilization in rural western Kenya. Malar J. 2013;12:158.
Gnanguenon V, Azondekon R, Oke-Agbo F, Beach R, Akogbeto M. Durability assessment results suggest a serviceable life of two, rather than three, years for the current long-lasting insecticidal (mosquito) net (LLIN) intervention in Benin. Malar J. 2014;13:69.
Hakizimana E, Cyubahiro B, Rukundo A, Kabayiza A, Mutabazi A, Beach R, et al. Monitoring long-lasting insecticidal net (LLIN) durability to validate net serviceable life assumptions, in Rwanda. Malar J. 2014;13:344.
Van Roey K, Sovannaroth S, Sochanta T, Touch MS, Pigeon O, Sluydts V, et al. A phase II trial to evaluate the efficacy, fabric integrity and community acceptance of Netprotect using a recommended insecticidal net as positive control. Malar J. 2014;13:256.
Mutuku FM, Khambira M, Bisanzio D, Mungai P, Mwanzo I, Muchiri EM, et al. Physical condition and maintenance of mosquito bed nets in Kwale County, coastal Kenya. Malar J. 2013;12:46.
Azondekon R, Gnanguenon V, Oke-Agbo F, Houevoessa S, Green M, Akogbeto M. A tracking tool for long-lasting insecticidal (mosquito) net intervention following a 2011 national distribution in Benin. Parasit Vectors. 2014;7:6.
Loll DK, Berthe S, Faye SL, Wone I, Koenker H, Arnold B, et al. User-determined end of net life in Senegal: a qualitative assessment of decision-making related to the retirement of expired nets. Malar J. 2013;12:337.
Loll DK, Berthe S, Faye SL, Wone I, Arnold B, Koenker H, et al. "You need to take care of it like you take care of your soul": perceptions and behaviours related to mosquito net damage, care, and repair in Senegal. Malar J. 2014;13:322.
Hunter GC, Scandurra L, Acosta A, Koenker H, Obi E, Weber R. "We are supposed to take care of it": a qualitative examination of care and repair behaviour of long-lasting, insecticide-treated nets in Nasarawa State, Nigeria. Malar J. 2014;13:320.
Skovmand O, Bosselmann R. Strength of bed nets as function of denier, knitting pattern, texturizing and polymer. Malar J. 2011;10:87.
Koenker K, Kilian A, Hunter G, Acosta A, Scandurra L, Fagbemi F, et al. Impact of a behaviour change intervention on long-lasting insecticidal net care and repair behaviour and net condition in Nasarawa State, Nigeria. Malar J. 2015;14:18.
Ye Y, Patton E, Kilian A, Dovey S, Eckert E. Can universal insecticide-treated net campaigns achieve equity in coverage and use? The case of northern Nigeria. Malar J. 2012;11:32.
Networks Project. LLIN hole assessment; surveyor job aids: www.rollbackmalaria.org/files/files/partnership/wg/wg_itn/docs/ws4/Net_Hole_Assessment_Training_Surveyo_Job%20aid.pdf (accessed 29.10.2014).
Koenker H, Kilian A, Zegers de Beyl C, Onyefunafoa EO, Selby RA, Abeku T, et al. What happens to lost nets: a multi-country analysis of reasons for LLIN attrition using 14 household surveys in four countries. Malar J. 2014;13:464.
Bhatt RM, Sharma SN, Uragayala S, Dash AP, Kamaraju R. Effectiveness and durability of Interceptor long-lasting insecticidal nets in a malaria endemic area of central India. Malar J. 2012;11:189.
Fettene M, Balkew M, Gimblet C. Utilization, retention and bio-efficacy studies on PermaNet in selected villages in Buie and Fentalie districts of Ethiopia. Malar J. 2009;8:114.
Hassan SHE, Malik EM, Okoued SI, Eltayeb EM. Retention and efficacy of long-lasting insecticide-treated nets distributed in eastern Sudan: a two-step community-based study. Malar J. 2008;7:85.
Spencer S, Grant AD, Piola P, Tukpo K, Okia M, Garcia M, et al. Malaria in camps for internally-displaced persons in Uganda: evaluation of an insecticide-treated bednet distribution programme. Trans R Soc Trop Med Hyg. 2004;98:719–27.
Banek K, Kilian A, Allan R. Evaluation of Interceptor long-lasting insecticidal nets in eight communities in Liberia. Malar J. 2010;9:84.
Ritmeijer K, Davies C, van Zorge R, Wang SJ, Schorscher J, Dongu'du SI, et al. Evaluation of a mass distribution programme for fine-mesh impregnated bednets against visceral leishmaniasis in eastern Sudan. Trop Med Int Health. 2007;12:404–14.
Picado A, Singh SP, Vanlerberghe V, Uranw S, Ostyn B, Kaur H, et al. Residual activity and integrity of PermaNet 2.0 after 24 months of household use in a community randomized trial of long lasting insecticidal nets against visceral leishmaniasis in India and Nepal. Trans R Soc Trop Med Hyg. 2012;106:150–9.
Erlanger TE, Enayati AA, Hemingway J, Mshinda H, Tami A, Lengeler C. Field issues related to effectiveness of insecticide-treated nets in Tanzania. Med Vet Entomol. 2004;18:153–60.
Githinji S, Herbst S, Kistermann T, Noor AM. Mosquito nets in a rural area of Western Kenya: ownership, use and quality. Malar J. 2010;9:250.
Protopopoff N, Van Bortel W, Marcotty T, Van Herp M, Maes P, Baza D, et al. Spatial targeted vector control in the highlands of Burundi and its impact on malaria transmission. Malar J. 2007;6:158.
Loha E, Tefera K, Lindtjørn B. Freely distributed bed-net use among Chano Mille residents, south Ethiopia: a longitudinal study. Malar J. 2013;12:23.
Alaii JA, van den Borne HW, Kachur SP, Shelley K, Mwenesi H, Vulule JM, et al. Community reactions to the introduction of permethrin-treated bed nets for malaria control during a randomized controlled trial in western Kenya. Am J Trop Med Hyg. 2003;68 Suppl 4:128–36.
Bonner PC, Schmidt WP, Belmain SR, Oshin B, Baglole D, Borchert M. Poor housing quality increases risk of rodent infestation and Lassa fever in refugee camps in Sierra Leone. Am J Trop Med Hyg. 2007;77:169–75.
Eisen R, MacMillan K, Atiku LA, Mpanga JT, Zielinski-Gutierrez E, Graham CB, et al. Identification of risk factors of plague in the West Nile region of Uganda. Am J Trop Med Hyg. 2014;90:1047–58.
Kilian A, Koenker H, Baba E, Onyefunafoa EO, Selby RA, Lokko K, et al. Universal coverage with insecticide-treated nets – applying the revised indicators for ownership and use to the Nigeria 2010 malaria indicator survey data. Malar J. 2013;12:314.
Graves PM, Ngondi JM, Hwang J, Getachew A, Gebre T, Mosher AW, et al. Factors associated with mosquito net use by individuals in households owning nets in Ethiopia. Malar J. 2011;10:354.
Batisso E, Habte T, Tesfaye G, Getachew D, Tekalegne A, Kilian A, et al. A stitch in time: a cross-sectional survey looking at long lasting insecticide-treated bed net ownership, utilization and attrition in SNNPR, Ethiopia. Malar J. 2012;11:183.
Norris LC, Norris DE. Efficacy of long-lasting insecticidal nets in use in Macha, Zambia, against the local Anopheles arabiensis population. Malar J. 2011;10:254.
Wills AB, Smith SC, Anshebo GY, Graves PM, Endeshaw T, Shargie EB, et al. Physical durability of PermaNet 2.0 long-lasting insecticidal nets over three to 32 months of use in Ethiopia. Malar J. 2013;12:242.
Kachur SP, Phillips-Howard PA, Odhacha AM, Ruebush TK, Oloo AJ, Nahlen BL. Maintenance and sustained use of insecticide-treated bednets and curtains three years after a controlled trial in western Kenya. Trop Med Int Health. 1999;4:728–35.
Smith S, Joshi UB, Grabowski M, Selanikio J, Nobiya T, Aapore T. Evaluation of bednets after 38 months of household use in Northwestern Ghana. Am J Trop Med Hyg. 2007;77 suppl 6:243–8.
Shirayama Y, Phompida S, Kuroiwa C, Miyoshi M, Okumura J, Kobayashi J. Maintenance behavior and long-lasting insecticide-treated nets (LLITNs) previously introduced into Bourpar district, Khammouane province, Lao PDR. Public Health. 2007;121:122–9.
Tsuzuki A, Khamlome B, Kawada H, Eto H, Phompida S, Takagi M. The efficacy and physical condition of Olyset insecticide-treated nets after 5 years use in rural Lao PDR. Southeast Asian J Trop Med Public Health. 2011;42:268–73.
Panter-Brick C, Clarke SE, Lomas H, Pinder M, Lindsay SW. Culturally compelling strategies for behaviour change: a social ecology model and case study in malaria prevention. Soc Sci Med. 2006;62:2810–25.
We would like to thank the dedicated survey teams as well as the State and LGA Roll Back Malaria staff and especially the three state coordinators, Tuna Ibrahim (Zamfara), Grace Ozioma Okoye (Nasarawa) and Eti May Ann (Cross River). Special thanks also go to the superb data entry manager Efik Udofia. We appreciate all the support from the Malaria Consortium Office Nigeria, especially from Kolawole Maxwell. This study was made possible by the generous support of the American people through the United States Agency for International Development (USAID) under the President's Malaria Initiative (PMI), USAID/JHU Cooperative Agreement No. GHS-A-00-09-00014-00 for the NetWorks Project. The contents are the responsibility of the authors and do not necessarily reflect the views of USAID/PMI or the US Government.
Tropical Health LLP, Montagut, Spain
Albert Kilian
Malaria Consortium, London, UK
Johns Hopkins Bloomberg School of Public Health Center for Communication Programs, Baltimore, MD, USA
Hannah Koenker & Matthew Lynch
Malaria Consortium Nigeria Office, Abuja, Nigeria
Emmanuel Obi
Malaria Consortium Africa Office, Kampala, Uganda
Richmond A Selby
United States Agency for International Development, President's Malaria Initiative, Washington, DC, USA
Megan Fotheringham
Correspondence to Albert Kilian.
AK, HK, MF, and ML developed the concept of the study, AK, HK, EO, and RAS planned and implemented the survey, AK undertook the data analysis and all co-authors participated in data interpretation. Initial manuscript was drafted by AK with all co-authors contributing to final version. All authors read and approved the final manuscript.
Additional file 1: Households sampled and nets assessed at different locations and time points. Targeted and achieved sample for households and nets at each site and time point.
Additional file 2: Flow diagram of adjustment process for recall bias of nets received and lost. A step-by-step presentation of calculations of adjustment for recall biases.
Additional file 3: Hypothetical loss functions with defined median survival. Annually remaining nets in the hypothetical loss functions that show the median survival and can be used to re-create the graphs.
Additional file 4: House characteristics and socio-economic background by site. Details of household background characteristics comparing sites and using pooled data from all three surveys.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Kilian, A., Koenker, H., Obi, E. et al. Field durability of the same type of long-lasting insecticidal net varies between regions in Nigeria due to differences in household behaviour and living conditions. Malar J 14, 123 (2015). https://doi.org/10.1186/s12936-015-0640-4
| CommonCrawl |
Do electrons really perform instantaneous quantum leaps?
This is not a duplicate; none of the answers gives a clear answer, and most of the answers contradict one another.
There are so many questions about this and so many answers, but none of them says clearly whether the electron's change of orbitals as per QM has a time component or is measurable (takes time or not), whether it is instantaneous, whether it is limited by the speed of light or not, or even whether there is a jump at all.
I have read these questions:
Quantum jump of an electron
How do electrons jump orbitals?
where Kyle Oman says:
So the answer to how an electron "jumps" between orbitals is actually the same as how it moves around within a single orbital; it just "does". The difference is that to change orbitals, some property of the electron (one of the ones described by (n,l,m,s)) has to change. This is always accompanied by emission or absorption of a photon (even a spin flip involves a (very low energy) photon).
and where DarenW says:
A long time before the absorption, which for an atom is a few femtoseconds or so, this mix is 100% of the 2s state, and a few femtoseconds or so after the absorption, it's 100% the 3p state. Between, during the absorption process, it's a mix of many orbitals with wildly changing coefficients.
Does an electron move from one excitation state to another, or jump?
where annav says:
A probability density distribution can be a function of time, depending on the boundary conditions of the problem. There is no "instantaneous" physically, as everything is bounded by the velocity of light. It is the specific example that is missing in your question. If there is time involved in the measurement the probability density may have a time dependence.
and where akhmeteli says:
I would say an electron moves from one state to another over some time period, which is not less than the so called natural line width.
the type of movement in electron jump between levels?
where John Forkosh says:
Note that the electron is never measured in some intermediate-energy state. It's always measured either low-energy or high-energy, nothing in-between. But the probability of measuring low-or-high slowly and continuously varies from one to the other. So you can't say there's some particular time at which a "jump" occurs. There is no "jump".
How fast does an electron jump between orbitals?
If you look at the spectral lines emitted by electrons transiting from one energy level to another, you will see that the lines have a width. This width in principle should be intrinsic and calculable if all the possible potentials that would influence it can be included in the solution of the quantum mechanical state. Experimentally the energy width can be transformed to a time interval using the Heisenberg uncertainty relation $\Delta E\,\Delta t > h/2\pi$, so an order of magnitude for the time taken for the transition can be estimated.
H atom's excited state lasts on average $10^{-8}$ secs, is there a time gap (of max 2*$10^{-8}$ secs) betwn. two consec. photon absorpt.-emiss. pairs?
So it is very confusing, because some of them are saying it is instantaneous, and that there is no jump at all. Some are saying it is calculable. Some say it has to do with probabilities, and the electron is in a mixed state (superposition), but when measured it is in a single stable state. Some say it has to do with the speed of light, since no information can travel faster, so electrons cannot change orbitals faster than c.
Now I would like to clarify this.
Do electrons change orbitals as per QM instantaneously?
Is this change limited by the speed of light or not?
quantum-mechanics electrons quantum-electrodynamics orbitals
Árpád Szendrei
$\begingroup$ You seem to ask a lot of these questions, and the answer is almost always some version of "it depends on how you're defining the word in question." $\endgroup$ – probably_someone Jun 28 '19 at 8:43
$\begingroup$ We should not debate what this or that person posted here. Did you investigate any scientific literature on this? $\endgroup$ – my2cts Jun 28 '19 at 12:49
$\begingroup$ @my2cts en.wikipedia.org/wiki/Atomic_electron_transition "Atomic electron transition is a change of an electron from one energy level to another within an atom[1] or artificial atom.[2] It appears discontinuous as the electron "jumps" from one energy level to another, typically in a few nanoseconds or less. It is also known as an electronic (de-)excitation or atomic transition or quantum jump. The damping time constant (which ranges from nanoseconds to a few seconds) relates to the natural, pressure, and field broadening of spectral lines. $\endgroup$ – Árpád Szendrei Jun 28 '19 at 12:54
$\begingroup$ @my2cts "However, in 2019 it was demonstrated that the evolution of each completed jump is continuous, coherent and deterministic." nature.com/articles/s41586-019-1287-z $\endgroup$ – Árpád Szendrei Jun 28 '19 at 12:54
$\begingroup$ From what I can tell, none of the quoted answers are actually saying that the jump is discontinuous. If you want to reconcile them all in your head, the only ingredient you really need is to remember that making an energy measurement takes time. If you accept that, I don't think any of your quotes contradict each other, and I don't think any of them make a claim about instantaneity. $\endgroup$ – Jahan Claes Jun 28 '19 at 19:51
In every reasonable interpretation of this question, the answer is no. But there are historical and sociological reasons why a lot of people say the answer is yes.
Consider an electron in a hydrogen atom which falls from the $2p$ state to the $1s$ state. The quantum state of the electron over time will be (assuming one can just trace out the environment without issue) $$|\psi(t) \rangle = c_1(t) |2p \rangle + c_2(t) | 1s \rangle.$$ Over time, $c_1(t)$ smoothly decreases from one to zero, while $c_2(t)$ smoothly increases from zero to one. So everything happens continuously, and there are no jumps. (Meanwhile, the expected number of photons in the electromagnetic field also smoothly increases from zero to one, via continuous superpositions of zero-photon and one-photon states.)
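For concreteness, in the standard Wigner-Weisskopf treatment of spontaneous emission (a textbook approximation I'm adding for illustration, not something specific to this system), the populations evolve as $$|c_1(t)|^2 = e^{-\Gamma t}, \qquad |c_2(t)|^2 \approx 1 - e^{-\Gamma t},$$ where $\Gamma = 1/\tau$ is the decay rate set by the excited-state lifetime $\tau$ (nanoseconds for hydrogen $2p$). The transition therefore unfolds smoothly over a finite, calculable timescale.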
The reason some people might call this an instantaneous jump goes back to the very origins of quantum mechanics. In these archaic times, ancient physicists thought of the $|2 p \rangle$ and $|1 s \rangle$ states as classical orbits of different radii, rather than the atomic orbitals we know of today. If you take this naive view, then the electron really has to teleport from one radius to the other.
It should be emphasized that, even though people won't stop passing on this misinformation, this view is completely wrong. It has been known to be wrong since the advent of the Schrodinger equation almost $100$ years ago. The wavefunction $\psi(\mathbf{r}, t)$ evolves perfectly continuously in time during this process, and there is no point when one can say a jump has "instantly" occurred.
One reason one might think that jumps occur even while systems aren't being measured is that, if you have an experimental apparatus that can only answer the question "is the state $|2p \rangle$ or $|1s \rangle$", then you can obviously only get one or the other. But this doesn't mean that the system must teleport from one to the other, any more than only saying yes or no to a kid constantly asking "are we there yet?" means your car teleports.
Another, less defensible reason, is that people are just passing it on because it's a well-known example of "quantum spookiness" and a totem of how unintuitive quantum mechanics is. Which it would be, if it were actually true. I think needlessly mysterious explanations like this hurt the public understanding of quantum mechanics more than they help.
In the context of nonrelativistic quantum mechanics, nothing is limited by the speed of light because the theory doesn't know about relativity. It's easy to take the Schrodinger equation and set up a solution with a particle moving faster than light. However, the results will not be trustworthy.
Within nonrelativistic quantum mechanics, there's nothing that prevents $c_1(t)$ from going from one to zero arbitrarily fast. In practice, this will be hard to realize because of the energy-time uncertainty principle: if you would like to force the system to settle into the $|1 s \rangle$ state within time $\Delta t$, the overall energy has an uncertainty $\hbar/\Delta t$, which becomes large. I don't think speed-of-light limitations are relevant for common atomic emission processes.
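As a rough order-of-magnitude check (using the textbook hydrogen $2p$ lifetime $\tau \approx 1.6 \times 10^{-9}$ s, a number I'm supplying for illustration), the natural linewidth is $$\Delta E \sim \frac{\hbar}{\tau} \approx \frac{6.6 \times 10^{-16}\ \text{eV s}}{1.6 \times 10^{-9}\ \text{s}} \approx 4 \times 10^{-7}\ \text{eV},$$ which is utterly negligible compared to the $10.2$ eV transition energy. This is consistent with a transition that takes nanoseconds, with no need to invoke anything superluminal.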
knzhou
$\begingroup$ Upvote for "are we there yet?" (and for the rest, too). $\endgroup$ – Peter - Reinstate Monica Jun 28 '19 at 8:46
$\begingroup$ @Maxter The expected energy of the atom is continuously changing, that's true. Meanwhile, the state of the electromagnetic field goes from $|0 \text{ photons} \rangle$ to $|1 \text{ photon} \rangle$ via continuous superpositions. So the expected energy of the electromagnetic field also goes up continuously, though the states we are superposing each have a whole number of photons of discrete energy. $\endgroup$ – knzhou Jun 28 '19 at 15:30
$\begingroup$ Okay, but: 1. changing amplitude between two orthogonal physical states is really not at all like physically moving along a continuum of states which have a natural norm, as in your road trip analogy. Just at the level of pedagogy, I think that it leads to more confusion when you imply that $|x_1\rangle + |x_2\rangle$ is the same as $|(x_1+x_2)/2\rangle$ (Maxter's comments are an example). $\endgroup$ – Rococo Jun 28 '19 at 15:34
$\begingroup$ @Rococo 1. I thought I was implying the exact opposite: the only reason one would have a problem is if one didn't apply the idea of superposition + unitary evolution consistently elsewhere, i.e. to both the atom and the field. 2. Sorry, I was unclear here, and just edited a bit to address this. I'm not trying to say anything about how measurement works, I'm saying that the way we measure can naively lead to the (incorrect) conclusion that unitary evolution in between measurements also contains jumps. $\endgroup$ – knzhou Jun 28 '19 at 15:44
$\begingroup$ @Rococo I think you're aiming at a level of precision above the context of the OP's question: to 99.99% of people who will ever hear the phrase, "quantum leaps" refers to the wrong pictures I'm arguing against, while to the 0.01% that do study open quantum systems, it will be clear that what I'm saying isn't in contradiction to what they're learning! So that's why I chose to simplify this way. But it would also be valuable if you were to write an answer from the more advanced perspective. $\endgroup$ – knzhou Jul 1 '19 at 23:28
No. Instantaneous state transfer violates causality, which is a premise of all rational deterministic theories in natural philosophy. Like two magnets clicking together once they are in close proximity, the state transfer can occur very quickly relative to our perception and so can be considered "approximately" instantaneous, but this approximation only applies to systems that do not take time periods of this finer granularity into account. The term "instant" is often hyperbole, as it depends on your measurement interval; all that it conveys is that the event occurs within a lapse of time too small to be measured using the present apparatus.
I don't see why the speed of the transfer would be limited by the perceived speed of light.
pygosceles
$\begingroup$ Don't superluminal speeds lead to causality violation? $\endgroup$ – Peter Shor Jun 29 '19 at 20:03
$\begingroup$ Clearly not. Causality is more general and fundamental than any supposed limitation on the speed of matter. Deterministic models can exist that do not include the assumption of such a limit. $\endgroup$ – pygosceles Jul 2 '19 at 17:44
$\begingroup$ Why does instantaneous state transfer violate causality when superluminal state transfer doesn't. Especially since, for any superluminal state transfer, there's a reference frame in which it is instantaneous. $\endgroup$ – Peter Shor Jul 2 '19 at 18:01
$\begingroup$ My argument does not depend on instantaneous state transfer, it refutes it. Unless I am mistaken, the premise you are using is based on a flavor of relativity that defies definition of simultaneity and hence cannot opine on the subject of instantaneousness. The behavior of superluminal matter would be undefined in that framework since manifestly a different set of laws would have to be applied to it. $\endgroup$ – pygosceles Jul 2 '19 at 18:49
(An edit for all you editors out there: I know the 'How to edit' says 'fix grammatical and spelling errors'; but before you start correcting '-sing' to '-zing', please check with a dictionary whether this is in fact British English spelling. We ain't all from the States;-)
This is a good question, certainly in the sense that it asks something we are not yet equipped to answer with much certainty; thus it provokes us to think harder. I can't give you a better answer than what knzhou has already done.
However, I think it bears repeating, that QM is very hard to understand, not least because it tries to explain observations that are made by, in a sense, throwing a lot of particles rather forcefully at something very small, from very far away, and then seeing what happens. We have no way of observing an electron moving around a nucleus, even if it does; our methods of observation force us to think in purely statistical terms about what really amounts to 'steady states': electrons smeared out over an orbital.
QM does a marvellous job, without a doubt, but I think it is reasonable to ask whether this is because it offers genuine insight into what individual particles do or are; or whether this is instead due to the use of clever statistics. In analogy, think of how we can't predict what any individual person will do over the course of a day, but we can make very good predictions of what a population is likely to do.
Edit on 2 July 2019
The whole subject of quantum mechanics continues to be an area of contention, which to me is a sign of good health; science is at its heart about scrutinising and challenging theory. That, however, also means that we can never pronounce with absolute certainty, that we know the truth - it lies in the logical nature of the empirical method: experiments, however sophisticated, can, even in the ideal situation of perfect measurement, only ever disprove a theoretical prediction with absolute certainty. "Past Performance Is No Guarantee of Future Results" is even more true in science than in the world of investment.
So, about downvoting; I don't mind it, but please leave a comment to explain why, don't be an anonymous coward. I think those of us who take the time and make the effort to answer the sometimes very difficult questions that people ask, deserve better, for one thing. And of course, if you have an insight, why not share it?
@TCooper: I fully agree with your sentiment - people who are interested in science, are attracted exactly to the unanswered questions; they are curious and feel excited about the things that are yet to be discovered.
"thermomagnetic condensed boson": kzhou's answer is very much the orthodox, correct QM answer, but there is a lot of uncertainty about why it makes logical sense to talk about wavefunctions in the first place. It is in fact important to understand this part, not least because on the one hand, we know that QM and General Relativity aren't compatible, and on the other hand, wavefunctions as simply complex valued functions do not make good sense when space is not flat; they will at least have to be sections of the complex bundle over the space-time manifold.
j4nd3r53n
$\begingroup$ I'd like to know why this answer was down voted? It seems like it's the only answer acknowledging our (human populations current scientists on the frontier of particle physics, not me) knowledge of this really isn't absolute or definitive. Would someone who down voted explain why? I'm a layman without a doubt, just love reading about these things from time to time - but this answer did make a lot of sense to me..? $\endgroup$ – TCooper Jun 29 '19 at 0:33
$\begingroup$ @TCooper Maybe because it isn't logical. It says we are not yet able to answer with much certainty and at the same time that he cannot give a better answer than kzhou's. But kzhou's gave a very concise answer, a big "NO". That goes against the first sentence of this answer. Furthermore this answer isn't actually an answer to the asked question, but just a pack of philosophical words with a faulty logic. $\endgroup$ – thermomagnetic condensed boson Jun 29 '19 at 12:09
$\begingroup$ Is there hard evidence, you're certain will never be disproved, that shows knzhou's answer is 100% correct? This post acknowledges that knzhou's is the current the best answer, but highlights the limitations in the methods used to arrive at that conclusion. It leaves open that our current best understanding will more likely than not seem ancient like the ideas knzhou mocks in his answer at some point in the future. What if in the future the time it takes an electron in a hydrogen atom to fall from the 2p state to the 1s state will be the contant used to define one instant? $\endgroup$ – TCooper Jul 1 '19 at 22:27
$\begingroup$ I apologise for not responding for a while - I have been away. Please see my edit - my comment is going to be too long for the space allowed here. $\endgroup$ – j4nd3r53n Jul 2 '19 at 7:41
$\begingroup$ -1: "Throwing a lot of particles at something from very far away, and seeing what happens." This is not quantum mechanics, this is high energy physics (which does indeed use quantum mechanics, but isn't all of quantum mechanics). One can do very-well-controlled AMO—atomic, molecular, and optical physics—to study quantum mechanics without throwing very small pieces of matter at each other at enormous speeds. $\endgroup$ – Peter Shor Jul 20 '19 at 14:46
Does the new finding on "reversing a quantum jump mid-flight" rule out any interpretations of QM?
Can you please show me a final atomic model which demonstrates movement of electrons inside it?
Does a relaxing electron really accelerate?
Do electrons actually reside in orbitals?
Quantum numbers and radial probability of the electrons
Orbitals and electron jumping
Virtual photon exchange instantaneously | CommonCrawl |
Is molecular evolution faster in the tropics?
Matthew G. Orton (ORCID: orcid.org/0000-0002-6551-2512)1,2,
Jacqueline A. May1,
Winfield Ly2,
David J. Lee2 &
Sarah J. Adamowicz1
Heredity volume 122, pages 513–524 (2019)
The evolutionary speed hypothesis (ESH) suggests that molecular evolutionary rates are higher among species inhabiting warmer environments. Previously, the ESH has been investigated using small numbers of latitudinally-separated sister lineages; in animals, these studies typically focused on subsets of Chordata and yielded mixed support for the ESH. This study analyzed public DNA barcode sequences from the cytochrome c oxidase subunit I (COI) gene for six of the largest animal phyla (Arthropoda, Chordata, Mollusca, Annelida, Echinodermata, and Cnidaria) and paired latitudinally-separated taxa together informatically. Of 8037 lineage pairs, just over half (51.6%) displayed a higher molecular rate in the lineage inhabiting latitudes closer to the equator, while the remainder (48.4%) displayed a higher rate in the higher-latitude lineage. To date, this study represents the most comprehensive analysis of latitude-related molecular rate differences across animals. While a statistically-significant pattern was detected from our large sample size, our findings suggest that the ESH may not serve as a strong universal mechanism underlying the latitudinal diversity gradient and that COI molecular clocks may generally be applied across latitudes. This study also highlights the merits of using automation to analyze large DNA barcode datasets.
The latitudinal diversity gradient is one of the most striking and general features of biodiversity (Hillebrand 2004). While numerous ecological and evolutionary mechanisms have been proposed to explain this pattern (reviews by Mittelbach et al. 2007; Dowle et al. 2013; Gillman and Wright 2014; Fine 2015; Schluter 2016), the latitudinal diversity gradient remains an active and important area of enquiry in biodiversity science. According to the "evolutionary speed" hypothesis (ESH), the rate of evolution is higher in the tropics, likely due to shorter generation times, higher mutation rates, and/or a faster rate of selection (Rohde 1992). The rate of molecular evolution has been correlated with the rate of net diversification across a variety of taxonomic groups (Barraclough and Savolainen 2001; Davies et al. 2004; Lanfear et al. 2010; Bromham et al. 2015). Therefore, evolutionary speed could provide a general mechanism underlying the latitudinal diversity gradient across many taxonomic groups.
Despite the broad appeal and apparent explanatory power of this mechanism, there remains considerable uncertainty about whether rates of molecular evolution tend to be higher in the tropics. Findings have varied dramatically across different taxonomic groups and across studies. Significantly higher rates of molecular evolution have been detected at lower latitudes for diverse taxa, including mammals (Gillman et al. 2009), birds (Gillman et al. 2012), amphibians (Wright et al. 2010), aquatic turtles (Lourenço et al. 2013), marine fish (Wright et al. 2011), angiosperms (Davies et al. 2004; Wright et al. 2006; Gillman et al. 2010), and Foraminifera (Allen et al. 2006) (differences in elevation or water depth were considered together with latitude in some studies). By contrast, latitude was not significantly associated with branch lengths in lizards and snakes (Rolland et al. 2016), terrestrial turtles (Lourenço et al. 2013), water beetles (Fujisawa et al. 2015), or birds (Bromham and Cardillo 2003).
The wide variation in effect sizes among these studies may be partially explained by real differences in the strength of the latitude/rates correlation among taxa. In particular, ectotherms are expected to display a stronger trend than endotherms on theoretical grounds (Allen et al. 2006), as they experience variation in body temperature with environmental temperature; additional taxon-specific biological traits, environmental factors, and evolutionary processes may also contribute to the different findings. However, a large component of the variability may also be attributed to methodological differences. Diverse ecological, geographical, and phylogenetic inclusion criteria have been applied; some studies used only very closely-related sisters, for which relative rate estimation can be unreliable (Welch and Waxman 2008), while others included more phylogenetically-distant lineages, which may differ in many biological traits and habitat features (e.g. discussed in Wright et al. (2009)). Sample size has also varied considerably, with most studies analyzing a modest sample size (e.g. dozens of pairs), which may introduce noise into estimates of effect sizes. Additionally, sister-pair, whole-tree, and non-phylogenetic methodologies have all been employed, together with different genetic regions and a wide variety of methods for estimating branch lengths and branch length ratios between sister lineages. Methodological choices have been the topic of vigorous discussion (Weir and Schluter 2011; Gillman et al. 2011), and the conclusions drawn about variability in molecular evolutionary rates across latitudes influence the interpretation of general findings in macroevolution and macroecology (Weir and Schluter 2007, 2011; Gillman et al. 2009, 2011).
In this study, we use a large dataset of mitochondrial cytochrome c oxidase subunit I (COI) DNA sequences to test whether rates of molecular evolution are generally higher at lower latitudes in animals. By analyzing a dataset including eight thousand pairs of latitudinally-separated lineages spanning six animal phyla, and by using consistent analysis methods across taxa, we present the most comprehensive test to date of the generality of the molecular evolutionary speed hypothesis.
Filtering and aligning DNA sequence data sets
From Jan. 23-Mar. 17, 2017, we used the API of the Barcode of Life Data Systems (BOLD; Ratnasingham and Hebert 2007) to download all public specimen records for six animal phyla (SI: Parsing of BOLD datasets and Table 1). The records were next filtered to retain those with a Barcode Index Number (BIN; Ratnasingham and Hebert 2013) identifier, latitude (lat) and longitude (long), and a COI-5P barcode sequence. Sequences with internal gap or N content exceeding 1% of the total sequence length were eliminated. To facilitate alignment, sequences less than 640 base pairs (bp) and >1000 bp in length, not including gap characters, were also excluded. Sequences considered unlikely to be biologically relevant were removed from the analysis (SI: Sequence Removal Criteria). A single representative sequence ("centroid") was selected from each BIN for further analysis. For each BIN, a DNA multiple sequence alignment was first performed. We performed all alignments using the muscle algorithm (Edgar 2004) from the R package muscle version 3.18.0 (available from http://bioconductor.org/packages/muscle/); details on alignment settings are available in SI: Alignment Settings. A pairwise distance matrix was then generated for each BIN using the TN93 (Tamura and Nei 1993) model of nucleotide substitution implemented in the R package Ape version 4.1 (Paradis et al. 2004). Details on choice of nucleotide substitution model are available in SI: Choice of Nucleotide Substitution Model. The centroid was defined as the sequence displaying the smallest average pairwise distance to all other sequences in the BIN.
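To illustrate the centroid step, the following R sketch selects, for a single BIN, the sequence with the smallest mean pairwise TN93 distance to all others (object names such as binSeqs are hypothetical; the authors' full pipeline is linked below under "R code, platform, versioning and datasets"):

# Minimal sketch of centroid selection for one BIN, assuming binSeqs is a
# Biostrings::DNAStringSet holding that BIN's COI sequences.
library(muscle)   # muscle() multiple sequence alignment
library(ape)      # as.DNAbin() and dist.dna()
alignedBin <- muscle::muscle(binSeqs)             # align within the BIN
binDNA <- ape::as.DNAbin(alignedBin)              # convert alignment to DNAbin
distMat <- ape::dist.dna(binDNA, model = "TN93", pairwise.deletion = TRUE)
meanDist <- rowMeans(as.matrix(distMat))          # mean distance per sequence
centroid <- names(which.min(meanDist))            # identifier of the centroid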
Table 1 Summary of signed branch length ratios between pairs of Barcode Index Numbers (BINs) inhabiting lower vs. higher latitudes
The five smaller phyla and the majority of Arthropoda were analyzed at the class level, using the taxonomic hierarchy on BOLD. Analyzing most datasets at higher taxonomic levels permitted the inclusion of records lacking lower-level taxonomy (e.g. marine larvae). However, Insecta was analyzed at a lower taxonomic level due to computational limitations related to estimating genetic distances using a multiple sequence alignment for each taxon. Insecta was subdivided into Coleoptera, Diptera, Hymenoptera, Lepidoptera, and the remainder of Insecta. Due to their size (>20 K BINs), Hymenoptera, Diptera, and Lepidoptera were further subdivided into the separate geographical regions of North America, South America, Eurasia + Africa, and Australasia (SI: Geographic Divisions) for preliminary analysis and then recombined for final analysis.
For each taxonomic group, a DNA multiple sequence alignment and pairwise distance matrix were generated using the centroid sequences. Sequences exhibiting >0.15 pairwise distance to all others within their dataset were removed from analysis, as our next step involved analyzing closely-related BINs only; this step also resulted in the removal of contaminants involving phylogenetically distant taxa. For each of the taxonomic groups, we then selected a reference sequence (SI: Criteria for Reference Sequence Selection), which was trimmed to a standard length of 620 bp, for inclusion while generating a final alignment. Aligned sequences were then trimmed by the start and end positions of the reference sequence to standardize sequence length for further analysis.
Pairing related lineages that differ in latitude
Using the final trimmed alignment for each dataset, preliminary pairings of BINs were first established by using pairwise distances. We do not assume that a mitochondrial gene represents the consensus species tree for our study taxa; rather, using a gene-specific approach to generate pairs for gene-specific molecular rates analysis overcomes problems related to incomplete lineage sorting, which can result in a bias towards estimating higher rates in more diverse taxa (Mendes and Hahn 2016). BINs exhibiting between 0.02 and 0.15 genetic divergence were considered candidate pairs. A minimum of 0.02 was set to avoid very small ingroup distances, which can yield extreme or unreliably-estimated (Welch and Waxman 2008) rate ratios when there are minimal changes in one or both members since the last common ancestor. Additionally, by using this lower threshold, we expect that the majority of our pairs will consist of different biological and evolutionary species (Ratnasingham and Hebert 2013), thus avoiding mixing intraspecific and interspecific comparisons. Our selected upper threshold reflects patterns of COI sequence variability that have been studied across four of the animal phyla included in our study, which indicate signs of transitional saturation at ca. 0.17–0.18 divergence (Carr 2010; Luo et al. 2011, Loeza-Quintana 2017). After conducting the test recommended by Welch and Waxman (2008), we found that branch length differences did not scale with branch length over our chosen range of divergences (SI: Testing for Unreliably Estimated Rate Ratios).
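This pairing step can be expressed compactly in R (a simplified sketch with hypothetical object names, not the published code):

# Find candidate BIN pairs whose centroid-to-centroid TN93 distance falls
# within the 0.02-0.15 divergence window; centroidDist is a dist object.
dm <- as.matrix(centroidDist)
dm[upper.tri(dm, diag = TRUE)] <- NA              # keep each unordered pair once
idx <- which(dm >= 0.02 & dm <= 0.15, arr.ind = TRUE)
candidatePairs <- data.frame(binA = rownames(dm)[idx[, 1]],
                             binB = colnames(dm)[idx[, 2]],
                             ingroupDist = dm[idx])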
Candidate pairs passing the divergence criteria were retained if they additionally exhibited a difference of at least 20 degrees in median absolute latitude. Latitudes were converted from decimal degree format to absolute values prior to taking the median for each BIN. Original co-ordinates were used for calculating the median for mapping. Pairs in which the latitudinal range of either BIN overlapped by more than 25% with the latitudinal range of the other BIN were omitted. If a BIN occurred in multiple pairings that met the latitude criterion, we retained the pairing with the smallest ingroup distance (see Fig. 1 in Gillman et al. 2009, for justification). For those BINs bearing a species-level identification, median latitude values from BOLD were validated against latitude data obtained from the Global Biodiversity Information Facility (https://www.gbif.org) (further details in SI: GBIF Validation and Table S2). The latitudinal information obtained from the BOLD records corresponds well with the GBIF data for most groups and particularly so for well-studied taxa. It should be noted, however, that the groups that exhibited lower correlation values between the two datasets (e.g. Annelida, Cnidaria, and Collembola) are understudied and often present more challenges in terms of identification and cryptic diversity; we therefore argue that the use of Molecular Operational Taxonomic Units such as BINs, as opposed to traditional Linnaean species labels, will likely improve the accuracy of the latitudinal information for these lesser-known groups.
Sister lineages discovered for (a) Echinodermata (n = 56), (b) Perciformes (n = 54), and (c) Hymenoptera within North America (n = 517) separated by a minimum of 20 degrees in median absolute latitude and 0.02–0.15 sequence divergence, presented as examples of the total pairings analyzed in this study (n = 8037; see Fig. S7 for plots for all taxa). The point for each Barcode Index Number (BIN) included in a pair is plotted according to its median latitude and median longitude on Kavrayskiy VII map projections using the data visualization software plotly (https://plot.ly/). Pairings of lineages were color and symbol coded by the difference in median absolute latitude
For comparing relative rates of molecular evolution, it is advantageous to use a close relative for the outgroup, but it must be a valid outgroup that does not fall within the ingroup (Robinson et al. 1998). Therefore, each latitudinally-separated BIN pair was next assigned an outgroup, which was as closely related as possible yet also at least 1.3X more distant from each ingroup BIN than the ingroup distance. If an outgroup BIN was not available in a dataset that met the 1.3X divergence criterion for both lineages, then the pairing was omitted. Analysis was repeated using a 1.5X threshold for selected taxa; results were similar (SI: Choice of Outgroup Threshold).
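The outgroup criterion can be sketched as follows (again a simplified illustration with hypothetical names, not the published implementation):

# Assign the closest valid outgroup to one candidate pair; a valid outgroup
# must lie at least 1.3x the ingroup distance from BOTH pair members.
findOutgroup <- function(dm, binA, binB, threshold = 1.3) {
  ingroupDist <- dm[binA, binB]
  others <- setdiff(rownames(dm), c(binA, binB))
  valid <- others[dm[binA, others] >= threshold * ingroupDist &
                  dm[binB, others] >= threshold * ingroupDist]
  if (length(valid) == 0) return(NA)              # no usable outgroup: omit pair
  valid[which.min(dm[binA, valid] + dm[binB, valid])]  # closest qualifying outgroup
}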
Signed branch length ratios for latitudinally-separated pairs
In addition to the ingroup distance, genetic distances to the outgroup were calculated for each member of a BIN pairing. In total, 53 pairings (0.65% of total pairings) were excluded where both members were equidistant from the outgroup. For each remaining pair, the three distances were used to estimate branch lengths for each ingroup member from their point of divergence, assuming an additive distance matrix for the three-taxon phylogeny. Branch lengths for each ingroup member (BL_a and BL_b) were calculated according to:
$$\begin{aligned} \mathrm{BL_a} &= \frac{\mathrm{IG_{ab}}}{2} + \frac{\mathrm{OG_a} - \mathrm{OG_b}}{2} \\ \mathrm{BL_b} &= \frac{\mathrm{IG_{ab}}}{2} + \frac{\mathrm{OG_b} - \mathrm{OG_a}}{2} \end{aligned}$$
where IG_ab represents the ingroup distance between ingroup members, OG_a represents the distance to the outgroup for ingroup member a, and OG_b represents the distance to the outgroup for ingroup member b. Distances for IG_ab, OG_a, and OG_b were calculated using the TN93 substitution model (Tamura and Nei 1993). The branch length ratio was then determined by dividing the larger of the two branch lengths (BL_a or BL_b) by the smaller. A positive or negative sign was then assigned to the branch length ratio for each pair based on direction, with positive signs used for pairs where the lower-latitude BIN exhibited a longer branch length. An alternative method for determining branch length ratios has been previously applied (e.g. Wright et al. 2011); thus, for comparison, we also calculated ratios by dividing the branch length of the lower-latitude lineage by that of the higher-latitude lineage.
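The branch length formulas and signing convention translate directly into R (example values are hypothetical):

# Additive three-taxon branch lengths and signed ratio for one pair,
# where member a is the lower-latitude BIN.
signedRatio <- function(ig_ab, og_a, og_b) {
  bl_a <- ig_ab / 2 + (og_a - og_b) / 2   # branch length of lower-latitude BIN
  bl_b <- ig_ab / 2 + (og_b - og_a) / 2   # branch length of higher-latitude BIN
  ratio <- max(bl_a, bl_b) / min(bl_a, bl_b)
  if (bl_a > bl_b) ratio else -ratio      # positive = faster at lower latitude
}
signedRatio(ig_ab = 0.08, og_a = 0.14, og_b = 0.12)  # returns approximately +1.67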
Phylogenetic pseudoreplication occurs when the same segments of branch length are included in multiple pairings, which would inflate the degrees of freedom for statistical testing. We investigated whether any ingroup sequence displayed a genetic distance to any sequence in any other ingroup pair in its dataset that was smaller than its own ingroup distance. In such instances, signed branch length ratios were averaged across the involved pairings to create a single data point prior to statistical testing (SI: Pseudoreplicate Determination and Averaging).
For each phylum and for selected large classes and orders, we tested whether there were significantly more than 50% positive signed branch length ratios using a binomial test. We additionally tested whether the median signed ratio differed from a null expectation of 0 using a Wilcoxon signed-rank test. Sequential Bonferroni correction (Holm 1979) was performed at the phylum level (Table 1).
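In R, these tests correspond to binom.test() and wilcox.test(), with p.adjust() providing Holm's sequential Bonferroni correction (the signed ratios below are hypothetical):

# Per-taxon significance tests on signed branch length ratios.
ratios <- c(1.4, -1.1, 2.0, -1.3, 1.6, 1.2)
binom.test(sum(ratios > 0), length(ratios), p = 0.5)  # more than 50% positive?
wilcox.test(ratios, mu = 0)                           # median different from 0?
# Holm correction across phylum-level p-values:
# adjusted <- p.adjust(phylumPvals, method = "holm")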
Linear regression analysis of phylogenetically independent contrasts
Phylogenetically Independent Contrasts (PICs) were calculated, standardized for branch length, and subjected to regression analysis in R. Both latitude and temperature were considered as predictor variables, with branch length contrasts (i.e. standardized differences) as the response. Temperature data were obtained from the open source climate modeling tool WorldClim version 2 (http://worldclim.org/version2; Fick and Hijmans 2017; more details on these data available in SI: Parsing of Average Annual Global Temperature Data; Fig. 2). A linear regression model fitted through the origin was generated for each test (SI: Linear Regression Analysis of PICs and Table S5). A multiple regression of the PICs, including both temperature and latitude as predictors of branch length, was also performed for the Arthropoda data set (Table S5).
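For two-taxon (sister-pair) comparisons, the PIC machinery reduces to standardized differences. The sketch below is only meant to show the through-origin regression, not the full published analysis; it assumes a data frame d with one row per pair, ordered consistently (member a as the lower-latitude lineage) so that the sign conventions match those described for Fig. 2:

# standardize differences by the square root of the summed branch lengths
d$pic_bl  <- (d$bl_a - d$bl_b) / sqrt(d$bl_a + d$bl_b)
d$pic_lat <- (abs(d$lat_b) - abs(d$lat_a)) / sqrt(d$bl_a + d$bl_b)
fit <- lm(pic_bl ~ pic_lat + 0, data = d)  # regression forced through the origin
summary(fit)$coefficients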
Linear regression analysis of standardized, phylogenetically independent contrasts (PICs) for all pairings of latitude-separated sister lineages belonging to Arthropoda (n = 7900), whereby each point represents one sister pair and the fitted regression line is forced through the origin. (a) PICs in median latitude vs. signed PICs in branch length (positive sign = larger branch length at lower latitude) (slope of regression = 5.36E-06). (b) PICs in median temperature vs. signed PICs in branch length (slope of regression = 6.73E-06). Each pairing of lineages is separated by a minimum of 20 degrees in median absolute latitude and 0.02–0.15 sequence divergence. Each pairing of lineages is color/symbol coded differently according to major insect order or gray/plus symbol for remaining orders within Arthropoda using the data visualization software plotly (https://plot.ly/)
Second and third codon position analysis
Linear regression analyses of PICs described above were performed for Arthropoda according to the 2nd and 3rd codon positions of the multiple DNA sequence alignments only. Divergences of pairings (TN93 substitution model) were recalculated according to the 2nd or 3rd codon position of the alignments, filtering out pairs with a divergence value of 0 at the second codon position; indeterminate (very high) divergence values between pairings were filtered out for the third codon position (SI: Codon-based analysis and Table S5).
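A brief sketch of the codon-position slicing (assuming aln is a character matrix of aligned sequences in reading frame from column 1; the ape package supplies the TN93 distances):

library(ape)
pos2 <- aln[, seq(2, ncol(aln), by = 3)]  # 2nd codon positions
pos3 <- aln[, seq(3, ncol(aln), by = 3)]  # 3rd codon positions

d2 <- dist.dna(as.DNAbin(pos2), model = "TN93")
d2[d2 == 0] <- NA         # drop pairs with zero 2nd-position divergence
d3 <- dist.dna(as.DNAbin(pos3), model = "TN93")
d3[!is.finite(d3)] <- NA  # drop indeterminate (saturated) 3rd-position values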
Whole tree-based analyses
Complementary tree-based analyses were also performed for selected taxonomic groups (Actinopterygii (ray-finned fishes), Aves (birds), and Papilionidae (swallowtail butterflies)) for comparison with the findings based upon the sister-pair approach. Maximum likelihood COI gene trees were constructed in RAxML (Stamatakis 2014) using a binary constraint tree from the literature, and the root-to-tip branch lengths were determined for each lineage. The method of phylogenetic generalized least squares (PGLS) (Grafen 1989) was employed, with the estimated branch lengths used as the response variable to test the effects of latitude on molecular evolutionary rate (for detail, see SI: Tree-based analyses).
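A minimal PGLS sketch in R (assuming tree is the RAxML gene tree as an ape phylo object and lat a per-tip latitude vector named by tip label; the published analysis also controlled for node counts, which is omitted here):

library(ape)
library(nlme)

# root-to-tip path length for each tip (tips occupy the first Ntip indices)
rtt <- node.depth.edgelength(tree)[seq_along(tree$tip.label)]
dat <- data.frame(rtt = rtt, lat = lat[tree$tip.label],
                  row.names = tree$tip.label)
fit <- gls(rtt ~ lat, data = dat,
           correlation = corBrownian(phy = tree, form = ~1))
summary(fit)$tTable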
R code, platform, versioning and datasets
Taxonomic groups were analyzed using the highly-commented R code publicly available through the following links (separate scripts for groups with fewer than, and with more than, 10 K BINs):
https://github.com/m-orton/Evolutionary-Rates-Analysis-Pipeline/blob/master/EvolutionaryComparisonPipelineSmallTaxa.R
https://github.com/m-orton/Evolutionary-Rates-Analysis-Pipeline/blob/master/EvolutionaryComparisonPipelineLargeTaxa.R
Taxonomic groups with fewer than 10 K BINs were analyzed on the Elastic Compute Cloud instance r4.xlarge offered by Amazon Web Services, while those with more than 10 K BINs were run on the Elastic Compute Cloud instance r4.8xlarge (https://aws.amazon.com/ec2/). All taxonomic groups used a community Amazon Machine Image (AMI) created by Louis Aslett with R version 3.3.1 (https://www.r-project.org/) and R Studio version 0.99.903 (https://www.rstudio.com/) running on Ubuntu version 16.04.2 (64-bit).
Considering all six animal phyla, there was only a weak trend of higher rates of COI evolution in lineages inhabiting latitudes closer to the tropics. Of 8037 pairs of lineages exhibiting a contrast of at least 20 degrees in median absolute latitude and separated by 0.02–0.15 sequence divergence (Tamura-Nei model), 4146 (51.6%) displayed a higher rate of molecular evolution in the lower-latitude member, and 3891 had a higher rate in the higher-latitude lineage (binomial test p-value = 0.0046; Table 1). The median signed branch length ratio (larger/smaller branch length, with positive signs for pairs with higher rates at lower latitude) was 1.001, indicating a slight bias towards higher rates at lower latitudes (Wilcoxon test p-value = 0.0029). An alternative method for calculating branch length ratios (lower-latitude/higher-latitude branch length) yielded a similar directional pattern, with a median value of 1.001 and a mean value of 1.046.
While the overall results were dominated by the large sample size of Arthropoda, patterns were generally similar across taxa, including both endotherms and ectotherms (Table 1, Table S3). However, a few groups displayed a stronger association between molecular rates and latitude. For example, Echinodermata and Chordata exhibited higher rates of molecular evolution at lower latitudes significantly more often than expected by chance, including when using the Wilcoxon test following correction for multiple tests (Table 1). In addition to using the Tamura-Nei model for calculating genetic distances, the general time-reversible model (GTR + I + G) was also used for all phyla excluding Arthropoda and yielded a similar trend favouring higher rates at lower latitudes for both Chordata and Echinodermata when using the Wilcoxon test (SI: Choice of Nucleotide Substitution Model; Table S4).
Patterns were also similar to the overall trends when confining the analysis to those pairs in which one member inhabits the tropical zone (−23.437 to 23.437 latitude), with 3449 pairs (52.2%) exhibiting a higher rate at the lower latitude and 3152 (47.8%) with a higher rate at the higher latitude. In addition, trends were similar upon analyzing the subset of the data with the largest latitudinal differences. Of the 2304 pairs separated by ≥30° in median absolute latitude, 1193 (51.7%) of the pairs possessed positive signed ratios.
The results of the PICs for Arthropoda (Fig. 2 and Table S5) yielded little to no directional trend for either median latitude or median temperature, with the slope and R-squared values for each regression line near 0, further supporting the results of the binomial and Wilcoxon tests in Table 1. When restricting PICs to either the 2nd or 3rd codon position of the DNA multiple alignments, this lack of directional trend remained largely unchanged (Table S5). Similarly, multiple regression including both temperature and latitude as predictors did not yield significant results (Table S6).
Finally, the results of the complementary tree-based analyses of selected taxa largely agree with the results of the sister pair pipeline (Table S7). Latitude appears to have either a non-significant effect (e.g. Aves, Papilionidae) or a slightly negative effect on molecular evolutionary rate (e.g. Actinopterygii), thus indicating a weak trend of higher rates at tropical latitudes in fish, in agreement with the sister-pair analysis for that taxon.
Is molecular evolution faster in the tropics? Yes, but not by much
In this study, we set out to answer this question for animals, using a specific protein-coding region of the mitochondrial genome that has been widely used for taxonomic identification (Hebert et al. 2003), biodiversity research (e.g. Stahlhut et al. 2013), and molecular clock calibrations (Knowlton and Weigt 1998; Lessios 2008; Loeza-Quintana and Adamowicz 2018). In addition to using consistent analysis methods across six animal phyla, we used public DNA barcode data to generate the largest dataset to date of phylogenetically independent pairs of lineages differing in latitude (>8000 pairs), a ca. 50 to 150-fold larger sample size than in prior studies (Bromham and Cardillo 2003; Allen et al. 2006; Wright et al. 2011, 2006, 2010; Gillman et al. 2009, 2010, 2012; Lourenço et al. 2013; Fujisawa et al. 2015; Rolland et al. 2016). Despite several prior studies reporting a strong relationship between latitude and rates of molecular evolution, and others reporting no relationship, we found only a weak trend here. Generally, rates of COI evolution in animals were unrelated or only weakly related to latitude.
Nevertheless, we detected a statistically significant trend using our large sample, with just over half (51.6%) of the pairs exhibiting higher rates of molecular evolution closer to the tropics and stronger directional trends in specific taxonomic groups. Arthropods displayed a significant directional pattern, driven mainly by a slightly stronger trend and large sample sizes in the Arachnida and the insect orders Diptera and Hymenoptera. A significant trend (56% positive pairs) was also found within the bony fish (Actinopterygii), with the most noticeable trend observed in the large order Perciformes (67%), mirroring the overall directional pattern reported for marine fish by Wright et al. (2010), but with a smaller effect size. A marked directional trend was also found in the Echinodermata (66%), although the number of pairs for this phylum was small; this group would be worth further investigation after more comprehensive barcode coverage becomes available in public databases. In addition, when restricting the analysis to pairs differing by ≥30° in absolute median latitude, Echinodermata showed an even stronger directional trend of 82%. These taxon-specific latitudinal trends may be linked to biological traits that vary across groups. For example, within echinoderms, possible contributing mechanisms may include the longer generation times in polar regions as well as the commonality of a brooding mode of reproduction near the poles (Pearse et al. 2009), which can influence both rates and patterns of substitution (Foltz 2003). Multivariable methods (Bromham and Cardillo 2003; Fujisawa et al. 2015) could be further developed in the future to separate out the effects of latitude/temperature and traits upon molecular rates.
Reconciling findings across studies: a matter of scale?
Our main finding of a near-even latitudinal pattern in COI rates contrasts markedly with the strong trends reported in several prior studies, which indicated that a large majority of pairs exhibited higher rates in the tropics or at higher temperatures or lower latitudes. Additionally, branch length ratios were large in some of these previous studies (e.g. 1.47 in Gillman et al. (2009), 1.61 in Wright et al. (2010), as contrasted with 1.05 overall here, using the same form of branch length ratio). Several factors may contribute to these discrepancies.
In particular, the phylogenetic scale of latitudinal comparisons may contribute to differing results. Generally, the studies reporting the largest effect sizes focused on very closely related taxa, which were paired such that ecological traits were as consistent as possible (Wright et al. 2006, 2010, 2011; Gillman et al. 2009). This approach has the benefit of focusing on effects relating to latitude or temperature differences as much as possible, while keeping other traits consistent. However, such pairings are somewhat subjective as to what qualifies a pair as being similar enough for inclusion. Moreover, the approach employed (Gillman et al. 2009) was criticized for the application of the stated inclusion/exclusion criteria, for model overfitting, and for the statistical analysis (Weir and Schluter 2011). Upon replying to these concerns, Gillman et al. (2011) reported a lower effect size than in the initial paper (1.28 vs. 1.47 for Cytochrome b for mammals inhabiting warmer vs. cooler environments), which was still higher than ratios reported here and in studies that considered more distantly related taxa (Bromham and Cardillo 2003) or that conducted whole-tree analysis for large taxonomic groups (Fujisawa et al. 2015; Rolland et al. 2016). In sum, these results suggest that rates may vary predictably with latitude (and/or elevation and depth) in very closely-paired taxa, for which unreliable rate ratio estimates are also more likely (Welch and Waxman 2008), but not more generally such that latitude would reliably predict rates across larger phylogenies.
Secondly, results may be impacted by the spatial scale of study as well as the way that environmental factors are measured. For example, Dugo-Cota et al. (2015) found no significant effect of latitude upon the rate of molecular evolution in glass frogs but a significant impact of temperature. Over the scale of their study, which was primarily confined to the tropics, latitude was not significantly correlated with temperature. The authors advocate using direct environmental data, rather than proxies such as latitude. In our study, all of our pairs differed by at least 20° in median absolute latitude, and 29% of pairs differed by more than 30°; as well, we analyzed temperature data in addition to latitude differences for these same pairs. Therefore, we would have expected to be able to detect an effect of temperature on rates if such a relationship were present. Our ability to compare with some of the prior research was limited by the modest COI barcode coverage for some vertebrate groups to date, owing to the historical preference for other markers (e.g. cytochrome b) for some vertebrate taxa. Nevertheless, our results agree with prior taxonomically and spatially broad studies that reported no latitudinal trend in branch lengths (Fujisawa et al. 2015; Rolland et al. 2016).
Despite methodological differences and varied findings among prior studies, our results converged between the sister-pair and whole-tree analytical approaches for the taxonomic groups we subjected to both analysis types (fish, birds, butterflies). For example, a PGLS regression analysis using 4658 species of fish and controlling for the number of nodes present along the root-to-tip distances revealed a significant negative correlation (p < 0.0001) between latitude and branch length, but again with a small effect size and low explanatory power (R2 = 0.11; Table S7), mirroring the sister-pair results. A subsequent multiple regression analysis performed by May (2017), who considered a total of 32 environmental and biological parameters, found that the significant effect of latitude disappeared when that variable was considered in a multivariable context. May (2017) discovered that life history traits, such as age at maturity, were more strongly related to branch lengths than environmental parameters. Therefore, latitudinal gradients in life history traits may contribute to explaining the weak latitudinal gradient in molecular rates that we detected here in the univariate analyses in some taxa, such as fish.
Reconciling findings across studies: choice of gene region
An additional factor that may contribute to discrepant results among studies is the choice of genetic region(s). Due to its widespread adoption for DNA-based animal identification (Hebert et al. 2003) as well as biodiversity and applied research (Adamowicz 2015), the standardized COI barcode region has become the most-sequenced genetic region for many animal groups. There are currently >6 M DNA barcode records housed on the Barcode of Life Data Systems (BOLD, accessed June 25, 2018; Ratnasingham and Hebert 2007). Therefore, this marker is an ideal choice for performing a large-scale, single-marker study of evolutionary rates. Located at the core of cell respiration in an enzymatically active part of the cytochrome oxidase protein, the COI barcode region has been shown to be relatively conserved, likely via purifying selection, thus allowing for species-level identification, while also possessing variability at third codon positions as well as in specific regions of low functional constraint (Pentinsaari et al. 2016). These characteristics, combined with data availability, make the marker suitable for studies such as ours, which seeks to elucidate the factors that influence the rate of mitochondrial molecular evolution.
Other studies in animals that have investigated the relationship between molecular rates and latitude using mitochondrial markers have reported findings inconsistent with our own, whereby rates were found to be significantly higher at warmer latitudes. However, these studies have generally reported smaller sample sizes and have been mostly restricted to specific groups in Chordata, including fishes and amphibians at the cytochrome b (cyt b) gene as well as the 12S and 16S ribosomal RNA genes (Wright et al. 2010 and 2011), mammals at cyt b (Gillman et al. 2009), and turtles at COI, ND4, and cyt b (Lourenço et al. 2013). Birds, by contrast, have yielded inconsistent findings: significantly faster rates in the tropics were reported for cyt b in one study (Gillman et al. 2012), while there were no significant trends in cyt b or ND2 in another (Bromham and Cardillo 2003). In arthropods, only one study has been performed; it used the COI region in water beetles and reported non-significant results using phylogenetic methods and a large sample size of 5032 sequences (Fujisawa et al. 2015).
To date, few studies in animals have investigated the relationship between molecular rates and latitude at nuclear gene regions. Lourenço et al. (2013) reported non-significant results at the RAG1, RAG2, and c-mos gene regions in aquatic turtles. Non-significant results were also reported in scaled reptiles by Rolland et al. (2016) across 9 nuclear genes and 3 mitochondrial genes using a larger sample size of 1651 species, analyzed through resampling sets of 51–141 species pairs differing in temperature.
Overall, it appears that the choice of gene region (mitochondrial or nuclear) can influence the results of molecular rate studies in animals, with studies involving nuclear genes reporting no effect on molecular rates and studies involving mitochondrial genes generally reporting significantly faster rates at warmer latitudes, although mainly within Chordata. However, the studies with the larger sample sizes, including Rolland et al. (2016) and Fujisawa et al. (2015), corroborate our finding based upon the COI barcode region that little to no trend in molecular rates is present when rate analyses are expanded to larger numbers of sequences.
The evolutionary speed hypothesis and the latitudinal diversity gradient
Our primary finding of a weak latitudinal trend in molecular evolution rates was surprising. Several lines of evidence suggest that mutation rates are higher under higher energy expenditure, which can be facilitated by higher availability of environmental energy. While Lanfear et al. (2010) did not find a significant impact of metabolic rate upon molecular rates in a large comparative study, more recent research has indicated that active metabolic rate is a significant predictor of rates in poison frogs (Santos 2012), unlike basal metabolic rate, which may be decoupled from the rate of mitochondrial DNA evolution (Dowle et al. 2013). The availability of environmental energy relates to temperature as well as other conditions, particularly water availability (Goldie et al. 2010; Gillman and Wright 2014). Thus, evolutionary speed may be partially decoupled from latitude, as mediated through water and other resource availability, with variability across the Earth in these patterns. These considerations again point to the utility of further study using multivariable approaches.
In addition to the potential impact of energy availability upon mutation rates, latitudinal gradients in factors that influence the fixation rate should also be considered in the study of relative rates and molecular clocks. In general, the mutation rate is expected to predict the substitution rate for neutral mutations, regardless of population size, while selection is expected to eliminate or fix mutations with fitness consequences more readily in large populations. By contrast, nearly neutral, yet slightly deleterious mutations, which would be weeded out by natural selection in large populations, are expected to drift to fixation more readily when effective population size (Ne) is small (Ohta 1992; Woolfit 2009). While abundance can be large in boreal regions, population extinction rates and population size fluctuations may be lower in the tropics (Pyron and Wiens 2013), which may increase the long-term Ne of tropical species in contrast to temperate and polar regions. However, this trend remains to be explored across taxa. Large and growing datasets of standardized DNA sequences will open new avenues for studying gradients in evolutionary processes on a large spatial scale. Given that predictions arising from the nearly neutral theory have been borne out in comparative studies using mitochondrial DNA sequence data (e.g. Mitterboeck and Adamowicz 2013; Fujisawa et al. 2015; Mitterboeck et al. 2017), patterns of genetic variability may be further explored to test for variability in Ne across large spatial and taxonomic scales.
Evidence from selected taxa indeed suggests latitudinal structuring of traits correlated with Ne. For example, Fujisawa et al. (2015) suggested that the weak (and nonsignificant) trend between rates and latitude that they observed in water beetles may have been an indirect effect of the latitudinal gradient in habitat availability/occupancy. In contrast to latitude, habitat (lentic vs. lotic) was a significant predictor of branch lengths, likely mediated by differences in Ne among the species occupying these two habitat types. Interestingly, Lourenço et al. (2013) discovered a habitat-by-latitude interaction in predicting rates in turtles. Whereas there was no difference in rates across latitudes in terrestrial turtles, aquatic turtles displayed a significant trend of higher rates at lower latitudes. As the aquatic designation was comprised of species inhabiting freshwater, marsh, and marine environments, an interesting avenue for further research would be to examine the latitudinal gradient in habitat occupancy in turtles and to test the correlation between rates and latitude/temperature within each of these habitat categories. These complex patterns suggest that future studies should target gradients in life history traits and Ne, rather than latitude or temperature alone, and should employ multivariable analytical approaches in order to advance molecular rates research.
As we found minimal variation in substitution rates across latitudes, we suggest that little to no correction for latitude may be required when using molecular clocks for COI to date evolutionary events for many animal taxa. However, further research is needed to test components of the ESH in more detail, including environmental correlates of variability in the mutation rate (as contrasted with substitution rate), investigating latitudinal gradients in habitat occupancy and Ne, and characterizing spatial patterning in the speed of selection. Although COI did not show a substantial latitudinal pattern, even a modest increase in the rate of introduction of novel variants upon which selection can act may contribute to evolutionary speed in the tropics and may be one contributing factor towards the latitudinal diversity gradient. The potentially larger Ne in the tropics may permit the more effective action of positive selection at lower latitudes, a process which may influence other genes more strongly.
This research also highlights the utility and future promise of using large datasets of standardized sequences for evolutionary study, particularly when coupled with bioinformatics pipelines that enable automation and allow analyses to be repeated in the future in light of rapidly increasing data availability. This work also showcases the contribution that intensive biodiversity research at focal sites can make to global-scale studies in evolution and macroecology. Interestingly, targeted regional barcoding campaigns (e.g. Stahlhut et al. 2013; Wirta et al. 2016; Janzen and Hallwachs 2016) are apparent when viewing the latitudinally-separated pairs of Hymenoptera (Fig. 1C), even though the pairings were formed entirely informatically. Moreover, metabarcoding studies are increasingly generating multi-marker datasets of standardized molecular regions to reduce primer biases and improve taxon recovery (Cristescu 2014), which would allow research such as that presented here to be expanded to markers beyond COI. Finally, large data sets derived from metagenomics studies will create new opportunities for furthering our knowledge of the evolutionary origins, distribution, and future trends of biodiversity.
All DNA sequences analyzed in this work are publicly available through BOLD. The multiple sequence alignments with reference sequences for each taxonomic group studied and the results for the latitudinally-separated BIN pairs are now publicly available on GitHub through the following links:
https://github.com/m-orton/Evolutionary-Rates-Analysis-Pipeline/tree/master/SisterPairingDatasets
https://github.com/m-orton/Evolutionary-Rates-Analysis-Pipeline/tree/master/SupplementalDataSets
R code used to generate the results presented in this manuscript is publicly available through the GitHub links provided in the Materials and Methods section.
Adamowicz SJ (2015) International Barcode of Life: evolution of an international research community. Genome 58:151–162
Allen AP, Gillooly JF, Savage VM, Brown JH (2006) Kinetic effects of temperature on rates of genetic divergence and speciation. Proc Natl Acad Sci USA 103:9130–9135
Barraclough TG, Savolainen V (2001) Evolutionary rates and species diversity in flowering plants. Evolution 55:677–683
Bromham L, Cardillo M (2003) Testing the link between the latitudinal gradient in species richness and rates of molecular evolution. J Evol Biol 16:200–207
Bromham L, Hua X, Lanfear R, Cowman PF (2015) Exploring the relationships between mutation rates, life history, genome size, environment, and species richness in flowering plants. Am Nat 185:507–524
Carr CM (2010) The Polychaeta of Canada: exploring diversity and distribution patterns using DNA barcodes. MSc Thesis, University of Guelph, Guelph, Ontario, Canada.
Cristescu ME (2014) From barcoding single individuals to metabarcoding biological communities: towards an integrative approach to the study of global biodiversity. Trends Ecol Evol 29:566–571
Davies TJ, Savolainen V, Chase MW, Moat J, Barraclough TG (2004) Environmental energy and evolutionary rates in flowering plants. Proc Biol Sci 271:2195–2200
Dowle EJ, Morgan-Richards M, Trewick SA (2013) Molecular evolution and the latitudinal biodiversity gradient. Heredity 110:501–510
Dugo-Cota Á, Castroviejo-Fisher S, Vilà C, Gonzalez-Voyer A (2015) A test of the integrated evolutionary speed hypothesis in a Neotropical amphibian radiation. Glob Ecol Biogeogr 24:804–813
Edgar RC (2004) MUSCLE: multiple sequence alignment with high accuracy and high throughput. Nucleic Acids Res 32:1792–1797
Fick SE, Hijmans RJ (2017) WorldClim 2: new 1-km spatial resolution climate surfaces for global land areas. Int J Climatol 37:4302–4315
Fine P (2015) Ecological and evolutionary drivers of geographic variation in species diversity. Annu Rev Ecol Evol Syst 46:369–392
Foltz DW (2003) Invertebrate species with nonpelagic larvae have elevated levels of nonsynonymous substitutions and reduced nucleotide diversities. J Mol Evol 57:607–612
Fujisawa T, Vogler AP, Barraclough TG (2015) Ecology has contrasting effects on genetic variation within species versus rates of molecular evolution across species in water beetles. Proc R Soc B Biol Sci 282:20142476
Gillman LN, Keeling DJ, Gardner RC, Wright SD (2010) Faster evolution of highly conserved DNA in tropical plants. J Evol Biol 23:1327–1330
Gillman LN, Keeling DJ, Ross HA, Wright SD (2009) Latitude, elevation and the tempo of molecular evolution in mammals. Proc R Soc B Biol Sci 276:3353–3359
Gillman LN, McBride P, Keeling DJ, Ross HA, Wright SD (2011) Are rates of molecular evolution in mammals substantially accelerated in warmer environments? Reply. Proc R Soc B Biol Sci 278:1294–1297
Gillman LN, McCowan LSC, Wright SD (2012) The tempo of genetic evolution in birds: body mass and climate effects. J Biogeogr 39:1567–1572
Gillman LN, Wright SD (2014) Species richness and evolutionary speed: the influence of temperature, water and area. J Biogeogr 41:39–51
Goldie X, Gillman L, Crisp M, Wright S (2010) Evolutionary speed limited by water in arid Australia. Proc R Soc B Biol Sci 277:2645–2653
Grafen A (1989) The Phylogenetic Regression. Philos Trans R Soc B Biol Sci 326:119–157
Hebert PDN, Cywinska A, Ball SL, DeWaard JR (2003) Biological identifications through DNA barcodes. Proc R Soc B Biol Sci 270:313–321
Hillebrand H (2004) On the generality of the latitudinal diversity gradient. Am Nat 163:192–211
Holm S (1979) A simple sequentially rejective multiple test procedure. Scand J Stat 6:65–70
Janzen DH, Hallwachs W (2016) DNA barcoding the Lepidoptera inventory of a large complex tropical conserved wildland, Area de Conservacion Guanacaste, northwestern Costa Rica. Genome 59:641–660
Knowlton N, Weigt LA (1998) New dates and new rates for divergence across the Isthmus of Panama. Proc R Soc B Biol Sci 265:2257–2263
Lanfear R, Ho SYW, Love D, Bromham L (2010) Mutation rate is linked to diversification in birds. Proc Natl Acad Sci USA 107:20423–20428
Lessios HA (2008) The Great American Schism: Divergence of Marine Organisms After the Rise of the Central American Isthmus. Annu Rev Ecol Evol Syst 39:63–91
Loeza-Quintana T (2017) Molecular clocks and rates of evolution in marine invertebrates. PhD Thesis, University of Guelph, Guelph, Ontario, Canada.
Loeza-Quintana T, Adamowicz SJ (2018) Iterative calibration: a novel approach for calibrating the molecular clock using complex geological events. J Mol Evol 86:118–137
Lourenço JM, Glémin S, Chiari Y, Galtier N (2013) The determinants of the molecular substitution process in turtles. J Evol Biol 26:38–50
Luo A, Zhang A, Ho SY, Xu W, Zhang Y, Shi W et al. (2011) Potential efficacy of mitochondrial genes for animal DNA barcoding: a case study using eutherian mammals. BMC Genom 12:84
May JA (2017) A new bioinformatics pipeline to reveal the correlates of molecular evolutionary rates in ray-finned fishes. MSc Thesis, University of Guelph, Guelph, Ontario, Canada.
Mendes FK, Hahn MW (2016) Gene tree discordance causes apparent substitution rate variation. Syst Biol 65:711–721
Mittelbach GG, Schemske DW, Cornell HV, Allen AP, Brown JM, Bush MB et al. (2007) Evolution and the latitudinal diversity gradient: speciation, extinction and biogeography. Ecol Lett 10:315–331
Mitterboeck TF, Adamowicz SJ (2013) Flight loss linked to faster molecular evolution in insects. Proc R Soc B Biol Sci 280:20131128
Mitterboeck TF, Liu S, Adamowicz SJ, Fu J, Zhang R, Song W, Meusemann K, Zhou X (2017) Positive and relaxed selection associated with flight evolution and loss in insect transcriptomes. GigaScience 6(10):1–14
Ohta T (1992) The nearly neutral theory of molecular evolution. Annu Rev Ecol Syst 23:263–286
Paradis E, Claude J, Strimmer K (2004) APE: analyses of phylogenetics and evolution in R language. Bioinformatics 20:289–290
Pearse JS, Mooi R, Lockhart SJ, Brandt A (2009) Brooding and species diversity in the Southern Ocean: selection for brooders or speciation within brooding clades? In: Krupnik I, Lang MA, Miller SE (eds) Smithsonian at the poles: contributions to International Polar Year science. Smithsonian Institution Scholarly Press, Washington, DC, USA, pp 181–196
Pentinsaari M, Salmela H, Mutanen M, Roslin T (2016) Molecular evolution of a widely-adopted taxonomic marker (COI) across the animal tree of life. Sci Rep 6:35275
Pyron RA, Wiens JJ (2013) Large-scale phylogenetic analyses reveal the causes of high tropical amphibian diversity. Proc R Soc B Biol Sci 280:20131622
Ratnasingham S, Hebert PDN (2007) BOLD: the Barcode of Life Data System. Mol Ecol Notes 7:355–364. http://www.barcodinglife.org
Ratnasingham S, Hebert PDN (2013) A DNA-based registry for all animal species: the Barcode Index Number (BIN) system. PLoS One 8:e66213
Robinson M, Gouy M, Gautier C, Mouchiroud D (1998) Sensitivity of the relative-rate test to taxonomic sampling. Mol Biol Evol 15:1091–1098
Rohde K (1992) Latitudinal gradients in species diversity: the search for the primary cause. Oikos 65:514–527
Rolland J, Loiseau O, Romiguier J, Salamin N (2016) Molecular evolutionary rates are not correlated with temperature and latitude in Squamata: an exception to the metabolic theory of ecology? BMC Evol Biol 16:95
Santos JC (2012) Fast molecular evolution associated with high active metabolic rates in poison frogs. Mol Biol Evol 29:2001–2018
Schluter D (2016) Speciation, ecological opportunity, and latitude. Am Nat 187:1–18
Stahlhut JK, Fernández-Triana J, Adamowicz SJ, Buck M, Goulet H, Hebert PD et al. (2013) DNA barcoding reveals diversity of Hymenoptera and the dominance of parasitoids in a sub-arctic environment. BMC Ecol 13:2
Stamatakis A (2014) RAxML version 8: a tool for phylogenetic analysis and post-analysis of large phylogenies. Bioinformatics 30:1312–1313
Tamura K, Nei M (1993) Estimation of the number of nucleotide substitutions in the control region of mitochondrial DNA in humans and chimpanzees. Mol Biol Evol 10:512–526
Weir JT, Schluter D (2007) The latitudinal gradient in recent speciation and extinction rates of birds and mammals. Science 315:1574–1576
Weir JT, Schluter D (2011) Are rates of molecular evolution in mammals substantially accelerated in warmer environments? Proc R Soc B Biol Sci 278:1291–1293
Welch JJ, Waxman D (2008) Calculating independent contrasts for the comparative study of substitution rates. J Theor Biol 251:667–678
Wirta H, Várkonyi G, Rasmussen C, Kaartinen R, Schmidt NM, Hebert PDN et al. (2016) Establishing a community-wide DNA barcode library as a new tool for arctic research. Mol Ecol Resour 16:809–822
Woolfit M (2009) Effective population size and the rate and pattern of nucleotide substitutions. Biol Lett 5:417–420
Wright SD, Gillman LN, Ross HA, Keeling DJ (2010) Energy and the tempo of evolution in amphibians. Glob Ecol Biogeogr 19:733–740
Wright SD, Gillman LN, Ross HA, Keeling DJ (2009) Slower tempo of microevolution in island birds: implications for conservation biology. Evolution 63:2275–2287
Wright S, Keeling J, Gillman L (2006) The road from Santa Rosalia: a faster tempo of evolution in tropical climates. Proc Natl Acad Sci USA 103:7718–7722
Wright SD, Ross HA, Keeling DJ, McBride P, Gillman LN (2011) Thermal energy and the rate of genetic evolution in marine fishes. Evol Ecol 25:525–530
This work was supported through a Discovery Grant from the Natural Sciences and Engineering Research Council of Canada (NSERC) to SJA as well as by the "Food from Thought" research program led by the University of Guelph and supported by the Canada First Research Excellence Fund (CFREF). We thank Fatima Mitterboeck for helpful discussions during the early stages of this project and Tzitziki Loeza-Quintana for discussions and sharing the results of her transitional saturation analyses for the COI DNA barcode region. We would also like to thank Dan Fieldhouse for his help in the early coding stages of the project. We also greatly appreciate the efforts of the numerous contributors of sequence data to BOLD, who made the data public that we analyzed for this study.
Department of Integrative Biology & Biodiversity Institute of Ontario, University of Guelph, 50 Stone Road East, Guelph, ON, N1G 2W1, Canada
Matthew G. Orton, Jacqueline A. May & Sarah J. Adamowicz
School of Biological Sciences and Applied Chemistry, Seneca College, 1750 Finch Ave E, North York, ON, M2J 2X5, Canada
Matthew G. Orton, Winfield Ly & David J. Lee
Correspondence to Sarah J. Adamowicz.
The authors declare that they have no conflict of interest.
Orton, M.G., May, J.A., Ly, W. et al. Is molecular evolution faster in the tropics? Heredity 122, 513–524 (2019). https://doi.org/10.1038/s41437-018-0141-7
Received: 09 October 2017
Revised: 26 June 2018
Issue Date: May 2019
Robust Control Theory And Model Uncertainty
Mainstream mathematical economics has a very strong influence from optimal control theory. As I discussed previously, optimal control was abandoned as a modelling strategy decades ago by controls engineers; it only survives for path-planning problems, where you are relatively assured that you have an accurate model of the overall dynamics of the system. In my articles, I have referred to robust control theory as an alternative approach. In robust control theory, we are no longer fixated on a baseline model of the system; instead, we incorporate model uncertainty. These concepts are not just the standard hand-waving that accompanies a lot of pop mathematics; robust control theory is a practical and rigorous modelling strategy. This article is a semi-popularisation of some of the concepts; there is some mathematics, but readers may be able to skip over it.
It should be noted that I have serious doubts about the direct application of robust control theory to economics. In fact, others have discussed robust control (sometimes under the cooler-sounding name $H_\infty$ control). As such, the examples I give probably have little direct application to economics. Instead, the objective is to explain how we can work with model uncertainty in a rigorous fashion.
This article reflects the thinking in robust control theory up until the point I left academia (in 1998). I have not paid attention to subsequent developments, but based on the rather glacial pace of advance in control theory at the time, I doubt that I missed much. It should be noted that there were disagreements about the approach; I was part of the dominant clique, and this article reflects that "mainstream" approach. (I discuss briefly one alternative approach at the end, only because it was actually adopted by an economist as a modelling strategy.)
For readers with a desire to delve further into robust control theory, the text Feedback Control Theory, by John C. Doyle, Bruce A. Francis, and Allen R. Tannenbaum, was the standard text when I was a doctoral student. There are more recent texts, but the ones that I saw were only available in hardcover.
The Canonical Control Problem
The standard control problem runs as follows.
We have a system that we wish to control. By tradition, it is referred to as the plant, and is denoted $P$. We have a baseline model $P_0$, which comes from somewhere -- physics, empirical tests, whatever. (This model is provided by the non-controls engineers.)
We design a controller -- denoted $K$ -- that is to stabilise the system. It provides a feedback control input $u$ that is used to guide the plant's operation.
We typically assume that we are analysing the system around some operating point that we can treat as a linear system. (My research was in nonlinear control, and the amount of theory available is much smaller.)
We often assume that the observed output ($y$) is corrupted by noise ($n$), and there may be disturbances ($d$) with finite energy that also act as an input to the plant.
The diagram above shows the layout of the system, with variables indicated. We assume that each variable is a single variable (not a vector).
If the system is linear, time invariant, and discrete time, we can use the z-transform to analyse the system. (In continuous time, we use the Laplace transform.) The z-transform uses the frequency domain to analyse systems; we are not tied to any state-space model.
The reason why we use the z-transform is that it turns the operation of a system into a multiplication. If $Y(z)$ is the z-transform of $y(k)$, then:
Y(z) = P_0(z) (D(z) - U(z)).
(By convention, we use a negative feedback for the control output u.)
We can now calculate the transfer function from the disturbance d to the plant output y (ignoring the noise n).
U(z) = K(z) Y(z).
Y(z) = P_0(z) D(z) - P_0(z) K(z) Y(z).
We arrive at:
Y(z) = P_0(z) (1 + P_0(z) K(z))^{-1} D(z).
The term $P_0(z) (1 + P_0(z) K(z))^{-1}$ is the closed-loop model of the system (the area in dotted lines in the above diagram). If the closed loop model of the system is stable, standard linear system theory tells us that the closed loop will reject noise and disturbances. The zero point is a stable equilibrium, using the rigorous dynamical system definition of equilibrium (and not the hand-waving metaphysical definition used in economics).
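To make the algebra concrete, here is a toy numerical example of my own (not taken from any textbook design): a first-order plant with a proportional controller, evaluated on the unit circle using R's complex arithmetic.

P0  <- function(z) 0.5 / (z - 0.9)             # toy baseline plant, pole at 0.9
K   <- function(z) 2                           # proportional controller, gain 2
Tdy <- function(z) P0(z) / (1 + P0(z) * K(z))  # disturbance-to-output map

# closed-loop pole: 1 + P0(z) K(z) = 0 gives z = -0.1, inside the unit circle
w    <- seq(0, pi, length.out = 512)
gain <- Mod(Tdy(exp(1i * w)))
plot(w, gain, type = "l", xlab = "frequency (rad/sample)", ylab = "|Y/D|")

The plotted gain stays well below one at all frequencies, which is the disturbance rejection promised by a stable closed loop.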
The above was standard system theory; optimal control worked within this framework. Any notion of uncertainty was assumed to be handled by either the disturbance or noise.
In robust control, we assume that the "true" plant model lies close to our baseline model, but it is not exactly the baseline model. We can express this uncertainty in a number of ways. The diagram above shows one standard possibility: the actual plant is equal to the baseline plant, which is locked in a feedback loop configuration with an unknown system $\Delta$.
We obviously cannot do much analysis if there are no constraints on $\Delta$; the true system could be literally anything. We constrain $\Delta$ so that its gain in the frequency domain is less than or equal to 1 (denoted $\| \Delta \|_\infty \leq 1$: the infinity norm is at most one, which is where $H_\infty$ control gets its name). This characterisation was developed by the late George Zames, a Professor at McGill University.
We can then manipulate the systems to calculate the baseline closed loop model, and have it in a loop with $\Delta$. We then can apply a fixed point theorem -- called the Small Gain Theorem in control theory (which is also due to George Zames) -- to show that if the infinity norm of the baseline closed loop model is less than 1, the overall system will be stable. In other words, the controller will stabilise the true plant, for any $\Delta$ in the set of possible perturbations.
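Numerically, the small-gain test is just a supremum over frequency. Continuing my toy example (and noting that the closed-loop map seen by $\Delta$ depends on how the uncertainty enters; here I use $P_0 K (1 + P_0 K)^{-1}$, the form relevant for a multiplicative-style perturbation):

P0 <- function(z) 0.5 / (z - 0.9)
hinf_closed_loop <- function(k, n = 4096) {
  z <- exp(1i * seq(0, pi, length.out = n))
  max(Mod(P0(z) * k / (1 + P0(z) * k)))  # grid approximation of the H-infinity norm
}
hinf_closed_loop(2)  # about 1.11 > 1: the aggressive gain fails the small-gain test
hinf_closed_loop(1)  # about 0.83 < 1: a gentler gain certifies robust stability

Backing off the controller gain until the test passes is exactly the "less aggressive control" trade-off that robust design forces on the engineer.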
By contrast, optimal control was highly aggressive in its usage of the baseline model. Almost any perturbation of the system from the assumed model would result in instability. Alternatively, the numerical procedure to determine the optimal control law was numerically unstable.
The above specification of uncertainty is standard, but is somewhat naive. In practice, we have a rough idea what sort of uncertainty we are up against. We can extend the analysis to allow ourselves the ability to shape the uncertainty in the frequency domain. For example, we usually have a good idea what the steady-state operating gain of a system is, but often have little idea what the high frequency dynamics are. We shape the frequency domain characterisation in such a fashion, and we thus constrain how to design our control laws.
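As a sketch of the weighted version (again with my own toy numbers): multiply the closed-loop gain by a weight that is small at low frequencies, where the model is trusted, and large at high frequencies, where it is not, then take the supremum of the product.

P0     <- function(z) 0.5 / (z - 0.9)
w      <- seq(0, pi, length.out = 4096)
M      <- P0(exp(1i * w)) / (1 + P0(exp(1i * w)))  # closed loop with K = 1
weight <- 0.2 + 1.8 * w / pi   # small uncertainty at DC, large at high frequency
max(weight * Mod(M))           # weighted small-gain check: about 0.71 < 1, passes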
Applications to Economics?
The direct application of control theory is in the design of policy responses, as was done in the Dynamic Stochastic General Equilibrium literature. The difficulty with trying to apply robust control is that we do not really have a good notion of the baseline system. Also the true models are certainly not linear.
Another issue is that the type of uncertainty we face is somewhat different. We know with certainty that accounting identities will always hold. The true uncertainty we face is the behaviour of economic sectors. I made an initial stab at analysis that exploits this feature in an earlier article. However, I do not see an obvious way to shoehorn that type of model into existing robust control theoretical frameworks.
However, the realisation that we can rigorously discuss model uncertainty means that we should not be treating model uncertainty as mere parameter uncertainty.
Hansen and Sargent's Approach
In 2008, Lars Peter Hansen and Thomas J. Sargent published the book Robustness, which was an attempt to bring robust control theory to economics. I started reading the book with high expectations, and gave up fairly quickly.
Within control theory, there were a number of differing approaches to robust control. Back when I was a junior academic, I would have had to be diplomatic and pretend that they were all equally valid. Since I am no longer submitting papers to control engineering journals, I am now free to write what I think. My view was that those alternative approaches were largely bad ideas, and were only useful for expanding publication counts.
One such alternative approach was to apply game theory. Instead of a truly uncertain model, you are facing a malevolent disturbance that knows the weak points of whatever control law you are going to apply. You are forced to use a less aggressive control strategy, so that it is not vulnerable to this interference. You ended up with the same final set of design equations as in robust control, but that was just a lucky accident of linear models. Any nonlinearity destroyed the equivalence of the approaches; that is, an uncertain nonlinear system could not be emulated with a game theory framework. (Since I worked in nonlinear control, I largely managed to ignore that literature. However, I was forced to work through a textbook detailing it as part of a study group, and I hated every minute of it.)
Working entirely from memory, I believe that you also largely lost the ability to shape the uncertainty in the frequency domain. That is, the fact that we generally know more about the steady-state characteristics of a system than the high frequency response is lost, since we have a "malevolent actor" that is responding at an extremely high frequency. For linear system design, you can cover this up with kludges that allow you to restore the equivalence to standard robust control design equations, but there was no theoretical justification for these kludges from within the game-theoretic framework.
Given mainstream economists' love of game theory, it was perhaps not surprising that Hansen and Sargent chose that formalism. You end up with a new variant of DSGE macro, but still without any true model uncertainty. It may be better than the optimal control based DSGE macro, but that's setting the bar very low. I was thinking of writing a review of the book, but I would have been forced to be all academic-y. The resulting review would have been painful to write (and read). It may be that there are more redeeming features to the approach than I saw in my first reading of the book, but I remain skeptical.
Labels: Control Theory, Models
Joe Leote December 3, 2017 at 3:29 PM
The link below is to a 125 page PHD thesis. It opens as pdf in Firefox.
http://digilib.gmu.edu/xmlui/bitstream/handle/1920/9862/Carrella_gmu_0883E_10909.pdf?sequence=1&isAllowed=y
Quote page 4-5: "I built a cybernetic model, that is a world where agents are represented by closed-loop controllers fighting one another. But it isn't obvious why I would choose PID controllers in particular as a way to model trial and error. ... The main reason is the argument put forward in [Bennett, 1993] (also cited in [Hawkins et al., 2014]) is that 'in the absence of any knowledge of the process to be controlled, the PID controller is the best form of controller.' This matches very well with the purpose of my agents. Rather than thinking cybernetically top-down with the Gosplan being the well-informed well-meaning ultimate controller I wanted bottom up equilibrium coming with no centralized information. PID controllers are ideal for this."
Brian Romanchuk December 3, 2017 at 5:11 PM
All I can say is that I think it was best for all concerned that I was not brought in as an external examiner for that thesis.
Roger Sparks December 3, 2017 at 4:15 PM
"We know with certainty that accounting identities will always hold."
I cannot confidently support this statement. I think an event of borrowing-from-one's-self (as when government creates money) bypasses accounting identities to become an event driving perturbation. I think of this event as a throwing of an off-on switch to disengage/engage some part of the control feedback loop.
Accounting identities imply only that the accounting is done correctly; there is no behavioural content.
Money creation is captured by accounting identities, modulo measurement errors. | CommonCrawl |
Soft-packaged sensory glove system for human-like natural interaction and control of prosthetic hands
Min Ku Kim, Ramviyas Nattanmai Parasuraman, Liu Wang, Yeonsoo Park, Bongjoong Kim, Seung Jun Lee, Nanshu Lu, Byung-Cheol Min & Chi Hwan Lee
NPG Asia Materials volume 11, Article number: 43 (2019)
People with hand amputations experience strenuous daily life challenges, often leading to lifelong use of a prosthetic hand(s) and services. Modern advanced prosthetic hands must be able to provide human hand-like sensory perceptions to receive external stimuli during daily activities while simultaneously replicating a realistic appearance and physical properties to naturally integrate in social contexts; however, the practical realization of these issues are impeded by a lack of effective methodologies. Herein, we present an optimal set of materials, design layouts, and fabrication schemes to construct an easy-to-wear seamless electronic glove (e-glove) suitable for arbitrary hand shapes that provides all of the desired human hand-like features. The system configuration involves a connection to a control wristwatch unit for real-time display of sensory data measured and remote transmission to the user. The experimental and computational studies provide details regarding the underlying principles of the materials selection, mechanics design, and operational management of the entire system. The demonstration of the e-glove system in interactions with human subjects illustrates the utility, comfort, and convenience of this device.
The human hand is one of the foremost parts of the body and serves as a versatile physical instrument for daily and social activities. Any disfigurement of this powerful physical instrument can affect a person's quality of life due to reduced manual dexterity and sensory reception and an unnatural appearance1,2. As a result, nearly 50% of all people with hand amputations require psychological intervention for isolation accompanied by depression, fatigue, anxiety, or even suicidal ideation2,3. Current evidence-based treatments rely on the use of the prosthetic hand(s) to restore vital mobility that is important in many daily and social interactions, such as gripping, handshakes, gentle stroking, and petting4,5. The recent development of flexible and stretchable materials and electronics provides both mechanical softness and sensory perception to detect changes in external stimuli including pressure, temperature, and hydration6,7,8,9,10,11. However, the seamless integration of these materials around prosthetic hands is still challenging due to geometric complexity, resulting in continued disfigurement and poor physical coupling. The ability to directly assemble flexible and stretchable electronic circuit materials and devices on a commercial stretchable glove could provide an easy-to-wear seamless platform suitable for arbitrary prosthetic hands12,13,14.
Herein, we present a set of advanced materials, design layouts, and fabrication schemes for the realization of an electronic sensory glove (e-glove) system that is directly built on a commercial stretchable nitrile glove, allowing a seamless fit on arbitrary prosthetic hands through the intrinsic ergonomic design of the glove for any hand shape and size within the regular adult range. The e-glove system is configured with flexible and stretchable forms of multimodal sensors to collect various types of information, such as pressure, temperature, and moisture, while simultaneously offering a realistic human hand-like appearance, softness, and warmth. The capabilities of real-time display of the measured sensory data on a control wristwatch unit and remote transmission to an external reader for data postprocessing provide benefits and convenience to the user. The fabrication of the e-glove system involves a cost-effective hybrid printing technology that combines screen-printing and transfer-printing methods and is tailored to laminate multiple layers of electronic circuit materials on a commercial stretchable glove. Both the experimental and computational investigations reveal key features of the underlying materials and structures and the mechanics aspects of the design variables. The results demonstrate the utility of the e-glove system during the interactions and control of a prosthetic hand with objects and humans in many daily and social settings.
The commercial software ABAQUS (standard 6.13) was used to study the mechanical behavior of the prototype devices under stretching, bending and folding. A multilayered model was built, i.e., an encapsulation layer (hencap = 500 μm)-PDMS/Ag (hPDMS = 80 μm, hAg = 45 μm)-epoxy adhesive (hepoxy = 70 μm)-nitrile glove (hglove = 150 μm) composite. Since a serpentine-shaped Ag trace was embedded in the PDMS, a partition of exactly the same shape as the serpentine layout was created with effective mechanical properties, e.g., Young's modulus:
$$E_{\text{eff}} = E_{\text{Ag}}\left( \frac{h_{\text{Ag}}}{h_{\text{PDMS}} + h_{\text{Ag}}} \right) + E_{\text{PDMS}}\left( \frac{h_{\text{PDMS}}}{h_{\text{PDMS}} + h_{\text{Ag}}} \right)$$
where EAg ≈ 0.2 MPa (obtained from the vendor) and EPDMS ≈ 2.5 MPa. For the epoxy adhesive layer and glove, Eepoxy ≈ 3.38 MPa (urethane based) and Eglove ≈ 21 MPa (from the product sheet). Because the thickness of each layer (~mm) was substantially smaller than the lateral dimensions of the device (~cm), the device was modeled as a composite shell with element S4R. Incompressible neo-Hookean constitutive behavior was assigned to all the layers.
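Plugging the quoted thicknesses and moduli into the rule-of-mixtures expression gives a quick numerical check (an R sketch, using only the values listed above):

h_Ag <- 45;  h_PDMS <- 80   # layer thicknesses (um)
E_Ag <- 0.2; E_PDMS <- 2.5  # moduli quoted above (MPa)
E_eff <- (E_Ag * h_Ag + E_PDMS * h_PDMS) / (h_PDMS + h_Ag)
E_eff                       # about 1.67 MPa for the composite partition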
COMSOL 5.3 electric currents and heat transfer modules were used for the time-dependent joule-heating simulation. The heat was generated when an electric current of 0.24 A was applied to the serpentine-shaped Ag traces. The heating time was 120 s. The thermal properties of the materials were obtained from the vendor product sheets and https://thermtest.com/materials-database.
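For intuition only, the transient can be mimicked with a zero-dimensional heat balance; this is merely a lumped stand-in for the COMSOL FEM, and every parameter except the 0.24 A current and the 120 s duration is an assumed placeholder:

I <- 0.24; R <- 20    # current from the text; assumed trace resistance (ohm)
mc <- 2;   hA <- 0.04 # assumed heat capacity (J/K) and convective loss (W/K)
T_amb <- 22
dt <- 0.1; t <- seq(0, 120, by = dt)
temp <- numeric(length(t)); temp[1] <- T_amb
for (k in 2:length(t)) {  # explicit Euler on mc dT/dt = I^2 R - hA (T - T_amb)
  temp[k] <- temp[k - 1] + dt * (I^2 * R - hA * (temp[k - 1] - T_amb)) / mc
}
plot(t, temp, type = "l", xlab = "time (s)", ylab = "temperature (deg C)")

With these placeholder values the trace settles toward roughly 51 deg C with a time constant of about 50 s, qualitatively the kind of warm-up curve such a joule-heating simulation produces.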
Fabrication of the e-glove system
The fabrication began with a commercial stretchable nitrile glove (Kimberly-Clark, Irving, TX, USA) by gluing an epoxy (Loctite 4902, Henkel, Rocky Hill, CT, USA) over the surface to provide adhesive support, followed by curing in an oven at 70 °C for 10 min. A conductive Ag ink (125–19FS, Creative Materials, Ayer, MA, USA) was screen-printed on the surface of the glove to define the electrical interconnectors through a mesh screen (Ryonet, Vancouver, WA, USA) featuring photolithographically patterned filamentary serpentine traces. Active sensing elements were then delivered from the donor substrate to the desired locations of the glove in a spatially distributed manner by sequential transfer printing operations. Subsequently, the glove was annealed in an oven at 70 °C for 2 h to secure the bonding between the sensing elements and the conductive Ag ink. The entire structure was dip-coated in an uncured silicone elastomer (Ecoflex™, Smooth-On, Macungie, PA, USA) to form a thin sealing layer on the surface, followed by complete curing at 70 °C for 30 min. The steps of transfer printing and dip-coating were iterated to stack multiple layers consisting of different types of sensors. A subsequent dip-coating of the entire structure with a silicone elastomer (Dragon Skin Series, Smooth-On) to form the outermost skin layer completed the fabrication.
Incorporation of human hand-like appearance into the skin layer
The process began by gently pouring a solution of silicone elastomer (Body Double™, Smooth-On) over a volunteer's hand to generate a custom-fitted mold. A release agent (a 1:4 mixture of petroleum jelly and mineral spirits) was applied to the interior surface of the mold for an easy and clean release. A mixture of silicone elastomer (Dragon Skin Series, Smooth-On, USA) and skin-tone colorant (Silc Pig™, Smooth-On, USA) was thinly applied to the interior surface of the mold. The as-prepared e-glove described above was placed inside the mold and then cured at room temperature for 1 h. The release of the cured structure from the mold completed the process.
Fabrication of control wristwatch unit
The fabrication began by printing a wristwatch case and buttons using 3D printing equipment (Fortus 400mc, Stratasys, Eden Prairie, MN, USA) with fused deposition of ABS (acrylonitrile butadiene styrene) plastic. The internal printed circuit board (PCB) was fabricated on a Cu/PI film (12 μm/12 μm thick, Pyralux AP121200EM, DuPont, Durham, NC, USA). A dry film photoresist (Riston MM540, DuPont, USA) was applied to the Cu/PI film using a hot roller laminator (AL13P, Apache Laminator, USA), followed by photolithographic patterning and a wet etching process (CE-100, Transene Company, Danvers, MA, USA) to form the solder pads and traces. The resulting metal patterns were electroplated with a layer of tin (Sn) (Bright Electroless Tin, Transene, USA) for oxidation protection. Other necessary electronic components, such as a microcontroller unit, multiplexer, and switches, were soldered on the PCB and assembled inside the 3D-printed wristwatch case along with an organic light-emitting diode (OLED) display (1673, Adafruit Industries, New York, NY, USA). The rigid housing case was thoroughly sealed with insulating tape (Kapton) to reduce the risk of potential short circuits. A summary of the electronic components used in this unit appears in Supplementary Table S1. Finally, the entire structure was mounted on a commercial wristband using an epoxy adhesive.
Fabrication of artificial hand
The fabrication began with fused deposition of ABS plastics using 3D printing equipment (Fortus 400mc, Stratasys, USA) to print 15 parts consisting of both the finger and palm sections. The printed parts were assembled using cyanoacrylate adhesive and neoprene rubber sheets (2 mm thick) to form the basic structural frame of a hand. Each joint of the artificial hand was connected with fishing lines (Sufix 832, Rapala VMC, Minnetonka, MN, USA), allowing movement of the fingers and thumb by external adjustment of the line tension.
Recording of pressure and temperature
The arrays of pressure and temperature sensors were configured in the same way. Custom-miniaturized constant-current preamplifier circuits consisting of an op-amp (TLV333, Texas Instruments, Dallas, TX, USA) and bipolar junction transistors (BJT, MMBT3906SL, ON Semiconductor, Phoenix, AZ, USA) were used to supply a constant current of 100 μA to the sensors. Multiplexers with 32 channels (ADG726, Analog Devices, Norwood, MA, USA), controlled by a microcontroller unit (RFD77101, RF Digital, Hermosa Beach, CA, USA), were used to switch between the sensors while the voltage drop across the sensing elements was measured during recording. The changes in voltage corresponding to external pressure and temperature stimuli were measured by a 16-bit analog-to-digital converter. The measured data were displayed on the control wristwatch unit and simultaneously transmitted to an external reader, such as a commercial smartphone or tablet, via Bluetooth communication.
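A minimal sketch of this scan loop appears below; the two driver stubs stand in for real ADG726/ADC hardware access and are assumptions for illustration, not APIs described in the paper.

```python
# Multiplexed constant-current read-out sketch (drivers are placeholders).
I_BIAS = 100e-6      # constant current supplied to each sensor (A)
N_CHANNELS = 32      # one 32-channel ADG726 bank

def select_channel(ch: int) -> None:
    """Placeholder: would drive the multiplexer address lines."""

def read_adc_volts() -> float:
    """Placeholder: would return one 16-bit ADC conversion in volts."""
    return 0.0

def scan_sensors() -> list[float]:
    resistances = []
    for ch in range(N_CHANNELS):
        select_channel(ch)
        v = read_adc_volts()            # voltage drop across the sensor
        resistances.append(v / I_BIAS)  # Ohm's law: R = V / I
    return resistances
```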
Recording of hydration
A capacitive hydration sensor with interdigitated microelectrode arrays was assembled on the index fingertip of the e-glove. To facilitate direct contact of the embedded hydration sensor with environmental moisture, a 3 × 3 array of small openings (~1 mm each in diameter) was punched through the outermost skin layer on the fingertip. A capacitance-to-digital converter (FDC1004, Texas Instruments, USA) was used in conjunction with a microcontroller unit (RFD77101, RF Digital, USA) to measure the capacitance during recording. The data were processed by the microcontroller unit and then displayed on the control wristwatch unit and remotely transmitted to an external reader such as a commercial smartphone or tablet.
Fabrication of networked Ag nanowire-mesh
The fabrication began by synthesizing Ag nanowires from a mixture of 50 mL of ethylene glycol (EG, 9300–03, J.T.Baker, Radnor, PA, USA), 400 μL of copper (II) chloride (CuCl2, 4 mM, 487847, Sigma-Aldrich, St. Louis, MO, USA), and 15 mL of polyvinylpyrrolidone (PVP, 0.147 M, 856568, Sigma-Aldrich, USA) in a preheated oil bath (CG-1100, Chemglass, Vineland, NJ, USA) at 150 °C for 1 h [15]. Approximately 15 mL of Ag nitrate (AgNO3, 0.094 M) was injected at a rate of 0.5 mL/min using a syringe pump (AL-4000, World Precision Instruments, Sarasota, FL, USA) until the color of the solution changed from ivory to grey. This step was repeated 3~5 times, adding 10 mL of the mixed precursor solution at each step. The resulting solution was cooled to room temperature, followed by the addition of 450 mL of acetone. The mixture was then centrifuged to extract the Ag nanowires. The as-prepared Ag nanowires were filtered through a Teflon filter (0.2 μm pore size, Sterlitech, Kent, WA, USA) using a vacuum-assisted Buchner funnel (1960–054, Whatman, UK) to form a sheet of highly networked Ag nanowire-mesh.
Fabrication of electrophysiological recording electrodes
The as-prepared networked Ag nanowire-mesh described above was placed on a Si wafer coated with bilayers of poly(methyl methacrylate) (PMMA, 1 μm) and PI (1 μm), followed by standard photolithography and reactive-ion etching (RIE) to define the necessary filamentary serpentine patterns for the electrophysiological (EP) electrodes. The resulting structure was immersed in acetone for ~1 h to dissolve the PMMA layer, allowing the EP electrodes to be released from the Si wafer and subsequently installed on the e-glove.
Recording of EP signals
The EP recording electrodes were installed around the tip of the thumb of the e-glove. The EP electrodes were then attached to the skin of the chest and forearm of a volunteer (age: 30) during the recording of electrocardiogram (ECG) and electromyogram (EMG) signals, respectively. The measured signals were remotely sent to an external computing system using a portable wireless unit (BioRadio™, Great Lakes NeuroTechnologies, Cleveland, OH, USA) connected to the e-glove. Commercial software (BioCapture, Great Lakes NeuroTechnologies, USA) was used with a 60 Hz notch filter and bandpass filtering at 0.5~100 Hz (ECG) and 10~500 Hz (EMG). The filtered data were remotely exported to MATLAB for postprocessing.
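The stated filtering chain can be reproduced offline; the sketch below (Python/SciPy rather than the BioCapture software) is illustrative, and the sampling rate is an assumed placeholder.

```python
# Offline re-creation of the stated ECG filtering: 60 Hz notch plus a
# 0.5-100 Hz bandpass. fs is ASSUMED; it is not reported in the text.
from scipy.signal import butter, filtfilt, iirnotch

fs = 1000.0  # assumed sampling rate (Hz)
b_notch, a_notch = iirnotch(w0=60.0, Q=30.0, fs=fs)
b_band, a_band = butter(4, [0.5, 100.0], btype="bandpass", fs=fs)

def filter_ecg(x):
    x = filtfilt(b_notch, a_notch, x)   # remove 60 Hz mains interference
    return filtfilt(b_band, a_band, x)  # 0.5-100 Hz passband (ECG)

# The EMG passband (10-500 Hz) would need fs well above 1000 Hz, since
# 500 Hz sits at the Nyquist frequency of the assumed rate.
```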
Studies on human subjects
All the studies on human subjects were approved by the Purdue Institutional Review Board (protocol #: 1711019949) and conducted in compliance with the applicable regulations.
Figure 1a shows a series of optical images of a representative e-glove platform that contains multiple stacked arrays of sensor elements (insets), including (1) a total of 20 pressure sensors (6.5 mm × 4.6 mm each) evenly distributed over the entire area, (2) a capacitive moisture sensor (7 mm × 6.3 mm) on the index fingertip, and (3) 16 resistive temperature sensors (1 mm × 1 mm each) at the center of the palm. The representative electrical characteristics of the embedded sensor elements as a function of the externally applied stimuli are summarized in Fig. 1b. The results indicate that the sensitivities of the pressure, moisture, and temperature sensors are ~10 µA/kPa, ~1 pF per 20 µL of moisture, and ~0.6 mV/°C within ranges of applied pressure of 0~200 kPa, moisture of 0~100%, and temperature of 20~50 °C, respectively, without any degradation in performance. The fabrication begins by gluing a thin layer of a flexible epoxy (Loctite 4902, Henkel, USA) on the surface of a commercial stretchable nitrile glove (Kimberly-Clark, USA) to serve as an adhesive. Subsequent screen-printing of a flexible Ag paste (125–19FS, Creative Materials, USA, ~0.05 Ω/sq/mil) configured with a fractal serpentine layout (inset image) defines a stretchable form of conducting interconnectors. A pick-and-place transfer printing method delivers the active sensor elements to predefined locations in an array layout that meets the spatial resolution requirements [16,17,18]. Dip-coating of the entire structure in an uncured silicone elastomer (Ecoflex™, Smooth-On, USA), followed by complete curing at 70 °C for ~30 min, leads to a thin sealing layer (~300 μm thick) over the surface that serves as electrical insulation for the subsequent layer. These steps can be iterated to provide stacked layers of sensor arrays for multimodal sensing capabilities. Finally, lamination of another thin sealing layer (~300 μm thick) of a silicone elastomer (Dragon Skin Series, Smooth-On, USA) forms the outermost skin layer, which not only provides human skin-like mechanical softness and resilience but also ensures mechanical integrity and reduces any potential risk of interfacial delamination [19]. The details of the assembly procedures appear in the Methods section.
Fig. 1: Basic layouts and configurations of the e-glove system.
a A series of optical images for a representative e-glove platform that contains multiple stacked arrays of sensor elements including pressure (left), moisture (middle), and temperature (right) sensors. Scale bar is 25 mm. The inset images show an enlarged view of the embedded sensor elements. Scale bars are 4 mm (left), 3 mm (middle) and 1 mm (right), respectively. b Representative electrical characteristics of the embedded sensor elements as a function of externally applied stimuli. c Optical images of a custom-built wristwatch unit connected to the e-glove system. Scale bars are 6 cm (left) and 1 cm (right), respectively. d Optical image of the embedded internal circuitry in the wristwatch unit. Scale bar is 5 mm
The ability to display the measured sensory data in real time and to remotely transmit the data to an external reader (e.g., a commercial smartphone or tablet) for postprocessing can improve the workflow between the e-glove system and the user, offering operational efficiency and convenience. Figure 1c shows a custom-built control wristwatch unit that is wired to the e-glove system via a flexible anisotropic conductive film (ACF) cable (HST-9805–210, Elform, Fallon, NV, USA). The enlarged image highlights the organic light-emitting diode (OLED) display for information display and user-interface navigation, and the operational switches for function setting and control. The internal circuitry (Fig. 1d) of the control wristwatch unit includes (1) a 32-bit ARM Cortex M0 processor-based microcontroller (RFD77101, RF Digital, USA, 10 × 7 × 2.5 mm) for data collection and wireless transmission via Bluetooth™, (2) a rechargeable battery (3.6 × 2.0 × 0.56 cm, 350 mAh) as a power source, (3) a differential amplifier (INA333, Texas Instruments, USA) for front-end detection and amplification of electrical signals, and (4) a 3D-printed housing made of ABS plastic. The overall workflow diagram of the embedded circuits appears in Supplementary Fig. S1, with more details in the Methods section. The wristwatch provides immediate visual feedback to the prosthetic user, including two-dimensional data perception/visualization customizable to individual needs.
Prosthetic hands encounter many complex operations in daily and social activities, including shaking a hand, tapping or punching an object, and holding hot/cold and dry/wet surfaces [4]. Given these circumstances, the real-time detection of pressure, temperature, and hydration from a prosthetic hand can provide useful information to the user. To illustrate this possibility, representative uses of the e-glove system in several envisioned daily circumstances are demonstrated using a 3D-printed artificial hand as a surrogate for a prosthetic hand (Supplementary Fig. S2 & Movie S1). Figure 2a shows an optical image of the e-glove grasping a baseball; the pressure exerted across the whole palm area is monitored by an array of 20 pressure sensors. The inset image shows an embedded single sensor element that includes a pressure-sensitive polymer (Velostat™, 3M, Maplewood, MN, USA). Figure 2b presents the postprocessed data, revealing detailed visual information about how firmly the prosthetic hand grips the baseball, in a spatially resolved manner. Representative electrical characteristics of the embedded sensor element appear in Fig. 2c, exhibiting a sensitivity of ~4 μS/kPa. The results indicate that the e-glove system is capable of distinguishing delicate changes in pressure that a human hand might experience in daily activity, with a dynamic range (linear response) up to ~100 kPa. The effects of different skin layer thicknesses (100–500 μm) and variations in environmental temperature (30–50 °C) on the sensing performance appear in Supplementary Fig. S3a. The experimental results characterizing the repeatability and reliability of the sensor under cyclic loading at different levels of applied pressure are summarized in Supplementary Fig. S3b.
Fig. 2: Demonstration of human hand-like multimodal perception.
a Optical image of the e-glove system grasping a baseball. Scale bar is 25 mm. b Results of the recording of pressure. c Change of conductance as a function of pressure applied for the embedded single sensor element. d Optical image of the e-glove system touching a wet diaper. Scale bar is 5 cm. e Results of the recording of hydration. f Results of control measurements by using a commercial hydration sensor. g Optical image of the e-glove system holding a cup of hot water. Scale bar is 5 cm. h Results of the recording of temperature. i Results of control measurements by using a commercial infrared (IR) sensor. j Optical image of electrophysiological (EP) electrodes installed around the thumb of the e-glove system. The inset SEM image highlights the embedded networked Ag nanowire-mesh. Scale bars are 4 mm and 600 nm (inset), respectively. k ECG (top) and EMG (bottom) results measured from the human skin. l Control measurement results from commercial EP recording electrodes
Another important sensory function for replicating human hand-like perception is the ability to detect moisture and temperature [7]. Figure 2d provides an example of using the e-glove system to identify the dampness of a wet diaper with the embedded capacitive hydration sensor positioned around the fingertip. Representative measurement results appear in Fig. 2e, indicating an abrupt increase in capacitance when the e-glove touches a wet area of the diaper. A separate control measurement using a commercial moisture sensor (SEN-13322, SparkFun Electronics, Niwot, CO, USA) provides consistent results (Fig. 2f). The change in capacitance over time for different levels of moisture appears in Supplementary Fig. S4. The use of the e-glove system to detect the temperature on the surface of a cup containing hot water (~80 °C) appears in Fig. 2g. The embedded sensor positioned on the palm area contains a 4 × 4 array of temperature sensors made of Au (100 nm thick) and filamentary serpentine interconnectors (Au, 300 nm thick). Figure 2h presents the measured spatial temperature distribution while the e-glove system remains in contact with the cup. As a control, the surface temperature was simultaneously monitored in real time with a commercial infrared (IR) camera (FLIR SC645, sensitivity: 0.05 °C) (Fig. 2i). In these demonstrations, the measured data are displayed on the screen of the control wristwatch unit (single-point monitoring) and wirelessly transferred to an external reader such as a smartphone (multiple-array monitoring), as shown in Supplementary Fig. S5. The corresponding power consumption and estimated operation time for these sensor elements are summarized in Supplementary Table S2.
Another interesting aspect arises from the versatility of the e-glove system in providing extended capabilities beyond human sensory perception, i.e., identifying heart rates for on-demand access to health care and monitoring muscle fatigue during/after sport and exercise [20]. Fig. 2j shows an experimental demonstration that involves the use of the e-glove system for recording the electrical activities of the heart and muscles, such as ECGs and EMGs, via the human skin. A separate prototype device consisting of an EP sensor on the outermost surface of the tip of the thumb is demonstrated by using a highly networked Ag nanowire-mesh (inset) patterned in a standard two-electrode configuration to serve as the EP electrodes. The networked Ag nanowire-mesh offers useful features that enable high-fidelity coupling between the EP electrodes and the human skin under various loading conditions such as stretching and scratching [21]. The measurement results in Fig. 2k demonstrate high-quality recording of ECGs (top) and EMGs (bottom) while the EP electrodes remain in direct contact with the chest and the forearm, respectively (Supplementary Fig. S6). The measured ECGs and EMGs show clear detection of the P, Q, R, S, and T waves and of the electrical currents generated in the muscles during contraction (neuromuscular activities), respectively. These recordings are qualitatively comparable with those obtained using commercial EP recording electrodes (Red Dot™, 3M, USA) (Fig. 2l). The details of the EP measurements appear in the Methods section.
Human skin is elastic, flexible, and stretchable. Accordingly, the e-glove system demands the corresponding physical properties without any degradation in the performance of the embedded electronic materials. Several strategies are used to achieve these properties: (1) the outermost skin layer of the e-glove system comprises a thin layer (~300 μm thick) of a silicone elastomer (Young's modulus (E) ≈ 0.5 MPa) that provides softness and resilience similar to those of adult human skin [19], (2) the constituent materials of the e-glove system (e.g., a nitrile glove for the substrate, flexible Ag paste for the interconnectors, and silicone elastomers for insulation/encapsulation) are flexible enough to accommodate mechanical loads during use and fitting, and (3) the filamentary serpentine traces incorporated along the electrical interconnectors mechanically isolate the embedded semiflexible and rigid electronic components (e.g., the capacitive hydration and temperature sensors) from stretching [22].
Figure 3a (top) shows a representative optical image of a unit filamentary serpentine trace of the flexible Ag paste on a nitrile glove under stretching at 40%, displaying no visible defects. The finite element analysis (FEA) results in Fig. 3a (bottom) reveal the maximum principal strain (ε ≈ 33%) in the constituent material (i.e., the Ag paste). Representative images of damaged units with different geometries after stretching beyond the fracture limit (50~100%) appear in Supplementary Fig. S7. The corresponding FEA results under different stretching conditions, and for a testbed unit embedded with a rigid sensor element, are summarized in Supplementary Fig. S8. The experimental and computational (FEA) results for the testbed unit under bending (Fig. 3b) and folding (Fig. 3c) are consistent. Figure 3d shows the measured relative resistance (R/R0) of the testbed unit under stretching up to 40% (left), bending/folding (middle), and twisting up to 180° (right). The results confirm that R/R0 changes by less than ~5% under these mechanical deformations and recovers completely upon release. These results remain consistent over repeated cycles of folding, while R/R0 increases by factors of up to ~2 and ~3 after 2000 cycles of stretching at 30% and 60% strain, respectively (Supplementary Fig. S9).
Fig. 3: Mechanical behaviors of replicating human skin-like properties.
a Experimental and finite element analysis (FEA) results for a representative testbed unit under stretching at 40% strain. Scale bar is 7 mm. b Results for the testbed unit under bending at 90°. Scale bar is 5 mm. c Results for the testbed unit under folding at 180°. Scale bar is 6 mm. d Experimental data of normalized relative resistance (R/R0) curves under stretching up to 40% strain and release back to 0% (left), bending to 180° and back to 0° (middle) and twisting to 180° and back to 0° (right)
Prosthetic hands with a realistic human hand-like appearance and warmth can help users integrate naturally into social environments [4]. The outermost skin layer of the e-glove system can incorporate human skin tones, textures, artificial nails, and other features. Figure 4a shows representative examples of e-glove systems colored with a range of commercial pigments (Silc Pig™ flesh-tone silicone pigments, Smooth-On, USA), wherein a detailed surface texture is obtained with a molding technique (see the Methods section for details). Enlarged views of textured fingerprints (top) and an artificial nail (bottom) highlight the details of these features. Representative system-level demonstrations of the e-glove systems in several envisioned circumstances, such as shaking a hand, tapping a ball, touching a wet diaper, and holding a cup of hot water, appear in Supplementary Movies S2–S5, respectively.
Fig. 4: Demonstration of human hand-like appearance and warmth.
a Optical image of the e-glove systems featured with several different skin tones, textures, and nails. The enlarged images highlight the detailed features. Scale bars are 2.5 cm (left), 5 mm (right top) and 6 mm (right bottom), respectively. b Temperature measured for the warmed skin of the e-glove system under stretching up to 40% strain. c Temperature measured over time by increasing the applied power from 100 mW to 400 mW. d Demonstration of the embedded automatic shutdown upon an intended incident of overheating beyond the preset temperature of 40 °C. e IR image (left) and FEA results (right) for the warmed skin of the e-glove system maintained at ~35 °C. Scale bar is 2.5 cm. f FEA results of the temperature distributions at several selected layers of the e-glove system. Scale bar is 3 cm
To replicate human hand-like warmth, a stretchable Joule-heating system is incorporated under the outermost skin layer of the e-glove system. The basic components of this system include (1) serpentine resistive patterns of a flexible Ag paste as the Joule-heating element, which is stretchable up to ~40% strain without any degradation in performance (Fig. 4b), (2) a microcontroller unit that maintains a consistent temperature via an embedded proportional-integral-derivative (PID) control loop (P = 2, I = 5, and D = 0) (Fig. 4c), and (3) an automatic shutdown unit that eliminates overheating risk by immediately shutting down the entire system whenever heating beyond the preset limit (40 °C) is detected (Fig. 4d). The basic circuit configuration of the internal electronics appears in Supplementary Fig. S10a, wherein a miniaturized p-i-n Si diode-based temperature sensor chip (RN142ZS, 0.6 mm × 0.3 mm, Rohm Semiconductor, Japan, sensitivity: ~2.24 mV/°C) is added to the shutdown unit to detect overheating events (Supplementary Fig. S10b). In this scheme, controlled heat (warmth) is triggered on demand by pushing a button on the control wristwatch unit (Supplementary Movie S6) whenever necessary (e.g., before shaking a hand), while the resistive temperature sensors at the center of the palm remain deactivated. Figure 4e shows the experimental (IR, left) and computational (FEA, right) results for the warmed e-glove system, in which the skin temperature remains at a preset value of ~35 °C. The exploded view (Fig. 4f) of the FEA results reveals the temperature distribution across several selected layers of the e-glove system, implying that the prosthetic hand experiences a similar or slightly lower temperature than the outer skin layer (~35 °C) owing to the temperature drop across the adhesive layer (i.e., the epoxy) and the substrate (i.e., the nitrile glove), which have low thermal conductivities of 0.1 and 0.24 W/mK, respectively.
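A minimal sketch of the described heater logic is given below. The gains (P = 2, I = 5, D = 0) and the 40 °C cutoff follow the text; the 35 °C setpoint matches Fig. 4e, while the time step, output clamp, and the two hardware stubs are assumptions.

```python
# PID warmth control with hard over-temperature shutdown (sketch only).
KP, KI, KD = 2.0, 5.0, 0.0
SETPOINT, SHUTDOWN_LIMIT = 35.0, 40.0   # deg C

def read_temperature() -> float:
    """Placeholder: would read the p-i-n diode sensor (~2.24 mV/degC)."""
    return 34.0

def set_heater_power(duty: float) -> None:
    """Placeholder: would set the heater drive as a 0..1 duty cycle."""

def control_step(state: dict, dt: float = 0.1) -> str:
    t = read_temperature()
    if t >= SHUTDOWN_LIMIT:             # overheating -> immediate shutdown
        set_heater_power(0.0)
        return "shutdown"
    err = SETPOINT - t
    state["i"] = state.get("i", 0.0) + err * dt
    d = (err - state.get("e", err)) / dt
    state["e"] = err
    u = KP * err + KI * state["i"] + KD * d
    set_heater_power(max(0.0, min(1.0, u)))
    return "running"
```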
Experimental demonstrations of the e-glove systems in interactions with human subjects provide assessments of how well the systems replicate the details of a real human hand; a close resemblance to a real hand can enhance the confidence and ability of the prosthetic hand user in many social interactions. Figure 5a presents a within-subjects experimental design that includes four different prototypes featuring human hand-like softness and skin tone (A), along with textures (B), warmth (C), and both texture and warmth (D), all deployed in a randomized sequence to eliminate learning bias. A total of 32 subjects (24 males and 8 females) with an average age of 30 were recruited for this study. Seventeen of the subjects had seen or interacted with a prosthetic hand before the tests. The subjects were asked to interact with each of the prototypes sequentially by touching, poking, scratching/rubbing, and handshaking gently or firmly (Fig. 5b). Subsequently, the subjects were asked to complete a questionnaire consisting of 12 questions totaling 60 points, with ratings on a scale from 1 (low) to 5 (high), to evaluate the comfort, warmth, convenience, and human-like feeling after every interaction (Supplementary Fig. S11), and finally to rank the prototypes in a comparative evaluation. The average duration of a subject study was ~40 min, and no skin irritations or adverse effects on the subjects' hands were observed throughout the studies.
Fig. 5: Assessment of prosthetic hand-human interaction scenarios.
a Optical image (top frame) of the experimental setup for the four different prototypes. The IR and optical images (bottom frame) show the human hand-like warmed and textured e-glove prototypes, respectively. Scale bars are 7 cm (top), 6 cm (left bottom), and 5 cm (right bottom), respectively. b Optical images of the participants interacting with the prototypes. Scale bar is 60 cm. c Statistical analysis results of the subject rating score, one-way ANOVA with two-tailed paired sample t-test post hoc test in the human-hand interaction study. d Results of subject responses on the prototypes A–D, ranked from 1 (best) to 4 (worst). e Results of the subject ranking of the prototypes as normalized percentages of the categories of warmth, human-like, pleasant, and unease
Figure 5c presents the results of a one-way repeated-measures analysis of variance (ANOVA) [23], indicating a significant difference (F(3,93) = 17.94, p < 0.00001, α = 0.05) in the subjects' preference for the e-glove prototypes with human hand-like features. Mauchly's test for sphericity (χ²(5) = 2.79, p = 0.73) confirms that the sphericity (univariate) assumption is not violated [24]. Post hoc tests using two-tailed paired-samples t-tests on the dependent means [25] reveal that prototypes B–D, each with at least one human hand-like feature (texture, warmth, or both), yield significantly higher rating scores than prototype A (A–B: t(31) = 4.99, p < 0.00005; A–C: t(31) = 3.19, p < 0.00324; A–D: t(31) = 6.21, p < 0.00001). While there is no significant difference between prototypes B and D (t(31) = 1.53, p = 0.13638), there are significant differences between B and C (t(31) = −2.40, p = 0.02246) and between D and C (t(31) = 4.16, p = 0.00024). The corresponding summary table appears in Supplementary Table S3. The results in Fig. 5d, e support that prototype D is the most preferred (highest ranked), while prototype C is not preferred over prototypes B and D.
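For readers who wish to reproduce this style of analysis, a hedged sketch of the pipeline (repeated-measures ANOVA followed by paired post hoc contrasts) is shown below; the long-format data layout is an assumed placeholder, not the authors' actual data.

```python
# Repeated-measures ANOVA + paired t-tests (sketch; column names assumed).
import pandas as pd
from scipy.stats import ttest_rel
from statsmodels.stats.anova import AnovaRM

def analyse(scores: pd.DataFrame):
    # scores: columns ["subject", "prototype", "rating"], 32 x 4 rows
    print(AnovaRM(scores, depvar="rating", subject="subject",
                  within=["prototype"]).fit())   # yields F(3, 93)

    wide = scores.pivot(index="subject", columns="prototype", values="rating")
    pairs = [("A", "B"), ("A", "C"), ("A", "D"),
             ("B", "C"), ("B", "D"), ("C", "D")]
    for a, b in pairs:
        t, p = ttest_rel(wide[a], wide[b])       # two-tailed paired t-test
        print(f"{a} vs {b}: t(31) = {t:.2f}, p = {p:.5f}")
```

Note that this sketch omits the sphericity check (Mauchly's test), which is not part of AnovaRM's output and would need to be computed separately.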
The results presented here demonstrate that various electronic circuits and sensors can be printed on a commercial stretchable nitrile glove that already possesses the desired ergonomic design, allowing seamless integration with arbitrary hand shapes. Such integration has been challenging with conventional approaches, which typically wrap multiple flexible sensors around prosthetic hands to accommodate the geometric complexity of the hand shape [7]. The wristwatch user interface offers real-time display of the measured sensory data, remote data transmission to an external reader for postprocessing, and multisensory feedback/display in one device (e.g., temperature, pressure, and humidity data across the whole palm in two dimensions), providing benefits and convenience to the user. Adding further sensory cues through audio and tactile/vibrational feedback to improve the user interface would be an interesting extension [26]. The realistic human hand-like features of the e-glove system offer an expanded set of options for the daily activities of prosthetic hand users, with the potential to improve their mental health and well-being by helping them integrate more naturally into social contexts. Although this study focuses on applications for general passive prosthetic hands, the results also suggest opportunities for integrating the e-glove system with recently emerging active prosthetic hands controlled by the mind, voice, and/or muscles of the user [27,28,29,30]. The hybrid printing method used to fabricate the e-glove system is cost-effective and compatible with various electronic materials and sophisticated design layouts, suggesting applicability to a wide range of users; further development of e-glove systems that fit the hand of a small child or an extra-large adult is also important. Finally, the established e-glove platform can also be extended to smart gloves for assistive robotic hands, automotive factory workers, and home-based rehabilitation [31].
The authors declare that all data supporting the findings of this study are available within the paper and its supplementary information.
McKechnie, P. & John, A. Anxiety and depression following traumatic limb amputation: A systematic review. Injury 45, 1859–1866 (2014).
Grob, M., Papadopulos, N. A., Zimmermann, A., Biemer, E. & Kovacs, L. The psychological impact of severe hand injury. J. Hand Surg.-Eur. 33, 358–362 (2008).
Gallagher, P. & Desmond, D. Measuring quality of life in prosthetic practice: benefits and challenges. Prosthet. Orthot. Int. 31, 167–176 (2007).
Cabibihan, J., Pattofatto, S., Jomaa, M., Benallal, A. & Carrozza, M. Towards humanlike social touch for sociable robotics and prosthetics: comparisons on the compliance, conformance and hysteresis of synthetic and human fingertip skins. Int. J. Soc. Robot. 1, 29–40 (2009).
Cordella, F. et al. Literature review on needs of upper limb prosthesis users. Front. Neurosci. 10, 209 (2016).
Kim, D. et al. Epidermal electronics. Science 333, 838–843 (2011).
Kim, J. et al. Stretchable silicon nanoribbon electronics for skin prosthesis. Nat. Commun. 5, 5747 (2014).
Wang, S. et al. Skin electronics from scalable fabrication of an intrinsically stretchable transistor array. Nature 555, 83–88 (2018).
Kaltenbrunner, M. et al. An ultra-lightweight design for imperceptible plastic electronics. Nature 499, 458–463 (2013).
Chossat, J.-B., Shin, H.-S., Park, Y.-L. & Duchaine, V. Soft tactile skin using an embedded ionic liquid and tomographic imaging. J. Mech. Robot. 7, 021008 (2015).
Chortos, A., Liu, J. & Bao, Z. A. Pursuing prosthetic electronic skin. Nat. Mater. 15, 937–950 (2016).
Ota, H. et al. Application of 3D printing for smart objects with embedded electronic sensors and systems. Adv. Mater. Technol. 1, 1600013 (2016).
Mishra, R. K. et al. Wearable flexible and stretchable glove biosensor for on-site detection of organophosphorus chemical threats. ACS Sensors 2, 553–561 (2017).
Boley, J. W., White, E. L. & Kramer, R. K. Mechanically sintered gallium-indium nanoparticles. Adv. Mater. 27, 2355–2360 (2015).
Lee, P. et al. Highly stretchable and highly conductive metal electrode by very long metal nanowire percolation network. Adv. Mater. 24, 3326–3332 (2012).
Carlson, A., Bowen, A., Huang, Y., Nuzzo, R. & Rogers, J. Transfer printing techniques for materials assembly and micro/nanodevice fabrication. Adv. Mater. 24, 5284–5318 (2012).
Kim, S. et al. Microstructured elastomeric surfaces with reversible adhesion and examples of their use in deterministic assembly by transfer printing. Proc. Natl Acad. Sci. USA 107, 17095–17100 (2010).
Wie, D. S. et al. Wafer-recyclable, environment-friendly transfer printing for large-scale thin-film nanoelectronics. Proc. Natl Acad. Sci. USA 115, 7236–7244 (2018).
Lee, C. et al. Soft core/shell packages for stretchable electronics. Adv. Funct. Mater. 25, 3698–3704 (2015).
Heikenfeld, J. et al. Wearable sensors: modalities, challenges, and prospects. Lab Chip 18, 217–248 (2018).
Han, S. et al. Mechanically reinforced skin-electronics with networked nanocomposite elastomer. Adv. Mater. 28, 10257–10265 (2016).
Kim, D. H. et al. Optimized structural designs for stretchable silicon integrated circuits. Small 5, 2841–2847 (2009).
Girden, E. R. ANOVA: Repeated measures, Vol. 84. (Sage Publications, Inc., Thousand Oaks, CA, USA, 1992).
Mauchly, J. W. Significance test for sphericity of a normal n-variate distribution. Ann. Math. Stat. 11, 204–209 (1940).
Norušis, M. J. SPSS 14.0 guide to data analysis. (Prentice Hall, Upper Saddle River, NJ, USA, 2006).
Cipriani, C. et al. A novel concept for a prosthetic hand with a bidirectional interface: A Feasibility Study. IEEE Trans. Biomed. Eng. 56, 2739–2743 (2009).
Moran, C. W. Revolutionizing prosthetics 2009 modular prosthetic limb-body interface: overview of the prosthetic socket development. Johns Hopkins APL Tech. Dig. 30, 240–249 (2011).
Hutchinson, D. T. The quest for the bionic arm. J. Am. Acad. Orthop. Surg. 22, 346–351 (2014).
Raspopovic, S. et al. Restoring natural sensory feedback in real-time bidirectional hand prostheses. Sci. Transl. Med. 6, 222ra19 (2014).
Tabot, G. A. et al. Restoring the sense of touch with a prosthetic hand through a brain interface. Proc. Natl Acad. Sci. USA 110, 18279–18284 (2013).
Gurari, N., Kuchenbecker, K. J. & Okamura, A. M. Perception of springs with visual and proprioceptive motion cues: implications for prosthetics. IEEE Trans. Hum.-Mach. Syst. 43, 102–114 (2013).
C.H.L. acknowledges funding support from Eli Lilly and Company (F.00120802.02.013) and the Purdue Research Foundation (PRF).
These authors contributed equally: Min Ku Kim, Ramviyas Nattanmai Parasuraman, Liu Wang
Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, 47907, USA
Min Ku Kim & Chi Hwan Lee
Department of Computer and Information Technology, Purdue University, West Lafayette, IN, 47907, USA
Ramviyas Nattanmai Parasuraman & Byung-Cheol Min
Department of Computer Science, University of Georgia, Athens, GA, 30602, USA
Ramviyas Nattanmai Parasuraman
Department of Aerospace Engineering and Engineering Mechanics, University of Texas at Austin, Austin, TX, 78712, USA
Liu Wang & Nanshu Lu
School of Mechanical Engineering, Purdue University, West Lafayette, IN, 47907, USA
Yeonsoo Park, Bongjoong Kim, Seung Jun Lee & Chi Hwan Lee
Department of Speech, Language and Hearing Sciences, Purdue University, West Lafayette, IN, 47907, USA
Chi Hwan Lee
C.H.L. devised the concept of the e-glove system. M.K.K., R.N.P., L.W., N.L., B.C.M., C.H.L. designed the research. M.K.K., Y.P., B.K., S.J.L., C.H.L. developed and implemented the e-glove system. R.N.P., B.C.M. performed the validation studies on human-prosthetic hand interactions. L.W., N.L. performed the modeling calculations. M.K.K., R.N.P., L.W., N.L., B.C.M., C.H.L. wrote the paper. All authors reviewed and commented on the paper.
Correspondence to Nanshu Lu, Byung-Cheol Min or Chi Hwan Lee.
Purdue University has filed a provisional patent application related to this technology.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Kim, M.K., Parasuraman, R.N., Wang, L. et al. Soft-packaged sensory glove system for human-like natural interaction and control of prosthetic hands. NPG Asia Mater 11, 43 (2019). https://doi.org/10.1038/s41427-019-0143-9
Revised: 05 May 2019
Wearable electronics: making artificial hands fit like a glove
A stretchy glove loaded with sensors can be slipped over any prosthetic hand to help amputees regain sensory perceptions. Devices meant to replace hands have complex movements, which make them challenging to integrate with wearable electronic sensors. Min Ku Kim from Purdue University in West Lafayette, USA, and colleagues have overcome this problem by combining pressure, temperature, hydration, and electrophysiological sensors into nitrile gloves similar to those worn in laboratories. The 'e-glove' is fabricated by printing components such as silver ink-based thermal resistors onto the glove and then coating the device with a soft, skin-like silicone polymer. Signals received from the e-glove are displayed on a wristwatch display. The team demonstrated sensing of everyday items, including damp diapers, while simultaneously keeping tabs on a user's heart rate and muscle exertion.
Half-integer spin and infinitesimal rotations
On p. 692 of 'Quantum Mechanics' by Cohen-Tannoudji, he states that:
Every finite rotation can be decomposed into an infinite number of infinitesimal rotations, since the angle of rotation can vary continuously, and since:
$$\mathcal{R}_{\textbf{u}}(\alpha+d\alpha)=\mathcal{R}_{\textbf{u}}(\alpha)\mathcal{R}_{\textbf{u}}(d\alpha)=\mathcal{R}_{\textbf{u}}(d\alpha)\mathcal{R}_{\textbf{u}}(\alpha),$$
where $\mathcal{R}_{\textbf{u}}(d\alpha)$ is an infinitesimal rotation about the axis $\textbf{u}$. Thus, the study of the rotation group can be reduced to an examination of infinitesimal rotations.
Here, $\mathcal{R}_{\textbf{u}}(\alpha)$ represents a geometrical rotation, i.e., it acts on the coordinate space $\mathbb{R}^{3}$, and with it is associated a rotation operator $R(\alpha)$ which acts on the state space.
In particular, he uses this formulation with infinitesimal rotations to then show that the rotation operator for an infinitesimal rotation about $\textbf{u}$ is:
$$R_{\textbf{u}}(d\alpha)=\mathbb{1}-\frac{i}{\hbar}\,d\alpha\,\textbf{J}\cdot\textbf{u},$$
where $\textbf{J}$ is the total angular momentum operator. From this, one can show that the rotation operator for some finite angle is:
$$R_{\textbf{u}}(\alpha)=e^{-\frac{i}{\hbar}\alpha\,\textbf{J}\cdot\textbf{u}}.$$
A well known example of such a rotation operator is when $\textbf{J}=\textbf{S}$, i.e., the angular momentum consists of spin only, and when $s$ is allowed to take half-integer values only, such as $\frac{1}{2}$ or $\frac{3}{2}$. In this case, one can show that $R_{\textbf{u}}(2\pi)=-\mathbb{1}$, rather than $+\mathbb{1}$, as one gets in the case of integer spin particles.
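To make the $2\pi$ behaviour concrete for spin-1/2 (a standard computation added here for clarity, not part of Cohen-Tannoudji's quoted text): with $\textbf{J}=\frac{\hbar}{2}\boldsymbol{\sigma}$ and $(\boldsymbol{\sigma}\cdot\textbf{u})^{2}=\mathbb{1}$, the exponential series splits into even and odd powers, giving

$$R_{\textbf{u}}(\alpha)=e^{-\frac{i\alpha}{2}\boldsymbol{\sigma}\cdot\textbf{u}}=\cos\frac{\alpha}{2}\,\mathbb{1}-i\sin\frac{\alpha}{2}\,\boldsymbol{\sigma}\cdot\textbf{u},$$

so $R_{\textbf{u}}(2\pi)=-\mathbb{1}$ while $R_{\textbf{u}}(4\pi)=+\mathbb{1}$.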
Cohen-Tannoudji explains this partly through the fact that we constructed our finite angle rotation operator from a composition of infinitesimal rotation operators, with the footnote:
However, limiting ourselves to infinitesimal rotations, we lose sight of a 'global' property of the finite rotation group: the fact that a rotation through an angle of $2\pi$ is the identity transformation. The rotation operators constructed from infinitesimal operators do not always have this global property. In certain cases (and here he references spin-1/2 particles), the operator associated with a $2\pi$ rotation is not the unit operator but its opposite.
It is not immediately clear to me from the construction he gave why the possible values of $j$ and the fact that we used infinitesimal operators to construct a finite one should be related. How does this relationship come about?
quantum-mechanics quantum-spin group-theory rotation lie-algebra
Oliver Lunt
What exactly do you mean by "possible values of $\mathbf{J}$?" Do you mean the matrix elements? Or do you mean $j$, the angular momentum of the particle? – Ryan Unger Feb 16 '15 at 11:59
Sorry for the confusion; I mean $j$. – Oliver Lunt Feb 16 '15 at 12:26
Related: physics.stackexchange.com/q/96045/2451 , physics.stackexchange.com/q/96569/2451 and links therein. – Qmechanic♦ Feb 16 '15 at 13:22
Different Lie groups can have the same (up to isomorphism) Lie algebra. This is the case of, say, $SO(3)$ and $SU(2)$, the latter being the universal 2-cover of the former. When you are given a Lie algebra $\mathfrak g$ and you want to integrate it to a Lie group $G$ having $\mathfrak g$ as a Lie algebra, you will end up with a simply connected group. Hence, if you start with $SO(3)$ and determine its Lie algebra $\mathfrak{so}(3)\cong\mathfrak{su}(2)$, its integration will give you $SU(2)$, which is simply connected and indeed is the 2-cover of $SO(3)$.
Phoenix87
Say you didn't know about quantum mechanics and had no idea what a spinor is. You're given an "operator" that rotates things. One of the most basic assumptions you would make is that a rotation of $2\pi$ changes nothing. This is really quite reasonable.
As @Phoenix87 said, we can identify $SU(2)$ as having a Lie algebra isomorphic to that of $SO(3)$. We find that for every integer and half-integer $j$ there exists an $SU(2)$ irrep $$R^{(j)}_\mathbf{u}(\alpha)=\exp(-i\alpha\mathbf{u}\cdot\mathbf{J}^{(j)})$$ The physical interpretation of $j$ is the angular momentum of the particle. This is a rotation operator.
Naively one would expect $$R^{(j)}_\mathbf{u}(2\pi)=\mathbb{1}$$ Indeed this is what we find when $j$ is an integer. However, when $j$ is a half integer, we find that a rotation of $2\pi$ is $-\mathbb{1}$. This is more deeply linked to the notion of spinors. One may show that $$R^{(j)}_\mathbf{u}(2\pi)=e^{i2\pi j}$$ holds for any $j$.
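One way to see this identity (filling in a standard step, with $\hbar=1$ as in the irrep formula above): diagonalise $\mathbf{u}\cdot\mathbf{J}^{(j)}$, whose eigenvalues are $m=-j,\dots,j$. Acting on an eigenstate,

$$R^{(j)}_\mathbf{u}(2\pi)\,|j,m\rangle=e^{-i2\pi m}\,|j,m\rangle=(-1)^{2m}\,|j,m\rangle=(-1)^{2j}\,|j,m\rangle,$$

because $m$ differs from $j$ by an integer, and $(-1)^{2j}=e^{i2\pi j}$: $+1$ for integer $j$, $-1$ for half-integer $j$.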
From the infinitesimal form of the rotation operator, this is not clear. We need the finite version.
I should mention that we have a topological restriction placed upon rotations: $$R(4\pi)=\mathbb{1}$$
Ryan Unger
June 2015, 35(6): 2423-2442. doi: 10.3934/dcds.2015.35.2423
On a Cahn-Hilliard type phase field system related to tumor growth
Pierluigi Colli 1, Gianni Gilardi 1, and Danielle Hilhorst 2,
Dipartimento di Matematica "F. Casorati", Università di Pavia, Via Ferrata 1, 27100 Pavia
Laboratoire de Mathématiques, CNRS et Université de Paris-Sud, 91405 Orsay
Received: January 2014. Revised: April 2014. Published: December 2014.
The paper deals with a phase field system of Cahn-Hilliard type. For positive viscosity coefficients, the authors prove an existence and uniqueness result and study the long time behavior of the solution by assuming the nonlinearities to be rather general. In a more restricted setting, the limit as the viscosity coefficients tend to zero is investigated as well.
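For orientation (a sketch added here, not part of the published abstract): the prototypical system of this type, as studied in the tumor-growth literature the authors build on, couples an order parameter $\varphi$, a chemical potential $\mu$, and a nutrient proxy $\sigma$ through

$$\alpha\,\partial_t\mu+\partial_t\varphi-\Delta\mu=p(\varphi)(\sigma-\mu),\qquad \mu=\beta\,\partial_t\varphi-\Delta\varphi+F'(\varphi),\qquad \partial_t\sigma-\Delta\sigma=-p(\varphi)(\sigma-\mu),$$

where $\alpha,\beta>0$ are the viscosity coefficients whose vanishing limit is investigated, $F'$ is the derivative of a double-well potential, and $p$ is a proliferation-type coupling; the precise structural assumptions are those stated in the paper.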
Keywords: Phase field model, viscous Cahn-Hilliard equations, tumor growth, asymptotic analysis, long-time behavior, well-posedness.
Mathematics Subject Classification: 35B20, 35K20, 35K35, 35R3.
Citation: Pierluigi Colli, Gianni Gilardi, Danielle Hilhorst. On a Cahn-Hilliard type phase field system related to tumor growth. Discrete & Continuous Dynamical Systems - A, 2015, 35 (6) : 2423-2442. doi: 10.3934/dcds.2015.35.2423
Identifying the most effective behavioural assays and predator cues for quantifying anti-predator responses in mammals: a systematic review protocol
Natasha D. Harrison (ORCID: 0000-0001-5779-0187)1,
Ben L. Phillips1,2,
Jan M. Hemmi1,
Adrian F. Wayne1,3,
Rochelle Steven1,4 &
Nicola J. Mitchell1
Environmental Evidence volume 10, Article number: 38 (2021)
Mammals, globally, are facing population declines. Strategies increasingly employed to recover threatened mammal populations include protecting populations inside predator-free havens, and translocating animals from one site to another, or from a captive breeding program. These approaches can expose predator-naïve animals to predators they have never encountered and as a result, many conservation projects have failed due to the predation of individuals that lacked appropriate anti-predator responses. Hence robust ways to measure anti-predator responses are urgently needed to help identify naïve populations at risk, to select appropriate animals for translocation, and to monitor managed populations for trait change. Here, we outline a protocol for a systematic review that collates existing behavioural assays developed for the purpose of quantifying anti-predator responses, and identifies assay types and predator cues that provoke the greatest behavioural responses.
We will retrieve articles from academic bibliographic databases and grey literature sources (such as government and conservation management reports), using a Boolean search string. Each article will be screened for the satisfaction of eligibility criteria determined using the PICO (Population—Intervention—Comparator—Outcome) framework, to yield the final article pool. Using metadata extracted from each article, we will map all known behavioural assays for quantifying anti-predator responses in mammals and will then examine the context in which each assay has been implemented (e.g. species tested, predator cue characteristics). Finally, with mixed effects modelling, we will determine which of these assays and predator cue types elicit the greatest behavioural responses (standardised difference in response between treatment and control groups). The final review will highlight the most robust methodology, will reveal promising techniques on which to focus future assay development, and will collate relevant information for conservation managers.
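To illustrate the planned quantitative synthesis (a sketch only: the column names, the choice of Hedges' g, and the moderator structure are illustrative assumptions, not prescriptions from the protocol), the standardised treatment-control difference and a simple mixed-effects model could be computed as follows:

```python
# Effect-size and mixed-effects sketch for the planned synthesis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def hedges_g(m_t, m_c, sd_t, sd_c, n_t, n_c):
    """Bias-corrected standardised mean difference (treatment vs control)."""
    s_pool = np.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                     / (n_t + n_c - 2))
    j = 1 - 3 / (4 * (n_t + n_c) - 9)   # small-sample correction
    return j * (m_t - m_c) / s_pool

def fit_moderators(df: pd.DataFrame):
    # df columns (assumed): g, assay_type, cue_type, study_id
    model = smf.mixedlm("g ~ assay_type + cue_type", df,
                        groups=df["study_id"])  # random intercept per study
    return model.fit()
```

A full meta-analysis would additionally weight each effect size by its sampling variance, which this sketch omits.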
The need to quantify anti-predator responses
Mammals are experiencing an alarming rate of extinction [1,2,3] due to anthropogenic impacts such as habitat loss and fragmentation, illegal hunting, and exotic predators [4]. Redressing this loss of biodiversity requires well-informed and well-tested management interventions. Many of these interventions will need to be underpinned by a mechanistic understanding of species' behaviour.
How an animal responds to predators has substantial bearing on its ability to survive. Predation, particularly from introduced predators, has been a major driver of mammal declines and extinctions around the world [5,6,7,8,9]. This is especially true for individuals and populations that have had limited or no exposure to predators, such as many island populations [10, 11], individuals raised in captivity and those moved to an environment with novel predators [12,13,14]. Improving our understanding of how animals behave in response to predatory stimuli should provide crucial insights for their conservation management and can improve our ability to retain antipredator traits in managed populations [12, 15, 16]. An animal's response to predators may be behavioural (e.g. spatial and temporal avoidance [17, 18], avoiding detection [19] and evasion [20]) or physical (e.g. chemical [21] and physical defences [22]). Behavioural responses are likely to be more plastic and responsive at shorter time frames than physical responses, and are therefore particularly important when considering the acute impacts of predators on the persistence of predator-naïve species.
Common strategies employed to prevent faunal extinctions include captive breeding [23], translocations (the deliberate movement of animals from one population or site for release in another [24]) and establishment of populations in predator-free havens (areas isolated from predators through a geographical or physical barrier, such as islands or fenced enclosures [25,26,27]). Such approaches have secured a number of populations of mammals, including African elephants [28, 29], European lynx [30], elk [31], giant pandas [32], and Tasmanian devils [33]. Despite their initial successes, these strategies are at risk of longer term failure because they can expose naïve individuals to novel contexts for which they may lack appropriate behavioural responses. Further, such populations become vulnerable to acute population collapses from uncontrolled predator incursions.
Australia provides a compelling case study to illustrate the challenges of mammal conservation. More than one third of modern mammal extinctions have occurred in Australia, largely due to the introduction of feral cats and foxes [34]. In response, havens free of introduced predators are a key component of conserving much of the remaining mammal fauna [26, 27, 35]. Australia's current network of havens provides habitats for at least 32 mammal species, and has secured at least 188 populations and sub-populations [26]. Evidence is emerging, however, that in the absence of feral and/or native predators, havened populations no longer exhibit anti-predator behaviours [13, 36,37,38,39,40]. This renders individuals in these populations fundamentally unfit for reintroduction back into areas where predators still persist. Because the success of many translocations has ultimately been compromised by predation [35, 41, 42], the future of mammal conservation in Australia, and more broadly, hinges on developing methods and strategies that can quantify and conserve antipredator behaviours in havened and translocated populations [39].
To undertake an adaptive management approach, we require monitoring and evaluation of anti-predator responses in mammalian species. Despite awareness that behavioural traits such as boldness or shyness can influence conservation outcomes, measuring such traits is rarely incorporated into monitoring and management [16, 43]. Anti-predator responses have only recently been identified as a potential barrier to the success of conservation projects [13, 37,38,39], and while an array of academic literature exists that details various methods for measuring these behaviours [15, 38, 39, 44,45,46,47,48], accessing the methodologies, comparing them for rigor and identifying the most appropriate measure is labour intensive. Stakeholders, such as conservation and population managers, are likely to be seeking this information, but also likely to be limited by the time and resources necessary to find it. Ultimately, we currently lack a robust framework for the universal monitoring and evaluation of anti-predator traits [49]. The first step to developing such a framework is to understand which behavioural assays have been conducted, which are most effective (capture or provoke the greatest behavioural response), and whether the type of predator cue is important. In the absence of this crucial information, the adoption of inappropriate and poorly-performing behavioural metrics may prevail.
Identification and engagement of stakeholders
In addition to the review team, stakeholders relevant to this review have been identified as those who research or manage animal populations, for example, members of species recovery teams (Fig. 1). To ensure the information collected throughout this review is tailored toward the target audience, and thus of the most relevance for application, a variety of stakeholders from each of the categories in Fig. 1 were consulted during the development of this protocol. We invited 27 stakeholders to comment on the draft protocol, and after receiving 16 replies (ten from Australia and six from other countries), we incorporated their suggestions.
Fig. 1 End-user stakeholder groups (right-hand boxes) consulted when designing a systematic review of methods that quantify anti-predator behaviour in mammals. Arrows indicate each group's broad interests in the various steps (left-hand boxes) required for improving conservation outcomes
Objective of the review
We will present all known behavioural assays for measuring or quantifying anti-predator responses in mammals by collating information into an accessible format. Specifically, we will: (1) reveal different methods, (2) describe the context within which each method was conducted, and (3) highlight methods or aspects that warrant further examination, thus guiding the future development of behavioural assays. Further, using a modelling approach, we will then identify which types of behavioural assays and predator cues elicit the greatest responses in mammals (difference in effect size between the treatment and control conditions). A formal evidence synthesis is required to explore all potential methods and to avoid bias toward those published in academic journals, because much information may come from governmental reports and species recovery plans [16, 50]. The final review will act as a guide: it will highlight existing methodologies and provide additional information to assess their relevance, allowing stakeholders to easily select the most appropriate and effective behavioural assay for their purpose.
Using the PICO (Population—Intervention—Comparator—Outcome) framework [51], we have broken our review into two questions that will define our search scope. We will first systematically map all known methodologies answering a primary question: what behavioural assays have been used to quantify anti-predator responses in mammals? The elements of this question are:
Population: Free-living, wild-caught, or captive mammals (global).
Intervention (predator exposure): A behavioural assay that quantifies anti-predator responses to predator exposure.
Intervention (predator cue): A behavioural assay that quantifies anti-predator responses to predator cues.
Articles that conform to both the Population and Intervention criteria will be used to answer this primary question. A secondary question we seek to answer will be assessed quantitatively by modelling the metadata collected from each article, asking: which assay-types and predator cues elicit the greatest behavioural responses? This question utilises the same Population and Intervention criteria as the primary question, but requires further assessment using Comparator and Outcome criteria to select studies for the systematic review. The additional elements of the secondary question are:
Comparator: Comparison between levels of predator exposure (e.g. before versus after exposure, exposure versus no exposure) or comparison between exposure to a predator cue versus a control.
Outcome: Difference in the behavioural response between the treatment (e.g. predator/predator cue exposure) and control conditions. Metrics of responses will differ between studies depending on assay type and will be compared using standardised effect sizes.
Articles that involve at least one Comparator element can then additionally be considered for the systematic review to investigate which Intervention elements (behavioural assays and predator cues) produce the greatest Outcome. The PICO elements of our two questions are illustrated in Fig. 2.
Fig. 2 Elements of target questions illustrated using the PICO framework
Searching for articles
To develop a search strategy, an initial scoping exercise was conducted using a test-list of 10 benchmark articles that assess anti-predator responses (Additional file 1), each selected because together they cover a variety of different assays and predator scenarios. The titles, key words, and abstracts of each scoping article were mined, both manually and using word clouds (R package wordcloud [52]; in the R environment [53]), to determine the most appropriate search terms [54]. An initial search string was then created using Boolean operators to combine the relevant terms based on the review team's knowledge and the terms identified from the scoping articles. Trial searches were conducted using the Web of Science: Core Collection. We systematically removed terms that appeared to broaden the search outside the scope of the review. To ensure the proposed strategy adequately returned relevant literature, the search output was scanned for relevant articles and each of the scoping benchmark articles. Unreturned articles were then closely inspected, and the search strategy was adjusted until it retrieved all 10 benchmark articles [51]. The comprehensiveness of the search strategy was then tested using a list of 5 independent articles (Additional file 1), all of which were retrieved by the final search strategy.
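As a minimal sketch of how this term-mining step can be scripted in R (the protocol does not publish its script, so the file name and the crude tokenizer below are our own assumptions):

```r
library(wordcloud)  # Fellows (2018), as cited in [52]

# one line per title/keyword list/abstract of the benchmark articles
# ("benchmark_articles.txt" is a hypothetical file name)
text  <- tolower(readLines("benchmark_articles.txt"))
words <- unlist(strsplit(text, "[^a-z]+"))  # crude tokenizer
words <- words[nchar(words) > 3]            # drop short, uninformative tokens
freq  <- sort(table(words), decreasing = TRUE)

head(freq, 20)                              # candidate search terms
wordcloud(names(freq), as.numeric(freq), min.freq = 3)
```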
Search strategy
To begin collating articles for this review, bibliographic databases will be searched using the following search string (which will be modified for each specific database language).
TS = ((("antipredator response$" OR "anti-predator response$" OR "antipredator behavio$r" OR "anti-predator behavio$r" OR "escape behavio$r" OR "giving$up density" OR "FID" OR "GUD" OR "flight initiation distance") AND ("predator exposure" OR " prey naivete" OR "naive prey" OR "los$" OR "trait" OR "predator avoid*")) OR (("predator recognition" OR "predator exposure" OR "predation risk" OR "introduced predator$" OR "novel predator$" OR "predator odour") AND ("naive prey" OR "prey naivete" or "escape behavio$r" OR "giving$up density" OR "flight initiation distance" OR "FID" OR "GUD" OR "predator odour")) OR (("antipredator response$" OR "anti-predator response$" OR "antipredator behavio$r"OR "anti-predator behavio$r" OR "escape behavio$r") AND ("predator recognition" OR "predator exposure" OR "introduced predator$" OR "novel predator$"))).
Based on the subject matter covered by each, we will search the following bibliographic databases from which to collect peer-reviewed journal articles: Web of Science (Core Collection, BIOSIS Citation Index, Zoological Record, CAB abstracts) and Scopus.
To reduce bias toward published literature, we aim to also search a variety of grey literature sources [49, 50]. Using our search string above, we will collate theses and dissertations from two bibliographic databases specific to grey literature: Proquest Dissertation and EThOS: UK Theses and Dissertations. Conference proceedings will be searched in the Web of Science database using the predetermined search strategy. The following websites will also be searched, using the search terms "anti-predator" and "antipredator": opengrey.eu; trove.nla.gov.au. Specialist documents will be searched from within the following repositories, using the search terms "anti-predator" and "antipredator": IUCN general publications (https://portals.iucn.org/library/dir/publications-list); IUCN Conservation Planning Specialist Group (http://www.cpsg.org/document-repository); Conservation Evidence (http://www.ConservationEvidence.com); WWF (https://www.worldwildlife.org/publications). A web-based search engine, Google (www.google.com), will be searched to supplement our search results. The first 50 links returned using each combination of the search terms "anti-predator/antipredator" and "behaviour/behavior" will be inspected and added to the article pool if not yet identified [55].
Additional literature
Based on the knowledge of the review team and stakeholders, additional publications not identified by the search strategy may also be included.
Search results will be limited to articles written and published in English (due to the language capabilities of the review team). All database and grey-literature searches will be documented, and this information will be made available with the final review publication. All searches will be conducted within two years of the final analysis being submitted for publication.
Article screening and study eligibility criteria
Duplicate articles will be removed, and article screening will be conducted through CADIMA [51, 56]. To remove bias, two screeners will independently review articles at title and abstract level simultaneously to determine relevance, followed by the full text versions, to decide which meet the inclusion criteria. Each screener will assess an overlap of 10% of all articles (to a maximum of 50 articles screened) at both the title/abstract stage, and at the full text stage. Reliability between screeners will be assessed using Kappa calculations (with values > 0.5 deemed acceptable [12, 57]). In instances where screeners do not agree on the inclusion/exclusion of an article, they will discuss, and then consult a third member of the review team if necessary. If theses or dissertations have additionally been published as journal articles or specialist reports, we will assess the methods described in both, and only include the article that provides the most detail. While not anticipated, if reviewers find themselves assessing their own work, a third impartial member of the review team will oversee the assessment of any conflicting articles. A full list of excluded articles will be made available with the final review, detailing reasoning for their exclusion.
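The Kappa check itself requires no special tooling; a minimal base-R sketch (the include/exclude decisions of the two screeners below are invented toy data) is:

```r
# include/exclude decisions on the shared overlap of screened articles
screener1 <- c(TRUE, TRUE, FALSE, TRUE, FALSE, FALSE, TRUE, TRUE, FALSE, TRUE)
screener2 <- c(TRUE, FALSE, FALSE, TRUE, FALSE, TRUE, TRUE, TRUE, FALSE, TRUE)

cohen_kappa <- function(a, b) {
  po <- mean(a == b)                             # observed agreement
  pe <- mean(a) * mean(b) + mean(!a) * mean(!b)  # agreement expected by chance
  (po - pe) / (1 - pe)
}

cohen_kappa(screener1, screener2) > 0.5  # protocol's acceptability threshold
```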
Each article will be screened against eligibility criteria based on the PICO framework as outlined in Table 1. The screeners will first review each article by title and abstract simultaneously, to assess the satisfaction of the eligibility criteria (Table 1).
Table 1 Study eligibility criteria based on PICO (Population—Intervention—Comparator—Outcome) framework
Articles that satisfy the Population and Intervention eligibility criteria will be used to pursue the primary question, and will then additionally be assessed against the Comparator and Outcome eligibility criteria for inclusion in the secondary quantitative component where they may address the effectiveness of the Intervention elements; either assay types or predator cue types. All articles considered for this analysis must have incorporated at least one of the Comparator elements and all of the Outcome elements listed in Table 1. In articles with more than one predator cue or population type (e.g. current, historic and control predator cues or exposure > 5 years ago, in the last five years and never exposed), we will extract the effect size (difference between the treatment condition and the control) of the cue or population that was hypothesized by the authors to elicit the largest response (thus limiting the number of data entries from each article to one per assay).
Study validity assessment
Studies that satisfy the Population and Intervention criteria but not the Comparator and Outcome criteria will not be critically appraised and will exclusively be used in the narrative synthesis identifying different methodologies for quantifying anti-predator responses. Those studies that do satisfy all four Population, Intervention, Comparator and Outcome eligibility criteria will undergo further critical appraisal using the CEE critical appraisal tool (Additional file 2, [59]). Critical appraisal will be undertaken by two members of the review team, and each appraiser will assess an overlap of 5% of studies (to a maximum of 20) to ensure consistency. If appraisers reach different conclusions about any study, the validity criteria will be refined, and consistency checking will be repeated.
Data coding and extraction strategy
Once screened, the following meta-data variables will be extracted or scored where possible:
Size (small < 5 kg, medium 5–20 kg, large > 20 kg)
Assay type (e.g. flight initiation distance, trap behaviour, giving-up density)
Behaviour measured (e.g. avoidance, docility, exploratory behaviour, fear)
What equipment is required (e.g. camera traps, specialist equipment)
Type of predator exposure
Comparison between populations with varying exposure to predators (yes/no)
Use of predator cue (yes/no)
Direct or contextual
Olfactory, visual, or acoustic
Type of cue (e.g. faeces, urine, call, taxidermied model)
Cue properties
Did the cue move?
Size of cue (small < 5 kg, medium 5–20 kg, large > 20 kg)
Type of predator (e.g. terrestrial or aerial)
Robustness of methods
Number of individuals
Number of populations (treatment groups)
Number of repeat measures per individual
Number of repeat measures per population
Measure of repeatability
Within individuals
Within populations
Was there a control treatment (exposure or cue)?
If/how the methods were validated (e.g. fate of individuals, success criteria)
Effect size (difference in means between treatment and control group)
Mean response (and standard deviation) of treatment group
Sample size of treatment group
Mean response (and standard deviation) of control group
Sample size of control group
For the quantitative component, we will extract the mean response of each treatment, a measure of its variability (standard deviation, standard error, or variance), and the sample size for each treatment. In articles where this information is presented graphically, we will calculate the measures from the figures (with the axes as scale bars) using the software ImageJ [60]. Metadata will be scored using a customised data sheet (Additional file 3; adapted from [61]) by two members of the review team. Each member will crosscheck 5% of articles (to a maximum of 20) to ensure consistency, and if differences are found in the extracted information, the meta-data protocol will be refined and cross-checking will begin again until all extracted data are consistent. Where any information is unclear or missing, authors will be contacted. After contacting authors, if the treatment/control standard deviations or sample sizes are absent, or if more than 50% of metadata are still missing, the article will be excluded from the quantitative review component. Extracted data will be made available with the full review as supplementary material.
Potential effect modifiers/reasons for heterogeneity
The following additional factors to be investigated by the review were compiled using the expertise of the review team, incorporating suggestions from stakeholders. We may unintentionally exclude some useful data by only searching articles written in the English language. There may be a bias in the types of animals for which measures have been developed, for example, threatened or charismatic species. The type of predator cue used may substantially affect the outcome, as less effective cues may not be representative of an individuals' response to a true predation event [62,63,64,65]. For the most robust quantification of behaviour, methodology should use repeat measures, incorporate measures of repeatability, and validate the assays, for example, by quantifying the fitness outcomes of various behavioural responses [66, 67]. With such a systematic review, we hope to highlight where biases may be occurring, and reveal areas where more robust methodology is needed to guide the development of behavioural assays.
The results from this systematic review will be presented both in a narrative synthesis (to address the primary question) and with a quantitative analysis (to address the secondary question) [51]. To answer the first question, what behavioural assays have been used to quantify anti-predator responses in mammals, each article and the associated meta-data will be detailed in a table of findings that will divide studies up based on the different assay-types. Specific examples of different methods will be discussed in further detail within the text of the review. Some descriptive statistics based on the meta-data will be used to reveal patterns such as species tested. We will discuss techniques that are used regularly and aspects of existing methodology that have been well developed and tested. For example, we will quantify the number of replicates per study, reveal the proportion of studies that incorporated measures of repeatability, and assess how existing methods have been validated (and describe the mechanisms used). We will also discuss features that are lacking from existing methodology, or characteristics that are poorly represented (e.g. specific taxonomic groups). There will be a section that features suggestions for future development of behavioural assays.
The secondary question, which assay-types and predator cues elicit the greatest behavioural response, will be answered based on the meta-data extracted surrounding the experimental design of each study. Using the treatment means, standard deviations and sample size extracted from each study, we will calculate a standardized measure of effect size for differences between means using Hedges' g [58]:
$$g=\frac{\mu _{t}-\mu _{c}}{s_{p}}$$
where \({\mu }_{t}\) is the mean of the treatment group, \({\mu }_{c}\) is the mean of the control group and \({s}_{p}\) is the pooled standard deviation. The formula for pooled standard deviation is:
$${s}_{p}=\sqrt{\frac{\left({n}_{t}-1\right){s}_{t}^{2}+\left({n}_{c}-1\right){s}_{c}^{2}}{\left({n}_{t}-1\right)+ \left({n}_{c}-1\right)}}$$
where \({n}_{t}\) and \({s}_{t}\) are the number of observations and standard deviation for the treatment group respectively, and \({n}_{c}\) and \({s}_{c}\) are the number of observations and standard deviation for the control group respectively. Hedges' g was chosen over other effect size measures such as Cohen's d, as it is suited to a range of sample sizes and because it facilitates comparisons across studies by weighting each measure based on the number of observations [68]. We will build two mixed effects models using R [53] to identify which predator cue types and behavioural assay types elicit the greatest difference in effect size (Hedges' g), while controlling for potential confounding factors where possible. We will include each study's unique identifier as a random effect in both models to account for the non-independence of multiple effect sizes from each study. The protocol for this review adheres to the ROSES guidelines (see Additional file 4 for checklist).
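As an illustration of the planned calculation, the following R sketch computes Hedges' g (with the small-sample bias correction) and fits one such multilevel model. The protocol names R but no particular package, so the use of metafor, the column names, and the toy values below are our assumptions:

```r
library(metafor)

# toy extraction sheet: one row per effect size (all values invented)
dat <- data.frame(
  study_id = c("s1", "s1", "s2", "s2", "s3", "s4"),
  assay    = c("FID", "GUD", "FID", "trap", "GUD", "FID"),
  m_t = c(12.1, 3.4, 25.0, 1.8, 4.1, 18.3),
  sd_t = c(4.0, 1.1, 9.2, 0.6, 1.3, 6.1),
  n_t = c(20, 20, 15, 30, 25, 18),
  m_c = c(8.0, 2.9, 14.5, 1.7, 3.0, 12.2),
  sd_c = c(3.5, 1.0, 8.8, 0.5, 1.2, 5.9),
  n_c = c(20, 20, 15, 30, 25, 18)
)

# Hedges' g: measure = "SMD" applies the small-sample bias correction
dat <- escalc(measure = "SMD", m1i = m_t, sd1i = sd_t, n1i = n_t,
              m2i = m_c, sd2i = sd_c, n2i = n_c, data = dat)

# mixed-effects model with assay type as moderator and a random intercept
# per study, accounting for multiple effect sizes from the same article
res <- rma.mv(yi, vi, mods = ~ assay, random = ~ 1 | study_id, data = dat)
summary(res)
```

A second model of the same form, with predator-cue type as the moderator, would address the cue part of the secondary question.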
The only data used in the preparation of this manuscript was for the scoping exercise, which is available in the additional material. The datasets generated and/or analysed during this study will be made available in the Dryad digital repository.
Ceballos G, Ehrlich PR. Mammal population losses and the extinction crisis. Science. 2002;296(5569):904–7.
Schipper J, Chanson JS, Chiozza F, Cox NA, Hoffmann M, Katariya V, et al. The status of the world's land and marine mammals: diversity, threat, and knowledge. Science. 2008;322(5899):225.
Spooner FEB, Pearson RG, Freeman R. Rapid warming is associated with population decline among terrestrial birds and mammals globally. Glob Change Biol. 2018;24(10):4521–31.
Newbold T, Hudson LN, Hill SLL, Contu S, Lysenko I, Senior RA, et al. Global effects of land use on local terrestrial biodiversity. Nature. 2015;520(7545):45–50.
Ross AK, Letnic M, Blumstein DT, Moseby KE. Reversing the effects of evolutionary prey naiveté through controlled predator exposure. J Appl Ecol. 2019;56(7):1761–9.
Salo P, Korpimäki E, Banks PB, Nordström M, Dickman CR. Alien predators are more dangerous than native predators to prey populations. Proc Biol Sci. 2007;274(1615):1237–43.
Radford JQ, Woinarski JCZ, Legge S, Baseler M, Bentley J, Burbidge AA, et al. Degrees of population-level susceptibility of Australian terrestrial non-volant mammal species to predation by the introduced red fox (Vulpes vulpes) and feral cat (Felis catus). Wildl Res. 2018;45(7):645–57.
Murphy BP, Woolley L-A, Geyle HM, Legge SM, Palmer R, Dickman CR, et al. Introduced cats (Felis catus) eating a continental fauna: the number of mammals killed in Australia. Biol Cons. 2019;237:28–40.
Clavero M, García-Berthou E. Invasive species are a leading cause of animal extinctions. Trends Ecol Evol. 2005;20(3):110.
Sax Dov F, Gaines Steven D, Brown JH. Species invasions exceed extinctions on islands worldwide: a comparative study of plants and birds. Am Nat. 2002;160(6):766–83.
Loehle C, Eschenbach W. Historical bird and terrestrial mammal extinction rates and causes. Divers Distrib. 2012;18(1):84–91.
Greggor AL, Price CJ, Shier DM. Examining the efficacy of anti-predator training for increasing survival in conservation translocations: a systematic review protocol. Environ Evid. 2019;8(1):11.
Jolly CJ, Phillips BL. Rapid evolution in predator-free conservation havens and its effects on endangered species recovery. Conserv Biol. 2021;35(1):383–5.
Tavecchia G, Viedma C, Martínez-Abraín A, Bartolomé M-A, Gómez JA, Oro D. Maximizing re-introduction success: assessing the immediate cost of release in a threatened waterfowl. Biol Cons. 2009;142(12):3005–12.
West R, Letnic M, Blumstein DT, Moseby KE. Predator exposure improves anti-predator responses in a threatened mammal. J Appl Ecol. 2018;55(1):147–56.
Greggor AL, Blumstein DT, Wong BBM, Berger-Tal O. Using animal behavior in conservation management: a series of systematic reviews and maps. Environ Evid. 2019;8(1):23.
Grassel SM, Rachlow JL, Williams CJ. Spatial interactions between sympatric carnivores: asymmetric avoidance of an intraguild predator. Ecol Evol. 2015;5(14):2762–73.
Higdon SD, Diggins CA, Cherry MJ, Ford WM. Activity patterns and temporal predator avoidance of white-tailed deer (Odocoileus virginianus) during the fawning season. J Ethol. 2019;37(3):283–90.
Hébert M, Versace E, Vallortigara G. Inexperienced preys know when to flee or to freeze in front of a threat. Proc Natl Acad Sci. 2019;116(46):22918–20.
Stankowich T, Coss RG. Effects of risk assessment, predator behavior, and habitat on escape behavior in Columbian black-tailed deer. Behav Ecol. 2006;18(2):358–67.
Medill SA, Renard A, Larivière S. Ontogeny of antipredator behaviour in striped skunks (Mephitis mephitis). Ethol Ecol Evol. 2011;23(1):41–8.
Emlen DJ. The evolution of animal weapons. Annu Rev Ecol Evol Syst. 2008;39(1):387–413.
Rummel L, Martínez-Abraín A, Mayol J, Ruiz-Olmo J, Mañas F, Jiménez J, et al. Use of wild–caught individuals as a key factor for success in vertebrate translocations. Anim Biodivers Conserv. 2016;39(2):207–91.
Langridge J, Sordello R, Reyjol Y. Outcomes of wildlife translocations in protected areas: what is the type and extent of existing evidence? A systematic map protocol. Environmental Evidence. 2020;9(1):16.
Hayward MW, Kerley GIH. Fencing for conservation: Restriction of evolutionary potential or a riposte to threatening processes? Biol Cons. 2009;142(1):1–13.
Legge S, Woinarski JCZ, Burbidge AA, Palmer R, Ringma J, Radford JQ, et al. Havens for threatened Australian mammals: the contributions of fenced areas and offshore islands to the protection of mammal species susceptible to introduced predators. Wildl Res. 2018;45(7):627–44.
Ringma J, Legge S, Woinarski J, Radford J, Wintle B, Bode M. Australia's mammal fauna requires a strategic and enhanced network of predator-free havens. Nat Ecol Evol. 2018;2(3):410–1.
Evans K, Moore R, Harris S. The social and ecological integration of captive-raised adolescent male african elephants (Loxodonta africana) into a wild population. PLoS ONE. 2013;8(2):e55933.
Pinter-Wollman N, Isbell LA, Hart LA. Assessing translocation outcome: Comparing behavioral and physiological aspects of translocated and resident African elephants (Loxodonta africana). Biol Cons. 2009;142(5):1116–24.
Müller J, Wölfl M, Wölfl S, Müller DWH, Hothorn T, Heurich M. Protected areas shape the spatial distribution of a European lynx population more than 20 years after reintroduction. Biol Cons. 2014;177:210–7.
Muller LI, Murrow JL, Lupardus JL, Clark JD, Yarkovich JG, Stiver WH, et al. Genetic structure in Elk persists after translocation. J Wildl Manag. 2018;82(6):1124–34.
Wei F, Swaisgood R, Hu Y, Nie Y, Yan L, Zhang Z, et al. Progress in the ecology and conservation of giant pandas. Conserv Biol. 2015;29(6):1497–507.
Thalmann S, Peck S, Wise P, Potts JM, Clarke J, Richley J. Translocation of a top-order carnivore: tracking the initial survival, spatial movement, home-range establishment and habitat use of Tasmanian devils on Maria Island. Australian Mammalogy. 2016;38(1):68–79.
Woinarski JCZ, Burbidge AA, Harrison PL. Ongoing unraveling of a continental fauna: decline and extinction of Australian mammals since European settlement. Proc Natl Acad Sci. 2015;112(15):4531–40.
Morris SD, Brook BW, Moseby KE, Johnson CN. Factors affecting success of conservation translocations of terrestrial vertebrates: a global systematic review. Global Ecology and Conservation. 2021;28:e01630.
Muralidhar A, Moore FL, Easton LJ, Jamieson IG, Seddon PJ, van Heezik Y. Know your enemy? Conservation management causes loss of antipredator behaviour to novel predators in New Zealand robins. Anim Behav. 2019;149:135–42.
Blumstein DT, Daniel JC. The loss of anti-predator behaviour following isolation on islands. Proc Biol Sci. 2005;272(1573):1663–8.
Blumstein DT, Daniel JC, Springett BP. A test of the multi-predator hypothesis: rapid loss of antipredator behavior after 130 years of isolation. Ethology. 2004;110(11):919–34.
Jolly CJ, Webb JK, Phillips BL. The perils of paradise: an endangered species conserved on an island loses antipredator behaviours within 13 generations. Biol Lett. 2018;14:6.
Cooper WE, Pyron RA, Garland T. Island tameness: living on islands reduces flight initiation distance. Proc Biol Sci. 2014;281(1777):1–7.
Moseby KE, Cameron A, Crisp HA. Can predator avoidance training improve reintroduction outcomes for the greater bilby in arid Australia? Anim Behav. 2012;83(4):1011–21.
Moseby KE, Read JL, Paton DC, Copley P, Hill BM, Crisp HA. Predation determines the outcome of 10 reintroduction attempts in arid South Australia. Biol Cons. 2011;144(12):2863–72.
Berger-Tal O, Blumstein DT, Carroll S, Fisher RN, Mesnick SL, Owen MA, et al. A systematic survey of the integration of animal behavior into conservation. Conserv Biol. 2016;30(4):744–53.
Tay NE, Fleming PA, Warburton NM, Moseby KE. Predator exposure enhances the escape behaviour of a small marsupial, the burrowing bettong. Anim Behav. 2021;175:45–56.
Blumstein DT, Mari M, Daniel JC, Ardron JG, Griffin AS, Evans CS. Olfactory predator recognition: wallabies may have to learn to be wary. Anim Conserv. 2002;5(2):87–93.
Saxon-Mills EC, Moseby K, Blumstein DT, Letnic M. Prey naïveté and the anti-predator responses of a vulnerable marsupial prey to known and novel predators. Behav Ecol Sociobiol. 2018;72(9):151.
Steindler LA, Blumstein DT, West R, Moseby KE, Letnic M. Exposure to a novel predator induces visual predator recognition by naïve prey. Behav Ecol Sociobiol. 2020;74(8):102.
Bannister H, Brandle R, Moseby K. Antipredator behaviour of a native marsupial is relaxed when mammalian predators are excluded. Wildl Res. 2018;45(8):726–36.
Berger-Tal O, Greggor AL, Macura B, Adams CA, Blumenthal A, Bouskila A, et al. Systematic reviews and maps as tools for applying behavioral ecology to management and policy. Behav Ecol. 2019;30(1):1–8.
Rothstein HR, Sutton AJ, Borenstein M. Publication bias in meta-analysis: prevention, assessment and adjustments. Wiley; 2005.
Collaboration for Environmental Evidence. Guidelines for Systematic Review and Evidence Synthesis in Environmental Management. www.environmentalevidence.org/Documents/Guidelines/Guidelines4.2.pdf; 2013.
Fellows I. wordcloud: Word Clouds. R package version 2.6. 2018.
R Core Team. R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2020.
Foo YZ, O'Dea RE, Koricheva J, Nakagawa S, Lagisz M. A practical guide to question formation, systematic searching and study screening for literature reviews in ecology and evolution. Methods Ecol Evol. 2021.
Smart JM, Burling D. Radiology and the internet: a systematic review of patient information resources. Clin Radiol. 2001;56(11):867–70.
Kohl C, McIntosh EJ, Unger S, Haddaway NR, Kecke S, Schiemann J, et al. Online tools supporting the conduct and reporting of systematic reviews and systematic maps: a case study on CADIMA and review of existing tools. Environ Evid. 2018;7(1):8.
Edwards P, Clarke M, DiGuiseppi C, Pratap S, Roberts I, Wentz R. Identification of randomized controlled trials in systematic reviews: accuracy and reliability of screening records. Stat Med. 2002;21(11):1635–40.
Hedges LV. Distribution theory for glass's estimator of effect size and related estimators. J Educ Stat. 1981;6(2):107–28.
Konno K, Livoreil B, Pullin AS. CEECAT: Collaboration for Environmental Evidence Critical Appraisal Tool Version 0.2 (prototype). 2021.
Schneider CA, Rasband WS, Eliceiri KW. NIH Image to ImageJ: 25 years of image analysis. Nat Methods. 2012;9:671–5.
Snijders L, Greggor AL, Hilderink F, Doran C. Effectiveness of animal conditioning interventions in reducing human–wildlife conflict: a systematic map protocol. Environ Evid. 2019;8(1):10.
Nolte DL, Mason JR, Epple G, Aronov E, Campbell DL. Why are predator urines aversive to prey? J Chem Ecol. 1994;20(7):1505–16.
Smith ME, Belk MC. Risk assessment in western mosquitofish (Gambusia affinis): do multiple cues have additive effects? Behav Ecol Sociobiol. 2001;51:101–7.
Griffin AS, Blumstein DT, Evans CS. Training captive-bred or translocated animals to avoid predators. Conserv Biol. 2000;14(5):1317–26.
Edwards MC, Ford C, Hoy JM, FitzGibbon S, Murray PJ. How to train your wildlife: a review of predator avoidance training. Appl Anim Behav Sci. 2021;234:105170.
Bell AM, Hankison SJ, Laskowski KL. The repeatability of behaviour: a meta-analysis. Anim Behav. 2009;71:771–83.
Hantula DA. Editorial: replication and reliability in behavior science and behavior analysis: a call for a conversation. Perspectives on Behavior Science. 2019;42(1):1–11.
Harrison F. Getting started with meta-analysis. Methods Ecol Evol. 2011;2(1):1–10.
The authors thank Terena Solomons for training NDH in searching bibliographic databases and the stakeholders for their feedback on earlier drafts of the review protocol.
NDH is funded through the Australian Commonwealth Government RTP Scholarship, the Hermon Slade Foundation (HSF21054 to Nicola Mitchell) and the Holsworth Wildlife Research Endowment. The funding bodies played no role in the development of this study.
School of Biological Sciences, University of Western Australia, Crawley, WA, 6009, Australia
Natasha D. Harrison, Ben L. Phillips, Jan M. Hemmi, Adrian F. Wayne, Rochelle Steven & Nicola J. Mitchell
School of BioSciences, University of Melbourne, Parkville, VIC, 3010, Australia
Ben L. Phillips
Biodiversity and Conservation Science, Department of Biodiversity, Conservation and Attractions, Manjimup, WA, 6258, Australia
Adrian F. Wayne
WWF-Australia, Selby St, Wembley, WA, 6018, Australia
Rochelle Steven
NDH wrote the draft protocol. All authors conceived the study and contributed substantially to the final manuscript. All authors read and approved the final manuscript.
Correspondence to Natasha D. Harrison.
This study does not report on any studies conducted on humans, and exclusively reports on studies conducted on animals by other researchers. The authors rely on the responsibility of the relevant researchers to have sought the approval of an appropriate ethics committee. Studies that fail to report ethics approval (or fail to disclose this information once authors have been contacted if required) will be excluded from the final review.
Additional file 1. List of benchmark and test scoping articles.
Additional file 2. CEE critical appraisal tool sheet.
Additional file 3. Customised data collection sheet.
Additional file 4. ROSES checklist.
Harrison, N.D., Phillips, B.L., Hemmi, J.M. et al. Identifying the most effective behavioural assays and predator cues for quantifying anti-predator responses in mammals: a systematic review protocol. Environ Evid 10, 38 (2021). https://doi.org/10.1186/s13750-021-00253-9
Anti-predator behaviour
Behavioural adaptation
Behavioural assay
Predator avoidance
Predator cue
Prey naivete
What makes a reaction network "chemical"?
Stefan Müller (ORCID: orcid.org/0000-0002-3541-7856)1,
Christoph Flamm (ORCID: orcid.org/0000-0001-5500-2415)2 &
Peter F. Stadler (ORCID: orcid.org/0000-0002-5016-5191)2,3,4,5,6,7
Reaction networks (RNs) comprise a set X of species and a set \(\mathscr {R}\) of reactions \(Y\rightarrow Y'\), each converting a multiset of educts \(Y\subseteq X\) into a multiset \(Y'\subseteq X\) of products. RNs are equivalent to directed hypergraphs. However, not all RNs necessarily admit a chemical interpretation. Instead, they might contradict fundamental principles of physics such as the conservation of energy and mass or the reversibility of chemical reactions. The consequences of these necessary conditions for the stoichiometric matrix \(\mathbf {S}\in \mathbb {R}^{X\times \mathscr {R}}\) have been discussed extensively in the chemical literature. Here, we provide sufficient conditions for \(\mathbf {S}\) that guarantee the interpretation of RNs in terms of balanced sum formulas and structural formulas, respectively.
Chemically plausible RNs allow neither a perpetuum mobile, i.e., a "futile cycle" of reactions with non-vanishing energy production, nor the creation or annihilation of mass. Such RNs are said to be thermodynamically sound and conservative. For finite RNs, both conditions can be expressed equivalently as properties of the stoichiometric matrix \(\mathbf {S}\). The first condition is vacuous for reversible networks, but it excludes irreversible futile cycles and—in a stricter sense—futile cycles that even contain an irreversible reaction. The second condition is equivalent to the existence of a strictly positive reaction invariant. It is also sufficient for the existence of a realization in terms of sum formulas, obeying conservation of "atoms". In particular, these realizations can be chosen such that any two species have distinct sum formulas, unless \(\mathbf {S}\) implies that they are "obligatory isomers". In terms of structural formulas, every compound is a labeled multigraph, in essence a Lewis formula, and reactions comprise only a rearrangement of bonds such that the total bond order is preserved. In particular, for every conservative RN, there exists a Lewis realization, in which any two compounds are realized by pairwisely distinct multigraphs. Finally, we show that, in general, there are infinitely many realizations for a given conservative RN.
"Chemical" RNs are directed hypergraphs with a stoichiometric matrix \(\mathbf {S}\) whose left kernel contains a strictly positive vector and whose right kernel does not contain a futile cycle involving an irreversible reaction. This simple characterization also provides a concise specification of random models for chemical RNs that additionally constrain \(\mathbf {S}\) by rank, sparsity, or distribution of the non-zero entries. Furthermore, it suggests several interesting avenues for future research, in particular, concerning alternative representations of reaction networks and infinite chemical universes.
Most authors will agree that a chemical reaction network consists of a set X of chemical species or compounds and a set \(\mathscr {R}\) of chemical reactions, each describing the transformation of some (multi)set of educts into a (multi)set of products. Depending on the application, this basic construction may be augmented by assigning properties such as mass, energy, sum formulas, or structural formulas to the compounds. Similarly, reactions may be associated with rate constants, equilibrium constants, and so on. A formal theory of reaction networks (RN) describes a reaction on a given set of compounds X as a stoichiometric relation, i.e., as a pair of formal sums of chemical species \(x \in X\):
$$\begin{aligned} \sum _{x\in X} s^-_{xr} \, x \rightarrow \sum _{x\in X} s^+_{xr} \, x . \end{aligned}$$
(1)
The left-hand side in Eq. (1) lists the educts and the right-hand side gives the products of the reaction. The stoichiometric coefficients \(s^-_{xr}\in \mathbb {N}_0\) and \(s^+_{xr}\in \mathbb {N}_0\) denote the number of species \(x\in X\) that are consumed (on the left-hand side) or produced (on the right-hand side) by the reaction r, respectively. A species \(x\in X\) is an educt in reaction r if \(s^-_{xr}>0\) and a product if \(s^+_{xr}>0\). If \(s^+_{xr}=s^-_{xr}=0\), then species x does not take part in reaction r and is suppressed in the conventional chemical notation. The formal sums \(\sum _{x\in X} s^-_{xr} \, x\) and \(\sum _{x\in X} s^+_{xr} \, x\) form the complexes of educts \(r^-\) and products \(r^+\) of the reaction r. We denote the set of reactions under consideration by \(\mathscr {R}\) and call the pair \((X,\mathscr {R})\) a reaction network (RN). Throughout this contribution we will assume that both X and \(\mathscr {R}\) are non-empty and finite. Excluding explicit catalysis, that is, forbidding \(s^-_{xr} \, s^+_{xr}>0\), it suffices to consider the stoichiometric matrix \(\mathbf {S}\in \mathbb {Z}^{X \times \mathscr {R}}\). Its entries \(\mathbf {S}_{xr} = s^+_{xr} - s^-_{xr}\) describe the net production or consumption of species x in reaction r. In many practical applications, e.g. in the context of metabolic networks, RNs are embedded in an open system. In that manner, the consumption of nutrients and the production of waste can be modeled. We will return to this point only after discussing chemical RNs in isolation, i.e., as closed systems.
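For illustration (a toy example of our own, not taken from the cited literature), the stoichiometric matrix of the two-reaction network r1: 2A → B, r2: A + B → C is assembled in R as follows:

```r
X <- c("A", "B", "C")                # species
R <- c("r1", "r2")                   # reactions: 2A -> B and A + B -> C

s_minus <- matrix(c(2, 1,            # educt coefficients s^-_{xr}
                    0, 1,
                    0, 0), nrow = 3, byrow = TRUE, dimnames = list(X, R))
s_plus  <- matrix(c(0, 0,            # product coefficients s^+_{xr}
                    1, 0,
                    0, 1), nrow = 3, byrow = TRUE, dimnames = list(X, R))

S <- s_plus - s_minus                # entries S_{xr} = s^+_{xr} - s^-_{xr}
S
#    r1 r2
# A  -2 -1
# B   1 -1
# C   0  1
```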
Several graph representations have been considered as (simplified) models of a RN, see [1] for a recent summary. In contrast to the pair \((X,\mathscr {R})\), they do not always completely represent the RN.
The S-graph (species graph, compound graph, or substrate network in the context of metabolic networks) has the species as its vertices. A (directed) edge connects x to y if the RN contains a reaction that has x as an educt and y as a product [2, 3]. The corresponding construction in the kinetic setting is the interaction graph with undirected edges whenever \(\partial [x]/\partial [y]\not \equiv 0\), which are usually annotated by the sign of the derivative [4]. S-graphs have also proved to be useful in approximation algorithms for the minimal seed set problem [5], which asks for the smallest set of substrates that can generate all metabolites. Complementarily, reaction graphs model reactions as nodes, while edges denote shared molecules [6].
The complex-reaction graph simply has the complexes \(\mathscr {C}\) (the left- and right-hand sides of the reactions) as its vertex set and the reactions \(\mathscr {R}\) as its edge set. That is, two complexes \(r^-\) and \(r^+\) are connected by a directed edge if there is a reaction \(r=(r^-,r^+)\in \mathscr {R}\). Its incidence matrix \(\mathbf {Z} \in \mathbb {R}^{\mathscr {C}\times \mathscr {R}}\) (with entries \(\mathbf {Z}_{cr}=-1\) if \(c=r^-\), \(\mathbf {Z}_{cr}=1\) if \(c=r^+\), and \(\mathbf {Z}_{cr}=0\) otherwise) is linked to the stoichiometric matrix via \(\mathbf {S}=\mathbf {Y}\mathbf {Z}\), where the entries of the (stoichiometric) complex matrix \(\mathbf {Y} \in \mathbb {R}^{X \times \mathscr {C}}\) are the corresponding stoichiometric coefficients. The complex-reaction graph plays a key role in the analysis of chemical reaction networks with mass-action kinetics and arbitrary rate constants, as studied in classical "chemical reaction network theory" (CRNT) [7,8,9]. It gives rise to notions such as "complex balancing" and "deficiency", which allow the formulation of strong (global) stability results, see e.g. [10, 11].
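Continuing the toy network from above, its four complexes are 2A, B, A+B, and C, and the factorization \(\mathbf {S}=\mathbf {Y}\mathbf {Z}\) can be checked directly:

```r
Cplx <- c("2A", "B", "A+B", "C")     # complexes of the toy network

Y <- matrix(c(2, 0, 1, 0,            # stoichiometry of A in each complex
              0, 1, 1, 0,            # ... of B
              0, 0, 0, 1),           # ... of C
            nrow = 3, byrow = TRUE, dimnames = list(X, Cplx))

Z <- matrix(c(-1,  0,                # r1: 2A -> B
               1,  0,
               0, -1,                # r2: A+B -> C
               0,  1),
            nrow = 4, byrow = TRUE, dimnames = list(Cplx, R))

all(Y %*% Z == S)                    # TRUE: the factorization recovers S
```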
SR-graphs (Species-reaction networks) are bipartite graphs with different types of nodes for chemical species and reactions, respectively [12, 13]. As such, they can be endowed with additional annotations or extended with multiple edges to represent stoichiometric coefficients. In this extended form, they are faithful representations of chemical RNs. Alternatively, the edges are often annotated with the multiplicities of molecules, i.e., the stoichiometric coefficients; in this case, they completely specify the RN \((X,\mathscr {R})\). Undirected SR-graphs have a close relationship to classical deficiency theory [7, 9] and form the starting point for a qualitative theory of chemical RN kinetics [14]. More detailed information on qualitative kinetic behavior can be extracted from directed SR-graphs [15]. Both the S- and the R-graph can be extracted unambiguously from an SR-graph.
The bipartite SR-graphs can be interpreted as the König's representation [16] of directed hypergraphs. The connection between hypergraph and graph representations is discussed in some more detail in [17]. While SR-graphs and directed hypergraphs can be transformed into each other, they carry a very different semantic. For instance, the notions of path and connectivity are very different for bipartite graphs and directed hypergraphs [18]. It has been argued, therefore, that any graph representation of chemical networks necessarily treats edges as independent entities and thus fails to correctly capture the nature of chemical reactions [19, 20]. In a similar vein, [21] adopts the hypergraph representation and models (bio)chemical pathways as integer hyperflows to ensure mass balance at each vertex. Not every pair of an S- and an R-graph implies an SR-graph, and if they do, the result need not be unique [6].
Over the last decade, many authors, including one of us, have investigated metabolic networks from a statistical perspective and reached the conclusion that they are distinctly "non-random", presumably as the consequence of four billion years of evolution. This conclusion is typically reached by first converting a RN into one of the graph representations mentioned above. The choice of graphs is largely motivated by a desire to place metabolic or other chemical RNs within the scheme of small world and scale free networks and to analyze the RNs with the well-established tools of network science [19, 22]. Thus one concludes that graph-theoretical properties of metabolic networks are significantly different from the properties of randomly generated or randomized background models for chemical reaction networks [3, 23,24,25]. The insights gained from this "non-randomness" of metabolism, however, critically depend on what exactly the authors meant by "random", that is, how the background models are defined. In particular, it is important to understand whether differences between chemical networks and the background are caused by the implementation of universal properties (that any "chemistry-like" RN must satisfy) or whether they arise from the intrinsic structure of particular chemical networks.
To this end, however, we first need a comprehensive conception of what constitutes a chemistry-like reaction network. The different representations used in the literature highlight the fact that it is far from obvious which graphs or hypergraphs properly describe chemical RNs among a possibly much larger set of network models. There is a significant body of work in the literature that describes necessary conditions on the stoichiometric matrix \(\mathbf {S}\) that derive from key properties of chemical RNs, such as the conservation of mass or atoms in each reaction [8, 26,27,28,29,30]. In contrast, we are interested here in sufficient conditions with the aim of providing a concise characterization of RNs \((X,\mathscr {R})\) and their stoichiometric matrices \(\mathbf {S}\) that describe reaction systems that can reasonably be considered as "chemistry-like". This is of practical relevance in particular for the construction of artificial chemistry models [31,32,33,34] and random "chemistries": it is still an open problem how random RNs can be constructed that can serve as fair, chemistry-like background models. We therefore start with a brief survey of random artificial chemistries and randomized RNs. As we shall see in the following section, oftentimes no explicit provisions are made to include "chemical" constraints such as the conservation of matter and energy into the background models.
Beyond the practical importance for the generation of random chemistries, it is also of interest to ask whether and to what extent the stoichiometry of a RN constrains the underlying chemistry, i.e., the composition of compounds and the type of reactions. Chemical reaction networks have been studied as a paradigm of computation that is quite different from, but theoretically equally powerful as Turing machines [35,36,37,38]. In the case of DNA based computing [39], the field has matured to the point that a compiler for translating chemical reaction networks into nucleic acid strand displacement systems has become available [40]. If chemical reaction networks are to be used as computing devices, a necessary intermediate step is to design reaction systems that implement a given stoichiometric matrix. Constraints on the chemistry imposed by the desired network stoichiometry itself thus become an issue in the design process, prompting us to ask whether there are chemical limitations to the realizability of RNs also beyond the constraints imposed by thermodynamics.
The main part of this contribution is the characterization of chemistry-like RNs. Starting from the principles of energy conservation and conservation of matter, we derive equivalent conditions on the stoichiometric matrix \(\mathbf {S}\). We then introduce realizability of RNs by sum formulas and structural formulas as a first step towards a formalization of chemistry-like networks, and show that conservation of matter is already sufficient to guarantee the existence of such chemistry-like representations. Finally we discuss the consequences of the mathematical results for the construction of random RNs and address some open research questions.
A brief survey of random and randomized chemical RNs
Chemical reaction networks are specified either as a set of chemical reactions or as a system of differential equations describing their kinetics. Graphical models have been extracted from both.
Simple graph models of RNs
S-graphs have been used to explore statistical properties of large RNs. In this line of research, empirical S-graphs are compared to the "usual" random network models such as Erdős–Rényi (ER) random graphs, Small World networks in the sense of Watts and Strogatz [41], or the Albert–Barabási model of preferential attachment. Generative models for random graphs with given degree distributions were introduced in [42]. Not surprisingly, chemical reaction networks do not conform very well to either one of them. As noted early on, however, R-graphs of metabolic networks at least qualitatively fit the small world paradigm [22]. More sophisticated analyses detected evidence for modularity and hierarchical organization in metabolic networks [43], using random graph models with the same degree distributions as contrasts. Arita noted, however, that S-graphs are poor representations of biochemical pathways and proposed an analysis in terms of atom traces, concluding that "the metabolic world [of E. coli] is not small in terms of biosynthesis and degradation" [44]. The motivation to focus on atom maps comes from the insight that two compounds that are linked by reactions are only related by the chemical transformation if they share at least one atom.
A versatile generator for bipartite graphs that can handle joint degree distributions is described in [45]. Surprisingly, bipartite random graph models apparently have not been used to model chemistry. Instead of generative models such as the ER graph or the preferential attachment model, null models are often specified in terms of rewiring, that is, edit operations on the graph. Rewiring rules define a Markov Process on a set of graphs that can produce samples of randomized networks. The key idea is to specify the rewiring procedure in such a way that it preserves graph properties that are perceived to be important [46, 47]. For example, the degrees of all vertices in a digraph are preserved when a pair of directed edges \(x_1y_1\) and \(x_2y_2\) is replaced by \(x_1y_2\) and \(x_2y_1\) as long as \(x_1\) and \(x_2\) have the same out-degree while \(y_1\) and \(y_2\) have the same in-degree. Randomization procedures for bipartite graphs have become available in the context of ecological networks [48] or trade networks [49]. To our knowledge they have not been used for SR graphs.
Random (directed) hypergraphs
In [50] a hypergraph is defined as a multiset of hyperedges, each of which in turn is a multiset of vertices. In this setting, a random hypergraph is specified by the probabilities \(p_k\) to include a hyperedge e with cardinality \(|e|=k\). Similar models for undirected hypergraphs are used e.g. in [51]. In a directed hypergraph, every hyperedge is defined as the pair \((e_-,e_+)\) consisting of the multisets \(e_-\) and \(e_+\). The construction of [50] thus naturally generalizes to directed hypergraphs specified by picking e with probability \(p_{|e_-|,|e_+|}\). In the context of chemistry this amounts to picking educt and product sets for reactions with probabilities depending on their cardinality. This type of random (directed) hypergraph model is the obvious generalization of the Erdős–Rényi (di)graphs. A certain class of random directed hypergraphs with \(|e_-|=2\) and \(|e_+|=1\) for all hyperedges e is considered in [52].
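A minimal generator for this Erdős–Rényi-type model might look as follows in R; for brevity, educts and products are restricted to plain sets (no stoichiometric coefficients larger than one) of cardinality at most two, and all names are our own choices:

```r
set.seed(1)
species <- paste0("x", 1:6)
p <- matrix(0.02, nrow = 2, ncol = 2)  # p[k, l] for |e_-| = k, |e_+| = l

subsets <- function(v, k) combn(v, k, simplify = FALSE)

edges <- list()
for (k in 1:2) for (l in 1:2) {
  for (em in subsets(species, k)) for (ep in subsets(species, l)) {
    if (!setequal(em, ep) && runif(1) < p[k, l])  # skip trivial e_- = e_+
      edges[[length(edges) + 1]] <- list(educts = em, products = ep)
  }
}
length(edges)                          # number of sampled reactions
```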
Hypergraphs are also amenable to rewiring procedures that ensure the preservation of certain local or global properties. For instance, [17] proposes a scheme that preserves the number and cardinality of the hyperedges (replacing a randomly selected \((e_-,e_+)\) with a randomly selected pair of disjoint subsets \((e_-',e_+')\) with \(|e_-|=|e_-'|\) and \(|e_+|=|e_+'|\)). On this basis, the authors conclude that the hierarchical structure hypothesis proposed in [43] is not supported for metabolic networks when a clustering coefficient is defined for directed hypergraphs. [17] also compares S- and R-graphs of metabolic networks with ensembles of S- and R-graphs derived from randomized directed hypergraphs and casts further doubt on previously reported scaling results. Randomization procedures for hypergraphs that preserve local clustering are described in [53]. An approach that uses a chemical graph rewriting model to ensure soundness of reactions is described in the MSc thesis [54].
In [25] networks are constructed in a stepwise procedure starting with directed graphs whose arcs are then re-interpreted as directed hyperarcs by combining multiple arcs. This process is guided by matching the degree distribution of the implied S-graph.
Reaction universes: random subhypergraphs
Instead of generating a random RN directly from a statistical model or rewiring a given one, one can also start from a reaction universe RU, that is, a RN that contains all species of interest and all known or inferred reactions between them. Without loss of generality, we can think of the RU as a directed hypergraph in the sense of [50], where the multi-set formalism accounts for the stoichiometric coefficients. In contrast to the generative and rewiring approaches, the a priori specification of an RU ensures a high level of chemical realism, and RNs can now be sampled by randomly selecting subsets of directed hyperedges, that is, chemical reactions. If the RU already ensures conservation of matter or energy, these properties are inherited by the sub-networks. In order to generate random metabolic networks, reactions can be drawn from databases such as KEGG or EcoCyc [55, 56]. Such selections of reactions are sometimes called "metabolic genotypes" since the available reactions are associated with enzymes, whose presence or absence is determined by an organism's genome [55]. In some studies, additional constraints such as the production of biomass are exploited and networks are sampled e.g. by combining Flux-Balance Analysis (FBA) and a Markov Chain Monte Carlo (MCMC) approach [55, 57].
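In its simplest form, this sampling scheme reduces to drawing random subsets of reactions; a short R sketch (with a purely hypothetical reaction universe) reads:

```r
set.seed(2)
ru <- sprintf("R%05d", 1:2000)  # 2000 candidate reaction IDs (hypothetical;
                                # in practice drawn from KEGG or EcoCyc)

# each "metabolic genotype" is a random subset of 200 reactions from the RU;
# chemical soundness is inherited from the curated universe
genotypes <- replicate(100, sample(ru, 200), simplify = FALSE)
length(genotypes)               # ensemble of 100 random sub-networks
```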
A characterization of chemistry-like reaction networks
In this section, we start from reaction networks that are specified as abstract stoichiometric relations, Eq. (1), and identify minimal constraints necessary to avoid blatantly unphysical behavior.
Notation and preliminaries
Let X be a finite set and let \(\mathscr {R}\) be a set of pairs of formal sums of elements of X with non-negative integer coefficients according to Eq. (1). Then we call the pair \((X,\mathscr{R})\) a reaction network (RN). Equivalently, a RN is a directed, integer-weighted hypergraph with directed edges \((r^-,r^+)\) such that \(x\in r^-\) with weight \(s^-_{xr}>0\) and \(x\in r^+\) with weight \(s^+_{xr}>0\). The weights \(s^-_{xr}\) and \(s^+_{xr}\) are usually called the stoichiometric coefficients. We set \(s^-_{xr}=0\) and \(s^+_{xr}=0\) if \(x\notin r^-\) and \(x\notin r^+\), respectively. We deliberately dropped the qualifier chemical here since, as we shall see, not every RN \((X,\mathscr {R})\) makes sense as a model of a chemical system. In fact, the aim of this contribution is to characterize the set of RNs that make sense as models of chemistry.
Representation of a RN as König multigraph of the corresponding directed hypergraph. Round vertices (with chemical structures shown inside) designate compounds \(x\in X\), while reactions \(r\in \mathscr {R}\) are shown as square vertices. Stoichiometric coefficients are indicated by the number of edges from x to r for \(s^-_{xr}>0\) and from r to x for \(s^+_{xr}>0\), respectively. A flow (an overall reaction) is given by non-negative integer multiples of individual reactions. Here the coefficients \(v_r\) are indicated in the square nodes for each reaction r. The flow shown here defines Oró's [58] route from HCN to adenine (marked by red triangles) and corresponds to the net reaction \(5\,\text{HCN} \longrightarrow \text{H}_{5}\text{C}_{5}\text{N}_{5}\). Figure adapted from [59]
Such directed hypergraphs are most conveniently drawn as (bipartite) König multigraphs, with distinct types of vertices representing compounds \(x\in X\) and reactions \(r\in \mathscr {R}\), respectively. Stoichiometric coefficients larger than one appear as multiple edges. See the example in Fig. 1.
For each reaction \(r\in \mathscr {R}\), we define its support as \({{\,\mathrm{supp}\,}}(r)=\{x \mid s^-_{xr}+s^+_{xr}>0\}\); that is, \(x\in {{\,\mathrm{supp}\,}}(r)\) if it appears as an educt, a product, or a catalyst in r. The stoichiometric matrix of \((X,\mathscr {R})\) is \(\mathbf {S}\in \mathbb {Z}^{X \times \mathscr {R}}\) with entries \(\mathbf {S}_{xr}= s^+_{xr} - s^-_{xr}\).
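For concreteness, the following minimal sketch (Python with numpy; the dict-based encoding of reactions and all names are ours) assembles \(\mathbf {S}\) from a list of reactions. Since \(\mathbf {S}_{xr}=s^+_{xr}-s^-_{xr}\), a catalyst, which appears with equal coefficients on both sides, cancels:

```python
import numpy as np

def stoichiometric_matrix(species, reactions):
    """S[x, r] = s_plus[x, r] - s_minus[x, r], for reactions given as pairs
    of {species: coefficient} dicts (educts, products)."""
    idx = {x: i for i, x in enumerate(species)}
    S = np.zeros((len(species), len(reactions)), dtype=int)
    for r, (educts, products) in enumerate(reactions):
        for x, s in educts.items():
            S[idx[x], r] -= s
        for x, s in products.items():
            S[idx[x], r] += s
    return S

# a catalyzed import C -> C + A and a proper reaction A + B -> 2C
print(stoichiometric_matrix(["A", "B", "C"],
                            [({"C": 1}, {"C": 1, "A": 1}),
                             ({"A": 1, "B": 1}, {"C": 2})]))
# [[ 1 -1]
#  [ 0 -1]
#  [ 0  2]]
```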
We distinguish proper reactions r, for which there is both \(x\in X\) with \({\mathbf {S}}_{xr}<0\) and \(y\in X\) with \({\mathbf {S}}_{yr}>0\), import reactions for which \({\mathbf {S}}_{xr}\ge 0\) for all \(x\in X\), and export reactions for which \({\mathbf {S}}_{xr}\le 0\) for all \(x\in X\). We write \(\varnothing\) for the empty formula, hence \(\varnothing\) \(\longrightarrow\) A and B \(\longrightarrow\) \(\varnothing\) designate the import of A and the export of B, respectively. Note that this definition also allows catalyzed import and export reactions, e.g., C \(\longrightarrow\) C + A or B + C \(\longrightarrow\) C.
In thermodynamics, a system is closed if it does not exchange matter with its environment, but may exchange energy in the form of work or heat [60]. For a RN, this rules out import and export reactions.
A RN \((X,\mathscr {R})\) is closed if all reactions \(r\in \mathscr {R}\) are proper.
Given an arbitrary RN \((X,\mathscr {R})\), there is a unique inclusion-maximal closed RN contained in \((X,\mathscr {R})\), namely \((X,\mathscr {R}^\text {p})\) with
$$\begin{aligned} \mathscr {R}^p= \{r\in \mathscr {R}\mid r \text { is proper}\}. \end{aligned}$$
We will refer to \((X,\mathscr {R}^p)\) as the proper part of \((X,\mathscr {R})\).
For every reaction r, one can define a reverse reaction \(\overline{r}\) that is obtained from r by exchanging the role of products and educts. That is, \(\overline{r}\) is the reverse of r iff, for all \(x\in X\), it holds that
$$\begin{aligned} s^-_{x\overline{r}} = s^+_{xr} \quad \text {and}\quad s^+_{x\overline{r}} = s^-_{xr} . \end{aligned}$$
While thermodynamics dictates that every reaction is reversible in principle (albeit possibly with an extremely low reaction rate), it is a matter of modeling whether sufficiently slow reactions are included in the reaction set \(\mathscr {R}\).
Chemical reactions can be composed and aggregated into "overall reactions". In the literature on metabolic networks, pathways are of this form. An overall reaction consists of multiple reactions that collectively convert a set of educts into a set of products. It can be represented as a formal sum of reactions \(\sum _{r\in \mathscr {R}} \mathbf {v}_r \, r\), where the vector of multiplicities \(\mathbf {v}\in \mathbb {N}^\mathscr {R}_0\) has non-negative integer entries. Thereby, \([\mathbf {S}\mathbf {v}]_x\) determines the net consumption or production of compound x in the overall reaction specified by \(\mathbf {v}\).
A vector \(\mathbf {v}\in \mathbb {N}_0^\mathscr {R}\) can be interpreted as an integer hyperflow in the following sense: If x is neither an educt nor a product of the overall reaction specified by \(\mathbf {v}\), then \([\mathbf {S}\mathbf {v}]_x = \sum _r (s^+_{xr}-s^-_{xr}) \mathbf {v}_r = 0\), i.e., every unit of x that is produced by some reaction r with \(\mathbf {v}_r>0\) is consumed by another reaction \(r'\) with \(\mathbf {v}_{r'}>0\).
The effect of an overall reaction can be represented via formal sums of species in two ways: as composite reactions,
$$\begin{aligned} \sum _{x\in X} \left( \sum _{r\in \mathscr {R}} s^-_{xr}\mathbf {v}_r\right) x \longrightarrow \sum _{x\in X} \left( \sum _{r\in \mathscr {R}} s^+_{xr}\mathbf {v}_r\right) x , \end{aligned}$$
or as net reactions,
$$\begin{aligned} \begin{aligned} \sum _{x\in X}&\left( \sum _{r\in \mathscr {R}} \left[ (s^-_{xr}-s^+_{xr})\mathbf {v}_r\right] _{+}\right) x \longrightarrow \sum _{x\in X} \left( \sum _{r\in \mathscr {R}} \left[ (s^+_{xr}-s^-_{xr})\mathbf {v}_r\right] _{+}\right) x . \end{aligned} \end{aligned}$$
Here we use the notation \([c]_+ = c\) if \(c>0\) and \([c]_+=0\) for \(c\le 0\). In Eq. (5), intermediates, i.e., formal catalysts are cancelled. Hence, the net consumption (or production) of a species x is \(\sum _{r\in \mathscr {R}}[(s^-_{xr}-s^+_{xr})\mathbf {v}_r]_{+}=-[\mathbf {S}\mathbf {v}]_x\) if \([\mathbf {S}\mathbf {v}]_x<0\) (or \(\sum _{r\in \mathscr {R}}[(s^+_{xr}-s^-_{xr})\mathbf {v}_r]_{+}=[\mathbf {S}\mathbf {v}]_x\) if \([\mathbf {S}\mathbf {v}]_x>0\)).
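The composite reaction of Eq. (4) and the net reaction of Eq. (5) are easily computed from a flow \(\mathbf {v}\); a minimal sketch (Python with numpy; names ours), using the educt and product coefficient matrices \(s^-\) and \(s^+\):

```python
import numpy as np

def composite_and_net(s_minus, s_plus, v):
    """Composite reaction (Eq. (4)) and net reaction (Eq. (5)) of the overall
    reaction given by a flow v; [c]_+ is np.maximum(c, 0) coordinatewise."""
    lhs, rhs = s_minus @ v, s_plus @ v   # educt/product sides of Eq. (4)
    net = rhs - lhs                      # equals S v
    return (lhs, rhs), (np.maximum(-net, 0), np.maximum(net, 0))

# A -> B and B -> C with v = (1, 1): composite A + B -> B + C, net A -> C
s_minus = np.array([[1, 0], [0, 1], [0, 0]])
s_plus = np.array([[0, 0], [1, 0], [0, 1]])
print(composite_and_net(s_minus, s_plus, np.array([1, 1])))
```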
Fig. 1 shows the RN of Oró's prebiotic adenine synthesis from HCN and the integer hyperflow \(\mathbf {v}\) corresponding to the net reaction "5 HCN \(\longrightarrow\) adenine" as an example.
While a restriction to integer hyperflows \(\mathbf {v}\in \mathbb {N}_0^\mathscr {R}\) is necessary in many applications, see e.g. [21] for a detailed discussion, it appears mathematically more convenient to use the more general setting of fluxes \(\mathbf {v}\in \mathbb {R}^\mathscr {R}_{\ge 0}\) as in the analysis of metabolic pathways. To emphasize the connection with the body of literature on network (hyper)flows, we will uniformly speak of flows.
For any vector \(\mathbf {v}\in \mathbb {R}^\mathscr {R}\), we write \(\mathbf {v}\ge 0\) if \(\mathbf {v}\) is non-negative, \(\mathbf {v}>0\) if \(\mathbf {v}\) is non-negative and non-zero, that is, at least one entry is positive, and \(\mathbf {v}\gg 0\) if all entries of \(\mathbf {v}\) are positive. Analogously, we write \(\mathbf {v}\le 0\), \(\mathbf {v}<0\), and \(\mathbf {v}\ll 0\). In particular, a vector \(\mathbf {v}\in \mathbb {R}^\mathscr {R}\) is called a flow if \(\mathbf {v}\ge 0\).
A non-trivial flow satisfies \(\mathbf {v}>0\), i.e., \(\mathbf {v}\ne 0\). Two flows \(\mathbf {v}^1\) and \(\mathbf {v}^2\) are called parallel if they describe the same net reaction; in particular, \(\mathbf {S}\mathbf {v}^1 = \mathbf {S}\mathbf {v}^2\) for parallel flows.
Futile cycles in a RN are non-trivial flows for which educts and products coincide and thus the net reaction is empty.
A flow \(\mathbf {v}>0\) is a futile cycle if \(\mathbf {S}\mathbf {v}=0\).
We use the term futile cycle in the strict sense to describe the concurrent activity of multiple reactions (or pathways) having no net effect other than the dissipation of energy. In the literature on metabolic networks often a less restrictive concept is used that allows certain compounds (usually co-factors, ATP/ADP, redox equivalents, or solvents) to differ between products and educts, see e.g. [61,62,63,64]. In this setting, the net reaction of concurrent glycolysis and gluconeogenesis, namely the hydrolysis of ATP, is viewed as energy dissipation rather than a chemical reaction. In our setting, \({\text{ATP}} + {{\text{H}}_{2}} {\text{O}} \longrightarrow {\text{ADP}}+ {{\text{P}}_i^{-}} + {\text{H}}^{+}\), is a net reaction like any other, and hence a futile cycle would only arise if recycling of ATP, i.e., ADP + \({\text{P}}_i^{-} + {{\text{H}}^{+}} \longrightarrow {\text{ATP}} + {{\text H}_{2}}{\text{O}}\), was included as well.
If a RN has a futile cycle, it also has an integer futile cycle \(\mathbf {v}\in \mathbb {N}_0^\mathscr {R}\), since \(\mathbf {S}\) has integer entries and thus its kernel has a rational basis, which can be scaled with the least common denominator to have integer entries.
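In exact arithmetic, the scaling step is immediate; a minimal sketch (Python; names ours):

```python
from fractions import Fraction
from math import lcm

def integer_futile_cycle(v):
    """Clear denominators: scale a rational futile cycle to the smallest
    integer multiple, as in the remark above."""
    v = [Fraction(x) for x in v]
    d = lcm(*(x.denominator for x in v))
    return [int(x * d) for x in v]

print(integer_futile_cycle(["1/2", "1/3", "0"]))   # [3, 2, 0]
```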
A pair \((X',\mathscr {R}')\) is a subnetwork of \((X,\mathscr {R})\) if \(X'\subseteq X\), \(\mathscr {R}'\subseteq \mathscr {R}\), and \(r\in \mathscr {R}'\) implies \({{\,\mathrm{supp}\,}}(r)\subseteq X'\). We say that a property P of a RN is hereditary if "\((X,\mathscr {R})\) has P" implies that every subnetwork \((X',\mathscr {R}')\) has P.
Chemical reactions are subject to thermodynamic constraints that are a direct consequence of the conservation of energy, the conservation of mass, and the reversibility of chemical reactions. In the context of chemistry, conservation of mass is of course a consequence of the conservation of atoms throughout a chemical reaction. In the following sections, we investigate how these physical principles constrain RNs. Since we have introduced RNs in terms of abstract molecules and reactions, Eq. (1), we express the necessary conditions in terms of the stoichiometric matrix \(\mathbf {S}\), which fully captures only the proper part of the RN. Throughout this work, therefore, we assume that \((X,\mathscr {R})\) is a closed RN, unless explicitly stated otherwise.
Thermodynamic constraints
Reaction energies and perpetuum mobiles
Every chemical reaction r is associated with a change in the Gibbs free energy of educts and products. We therefore introduce a vector of reaction (Gibbs free) energies \(\mathbf {g}\in \mathbb {R}^\mathscr {R}\) and write \((X,\mathscr {R},\mathbf {g})\) for a RN endowed with reaction energies. The reaction energy for an overall reaction is the total energy of the individual reactions involved. In terms of \(\mathbf {v}\in \mathbb {R}^\mathscr {R}\), it can be expressed as
$$\begin{aligned} \sum _{r\in \mathscr {R}} \mathbf {g}_r\mathbf {v}_r = \mathbf {g}^\top \mathbf {v}= \langle \mathbf {g},\mathbf {v}\rangle , \end{aligned}$$
where \(\langle \cdot ,\cdot \rangle\) denotes the scalar product on \(\mathbb {R}^\mathscr {R}\).
Futile cycles may act as a chemical version of a perpetuum mobile. This is the case whenever a flow \(\mathbf {v}> 0\) with zero formal net reaction, \(\mathbf {S}\mathbf {v}= 0\), increases or decreases energy, i.e., if \(\langle \mathbf {g},\mathbf {v}\rangle \ne 0\).
Let \((X,\mathscr {R},\mathbf {g})\) be a RN with reaction energies. A flow \(\mathbf {v}> 0\) is a perpetuum mobile if \(\mathbf {S}\mathbf {v}=0\) and \(\langle \mathbf {g},\mathbf {v}\rangle \ne 0\).
The classical concept of a perpetuum mobile decreases its energy, \(\langle \mathbf {g},\mathbf {v}\rangle < 0\), thereby "creating" energy for its environment. An "anti" perpetuum mobile with \(\langle \mathbf {g},\mathbf {v}\rangle > 0\) would "annihilate" energy. Either situation violates energy conservation and thus cannot be allowed in a chemical RN. Obviously, there is no perpetuum mobile if \((X,\mathscr {R})\) does not admit a futile cycle.
In fact, thermodynamics dictates that Gibbs free energy is a state function. Two parallel flows \(\mathbf {v}^1\) and \(\mathbf {v}^2\) therefore must have the same associated net reaction energies. That is, \(\mathbf {S}\mathbf {v}^1=\mathbf {S}\mathbf {v}^2\) implies \(\langle \mathbf {g}, \mathbf {v}^1\rangle = \langle \mathbf {g},\mathbf {v}^2\rangle\). Equivalently, any vector \(\mathbf {v}=\mathbf {v}^1-\mathbf {v}^2 \in \mathbb {R}^\mathscr {R}\) with \(\mathbf {S}\mathbf {v}=0\) must satisfy \(\langle \mathbf {g},\mathbf {v}\rangle =0\). That is, \(\mathbf {g}\in (\ker \mathbf {S})^\perp\).
Definition 4

Let \((X,\mathscr {R},\mathbf {g})\) be a RN with reaction energies. Then \((X,\mathscr {R},\mathbf {g})\) is thermodynamic if \(\mathbf {v}\in \mathbb {R}^\mathscr {R}\) and \(\mathbf {S}\mathbf {v}=0\) imply \(\langle \mathbf {g},\mathbf {v}\rangle =0\), that is, if \(\mathbf {g}\in (\ker \mathbf {S})^\perp\).
Let \((X,\mathscr{R},\mathbf{g})\) be thermodynamic, \((X',\mathscr {R}')\) be a subnetwork of \((X,\mathscr{R})\), and \(\mathbf {g}'\) be the restriction of \(\mathbf {g}\) to \(\mathscr {R}'\). Then \(\mathbf {v}'\in \mathbb {R}^{\mathscr {R}'}\) corresponds to \(\mathbf {v}\in \mathbb {R}^{\mathscr {R}}\) with \({{\,\mathrm{supp}\,}}(\mathbf {v})\subseteq \mathscr {R}'\), and thus \(\mathbf {v}'\in \mathbb {R}^{\mathscr {R}'}\) and \(\mathbf {S}'\mathbf {v}'=0\) imply \(\mathbf {S}\mathbf {v}=0\) and further \(\langle \mathbf {g}',\mathbf {v}'\rangle =\langle \mathbf {g},\mathbf {v}\rangle =0\). Hence \((X',\mathscr {R}',\mathbf {g}')\) is again thermodynamic.
We note that the reaction energies of a reaction r and its reverse \(\overline{r}\) necessarily cancel:
Lemma 5
If r and \(\overline{r}\) are reverse reactions in a thermodynamic network \((X,\mathscr {R},\mathbf {g})\), then \(\mathbf {g}_{\overline{r}}=-\mathbf {g}_r\).
If r and \(\overline{r}\) are reverse reactions, then \(\mathbf {v}\) with \(\mathbf {v}_r=\mathbf {v}_{\overline{r}}=1\) (and \(\mathbf {v}_{r'}=0\) otherwise) satisfies \(\mathbf {S}\mathbf {v}=0\). Thus \(\langle \mathbf {g},\mathbf {v}\rangle = \mathbf {g}_r+\mathbf {g}_{\overline{r}}=0\). \(\square\)
Digression: molecular energies and Hess' Law
Every molecular species \(x\in X\) has an associated Gibbs free energy of formation. For notational simplicity, we write \(\mathbf {G}_x\) instead of the commonly used symbol \(G_\mathrm {f}(x)\). The corresponding vector of molecular energies is denoted by \(\mathbf {G}\in \mathbb {R}^X\). Molecular energies and reaction energies \(\mathbf {g}\in \mathbb {R}^\mathscr {R}\) are related by Hess' law: For every reaction \(r \in \mathscr {R}\), it holds that
$$\begin{aligned} \mathbf {g}_r = \sum _{x\in X} \mathbf {G}_x (s^+_{xr}-s^-_{xr}) = \sum _{x\in X} \mathbf {G}_x \, \mathbf {S}_{xr} . \end{aligned}$$
In matrix form, the relationship between reaction energies \(\mathbf {g}\) and molecular energies \(\mathbf {G}\) amounts to
$$\begin{aligned} \mathbf {g}= \mathbf {S}^\top \mathbf {G}. \end{aligned}$$
Proposition 6

Let \((X,\mathscr {R})\) be a RN and \(\mathbf {g}\in \mathbb {R}^\mathscr {R}\) be a vector of reaction energies. Then \((X,\mathscr {R},\mathbf {g})\) is thermodynamic if and only if there is a vector of molecular energies \(\mathbf {G}\in \mathbb {R}^X\) satisfying Hess' law, Eq. (7).
By Definition 4, \((X,\mathscr {R},\mathbf {g})\) is thermodynamic if and only if \(\mathbf {g}\in (\ker \mathbf {S})^\perp = {{\,\mathrm{im}\,}}\mathbf {S}^\top\), that is, if and only if there is \(\mathbf {G}\) such that \(\mathbf {g}= \mathbf {S}^\top \mathbf {G}\), i.e., satisfying Hess' law. \(\square\)
Note that the vector of molecular energies \(\mathbf {G}\) is not uniquely determined by \(\mathbf {g}\) in general.
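For illustration, molecular energies can be recovered numerically from reaction energies; in the following minimal sketch (Python with numpy; names ours), np.linalg.lstsq returns the minimum-norm solution of \(\mathbf {g}=\mathbf {S}^\top \mathbf {G}\) whenever one exists, and by the non-uniqueness just noted, adding any element of \(\ker \mathbf {S}^\top\) (here, any constant shift) yields another valid \(\mathbf {G}\):

```python
import numpy as np

def molecular_energies(S, g, tol=1e-9):
    """Return a minimum-norm G with g = S^T G (Hess' law, Eq. (8)) if the
    network with energies g is thermodynamic (Prop. 6), else None."""
    G, *_ = np.linalg.lstsq(S.T, g, rcond=None)
    return G if np.allclose(S.T @ G, g, atol=tol) else None

# A -> B, B -> C, A -> C: g = (-1, -1, -2) is consistent, g = (-1, -1, -1) not
S = np.array([[-1, 0, -1], [1, -1, 0], [0, 1, 1]])
print(molecular_energies(S, np.array([-1.0, -1.0, -2.0])))   # approx. (1, 0, -1)
print(molecular_energies(S, np.array([-1.0, -1.0, -1.0])))   # None
```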
Reversible and irreversible networks
To begin with, we consider purely reversible or irreversible RNs.
A RN \((X,\mathscr {R})\) is reversible if \(r\in \mathscr {R}\) implies \(\overline{r}\in \mathscr {R}\) and irreversible if \(r\in \mathscr {R}\) implies \(\overline{r}\notin \mathscr {R}\).
In reversible networks, general vectors \(\mathbf {v}\in \mathbb {R}^\mathscr {R}\) have corresponding flows \(\mathbf {{\tilde{v}}} \ge 0\) with the same net reactions and, in the case of thermodynamic networks, with the same energies.
Lemma 8

Let \((X,\mathscr {R},\mathbf {g})\) be a reversible RN (with reaction energies), and let \(\mathbf {v}\in \mathbb {R}^\mathscr {R}\) be a vector. Then there is a flow \({\mathbf{\tilde v}} \ge 0\) such that \(\mathbf {S} {\mathbf {{\tilde{v}}}} = \mathbf {S}\mathbf {v}\). If \((X,\mathscr {R},\mathbf {g})\) is thermodynamic, then further \(\langle \mathbf {g}, \mathbf {{\tilde{v}}} \rangle = \langle \mathbf {g}, \mathbf {v}\rangle\).
If \(\mathbf {v}\ge 0\), there is nothing to show. Otherwise, there are two flows \(\mathbf {v^1} \ge 0\) and \(\mathbf {v^2} > 0\) such that \(\mathbf {v}=\mathbf {v^1}-\mathbf {v^2}\). Since \((X,\mathscr {R})\) is reversible, each reaction \(r\in \mathscr {R}\) has a reverse \(\overline{r}\), and we define the reverse flow \(\mathbf {{\bar{v}}^2}>0\) such that \(\mathbf {{\bar{v}}^2}_{r} = \mathbf {v}^\mathbf {2}_{\overline{r}}\). By construction, it satisfies \(\mathbf {S}\mathbf {{\bar{v}}^2} = - \mathbf {S}\mathbf {v^2}\).
Now consider the flow \(\mathbf {{\tilde{v}}} = \mathbf {v^1}+\mathbf {{\bar{v}}^2} > 0\). It satisfies
$$\begin{aligned}\mathbf {S}\mathbf {{\tilde{v}}}= \mathbf {S}(\mathbf {v^1}+\mathbf {{\bar{v}}^2}) = \mathbf {S}(\mathbf {v^1} - \mathbf {v^2}) = \mathbf {S}\mathbf {v}.\end{aligned}$$
If the network is thermodynamic, then the reverse flow satisfies \(\langle \mathbf {g}, {{\bar{\mathbf{v}}}^2} \rangle = - \langle \mathbf {g}, \mathbf {v^2} \rangle\), by Lemma 5. Hence,
\(\displaystyle \langle \mathbf {g}, \mathbf {\tilde{v}} \rangle = \langle \mathbf {g}, \mathbf {v^1} + \mathbf {{\bar{v}}^2} \rangle = \langle \mathbf {g}, \mathbf {v^1} - \mathbf {v^2} \rangle = \langle \mathbf {g}, \mathbf {v}\rangle\). \(\square\)
By definition, a thermodynamic network cannot contain a perpetuum mobile. Conversely, by the result below, if a reversible network is not thermodynamic, then it contains a perpetuum mobile.
Let \((X,\mathscr {R},\mathbf {g})\) be a reversible RN with reaction energies. Then, the following two statements are equivalent:
\((X,\mathscr {R},\mathbf {g})\) is thermodynamic.
\((X,\mathscr {R},\mathbf {g})\) contains no perpetuum mobile.
Suppose \((X,\mathscr {R},\mathbf {g})\) is not thermodynamic. That is, there is \(\mathbf {v}\in \mathbb {R}^\mathscr {R}\) with \(\mathbf {S}\mathbf {v}=0\) and \(\langle \mathbf {v}, \mathbf {g}\rangle \ne 0\). By Lemma 8, there is \(\mathbf {{\tilde{v}}}\ge 0\) with \(\mathbf {S}\mathbf {{\tilde{v}}}=0\) and \(\langle \mathbf {{\tilde{v}}}, \mathbf {g}\rangle \ne 0\), that is, a perpetuum mobile. \(\square\)
The exclusion of a perpetuum mobile is, however, not sufficient for a non-reversible network to be thermodynamic.
Example 10
Consider the RN with reactions \(1:{\mathrm A}\rightarrow {\mathrm B}\), \(\overline{1}:{\mathrm B}\rightarrow {\mathrm A}\), \(2:{\mathrm B}\rightarrow {\mathrm C}\), and \(3:{\mathrm A}\rightarrow {\mathrm C}\) and reaction energies, e.g., \(\mathbf {g}=(1,-1,-3,-1)^\top\) (any choice with \(\mathbf {g}_{\overline{1}}=-\mathbf {g}_1\), \(\mathbf {g}_1+\mathbf {g}_2=-2\), and \(\mathbf {g}_3=-1\) behaves in the same way).
It contains one futile cycle,
\({\mathrm A} {\mathop {\rightarrow }\limits ^{1}}{\mathrm B} {\mathop {\rightarrow }\limits ^{\overline{1}}}{\mathrm A}\), \(\mathbf {v}=(1,1,0,0)^\top\) with \(\langle \mathbf {g},\mathbf {v}\rangle = 0\),
but no perpetuum mobile. However, it contains two parallel flows with different energies,
\({\mathrm A} {\mathop {\rightarrow }\limits ^{1}}{\mathrm B} {\mathop {\rightarrow }\limits ^{2}}{\mathrm C}\), \(\mathbf {v}=(1,0,1,0)^\top\) with \(\langle \mathbf {g},\mathbf {v}\rangle = -2\),
\({\mathrm A} {\mathop {\rightarrow }\limits ^{3}}{\mathrm C}\), \(\mathbf {v}=(0,0,0,1)^\top\) with \(\langle \mathbf {g},\mathbf {v}\rangle = -1\).
Hence, the RN (with reaction energies \(\mathbf {g}\)) is not thermodynamic. By setting \(\mathbf {g}_3=-2\), it can be made thermodynamic.
Many RN models are non-reversible, i.e., they contain irreversible reactions whose reverse reactions are so slow that they are neglected. From a thermodynamic perspective, irreversible reactions r must be exergonic, i.e., \(\mathbf {g}_r<0\). We first consider the extreme case that all reactions \(r\in \mathscr {R}\) are irreversible.
Let \((X,\mathscr {R},\mathbf {g})\) be an irreversible RN with reaction energies. Then, every futile cycle is a perpetuum mobile. Hence, if \((X,\mathscr {R},\mathbf {g})\) is thermodynamic, then there are no futile cycles.
Consider a futile cycle, that is, a flow \(\mathbf {v}> 0\) with \(\mathbf {S}\mathbf {v}=0\). Since all reactions are exergonic, \(\mathbf {v}_r>0\) implies \(\mathbf {g}_r<0\) and further \(\langle \mathbf {g}, \mathbf {v}\rangle < 0\), that is, \(\mathbf {v}\) is a perpetuum mobile. Now, if there is a futile cycle and hence a perpetuum mobile, then the network is not thermodynamic. \(\square\)
Thermodynamic soundness
We next ask whether a RN \((X,\mathscr {R})\) can always be endowed with a vector of reaction energies \(\mathbf {g}\) such that \((X,\mathscr {R},\mathbf {g})\) is thermodynamic.
Definition 12
A RN \((X,\mathscr {R})\) is thermodynamically sound if there is a vector of reaction energies \(\mathbf {g}\) such that \((X,\mathscr {R},\mathbf {g})\) is a thermodynamic network.
We note that thermodynamic soundness is a hereditary property of RNs, since we have seen above that if \((X,\mathscr {R},\mathbf {g})\) is a thermodynamic network, then so are all its subnetworks \((X',\mathscr {R}',\mathbf {g}')\).
Again, we first consider purely reversible or irreversible RNs.
Every reversible RN is thermodynamically sound.
Since \(\mathbf {S}\ne 0\) (the zero matrix), obviously \((\ker \mathbf {S})^\perp = {{\,\mathrm{im}\,}}\mathbf {S}^\top \ne \{0\}\) (the zero vector), and hence there is a non-zero \(\mathbf {g}\in (\ker \mathbf {S})^\perp\). \(\square\)
Theorem 14
An irreversible RN is thermodynamically sound if and only if there are no futile cycles.
By Gordan's Theorem (which is in turn a special case of Minty's Lemma [65], see Appendix B in [66]): Either there is a negative \(\mathbf {g}\in (\ker \mathbf {S})^\perp\) or there is a non-zero, non-positive \(\mathbf {v}\in \ker \mathbf {S}\). That is, either there is \(\mathbf {g}\ll 0\) with \(\mathbf {g}\in (\ker \mathbf {S})^\perp\) (the network is thermodynamically sound) or there is \(\mathbf {v}< 0\) with \(\mathbf {v}\in \ker \mathbf {S}\); equivalently, there is a futile cycle \(\mathbf {v}>0\). \(\square\)
It is not always obvious from the specification of an artificial chemistry model whether or not it is thermodynamically sound. As an example, we consider the artificial chemistry proposed in [67]. It considers only binary reactions (two educts) that produce two products, aiming to ensure conservation of particle numbers. In one variant, the network contains only irreversible and thus exergonic reactions. It may produce, for instance, the following set of reactions:
$$\begin{aligned} \begin{aligned} {\mathrm{A + B}}&\longrightarrow {\mathrm{C + D}} , \\ {\mathrm{A + C}}&\longrightarrow \mathrm{{E + B}} , \\ \mathrm{{B + D}}&\longrightarrow \mathrm{{F + A}} , \\ \mathrm{{E + F}}&\longrightarrow \mathrm{{A + B}} . \end{aligned} \end{aligned}$$
Their sum corresponds to the flow \(\mathbf {v}= (1,1,1,1)^\top \ge 0\) and yields the exergonic composite reaction
$$\mathrm{2A + 2B + C + D + E + F} \longrightarrow \mathrm{2A + 2B + C + D + E + F} ,$$
that is, \({\mathbf {S}}{\mathbf {v}}=0\). The model thus admits a futile cycle composed entirely of exergonic reactions and hence a perpetuum mobile; it is therefore not thermodynamically sound.
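Checks of this kind are mechanical: a futile cycle exists if and only if the feasibility LP \(\{\mathbf {v}\ge 0,\, \mathbf {S}\mathbf {v}=0,\, \mathbf {1}^\top \mathbf {v}=1\}\) is solvable. A minimal sketch (Python with scipy; names ours), applied to the four reactions of Eq. (9):

```python
import numpy as np
from scipy.optimize import linprog

def find_futile_cycle(S):
    """Search for a flow v > 0 with S v = 0 via the feasibility LP
    {v >= 0, S v = 0, 1^T v = 1}; returns v or None (cf. Theorem 14)."""
    m, n = S.shape
    A_eq = np.vstack([S, np.ones((1, n))])
    b_eq = np.append(np.zeros(m), 1.0)
    res = linprog(np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.x if res.success else None

S = np.array([[-1, -1,  1,  1],   # A; columns are the reactions of Eq. (9)
              [-1,  1, -1,  1],   # B
              [ 1, -1,  0,  0],   # C
              [ 1,  0, -1,  0],   # D
              [ 0,  1,  0, -1],   # E
              [ 0,  0,  1, -1]])  # F
print(find_futile_cycle(S))  # approx. (0.25, 0.25, 0.25, 0.25), i.e. v = (1,1,1,1)
```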
Mixed networks
In many applications, RNs contain both reversible and irreversible reactions, \(\mathscr {R}=\mathscr {R}_\mathrm {rev}\cup \mathscr {R}_\mathrm {irr}\). There are two interpretations of such models:
In the (lax) sense used above, reversible reactions can be associated with arbitrary energies, while irreversible reactions are considered exergonic. That is, the reaction energies must satisfy \(\mathbf {g}_{r}<0\) for \(r\in \mathscr {R}_{\mathrm {irr}}\).
In a strict sense, the reaction energies assigned to irreversible reactions are much more negative than the reaction energies of the reversible ones. After scaling, one requires \(|\mathbf {g}_r|\le 1\) (that is, \(-1 \le \mathbf {g}_r \le 1\)) for \(r\in \mathscr {R}_{\mathrm {rev}}\) and \(|\mathbf {g}_r|\ge \gamma\) (that is, \(\mathbf {g}_r \le -\gamma\)) for \(r\in \mathscr {R}_{\mathrm {irr}}\) and (large) \(\gamma >1\). The intuition is that reactions r with \(\mathbf {g}_r \ge \gamma\) can be neglected.
The following example shows that thermodynamic soundness differs in the lax and strict senses.
Example 15

Consider the RN with reactions \(1:{\mathrm A}\rightarrow {\mathrm B}\), \(\overline{1}:{\mathrm B}\rightarrow {\mathrm A}\), \(2:{\mathrm B}\rightarrow {\mathrm C}\), and \(3:{\mathrm C}\rightarrow {\mathrm A}\) and reaction energies, e.g., \(\mathbf {g}=(1,-1,-g,-g)^\top\) for some \(g>0\). It contains two futile cycles:
\({\mathrm A} {\mathop {\rightarrow }\limits ^{1}}{\mathrm B} {\mathop {\rightarrow }\limits ^{\overline{1}}}{\mathrm A}\), \(\mathbf {v}=(1,1,0,0)^\top\), \(\langle \mathbf {g},\mathbf {v}\rangle = 0\), and
\({\mathrm A} {\mathop {\rightarrow }\limits ^{1}}{\mathrm B} {\mathop {\rightarrow }\limits ^{2}}{\mathrm C} {\mathop {\rightarrow }\limits ^{3}}{\mathrm A}\), \(\mathbf {v}=(1,0,1,1)^\top\), \(\langle \mathbf {g},\mathbf {v}\rangle = 1-2g\).
By setting \(g=1/2\), the RN can be made thermodynamic. (Then the second futile cycle is not a perpetuum mobile.)
However, the RN above cannot be seen as the limit of a thermodynamic, reversible network \(({\mathrm A}\leftrightarrow {\mathrm B}\leftrightarrow {\mathrm C}\leftrightarrow {\mathrm A})\) for large g. Thereby, one considers small \(\mathbf {g}_1,\mathbf {g}_{\overline{1}}\) and large negative \(\mathbf {g}_2,\mathbf {g}_3\) (and hence large positive \(\mathbf {g}_{\overline{2}},\mathbf {g}_{\overline{3}}\), that is, negligible reverse reactions \(\overline{2}, \overline{3}\)). Any such (limit of a) reversible RN contains a perpetuum mobile (the second futile cycle); equivalently, it is not thermodynamic.
Definition 16

A mixed network is thermodynamically sound if there are reaction energies \(\mathbf {g}\) such that \((X,\mathscr {R},\mathbf {g})\) is thermodynamic and \(\mathbf {g}_r<0\) for \(r\in \mathscr {R}_\mathrm {irr}\).
It is strictly thermodynamically sound if, for all \(\gamma >1\), there are reaction energies \(\mathbf {g}\) such that \((X,\mathscr {R},\mathbf {g})\) is thermodynamic, \(|\mathbf {g}_r| \le 1\) for \(r\in \mathscr {R}_\mathrm {rev}\), and \(\mathbf {g}_r<0\) with \(|\mathbf {g}_r| \ge \gamma\) for \(r\in \mathscr {R}_\mathrm {irr}\).
For given \(\gamma >1\), the scaling condition can also be written in the form
$$\begin{aligned} \min _{r\in \mathscr {R}_{\mathrm {irr}}} |\mathbf {g}_r| \ge \gamma \max _{r\in \mathscr {R}_{\mathrm {rev}}} |\mathbf {g}_r| . \end{aligned}$$
A more detailed justification for strict thermodynamic soundness in mixed networks will be given in the next subsection when considering open RNs. Here, we focus on the relationship of thermodynamic soundness and futile cycles.
Theorem 17

A mixed RN is thermodynamically sound if and only if there is no irreversible futile cycle, that is, no futile cycle \(\mathbf {v}>0\) with \({{\,\mathrm{supp}\,}}(\mathbf {v}) \subseteq \mathscr {R}_\mathrm {irr}\).
By a "sign vector version" of Minty's Lemma: Either there is \(\mathbf {g}\in (\ker \mathbf {S})^\perp\) with \(\mathbf {g}_r<0\) for \(r \in \mathscr {R}_\mathrm {irr}\) (the network is thermodynamically sound) or there is a non-zero \(\mathbf {v}\in \ker \mathbf {S}\) with \(\mathbf {v}_r \le 0\) for \(r \in \mathscr {R}_\mathrm {irr}\) and \(\mathbf {v}_r = 0\) for \(r \in \mathscr {R}_\mathrm {rev}\); equivalently, there is a futile cycle \(\mathbf {v}>0\) with \({{\,\mathrm{supp}\,}}(\mathbf {v}) \subseteq \mathscr {R}_\mathrm {irr}\). \(\square\)
Theorem 18

A mixed RN is strictly thermodynamically sound if and only if no futile cycle contains an irreversible reaction.
By Minty's Lemma: Let \(\gamma >1\). Either there is \(\mathbf {g}\in (\ker \mathbf {S})^\perp\) with \(\mathbf {g}_r \in [-1,1]\) for \(r \in \mathscr {R}_\mathrm {rev}\) and \(\mathbf {g}_r \in (-\infty ,-\gamma ]\) for \(r \in \mathscr {R}_\mathrm {irr}\) or there is \(\mathbf {v}\in \ker \mathbf {S}\) with
$$\begin{aligned} \sum _{r \in \mathscr {R}_\mathrm {rev}} \mathbf {v}_r \, [-1,1] + \sum _{r \in \mathscr {R}_\mathrm {irr}} \mathbf {v}_r \, (-\infty ,-\gamma ] > 0. \end{aligned}$$
Thereby, a sum of intervals is defined in the obvious way, yielding an interval which is positive if each of its elements is positive. Via \(\mathbf {v}\rightarrow -\mathbf {v}\), the interval condition (12) is equivalent to: there is \(\mathbf {v}\in \ker \mathbf {S}\) with
$$\begin{aligned} \sum _{r \in \mathscr {R}_\mathrm {rev}} \mathbf {v}_r \, [-1,1] + \sum _{r \in \mathscr {R}_\mathrm {irr}} \mathbf {v}_r \, [\gamma ,\infty ) > 0 . \end{aligned}$$
As necessary conditions, we find (i) \(\mathbf {v}_{r^*} > 0\) for some \(r^* \in \mathscr {R}_\mathrm {irr}\) and (ii) \(\mathbf {v}_{r} \ge 0\) for all \(r \in \mathscr {R}_\mathrm {irr}\). By Lemma 8, (iii) there is an equivalent flow with \(\mathbf {v}_{r} \ge 0\) for \(r \in \mathscr {R}_\mathrm {rev}\). That is, there is a futile cycle \(\mathbf {v}>0\) involving an irreversible reaction. For \(\gamma\) large enough, the necessary conditions are also sufficient for the interval condition (13). \(\square\)
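Both characterizations translate into feasibility LPs; a minimal sketch (Python with scipy; names ours), using the network of Example 15 as a test case:

```python
import numpy as np
from scipy.optimize import linprog

def _futile_cycle_with_irr(S, irr, rev_free):
    """Feasibility LP: v >= 0, S v = 0, and the entries of v over R_irr sum
    to 1; reversible coordinates are fixed to 0 unless rev_free is True."""
    m, n = S.shape
    w = np.array([1.0 if r in irr else 0.0 for r in range(n)])
    A_eq = np.vstack([S, w])
    b_eq = np.append(np.zeros(m), 1.0)
    bounds = [(0, None) if (rev_free or r in irr) else (0, 0) for r in range(n)]
    return linprog(np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=bounds).success

def thermodynamically_sound(S, irr):   # Theorem 17
    return not _futile_cycle_with_irr(S, set(irr), rev_free=False)

def strictly_sound(S, irr):            # Theorem 18
    return not _futile_cycle_with_irr(S, set(irr), rev_free=True)

# Example 15: columns A -> B, B -> A, B -> C, C -> A; reactions 2, 3 irreversible
S = np.array([[-1, 1, 0, 1], [1, -1, -1, 0], [0, 0, 1, -1]])
print(thermodynamically_sound(S, irr=[2, 3]))   # True
print(strictly_sound(S, irr=[2, 3]))            # False: the cycle A -> B -> C -> A
```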
We may characterize strict thermodynamic soundness for mixed networks also in geometric terms:
Corollary 19
Let \(\mathbf {S}=(\mathbf {S}_\mathrm {rev} \; \mathbf {S}_\mathrm {irr})\) be the partition of the stoichiometric matrix into the columns of reversible and irreversible reactions, \(L_\mathrm {rev} = {{\,\mathrm{im}\,}}\mathbf {S}_\mathrm {rev}\), and \(C_\mathrm {irr} = {{\,\mathrm{cone}\,}}\mathbf {S}_\mathrm {irr}\). Then, \((X, \mathscr {R})\) is strictly thermodynamically sound if and only if it is thermodynamically sound and \(L_\mathrm {rev} \cap C_\mathrm {irr} = \{0\}\).
Substrate cycle. Reaction network (top) as a complex-reaction graph, involving substrate S, product P, enzymes E, F, and complexes ES, FP, and stoichiometric matrix \(\mathbf {S}\) (middle). In addition to the futile cycles \((1,1,0,0,0,0)^\top\) and \((0,0,0,1,1,0)^\top\), corresponding to the two (pairs of) reversible reactions, there is a non-trivial futile cycle \(\mathbf {v}= (1,0,1,1,0,1)^\top\), involving both reversible and irreversible reactions. (Note that this futile cycle is not an actual cycle of the graph.) As a result, the network is thermodynamically sound, but not strictly thermodynamically sound. In a metabolically relevant example from glycolysis/gluconeogenesis, the compounds are S = fructose-6-phosphate, P = fructose-1,6-bisphosphate, E = phosphofructokinase 1, and F = fructose-1,6-bisphosphatase, and reactions 2 and 4 involve additional compounds (bottom). As a consequence, there is no non-trivial futile cycle (in the strict sense of this work). In fact, the vector \(\mathbf {v}\) above then represents the net reaction \(\text{ATP} + \text{H}_{2}\text{O} \rightarrow \text{ADP} + \text{P}_\mathrm {i}\). Still, it is called a futile cycle or substrate cycle in the literature on metabolic networks. (In our approach, reactions producing/consuming the additional compounds (ATP, ADP, and \(\text{P}_\mathrm {i}\)) must be added to the network to obtain a futile cycle. Such a futile cycle involves the active reactions in \(\mathbf {v}\), and hence the extended network cannot be strictly thermodynamically sound.)
Figure 2 illustrates the concepts of futile cycles and (strict) thermodynamic soundness in a metabolically relevant example.
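The caption's claims are easy to verify numerically. Since Fig. 2 itself is not reproduced in the text, the explicit reaction list and its numbering in the following sketch (Python with numpy) are our assumption, chosen to be consistent with the caption: enzyme E converts S into P, and enzyme F converts P back into S.

```python
import numpy as np

# Assumed reaction list (our reconstruction, not taken from Fig. 2):
#  1: E + S -> ES   2: ES -> E + S   3: ES -> E + P
#  4: F + P -> FP   5: FP -> F + P   6: FP -> F + S
S_fig2 = np.array([[-1,  1,  0,  0,  0,  1],   # S
                   [ 0,  0,  1, -1,  1,  0],   # P
                   [-1,  1,  1,  0,  0,  0],   # E
                   [ 1, -1, -1,  0,  0,  0],   # ES
                   [ 0,  0,  0, -1,  1,  1],   # F
                   [ 0,  0,  0,  1, -1, -1]])  # FP
for v in [(1, 1, 0, 0, 0, 0), (0, 0, 0, 1, 1, 0), (1, 0, 1, 1, 0, 1)]:
    print(v, bool((S_fig2 @ v == 0).all()))    # True for all three futile cycles
```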
Open (mixed) networks
Opening the RN, i.e., adding transport reactions, alters the representation of reaction energies. We now have to consider chemical potentials involving concentrations, i.e., we replace the (Gibbs free) energies \(\mathbf {G}_x\) by \(\mathbf {G}_x + R\,T\ln [x]\), where [x] is the activity of x, which approximately coincides with the concentration. A reaction r then proceeds in the forward direction whenever the chemical potential of the products is smaller than the chemical potential of the educts, i.e., if
$$\begin{aligned} \sum _x s^+_{xr} (\mathbf {G}_x + R\,T\ln [x]) < \sum _x s^-_{xr} (\mathbf {G}_x + R\,T\ln [x])\,. \end{aligned}$$
This condition can be rewritten in terms of the reaction (Gibbs free) energies and (the logarithm of) the "reaction quotient", see e.g. [68]:
$$\begin{aligned} \mathbf {g}_r < - R\,T \sum _{x\in X} \mathbf {S}_{xr}\ln [x] \end{aligned}$$
The activities [x] for \(x\in X\) therefore define an upper bound on the reaction energy \(\mathbf {g}_r\). In an open system, (internal) concentrations may be buffered at fixed values or are implicitly determined by given influxes or external concentrations [69]. Given a specification of the environment, i.e., of the transport fluxes and/or buffered concentrations, the upper bound in Eq. (15) can have an arbitrary value. Thus, if an irreversible reaction in \(\mathscr {R}\) is meant to proceed forward under all conditions, it must be possible to choose \(\mathbf {g}_r<0\) arbitrarily small, i.e., \(|\mathbf {g}_r|\) arbitrarily large. This amounts to requiring that \((X,\mathscr {R}^p)\) is strictly thermodynamically sound. In many studies of reaction networks, one requires that a reaction proceeds forward in a given situation, but allows it to proceed backward in other situations. In this (lax) interpretation of irreversibility, it is sufficient to require that \((X,\mathscr {R}^p)\) is thermodynamically sound, but not necessarily strictly thermodynamically sound.
In Def. 16, we introduce (strict) thermodynamical soundness in terms of reaction energies, and in Thms. 17 and 18, we characterize it in terms of futile cycles. In a corresponding approach [70, 71], "extended" detailed balance is required for (closed) RNs with irreversible reactions at thermodynamic equilibrium. Thereby, activities [x], rate constants \(k_+,k_-\) and equilibrium constants K are explicitly used to formulate Wegscheider conditions for non-reversible RNs that are limits of reversible systems. The characterization of such systems in [70] is equivalent to our results.
Reversible completion
As models of chemistry, non-reversible networks are abstractions that are obtained from reversible thermodynamic networks by omitting the reverse of reactions that predominantly proceed in one direction.
Definition 20

Let \((X,\mathscr {R},\mathbf {g})\) be a thermodynamic RN with \(\mathscr {R}=\mathscr {R}_\mathrm {rev}\cup \mathscr {R}_\mathrm {irr}\). The reversible completion of \((X,\mathscr {R},\mathbf {g})\) is the RN \((X,\mathscr {R}^*,\mathbf {g}^*)\) with \(\mathscr {R}^*=\mathscr {R}\cup \{\overline{r}\mid r\in \mathscr {R}_\mathrm {irr}\}\) and \(\mathbf {g}^*_r=\mathbf {g}_r\) for \(r\in \mathscr {R}\) and \(\mathbf {g}^*_{\overline{r}}= -\mathbf {g}_r\) for \(r\in \mathscr {R}_\mathrm {irr}\).
Lemma 21
If \((X,\mathscr {R},\mathbf {g})\) is a thermodynamic RN, then its reversible completion \((X,\mathscr {R}^*,\mathbf {g}^*)\) is also a thermodynamic RN.
Let \(\overline{r}\in \mathscr {R}^*\) be the reverse reaction of \(r \in \mathscr {R}_\mathrm {irr}\). By Prop. 6, there is a vector \(\mathbf {G}\in \mathbb {R}^X\) satisfying Hess' law for \((X,\mathscr {R})\). It suffices to show that \(\mathbf {G}\) still satisfies Hess' law for \((X,\mathscr {R}^*)\). By the definition of \(\overline{r}\), its reaction energy is \(\mathbf {g}^*_{\overline{r}} = \sum _{x\in X} \mathbf {G}_x (s^+_{x{\overline{r}}} - s^-_{x{\overline{r}}}) = \sum _{x\in X} \mathbf {G}_x (s^-_{xr}-s^+_{xr}) = -\mathbf {g}_r\), as required by Def. 20. Thus \((X,\mathscr {R}^*,\mathbf {g}^*)\) is also thermodynamic. \(\square\)
The following result is an immediate consequence of Lemma 21.
If the RN \((X,\mathscr {R})\) is thermodynamically sound, then its reversible completion is also thermodynamically sound, and the reaction energies \(\mathbf {g}\) can be chosen such that \(\mathbf {g}_r<0<\mathbf {g}_{\overline{r}}\) for all \(r\in \mathscr {R}_\mathrm {irr}\).
Mass conservation and cornucopias/abysses
Thermodynamic soundness is not sufficient to ensure chemical realism. As an example, consider the random kinetics model introduced in [72]. It assigns (a randomly chosen) energy G(x) to each \(x\in X\). Each reaction r is defined by randomly picking a set of educts \(e_r^-\) and products \(e_r^+\). A possible instance of this model comprises four compounds with molecular energies \(G({\mathrm A}) = -5\), \(G({\mathrm B}) = -5\), \(G({\mathrm C}) = -10\), and \(G({\mathrm X}) = -2\), and two reactions
$$\begin{aligned} \mathrm{A + B} \longrightarrow \mathrm{C + X}, \quad \mathrm{C} \longrightarrow \mathrm{A + B}. \end{aligned}$$
The first reaction is exergonic with \(\mathbf {g}_1=-2\) and the second has reaction energy \(\mathbf {g}_2=0\). The composite reaction, obtained as their sum, is \(\mathrm{A + B} \rightarrow \mathrm{A + B + X}\). Ignoring the effective catalysts A and B, the corresponding net reaction is \({\varnothing } \rightarrow \mathrm{X}\). In this universe, it is therefore possible to spontaneously create mass in a sequence of exergonic reactions. Reversing the signs of the energies reverts the two reactions and thus yields an exergonic reaction sequence that makes X disappear.
We can again describe this situation in terms of flows. Recall that \([\mathbf {S}\mathbf {v}]_x\) is the net production or consumption of species x. The spontaneous creation or annihilation of mass thus corresponds to flows \(\mathbf {v}>0\) with \(\mathbf {S}\mathbf {v}>0\) or \(\mathbf {S}\mathbf {v}<0\), respectively.
Let \((X,\mathscr {R})\) be a RN. A flow \(\mathbf {v}>0\) is a cornucopia if \(\mathbf {S}\mathbf {v}>0\) and an abyss if \(\mathbf {S}\mathbf {v}<0\).
Systems with cornucopias or abysses cannot be considered as closed systems. The proper part of chemical reaction networks therefore must be free of cornucopias and abysses.
Since in a reversible network any vector \(\mathbf {v}\in \mathbb {R}^\mathscr {R}\) can be transformed into an equivalent flow \(\mathbf {{\tilde{v}}} \ge 0\) (with \(\mathbf {S}\mathbf {{\tilde{v}}} = \mathbf {S}\mathbf {v}\)), cf. Lemma 8, we have the following characterization.
A reversible RN is free of cornucopias and abysses if and only if there is no vector \(\mathbf {v}\in \mathbb {R}^\mathscr {R}\) such that \(\mathbf {S}\mathbf {v}>0\).
In fact, mass conservation rules out cornucopias and abysses. More generally, a reaction invariant is a property that does not change over the course of a chemical reaction [8, 27, 29]. Here, we are only interested in linear reaction invariants, also called conservation laws [73], that is, quantitative properties of molecules (such as mass) whose sum is the same for educts and products.
A linear reaction invariant or conservation law is a non-zero vector \(\mathbf {m}\in \mathbb {R}^X\) that satisfies \(\sum _{x\in X} \mathbf {m}_x \, s^+_{xr} = \sum _{x\in X} \mathbf {m}_x \, s^-_{xr}\) for all reactions \(r\in \mathscr {R}\), that is, \(\mathbf {m}^\top \mathbf {S}=0\).
A RN is conservative if it has a positive conservation law, that is, if there is \(\mathbf {m}\in \mathbb {R}^X\) such that \(\mathbf {m}\gg 0\) and \(\mathbf {m}^\top \mathbf {S}=0\).
By definition, a conservative network is free of cornucopias and abysses. Conversely, by the result below, if a reversible network is not conservative, then it contains a cornucopia (and an abyss).
A reversible RN \((X,\mathscr {R})\) is free of cornucopias and abysses if and only if it is conservative.
By Stiemke's Theorem (which is in turn a special case of Minty's Lemma): Either there is a non-zero, non-negative \(\mathbf {n} \in {{\,\mathrm{im}\,}}\mathbf {S}\) or there is a positive \(\mathbf {m}\in ({{\,\mathrm{im}\,}}\mathbf {S})^\perp = \ker \mathbf {S}^\top\). That is, either there is \(\mathbf {v}\in \mathbb {R}^\mathscr {R}\) with \(\mathbf {n} = \mathbf {S}\mathbf {v}> 0\) (corresponding to a cornucopia \(\mathbf {{\tilde{v}}}>0\)) or there is \(\mathbf {m}\gg 0\) with \(\mathbf {S}^\top \mathbf {m}=0\) (as claimed). \(\square\)
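Conservativity is again an LP feasibility question, since any \(\mathbf {m}\gg 0\) with \(\mathbf {S}^\top \mathbf {m}=0\) can be rescaled so that \(\mathbf {m}\ge \mathbf {1}\) coordinatewise; a minimal sketch (Python with scipy; names ours):

```python
import numpy as np
from scipy.optimize import linprog

def positive_conservation_law(S):
    """Search for m >> 0 with S^T m = 0; by rescaling we may demand m >= 1,
    which turns the search into a feasibility LP (cf. Stiemke's theorem)."""
    res = linprog(np.ones(S.shape[0]), A_eq=S.T, b_eq=np.zeros(S.shape[1]),
                  bounds=(1, None))
    return res.x if res.success else None

print(positive_conservation_law(np.array([[-1], [-1], [2]])))  # A + B -> 2C: (1, 1, 1)
# the random-kinetics example above, A + B -> C + X and C -> A + B: None,
# since any conservation law must assign m_X = 0
print(positive_conservation_law(np.array([[-1, 1], [-1, 1], [1, -1], [1, 0]])))
```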
We therefore conclude that every closed chemical RN must have a positive reaction invariant. This is no longer true if the RN is embedded in an open system and mass exchange with the environment is allowed. By construction, each transport reaction violates at least one of the conservation laws of the closed system, since \([\mathbf {m}^\top \mathbf {S}]_{r}>0\) if r is an import reaction and \([\mathbf {m}^\top \mathbf {S}]_{r}<0\) if it is an export reaction. As discussed e.g. in [73], opening a RN by adding import or export reactions can only reduce the number of conservation laws and cannot introduce additional constraints. Nevertheless, a RN must be chemically meaningful when the import and export reactions are turned off. That is, its proper part \((X,\mathscr {R}^p)\) must be conservative to ensure that it has a chemical realization.
Realizations of reaction networks
Conservation of atoms and moieties
Molecules are composed of atoms, which are – by definition – preserved in every chemical reaction. For each atom type a, there is a conservation law that accounts for the number of atoms of type a in each compound x. More precisely, denote by \(\mathbf {A}_{ax}\in \mathbb {N}_0\) the number of atoms of type a in molecule x, i.e., the coefficients in the chemical sum formula \(\sum _a \mathbf {A}_{ax} \, a\) for compound x. (Alternatively, we may think of sum formulas as multisets of atoms.) Conservation of atom a in reaction r therefore becomes
$$\begin{aligned} \sum _x \mathbf {A}_{ax} \mathbf {S}_{xr} = 0 . \end{aligned}$$
For all atoms and reactions and in matrix form, this condition reads \(\mathbf {A}\mathbf {S}=0\). Each row of the matrix \(\mathbf {A}\) thus is a non-negative linear reaction invariant, i.e., a non-negative conservation law.
Conserved moieties are groups of atoms that remain intact in all reactions in which they appear [26, 28, 30]. Like atoms, they lead to non-negative integer conservation laws.
However, (the vectors representing) conserved atoms or moieties need not span the left kernel of the stoichiometric matrix \(\mathbf {S}\) and need not be linearly independent. To see this, consider the following two RNs comprising a single reaction. For
$$\text{MgCO}_{3} \longrightarrow \text{MgO} + \text{CO}_{2}$$
with \(\mathbf {S}= (-1, 1, 1)^\top\), there are only two linearly independent conservation laws, e.g. (1, 1, 0) and (1, 0, 1), corresponding to the moieties MgO and \(\text{CO}_{2}\), while the three vectors for the atomic composition \(A_{\mathrm{Mg}}=(1,1,0)\), \(A_{\mathrm{C}}=(1,0,1)\), and \(A_{\mathrm{O}}=(3,1,2)\) are linearly dependent. On the other hand, as noted in [26],
$$\text{C}_{6}\text{H}_{5}\text{CH}_{3} + \text{H}_{2} \longrightarrow \text{C}_{6}\text{H}_{6} + \text{CH}_{4}$$
with \(\mathbf {S}=(-1,-1,1,1)^\top\) has three conservation laws but only two atom types, which correspond to the conservation laws \(A_{\mathrm{C}}=(7,0,6,1)\) and \(A_{\mathrm{H}}=(8,2,6,4)\). For example, the phenyl moiety \(M_{\mathrm{ph}}=(1,0,1,0)\) or the methyl moiety \(M_{\mathrm{CH_3}}=(1,0,0,1)\) provides the missing third, linearly independent conservation law. The latter example also shows that atom conservation relations are not necessarily support-minimal among the non-negative integer left-kernel vectors of \(\mathbf {S}\). In fact, also (0, 1, 1, 0) and (0, 1, 0, 1) are left-kernel vectors of \(\mathbf {S}\), the chemical interpretation of which is less obvious.
These examples show that key chemical properties such as atom conservation or conservation of moieties are not encoded in the stoichiometric matrix \(\mathbf {S}\). In other words, two RNs can be isomorphic as hypergraphs but describe reactions between sets of compounds that are not isomorphic in terms of their sum formulas. For example, \(\mathbf {S}=(-1,-1,1,1)^\top\) is realized by the hydrodealkylation of toluene in Eq. (19), but also by the inorganic reaction
$$\text{ MgO} + \text{H}_{2}\text{SO}_{4} \longrightarrow \text{MgSO}_{4} + \text{H}_{2}\text{O} ,$$
having four atom conservation laws, \(A_{\mathrm{Mg}}=(1,0,1,0)\), \(A_{\mathrm{O}}=(1,4,4,1)\), \(A_{\mathrm{H}}=(0,2,0,2)\), \(A_{\mathrm{S}}=(0,1,1,0)\), and three moiety conservation laws, e.g. \(M_{\mathrm{MgO}}= (1,0,1,0)\), \(M_{\mathrm{H_{2}O}}= (0,1,0,1)\), and \(M_{\mathrm{SO_3}}= (0,1,1,0)\).
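Both examples are readily verified in exact arithmetic; a minimal sketch (Python with sympy; names ours) computes a basis of the left kernel of \(\mathbf {S}\) for the two single-reaction networks:

```python
from sympy import Matrix

for S in (Matrix([-1, 1, 1]),        # MgCO3 -> MgO + CO2
          Matrix([-1, -1, 1, 1])):   # C6H5CH3 + H2 -> C6H6 + CH4
    laws = S.T.nullspace()           # basis of the left kernel of S
    print(len(laws), [list(m) for m in laws])
# prints 2 and 3 basis vectors, respectively; note that a basis need not
# consist of non-negative, chemically interpretable vectors
```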
"Semi-positive" conservation laws [26, 74] of a RN are the non-zero elements of the polyhedral cone
$$\begin{aligned} K(\mathbf {S}) = \left\{ \mathbf {y}\in \mathbb {R}^{X} \mid \mathbf {y}\mathbf {S}=0, \, \mathbf {y}\ge 0 \right\} , \end{aligned}$$
the non-negative left-kernel of \(\mathbf {S}\). Thereby, \(K(\mathbf {S})\) is an s-cone as defined in [75], given by a subspace (here: \(\ker \mathbf {S}^\top\)) and non-negativity conditions. Since the s-cone \(K(\mathbf {S})\) is contained in the non-negative orthant, its extreme (non-decomposable) vectors agree with its support-minimal vectors. Further, since \(\mathbf {S}\) is an integer matrix, all extreme vectors of \(K(\mathbf {S})\) are positive real multiples of integer vectors.
All potential moiety conservation laws (MCLs) [76] for a given stoichiometric matrix \(\mathbf {S}\) (but unknown atomic composition) are non-zero, integer elements of \(K(\mathbf {S})\), i.e., elements of the set
$$\begin{aligned} {\mathcal {K}}(\mathbf {S}) = \left\{ \mathbf {y}\in \mathbb {N}_0^{X} \mid \mathbf {y}\mathbf {S}=0 \right\} \setminus \{0\} . \end{aligned}$$
Clearly, \({\mathcal {K}}(\mathbf {S})\) contains the integer extreme vectors of \(K(\mathbf {S})\). Ultimately, one is interested in minimal MCLs, i.e., minimal elements of \({\mathcal {K}}(\mathbf {S})\), cf. [77]. (Minimal vectors are called maximal in [74].)
A vector \(\mathbf {y}\in {\mathcal {K}}(\mathbf {S})\) is minimal if there is no \(\mathbf {y'}\in {\mathcal {K}}(\mathbf {S})\) such that \(\mathbf {y'}<\mathbf {y}\).
In fact, integer minimality and integer non-decomposability are equivalent.
Proposition 29

Let \(\mathbf {y}\in {\mathcal {K}}(\mathbf {S})\). The following statements are equivalent:
\(\mathbf {y}\) is minimal.
There are no two \(\mathbf {y}', \mathbf {y}''\in {\mathcal {K}}(\mathbf {S})\) such that \(\mathbf {y}=c'\mathbf {y}'+c''\mathbf {y}''\) with \(c',c''\in \mathbb {N}\).
Suppose \(\mathbf {y}'<\mathbf {y}\). Then \(\mathbf {y}=1\cdot (\mathbf {y}-\mathbf {y}')+1\cdot \mathbf {y}'\). Conversely, suppose \(\mathbf {y}=c'\mathbf {y}'+c''\mathbf {y}''\). Then \(\mathbf {y}', \mathbf {y}'' < \mathbf {y}\). \(\square\)
Most importantly, the minimal MCLs generate all MCLs.
Every element of \({\mathcal {K}}(\mathbf {S})\) is a finite integer linear combination of minimal elements of \({\mathcal {K}}(\mathbf {S})\).
By Noetherian induction on the partial order < on \(\mathbb {N}^X_0\) and Proposition 29. \(\square\)
Knowing all minimal MCLs makes it possible to represent the compounds X of a RN \((X,\mathscr {R})\) in a minimal (most coarse-grained) way.
The minimal moiety representation (short: mm-representation) of a conservative RN \((X,\mathscr {R})\) is the matrix \(\mathbf {M}\in \mathbb {N}_0^{{\mathcal {M}} \times X}\), where the rows of \(\mathbf {M}\) are the minimal MCLs, and \({\mathcal {M}}\) is the corresponding set of abstract moieties.
For example, consider the abstract chemical reaction
$$\begin{aligned} {\mathrm{A + B} \longrightarrow 2 \,\mathrm{C}} \end{aligned}$$
with \(\mathbf {S}= (-1,-1,2)^\top\). There are three minimal MCLs denoted by the abstract moieties \({\mathcal {M}} = \{ \mathrm{X,Y,Z} \}\): on the one hand, \(M_{\mathrm{X}}=(2,0,1)\) and \(M_{\mathrm{Y}}=(0,2,1)\), which are (minimal) extreme vectors of \(K(\mathbf {S})\), on the other hand, \(M_{\mathrm{Z}}=(1,1,1)\), which is minimal, but not extreme. Hence, the mm-representation is given by
$$\begin{aligned} \mathbf {M} = \begin{pmatrix} 2 & 0 & 1 \\ 0 & 2 & 1 \\ 1 & 1 & 1 \end{pmatrix}, \end{aligned}$$
and the reaction (23) can be represented as
$$\begin{aligned} {\mathrm{X_2Z + Y_2Z} \longrightarrow 2 \, \mathrm{XYZ}} . \end{aligned}$$
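For toy networks such as Eq. (23), the minimal MCLs can be found by bounded brute force; a minimal sketch (Python with numpy; the function name and the coordinate bound are ours, and this is not intended as an algorithm for realistic networks):

```python
from itertools import product
import numpy as np

def minimal_mcls(S, bound=3):
    """Brute-force the minimal elements of K(S): integer y > 0 with y S = 0,
    enumerated up to a coordinate bound (toy examples only)."""
    S = np.asarray(S)
    K = [np.array(y) for y in product(range(bound + 1), repeat=S.shape[0])
         if any(y) and not (np.array(y) @ S).any()]
    return [y for y in K
            if not any((z <= y).all() for z in K if (z != y).any())]

print([list(y) for y in minimal_mcls([[-1], [-1], [2]])])
# [[0, 2, 1], [1, 1, 1], [2, 0, 1]]: the rows of the mm-representation M
```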
By definition, \({{\,\mathrm{im}\,}}\mathbf {M}^\top \subseteq \ker \mathbf {S}^\top\). In fact, \({{\,\mathrm{im}\,}}\mathbf {M}^\top = \ker \mathbf {S}^\top\), and hence there is an obvious lower bound for the number of minimal MCLs.
Let \(\mathbf {M}\in \mathbb {N}_0^{{\mathcal {M}} \times X}\) be the mm-representation of a conservative RN \((X,\mathscr {R})\) with stoichiometric matrix \(\mathbf {S}\). Then, \({{\,\mathrm{im}\,}}\mathbf {M}^\top = \ker \mathbf {S}^\top\) and hence \(|{\mathcal {M}}|\ge \dim \ker \mathbf {S}^\top\).
Since the left kernel of \(\mathbf {S}\) and hence \(K(\mathbf {S})\) contain a positive vector, we have \(\dim K(\mathbf {S}) = \dim \ker \mathbf {S}^\top =: d\). Hence, (the extreme vectors of) \(K(\mathbf {S})\) and therefore also (the corresponding minimal integer vectors of) \({\mathcal {K}}(\mathbf {S})\) generate \(\ker \mathbf {S}^\top\), that is, \({{\,\mathrm{im}\,}}\mathbf {M}^\top = \ker \mathbf {S}^\top\). In particular, the number of minimal MCLs is at least d, that is, \(|{\mathcal {M}}|\ge \dim \ker \mathbf {S}^\top\). \(\square\)
By instantiating the abstract moieties \(\{ \mathrm{X,Y,Z} \}\) with sum formulas (multisets of atoms), every chemical realization of the reaction can be obtained. In general, we define an instance as follows.
Definition 33

A sum formula instance (short: sf-instance) of a RN \((X,\mathscr {R})\) with stoichiometric matrix \(\mathbf {S}\) is a matrix \(\mathbf {A}\in \mathbb {N}_0^{{\mathcal {A}} \times X}\) for some non-empty, finite set \({\mathcal {A}}\) of "atoms" such that
(i) each column of \(\mathbf {A}\) is non-zero, and
(ii) \(\mathbf {A} \mathbf {S}= 0\).
Def. 33 in particular allows \(\mathbf {A}\) to comprise a single row. By condition (i), this row vector is a strictly positive conservation law, which, as a linear combination of MCLs, may be chosen to be integer valued. Conversely, if \((X,\mathscr {R})\) admits an sf-instance, then the vector of column sums \(\mathbf {m}= \mathbf {1}^\top \mathbf {A}\in \ker \mathbf {S}^\top\) is a strictly positive integer conservation law and thus in particular an sf-instance with \(|{\mathcal {A}}|=1\). Taken together, we have shown the following existence result.
A RN \((X,\mathscr {R})\) admits an sf-instance if and only if it is conservative.
The entry \(\mathbf {m}_x\) of \(\mathbf {m}\) can be interpreted as the total number of atoms in compound \(x\in X\). In [78], a RN is called primitive atomic if each reaction preserves the total number of atoms. Thus a RN is primitive atomic if and only if it is conservative, cf. [78].
Isomers and sum formula realizations
In order to gain a better understanding of sf-instances for a RN \((X,\mathscr {R})\), we consider net reactions of the form \({\mathrm X\rightarrow \mathrm Y}\) in the reversible completion of \((X,\mathscr {R})\). That is, we ask whether it is possible, in principle, to convert X into Y, irrespective of whether the conversion is thermodynamically favorable. From a chemical perspective, if such a net isomerization reaction exists, then X and Y must be compositional isomers. These will play a key role in our discussion of realizations of \((X,\mathscr {R})\) in terms of sf-instances.
Before we proceed, we first give a more formal account of net isomerization reactions. Recall that a net reaction derives from an overall reaction, which in turn is specified by an integer hyperflow. Instead of working explicitly in the reversible completion, we may instead consider vectors \(\mathbf {v}\in \mathbb {Z}^{\mathscr {R}}\) with negative entries \(\mathbf {v}_r<0\), representing the reverse of irreversible reactions \(r\in \mathscr {R}\).
Definition 35

Let \((X,\mathscr {R})\) be a RN with stoichiometric matrix \(\mathbf {S}\). A vector \(\mathbf {v}\in \mathbb {Z}^{\mathscr {R}}\), satisfying \(k:=-[\mathbf {S}\mathbf {v}]_x = [\mathbf {S}\mathbf {v}]_y\in \mathbb {N}\) for some \(x,y \in X\) and \([\mathbf {S}\mathbf {v}]_z=0\) for all \(z\in X\setminus \{x,y\}\), specifies a net isomerization reaction \(k\,x\rightarrow k\,y\). Two (distinct) compounds \(x,y\in X\) are obligatory isomers if \((X,\mathscr {R})\) admits a net isomerization reaction \(k\,x\rightarrow k\,y\). We write \(x\rightleftharpoons y\) if \(x=y\) or x and y are obligatory isomers.
The binary relation \(x\rightleftharpoons y\) introduced in Def. 35 is an equivalence relation.
By definition, \(\rightleftharpoons\) is reflexive. If \(\mathbf {v}\) specifies the net isomerization reaction \(k\,x\rightarrow k\,y\), then \(-\mathbf {v}\) specifies \(k\,y\rightarrow k\,x\), and thus \(\rightleftharpoons\) is symmetric. To verify transitivity, suppose \(x\rightleftharpoons y\) and \(y\rightleftharpoons z\), i.e., there are vectors \(\mathbf {v}^1\) and \(\mathbf {v}^2\) that specify the net isomerization reactions \(p\,x \rightarrow p\,y\) and \(q\,y \rightarrow q\,z\). Then \(\mathbf {v}= q\mathbf {v}^1 + p\mathbf {v}^2\) satisfies \([\mathbf {S}\mathbf {v}]_x=-pq\), \([\mathbf {S}\mathbf {v}]_z=pq\), \([\mathbf {S}\mathbf {v}]_y=0\), and \([\mathbf {S}\mathbf {v}]_u=0\) for all \(u\in X\setminus \{x,y,z\}\), and thus specifies the net isomerization reaction \((pq)\,x \rightarrow (pq)\,z\). Thus, \(\rightleftharpoons\) is transitive. \(\square\)
The intuition is to define a sum formula realization of a RN as a matrix \(\mathbf {A}\) that (i) is an sf-instance of the RN and (ii) assigns different atomic compositions to x and y whenever \(x\not \rightleftharpoons y\), that is, whenever x and y are not isomers. In the following, we will see that such a definition both ensures chemical realism and leads to a useful mathematical description. The next result relates net isomerization reactions to the structure of \(\ker \mathbf {S}^\top\) (and ultimately to compositional isomers as given by MCLs and sf-instances).
Theorem 37

Let \((X,\mathscr {R})\) be a RN with stoichiometric matrix \(\mathbf {S}\). Then \(x\rightleftharpoons y\) if and only if \(\mathbf {m}_x=\mathbf {m}_y\) for all \(\mathbf {m}\in \ker \mathbf {S}^\top\).
First suppose \(x\rightleftharpoons y\). Then either \(x=y\) (in which case the assertion is trivially true) or there is a net isomerization reaction \(k\,x\rightarrow k\,y\) specified by the vector \(\mathbf {v}\). Let \(\mathbf {m}\in \ker \mathbf {S}^\top\). By the definition of \(\mathbf {v}\), we have \(0 = \mathbf {m}^\top \mathbf {S}\mathbf {v}= \sum _{z\in X} \mathbf {m}_z[\mathbf {S}\mathbf {v}]_z = \mathbf {m}_x[\mathbf {S}\mathbf {v}]_x +\mathbf {m}_y[\mathbf {S}\mathbf {v}]_y = (\mathbf {m}_x-\mathbf {m}_y)[\mathbf {S}\mathbf {v}]_x\) and \([\mathbf {S}\mathbf {v}]_x\ne 0\). Hence, \(\mathbf {m}_x=\mathbf {m}_y\).
Now suppose \(\mathbf {m}_x=\mathbf {m}_y\) for all \(\mathbf {m}\in \ker \mathbf {S}^\top\) and consider the vector \(\mathbf {w}\in \mathbb {Z}^{X}\) with \(\mathbf {w}_x=-1\), \(\mathbf {w}_y=1\), and \(\mathbf {w}_z=0\) for all \(z\in X\setminus \{x,y\}\). Clearly, \(\langle \mathbf {m},\mathbf {w}\rangle =0\) for all \(\mathbf {m}\in \ker \mathbf {S}^\top\), that is, \(\mathbf {w}\in (\ker \mathbf {S}^\top )^\perp = {{\,\mathrm{im}\,}}\mathbf {S}\). Thus there is \(\mathbf {v}\in \mathbb {R}^{\mathscr {R}}\) such that \(\mathbf {w}=\mathbf {S}\mathbf {v}\). Since \(\mathbf {S}\in \mathbb {Z}^{X\times \mathscr {R}}\), the solution \(\mathbf {v}\) of this linear equation can be chosen rational. Writing \({{\,\mathrm{lcd}\,}}(\mathbf {v})\) for the least common denominator of the entries in \(\mathbf {v}\), we obtain the integer vector \({{\,\mathrm{lcd}\,}}(\mathbf {v})\, \mathbf {v}\in \mathbb {Z}^{\mathscr {R}}\), specifying the net isomerization reaction \({{\,\mathrm{lcd}\,}}(\mathbf {v})\, x\rightarrow {{\,\mathrm{lcd}\,}}(\mathbf {v})\, y\). By definition, \(x\rightleftharpoons y\). \(\square\)
The proof of Thm. 37 also provides a simple algorithm to compute integer hyperflows \(\mathbf {v}\) that specify net isomerization reactions and to identify the obligatory isomers: For each pair \(x,y\in X\), construct \(\mathbf {w}\) with \(\mathbf {w}_x=-1\) and \(\mathbf {w}_y=1\) being the only non-zero entries and solve the linear equation \(\mathbf {S}\mathbf {v}=\mathbf {w}\). We have \(x\rightleftharpoons y\) if and only if a solution exists, in which case the desired integer hyperflow is \({{\,\mathrm{lcd}\,}}(\mathbf {v})\,\mathbf {v}\).
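A direct transcription of this procedure in exact arithmetic (Python with sympy; names ours; which particular solution is returned depends on the solver's choice of the free parameters):

```python
from math import lcm
from sympy import Matrix

def net_isomerization(S, x, y):
    """Solve S v = w with w_x = -1, w_y = +1, all other entries 0; if a
    rational solution exists, clear denominators to obtain an integer
    hyperflow specifying k x -> k y, else return None."""
    S = Matrix(S)
    w = Matrix.zeros(S.rows, 1)
    w[x, 0], w[y, 0] = -1, 1
    try:
        v, params = S.gauss_jordan_solve(w)
    except ValueError:       # inconsistent: x and y are not obligatory isomers
        return None
    v = v.subs({p: 0 for p in params})   # pick one particular solution
    k = lcm(*(int(t.q) for t in v))      # least common denominator
    return [int(k * t) for t in v]

# extended network of Eqs. (26) and (27); rows U, V, W, X, Y, Z
S = [[-1, -1,  0,  0, -1,  0, -1],
     [-1,  0,  0, -1,  1, -1,  0],
     [ 0, -1, -1,  0,  0,  1,  1],
     [ 1,  0, -1,  0,  0,  0,  0],
     [ 0,  1,  0, -1,  0,  0,  0],
     [ 0,  0,  1,  1,  0,  0,  0]]
print(net_isomerization(S, 3, 4))   # an integer hyperflow specifying k X -> k Y
```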
Reaction network (left) and stoichiometric matrix \(\mathbf {S}\) (top right) showing reactions \(r_1\)-\(r_4\), Eq. (26), in gray and the isomerization reactions \(r_5\)-\(r_7\), Eq. (27), in light red. For the basic system (gray) we have \(\dim \ker \mathbf {S}^\top =3\). The three MCLs are shown below \(\mathbf {S}\). In the full system, \(r_1\)-\(r_7\), we have \(\dim \ker \mathbf {S}^\top =1\) with the unique MCL shown at the bottom right. In the full system U, V, and W form obligatory isomers of the monomer D. Similarly, X and Y are also obligatory isomers composed of two D units, while Z is a trimer of D units. The vector \(\mathbf {v}=(-1,1,0,0,0,1,0)\) is represented by the composite reaction \({\mathrm{X + (U + W) + V} \rightarrow \mathrm{(U + V) + Y + W}}\) and specifies the net isomerization reaction \({ \mathrm X \rightarrow \mathrm Y }\)
We next show that obligatory isomers cannot be distinguished by sf-instances, and conversely, compounds that are not obligatory isomers are distinguished by certain sf-instances.
Let \((X,\mathscr {R})\) be a RN with stoichiometric matrix \(\mathbf {S}\) and \(\mathbf {A}\in \mathbb {N}_0^{{\mathcal {A}}\times X}\) be an sf-instance. If \({{\,\mathrm{im}\,}}\mathbf {A}^\top = \ker \mathbf {S}^\top\), then the following statements are equivalent:
\(x,y \in X\) are obligatory isomers;
\(\mathbf {A}_{ax}=\mathbf {A}_{ay}\) for all \(a\in {\mathcal {A}}\).
If \({{\,\mathrm{im}\,}}\mathbf {A}^\top \subseteq \ker \mathbf {S}^\top\), then (i) implies (ii).
Let \(x,y \in X\) be distinct. On the one hand, by Theorem 37, statement (i) is equivalent to \(\mathbf {m}_x=\mathbf {m}_y\) for all \(\mathbf {m} \in \ker \mathbf {S}^\top\). On the other hand, statement (ii) is equivalent to \(\mathbf {m}_x=\mathbf {m}_y\) for all \(\mathbf {m} \in {{\,\mathrm{im}\,}}\mathbf {A}^\top\). If \({{\,\mathrm{im}\,}}\mathbf {A}^\top = \ker \mathbf {S}^\top\), then (i) and (ii) are equivalent. If \({{\,\mathrm{im}\,}}\mathbf {A}^\top \subseteq \ker \mathbf {S}^\top\), that is, if the rows of \(\mathbf {A}\) are elements of \(\ker \mathbf {S}^\top\), then (i) implies (ii). \(\square\)
Any sf-instance \(\mathbf {A}\) whose rows span \(\ker \mathbf {S}^\top\) not only identifies obligatory isomers, but also assigns distinct sum formulas to any distinct compounds \(x,y\in X\) that are not obligatory isomers. In this case, there is at least one row (corresponding to atom a) for which \(\mathbf {A}_{ax}\ne \mathbf {A}_{ay}\). This provides the formal justification for a mathematical definition of sum formula realizations.
A sum formula realization (short: sf-realization) of a RN \((X,\mathscr {R})\) with stoichiometric matrix \(\mathbf {S}\) is a matrix \(\mathbf {A}\in \mathbb {N}_0^{{\mathcal {A}} \times X}\) for some non-empty, finite set \({\mathcal {A}}\) of "atoms" such that
each column of \(\mathbf {A}\) is non-zero and
\({{\,\mathrm{im}\,}}\mathbf {A}^\top = \ker \mathbf {S}^\top\).
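This definition can be checked directly by exact rank computations; the sketch below assumes a reasonably recent sympy (for the `is_zero_matrix` property) and tests the two conditions for given \(\mathbf {S}\) and \(\mathbf {A}\).

```python
from sympy import Matrix

def is_sf_realization(S, A):
    """Check the definition above: non-zero columns and im A^T = ker S^T."""
    S, A = Matrix(S), Matrix(A)
    columns_nonzero = all(not A.col(j).is_zero_matrix for j in range(A.cols))
    rows_in_kernel = (A * S).is_zero_matrix           # each row of A lies in ker S^T
    spans_kernel = A.rank() == S.rows - S.rank()      # rank A = dim ker S^T
    return columns_nonzero and rows_in_kernel and spans_kernel
```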
As an illustration, consider the RN
$$\begin{aligned} {\mathrm{U + V} \longrightarrow \mathrm X},&\quad {\mathrm{U + W} \longrightarrow \mathrm Y}, \\ {\mathrm{X + W} \longrightarrow \mathrm Z},&\quad {\mathrm{Y + V} \longrightarrow \mathrm Z}, \end{aligned}$$
depicted on the left side of Fig. 3. The RN can be instantiated by the sum formulas \({\mathrm U} = {\mathrm A}\), \({\mathrm V}={\mathrm B}\), \({\mathrm W}={\mathrm C}\), \({\mathrm X}={\mathrm{AB}}\), \({\mathrm Y}={\mathrm{AC}}\), \({\mathrm Z} = {\mathrm{ABC}}\). The corresponding matrix \(\mathbf {A}\) (middle right in Fig. 3) is not only an sf-instance, its rows also span \(\ker \mathbf {S}^\top\), and hence it is an sf-realization. (In fact, it is also the mm-representation.) A "reduced representation" can be obtained by assuming that U, V, and W are compositional isomers corresponding to the same moiety D, that is, \({\mathrm U} = {\mathrm V} = {\mathrm W} = {\mathrm D}\). As a consequence, X and Y are also compositional isomers, \({\mathrm X} = {\mathrm Y} = {\mathrm D_2}\), and further \({\mathrm Z} = {\mathrm D_3}\). The corresponding matrix \(\mathbf {A'}\) still defines an sf-instance, but its rows do not span \(\ker \mathbf {S}^\top\). Now consider an extension of the RN in Eq. (26), by adding three isomerization reactions,
$$\begin{aligned} {{\text{U}} \longrightarrow {\text{V}}}, \quad {{\text{V}} \longrightarrow {\text{W}}}, \quad {{\text{U}} \longrightarrow {\text{W}}}. \end{aligned}$$
In the extended network given by Eq. (26) and Eq. (27), we have \(\dim \ker \mathbf {S}^\top =1\), and thus there is a unique MCL. The reactions in Eq. (27) now enforce that U, V, and W are compositional isomers and thus correspond to the same moiety D. This coincides with the "reduced representation" \(\mathbf {A'}\) for the RN in Eq. (26). The distinction is that, for the RN of Eq. (26), we may (but do not have to) assume that U, V, and W are isomers, whereas in the extended network, no other interpretation is possible.
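The kernel dimensions quoted above are easily verified numerically; in the sketch below, the species order U, V, W, X, Y, Z and the column order \(r_1,\dots ,r_7\) are our own choice (any consistent ordering works).

```python
import numpy as np

# species order U, V, W, X, Y, Z; columns are r1-r4 of Eq. (26) and r5-r7 of Eq. (27)
S = np.array([
    [-1, -1,  0,  0, -1,  0, -1],   # U
    [-1,  0,  0, -1,  1, -1,  0],   # V
    [ 0, -1, -1,  0,  0,  1,  1],   # W
    [ 1,  0, -1,  0,  0,  0,  0],   # X
    [ 0,  1,  0, -1,  0,  0,  0],   # Y
    [ 0,  0,  1,  1,  0,  0,  0],   # Z
])
for cols, label in [(slice(0, 4), "r1-r4"), (slice(0, 7), "r1-r7")]:
    dim_left_kernel = S.shape[0] - np.linalg.matrix_rank(S[:, cols])
    print(label, dim_left_kernel)   # 3 for the basic system, 1 for the full system
```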
Finally, we characterize RNs that admit an sf-realization.
A RN \((X,\mathscr{R})\) admits an sf-realization if and only if it is conservative.
Suppose \((X,\mathscr {R})\) admits an sf-realization, which, in particular, is an sf-instance. By Prop. 34, \((X,\mathscr {R})\) is conservative. Conversely, suppose \((X,\mathscr {R})\) is conservative. By definition, the mm-representation is an sf-instance, and by Lemma 32, it is an sf-realization. \(\square\)
Reaction network of the formose reaction describing pre-biotic carbohydrate formation [79]. The RN is drawn here in a simplified form showing aldol and retro-aldol reactions (those with 1 educt and 2 products, and vice versa) without their reverse reactions. The stoichiometric matrix of the full network comprising all 38 reactions connecting the 29 compounds is provided as Additional file 1. Compounds are labeled by the number of carbon atoms. C1a (in the center) designates formaldehyde, C3c is dihydroxyacetone. The network was drawn and analyzed with MØD [21]. All compounds with the same number of carbons are obligatory isomers. Moreover, all sum formula representations are of the form \(A_n\), with A denoting the moiety corresponding to the formaldehyde unit
Obligatory isomers put some restriction on sf-instances. Still, there is surprising freedom for sf-realizations. We say that two sf-realizations \(\mathbf {A}\) and \(\mathbf {A'}\) are equivalent, \(\mathbf {A}\sim \mathbf {A'}\), if there are integers \(p,q\in \mathbb {N}\) such that \(p\mathbf {A}=q\mathbf {A'}\). One easily checks that \(\sim\) is an equivalence relation. If \(\dim \ker \mathbf {S}^\top =1\), then all \(\mathbf {m}\in \ker \mathbf {S}^\top\) are multiples of the unique minimal MCL. All sum formulas are then of the form \({D}_k\), and thus we can think of compounds simply as integers \(k\in \mathbb {N}\). Every reaction thus can be written in the form \(\sum _k s_{kr}^- {D}_k \rightarrow \sum _k s_{kr}^+ {D}_k\) with \(\sum _k (s_{kr}^+ - s_{kr}^-)k = 0\). An example of practical interest is the rearrangement chemistry of carbohydrates, found in metabolic networks such as the pentose phosphate pathway (PPP) or the non-oxidative part of the Calvin-Benson-Bassham (CBB) cycle in the dark phase of photosynthesis. Carbohydrates may be seen as "polymers" of formaldehyde units and can therefore be written as \({D_k =(\text{CH}_{2}\text{O})_k}\). The PPP interconverts pentoses (e.g. ribose) and hexoses (such as glucose) in an atom-economic (no waste) rearrangement network possessing the overall reaction \({6 (\text{CH}_{2}\text{O})_5 \iff 5 (\text{CH}_{2}\text{O})_6}\). In a similar fashion, five 3-phosphoglycerates are reconfigured via carbohydrate chemistry into three ribulose-5-phosphates, which results in the overall reaction \({5 (\text{CH}_{2}\text{O})_3 \rightarrow 3 (\text{CH}_{2}\text{O})_5}\) when focusing on the sugar component. Carbohydrate reaction chemistry is particularly well-suited for the implementation of isomerization networks, and the logic and structure of the design space of alternative networks implementing the same overall reaction has been explored using mathematical and computational models [21, 80]. Fig. 4 shows the RN of the prebiotic carbohydrate formation according to [79]. The analysis of the corresponding stoichiometric matrix, available as Additional file 1, shows that all Cn compounds are obligatory isomers. Furthermore, their sum formulas are necessarily multiples of the C1 unit, which corresponds to formaldehyde in the formose reaction.
For \(\dim \ker \mathbf {S}^\top > 1\), there is an infinite set of sf-realizations that are pairwise inequivalent. To see this, construct matrices \(\mathbf {A}_{t} = (t_1\mathbf {y}^1,t_2\mathbf {y}^2,\ldots ,t_k\mathbf {y}^k)^\top\) from \(k=\dim \ker \mathbf {S}^\top >1\) linearly independent (minimal) MCLs \(\mathbf {y}^i\) and with \(t\in \mathbb {N}^k\). Clearly, every such matrix \(\mathbf {A}_{t}\) is an sf-realization. Furthermore, \(\mathbf {A}_{t}\sim \mathbf {A}_{t'}\) if and only if there are \(p,q\in \mathbb {N}\) such that \(p{t}= q{t'}\). Hence \(\mathbf {A}_{t}\not \sim \mathbf {A}_{t'}\) if there are two distinct indices \(1\le i<j\le k\) such that \(t_i / t'_i \ne t_j / t'_j\). Clearly, there is an infinite set \(T \subseteq \mathbb {N}^k\) of integer vectors such that this inequality is satisfied for all distinct \({t},{t'}\in T\). For instance, one may choose distinct primes for all entries of \({t} \in T\). Thus there are infinitely many pairwise inequivalent sf-realizations. Furthermore, the choice of the (minimal) MCLs is not unique, in general, allowing additional freedom for sf-realizations. Finally, one may produce more complex sf-realizations by appending additional rows to \(\mathbf {A}\) that are linear combinations of the basis vectors. Therefore we have the following result.
Let \((X,\mathscr {R})\) be a conservative RN with stoichiometric matrix \(\mathbf {S}\). If \(\dim \ker \mathbf {S}^\top >1\), then there are infinitely many inequivalent sf-realizations of \((X,\mathscr {R})\).
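The prime-scaling argument above can be made concrete as follows; the snippet only checks the ratio criterion for equivalence and takes \(k=3\), as in the network of Eq. (26), with our own choice of prime vectors.

```python
from itertools import combinations
from sympy import Rational, primerange

k = 3                                                 # dim ker S^T for the network of Eq. (26)
primes = list(primerange(2, 60))
T = [primes[k * i: k * (i + 1)] for i in range(4)]    # entrywise-distinct prime vectors

def equivalent(t, t2):
    """A_t ~ A_t' iff p t = q t' for some p, q, i.e. all component ratios coincide."""
    return len({Rational(a, b) for a, b in zip(t, t2)}) == 1

# all pairs of distinct prime vectors yield inequivalent sf-realizations
assert all(not equivalent(t, t2) for t, t2 in combinations(T, 2))
```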
Structural formula realizations
A structural formula represents a chemical species as a (connected) molecular graph, whose vertices are labeled by atom types and whose edges represent chemical bonds. Lewis structures [81] are equivalent to vertex-labeled multigraphs in which each bonding electron pair is represented as an individual edge, and each non-bonding electron pair as a loop. In particular, double or triple bonds are shown as two or three parallel edges. The educt and product complexes \(r^-\) and \(r^+\) of a reaction r can then be represented as the disjoint unions of the educt and product graphs, respectively. A chemical reaction is a graph transformation that converts the educt graph into the product graph such that vertices and their labels are preserved [33, 82]. Only the bonds are rearranged. Since electrons are conserved, and each edge or loop accounts for two electrons, any reaction must preserve the sum of vertex degrees and thus the number of edges. Fig. 5 shows an example.
Multigraph representation for the reaction \({\text{H}_{2}\text{SO}_{4} \longrightarrow \text{SO}_{3} + \text{H}_{2}\text{O}}\). Atoms shown in color: H, black; O, red; S, yellow. Non-bonding electron pairs are represented by loops, double bonds by two parallel edges
This idea can be generalized to sf-realizations in which "atoms" are viewed as moieties. We may then interpret the vertices of a multigraph as "fragments" of species that are endowed with a certain number of "valencies" or "half-bonds". These must be "saturated" by binding to free valencies of other moieties or they must be used to form internal bonds within a moiety. In graph theory, the degree of a vertex is simply the number of incident edges. In chemistry, a related notion is the valency of an atom, i.e., the number of bonds (counting bond order) that can be formed by an atom. Each type of atom/moiety therefore has a fixed degree that we can think of as the number of half-bonds. Each of these may bind to other moieties or form a "loop", i.e., match up with another half-bond of the same vertex. Correspondingly, the degree d(u) of a vertex u in a multigraph is defined as the number of edges that connect u with other vertices plus twice the number of loops. A reaction thus preserves electrons if and only if its only effect is to rearrange the bonds in the multigraph. The valency \({{\,\mathrm{val}}}(a)\) of an atom of type a is most naturally interpreted as the number of electrons in the outer shell. Loops then correspond to non-bonding electron pairs. This notion of valency matches Frankland's "atomicity" and conforms to the IUPAC terminology [83]. Much of the chemical literature, however, uses the term valency loosely for the number of bonds; it is then not an unambiguous property of an element or atom and changes with the oxidation state.
Let \({\mathcal {A}}\) be a non-empty, finite set, \({{\,\mathrm{val}\,}}:{\mathcal {A}}\rightarrow \mathbb {N}\) be an arbitrary function, and \(\sum _{a\in {\mathcal {A}}} n_a \, a\) be a sum formula. A multigraph \(\Gamma = (V,E,\alpha )\) with loops and vertex coloring \(\alpha :V\rightarrow {\mathcal {A}}\) is a corresponding structural formula if it satisfies the following conditions:
Each vertex \(u\in V\) corresponds to a moiety \(\alpha (u)\), in particular, \(|\{u\in V:\alpha (u)=a\}|=n_a\).
\(d(u)={{\,\mathrm{val}}}(\alpha (u))\) for all \(u\in V\), i.e., the vertex degree of u is given by the corresponding moiety.
\(\Gamma\) is connected.
The structural formulas specified in Def. 42 do not cover all Lewis structures. In particular, neither explicit charges nor unpaired electrons are covered. While these are important from a chemical perspective, we shall see below that such extensions are not needed for our purposes since the straightforward multigraphs in Def. 42 already provide sufficient freedom to obtain representations for all conservative RNs. Extensions to radicals and charges will be briefly considered in the Discussion section.
Let \((X,\mathscr {R})\) be a RN, \({\mathcal {A}}\) be a non-empty, finite set, and \({{\,\mathrm{val}}}:{\mathcal {A}}\rightarrow \mathbb {N}\) be an arbitrary function. A Lewis instance is an assignment of vertex-colored multigraphs \(\Gamma _x=(V_x,E_x,\alpha _x)\) to all \(x\in X\) such that
vertex degrees satisfy \(d(u)={{\,\mathrm{val}}}(\alpha _x(u))\), for all \(u\in V_x\) and \(x\in X\), and
the corresponding matrix \(\mathbf {A} \in \mathbb {N}_0^{{\mathcal {A}}\times X}\) defined by \(\mathbf {A}_{ax} = |\{ u\in V_x:\alpha _x(u)=a\}|\) is an sf-instance.
Furthermore, \(x\mapsto \Gamma _x\) is a Lewis realization if \(\mathbf {A}\) is an sf-realization.
Clearly, every Lewis realization has a corresponding sf-realization. Given an sf-realization, we therefore ask when there is a corresponding Lewis realization. By Def. 42 and 43, we have the following result.
A RN \((X,\mathscr {R})\) has a Lewis realization with corresponding sf-realization \(\mathbf {A}\in \mathbb {N}_0^{{\mathcal {A}}\times X}\) for some non-empty, finite set \({\mathcal {A}}\), if and only if there is a function \({\,\mathrm{val}}:{\mathcal{A}}\rightarrow \mathbb {N}\) such that for the sum formula \(\sum _{a\in {\mathcal {A}}} \mathbf {A}_{ax} \, a\) (for \(x\in X\)) there is a corresponding structural formula \(\Gamma _x\).
For the 'if' part, let \(\sum _{a\in {\mathcal {A}}} \mathbf {A}_{ax} \,a\) be the sum formula for \(x\in X\). By assumption, there exists a vertex-colored multigraph \(\Gamma _x=(V_x,E_x,\alpha _x)\) for x such that (i) vertex degrees satisfy \(d(u)={{\,\mathrm{val}}}(\alpha _x(u))\) and (ii) the corresponding matrix equals the sf-realization \(\mathbf {A}\). The 'only if' part follows analogously. \(\square\)
The appeal of this characterization is that it does not use any properties of the RN \((X,\mathscr {R})\) at all. In fact, it is easy to see that such a representation always exists.
Let \({\mathcal {A}}\) be a nonempty, finite set and \(\sum _{a\in {\mathcal {A}}} n_a \, a\) be a sum formula. Then, there exists a corresponding structural formula with \({\,\mathrm{val}}(a)=2\) for all \(a\in {\mathcal {A}}\).
If the sum formula is given by \(n_a=1\) and \(n_{a'}=0\) for all \(a'\in {\mathcal {A}}\setminus \{a\}\), i.e., if it is a single moiety, then the corresponding structural formula is a single vertex with color a and a loop. Otherwise, arrange the \(|V|=\sum _a n_a\) vertices, of which exactly \(n_a\) are colored by a, in a cycle and connect the vertices along the cycle. Then every vertex u satisfies \(d(u)={{\,\mathrm{val}}}(\alpha (u))=2\) and the graph is connected. \(\square\)
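The cycle construction in this proof is easily reproduced with a multigraph library; the sketch below uses networkx, where a loop contributes two to the degree, matching the convention above.

```python
import networkx as nx

def cycle_structural_formula(sum_formula):
    """Lemma 45 construction: sum_formula is a dict moiety -> count, e.g. {'A': 2, 'B': 1}."""
    vertices = [a for a, n in sum_formula.items() for _ in range(n)]
    G = nx.MultiGraph()
    if len(vertices) == 1:
        G.add_node(0, moiety=vertices[0])
        G.add_edge(0, 0)                          # single vertex with a loop: degree 2
        return G
    for i, a in enumerate(vertices):
        G.add_node(i, moiety=a)
    for i in range(len(vertices)):
        G.add_edge(i, (i + 1) % len(vertices))    # connect the vertices along a cycle
    return G

G = cycle_structural_formula({'A': 2, 'B': 1})
print([G.degree(u) for u in G])                   # every degree equals val(a) = 2
```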
The result extends to any constant function \({{\,\mathrm{val}}}(a)=2k\) (with \(k \in \mathbb {N}\)) by adding \(k-1\) loops to each vertex. As an immediate consequence of Lem. 44 and 45, we have the following result.
\((X,\mathscr {R})\) has a Lewis realization if and only if it has an sf-realization.
Using Prop. 40, we can characterize RNs that admit a Lewis realization.
A RN \((X,\mathscr {R})\) admits a Lewis realization if and only if it is conservative.
Construction of non-isomorphic multigraphs with valency 4 in the proof of Prop. 48. The first three isomers are a cycle (with loops), a cycle with a single triple-bond indicating an "origin", and a graph with an additional double bond. In the third graph, the asymmetric arrangement of the double and triple bonds implies an unambiguous ordering of the remaining vertices (numbered from 1 to n). Non-isomorphic graphs are obtained by converting a pair of loops into a double bond. Since each vertex has at most one bond in addition to the cycle, the resulting graphs correspond to Kleitman's "irreducible diagrams" [84]. If crossings of bonds are excluded, the resulting induced subgraphs with vertex set \(\{1,\dots ,n\}\) are isomorphic to RNA secondary structures on sequences of n monomers. The number \(S_n\) of secondary structures grows asymptotically \(\sim 2.6^n\) [85]
Interestingly, the simple multigraphs in Def. 42 are sufficient to represent all conservative RNs and thus (the proper part of) all chemical networks. Radicals and other chemical species whose structures cannot be expressed in terms of electron pairs therefore do not add to the universe of chemically realistic RNs. For more details, see the Discussion section.
Like an sf-realization, a Lewis realization does not necessarily assign distinct multigraphs \(\Gamma _x\) and \(\Gamma _y\) to distinct compounds x and y. In the case of sf-realizations, obligatory isomers must have the same sum formula. In Lewis realizations, however, they need not have the same multigraph.
For every conservative RN \((X,\mathscr {R})\) there exists an injective Lewis realization \(x\mapsto \Gamma _x\).
Sf-representations can be constructed to have an arbitrary number of atoms or moieties for each \(x \in X\), that is, the vertex sets \(V_x\) of the corresponding multigraphs \(\Gamma _x\) can be chosen arbitrarily large. Set \({{\,\mathrm{val}}}(a)=4\) for all \(a\in {\mathcal {A}}\) and construct an initial Lewis representation of compounds as cycles, as in the proof of Lemma 44, but with an additional loop at each vertex. Consider two obligatory isomers \(x\rightleftharpoons y\), and let the (adjacent) vertices \(u,v\in V_x\) be connected (by a single edge). Now replace the two loops at the corresponding vertices \(u,v\in V_y\) by two additional edges between u and v. If the equivalence class of obligatory isomers contains more than two compounds, choose sets of pairs of disjoint positions along the cycles and replace pairs of loops by double edges. This yields circular matchings, familiar e.g. from the theory of RNA secondary structures [85, 86]. Setting \(n=|V_x|-5\), one can construct crossing-free circular matchings on n vertices, whose number grows faster than \(2.6^n\), see also Fig. 6. Thus, if \(V_x\) is chosen large enough, an arbitrarily large set of obligatory isomers can be represented by non-isomorphic multigraphs. Note, finally, that the construction of non-isomorphic graphs does not depend on (the cardinality of) the atom set \({\mathcal {A}}\), and thus the construction is also applicable in the case \(|{\mathcal {A}}|=1\), i.e., \(\dim \ker \mathbf {S}^\top =1\). \(\square\)
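The loop-to-double-edge replacement used in this proof can be illustrated as follows (networkx, with \({{\,\mathrm{val}\,}}(a)=4\) for every moiety); the graphs produced for different replacement sets have identical degree sequences but are non-isomorphic as multigraphs.

```python
import networkx as nx

def base_graph(n):
    """Cycle with one loop per vertex: each degree is 2 (cycle) + 2 (loop) = 4."""
    G = nx.MultiGraph()
    for i in range(n):
        G.add_edge(i, (i + 1) % n)
        G.add_edge(i, i)
    return G

def variant(n, pairs):
    """Replace the loops at each cycle-adjacent pair (u, v) by two extra u-v edges."""
    G = base_graph(n)
    for u, v in pairs:
        G.remove_edge(u, u)
        G.remove_edge(v, v)
        G.add_edge(u, v)
        G.add_edge(u, v)
    return G

G1, G2 = base_graph(8), variant(8, [(0, 1)])
print(nx.is_isomorphic(G1, G2))      # False: same degree sequence, different multigraphs
```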
The proof in particular shows that the number of vertices required to accommodate the obligatory isomers grows only logarithmically in the size of the equivalence classes of obligatory isomers.
Characterization of chemistry-like reaction networks
In this contribution, we have characterized reaction networks that are chemistry-like in the sense that they are consistent with the conservation of energy and mass and allow an interpretation as transformations of chemical molecules. It is worth noting that we arrive at our results without invoking mass-action kinetics, which has been the focus of interest in chemical reaction network theory since the 1970s [7,8,9]. Instead, we found that basic arguments from thermodynamics (without kinetic considerations) are sufficient. The main results of this contribution can be summarized as follows:
A closed RN \((X,\mathscr {R})\) is thermodynamically sound if and only if it does not contain an irreversible futile cycle. In particular, every reversible network is thermodynamically sound. If irreversible reactions are meant to proceed in a given direction for all external conditions (after opening the RN by adding transport reactions), then \((X,\mathscr {R})\) must be strictly thermodynamically sound. Equivalently, a futile cycle must not contain an irreversible reaction. An analogous result was obtained by [70] assuming mass-action kinetics.
A RN \((X,\mathscr {R})\) is free of cornucopias and abysses if and only if it is conservative.
Both thermodynamic soundness and conservativity are completely determined by the stoichiometric matrix \(\mathbf {S}\), i.e., they are unaffected by catalysts.
A RN \((X,\mathscr {R})\) admits an sf-realization if and only if it is conservative. That is, conservative RNs admit assignments of sum formulas such that (i) atoms (or moieties) are conserved and (ii) two compounds are assigned the same sum formula if and only if they are obligatory isomers. Obligatory isomers, in turn, are completely determined by \(\mathbf {S}\).
For every sf-realization of a RN \((X,\mathscr {R})\) there is also a Lewis-realization, i.e., an assignment of multigraphs to each compound such that reactions are exclusively rearrangements of edges.
Such chemistry-like realizations, however, are by no means unique. In general, the same RN has infinitely many chemical realizations corresponding to different atomic compositions. The structure of the stoichiometric matrix \(\mathbf {S}\) of a closed RN therefore implies surprisingly little about the underlying chemistry.
Nevertheless there is interesting information that is independent of the concrete realization. For example, Thm. 37 can be reformulated as follows: The reversible completion of \((X,\mathscr {R})\) admits a net reaction of the form \(p \, x \longrightarrow q \, y\) with \(x,y\in X\) and \(p,q \in \mathbb {N}\) if and only if \(q \, \mathbf {m}_{x} = p \, \mathbf {m}_{y}\) for every \(\mathbf {m}\in \ker \mathbf {S}^\top\). This identifies "obligatory oligomers", necessarily composed of multiples of the same monomer.
Computational considerations
Somewhat surprisingly, the computational problems associated with recognizing "chemistry-like" RNs are not particularly difficult and can be solved by well-established methods. To see this, recall that \((X,\mathscr {R})\) is conservative iff there is a vector \(\mathbf {m}\gg 0\) such that \(\mathbf {S}^\top \mathbf {m}=0\), and not thermodynamically sound iff there is a vector \(\mathbf {v}> 0\) such that \(\mathbf {S}\mathbf {v}=0\) and \(\mathbf {v}_r> 0\) for some \(r\in \mathscr {R}_{\mathrm {irr}}\). These linear programming problems can be solved in \(O((|X|+|\mathscr {R}|)^{2.37})\) time [87].
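A sketch of both feasibility tests with scipy's LP solver, exploiting the homogeneity of the conditions to bound the variables; this is an illustration, not the exact algorithm of [87].

```python
import numpy as np
from scipy.optimize import linprog

def is_conservative(S):
    """Feasible m with S^T m = 0 and m >= 1 (equivalent to m >> 0 by rescaling)."""
    n_x = S.shape[0]
    res = linprog(c=np.zeros(n_x), A_eq=S.T, b_eq=np.zeros(S.shape[1]),
                  bounds=[(1, None)] * n_x)
    return res.status == 0

def is_thermodynamically_sound(S, irreversible):
    """No v > 0 with S v = 0 supporting an irreversible reaction (irreversible futile cycle)."""
    n_r = S.shape[1]
    c = np.zeros(n_r)
    c[list(irreversible)] = -1.0                     # maximize total flow through R_irr
    res = linprog(c=c, A_eq=S, b_eq=np.zeros(S.shape[0]),
                  bounds=[(0, 1)] * n_r)             # box [0, 1] suffices by homogeneity
    return res.status == 0 and -res.fun < 1e-9       # optimum 0 <=> sound
```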
An integer (not necessarily non-negative) basis of \(\ker \mathbf {S}^\top\) can be computed exactly in polynomial time, e.g. using the Smith normal form, see [88]. Chubanov's algorithm finds exact rational solutions to systems of linear equations with a strict positivity constraint. Thus it can be employed to compute a strictly positive integer solution \(\mathbf {m}\gg 0\) to \(\mathbf {S}^\top \mathbf {m}=0\) in polynomial time [89, 90]. As a consequence, an sf-realization can also be computed explicitly in polynomial time. Each sum formula in turn can be converted into a graph with total effort bounded by \(\max _{x\in X} \sum _{a}\mathbf {A}_{ax}\cdot |X|\), the maximal number of atoms that appear in a sum formula times the number of molecules.
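In practice, an integer basis of \(\ker \mathbf {S}^\top\) can also be obtained by clearing denominators in an exact rational nullspace computation, as in the sketch below; the Smith normal form route cited above gives the same with guaranteed polynomial complexity.

```python
from sympy import Matrix, lcm

def integer_left_kernel_basis(S):
    """Exact integer basis of ker S^T: sympy's rational nullspace, denominators cleared."""
    basis = []
    for v in Matrix(S).T.nullspace():            # exact rational arithmetic
        d = lcm([entry.q for entry in v])        # least common denominator of the entries
        basis.append((d * v).T)                  # integer row vector m with m^T S = 0
    return basis
```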
The equivalence relation \(\rightleftharpoons\) for obligatory isomers is determined by the existence of solutions to a linear equation of the form \(\mathbf {S}\mathbf {v}=\mathbf {w}\) and thus can also be computed in polynomial time, again bounded by the effort for matrix multiplication for each pair \(x,y \in X\). A much more efficient approach, however, is to compute a basis of \(\ker \mathbf {S}^\top\), from which \(\rightleftharpoons\) can be read off directly. This approach easily extends to "obligatory oligomers."
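Reading off \(\rightleftharpoons\) from a kernel basis amounts to grouping species by their coordinate signatures; in the usage line below, the one-element kernel basis is the unique MCL of the extended network of Eqs. (26)-(27).

```python
from collections import defaultdict

def obligatory_isomer_classes(kernel_basis, species):
    """Group species x by the tuple (m_x) over a basis of ker S^T (Thm. 37)."""
    classes = defaultdict(list)
    for j, x in enumerate(species):
        classes[tuple(m[j] for m in kernel_basis)].append(x)
    return list(classes.values())

print(obligatory_isomer_classes([[1, 1, 1, 2, 2, 3]], list("UVWXYZ")))
# [['U', 'V', 'W'], ['X', 'Y'], ['Z']]
```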
Treating RNs as closed systems is too restrictive to describe metabolic networks. There, RNs are considered as open systems that allow the inflow of nutrients and the outflow of waste products. Models of metabolism often impose a condition of viability. Traditionally, this is modeled as a single export "reaction" \(r_{bm}\) of the form \(\sum _i \alpha _i {C}_i \rightarrow \varnothing\), known as the biomass function [91]. It comprises all relevant precursor metabolites \({C}_i\) (forming all relevant macromolecules) in their empirically determined proportions \(\alpha _i\). Viability is then defined as the existence of a flow \(\mathbf {v}>0\) with \(\mathbf {S}\mathbf {v}=0\) and \(\mathbf {v}_{bm}>0\). This linear programming problem can be tested efficiently by means of flux balance analysis (FBA) [92]. In contrast to \((X,\mathscr {R})\) being conservative and thermodynamically sound, however, viability is a property of the metabolic model, not of the underlying representation of the chemistry.
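A minimal FBA-style viability test is again a single LP; the upper flux bound and the index of the biomass column are model-specific assumptions here, not prescriptions from the text.

```python
import numpy as np
from scipy.optimize import linprog

def is_viable(S, bm, ub=1000.0):
    """Viability: exists v >= 0 with S v = 0 and v_bm > 0 (bm: biomass column index)."""
    n_r = S.shape[1]
    c = np.zeros(n_r)
    c[bm] = -1.0                                     # linprog minimizes, so maximize v_bm
    res = linprog(c=c, A_eq=S, b_eq=np.zeros(S.shape[0]),
                  bounds=[(0.0, ub)] * n_r)
    return res.status == 0 and -res.fun > 1e-9
```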
Outlook to open problems
Construction of random chemistry-like networks
The formal characterization of chemistry-like RNs developed here suggests several interesting questions for further research. In particular, our results define rather clearly what a random chemistry-like RN should be, and thus pose the question of whether there are efficient algorithms for their construction. Let us consider the task of generating a random chemistry-like RN in a bit more detail. We first note that it suffices to generate a stoichiometric matrix \(\mathbf {S}\in \mathbb {Z}^{X\times \mathscr {R}}\) that is thermodynamically sound and conservative. If explicit catalysts are desired, they can be added to a reaction without further restrictions. More precisely, given \(\mathbf {S}\), we obtain a network with the same stoichiometric matrix plus catalysts by setting
$$\begin{aligned} \begin{aligned} s_{xr}^- = c_{xr}, \; s_{xr}^+ = c_{xr}+s_{xr}&\quad \text {if } s_{xr}\ge 0 , \\ s_{xr}^- = c_{xr}-s_{xr}, \;s_{xr}^+ = c_{xr}&\quad \text {if } s_{xr}\le 0 . \end{aligned} \end{aligned}$$
The "catalyst matrix" \(\mathbf {C}\) may contain arbritrary integers \(c_{xr}\ge 0\). For the generation of a RN \((X,\mathscr {R})\), therefore, it can be drawn independently of \(\mathbf {S}\).
The key task of generating \((X,\mathscr {R})\) is therefore the construction of an \(|X|\times |\mathscr {R}|\) integer matrix \(\mathbf {S}\) that is conservative and thermodynamically sound. Both conditions amount to the (non)existence of vectors with certain sign patterns in \(\ker \mathbf {S}\) and \(\ker \mathbf {S}^\top\), respectively. In order to obtain a background model for a given chemical RN, one might also ask for a random integer matrix that has a given left nullspace and is thermodynamically sound. In addition, one would probably like to (approximately) preserve the fraction of zero entries per row and column and the mean of the non-zero entries. To our knowledge, no efficient exact algorithms for this problem are known.
A potentially promising alternative is the independent generation of the complex matrix \(\mathbf {Y}\) and the incidence matrix \(\mathbf {Z}\) of the complex-reaction graph. Given a fixed conservative and thermodynamically sound RN, furthermore, one can make use of the heredity of thermodynamic soundness and conservativity and consider random subnetworks. This approach has been explored in particular for metabolic networks: The ensemble of viable metabolic networks in a given chemical RN can then be sampled by a random walk on the set of reactions [57] or a more sophisticated Markov-Chain-Monte-Carlo procedure [55, 93].
Chemistry-like realizations
The structural formulas constructed in Lemma 45 are not very "realistic" from a chemical perspective. It is of interest, therefore, whether one can construct chemically more appealing (multi-)graphs. As noted in the Introduction, the problem of designing a "molecular implementation" of a prescribed stoichiometric matrix \(\mathbf {S}\) is a key problem in utilizing chemical reaction networks as computing devices. From a mathematical point of view there seem to be only a few constraints: (i) If a moiety a appears in isolation, i.e., as a molecule \(x=1a\), then \({{\,\mathrm{val}}}(a)\) must be even, since it contains \({{\,\mathrm{val}}}(a)/2\) loops. (ii) The case \({{\,\mathrm{val}}}(a)=1\) is only possible if there is no compound composed exclusively of three or more copies of a or composed of more than two moieties with valency 1. (iii) It is well known that the sum of degrees must be even for every multigraph, and connectedness implies \(\sum _u {{\,\mathrm{val}}}(u)\ge 2(|V|-1)\) [94].
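Conditions (i) and (iii), together with the handshake lemma, are cheap to test; the sketch below checks only these necessary conditions (constraint (ii) on valency-1 moieties depends on the whole network and is omitted).

```python
def passes_necessary_conditions(sum_formula, val):
    """Necessary conditions (i) and (iii) for a connected multigraph realization.

    sum_formula: dict moiety -> count; val: dict moiety -> valency.
    """
    counts = {a: n for a, n in sum_formula.items() if n > 0}
    n_vertices = sum(counts.values())
    degree_sum = sum(val[a] * n for a, n in counts.items())
    if n_vertices == 1:                              # (i) isolated moiety: loops only
        return degree_sum % 2 == 0
    # handshake lemma and the connectedness bound (iii)
    return degree_sum % 2 == 0 and degree_sum >= 2 * (n_vertices - 1)
```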
The problem of finding multigraph realizations is closely related to, but not the same as, the problem of determining the realizability of degree sequences in graphs [95] or multigraphs [96]. As in graph theory, it seems to be of particular interest to study realizability by structural formulas in the presence of additional constraints on admissible graphs. Complementary to constraints on the multigraphs that render them plausible chemical graphs, the "chemical implementation" of a given \(\mathbf {S}\) also involves constraints on the admissible (types of) reactions, i.e., the allowed rearrangements of edges in the multigraphs. It is much less clear how to formalize this aspect, although there seems to be a connection to graph grammar models of chemical reactions [97].
A Lewis structure-like presentation of \({\text{NO}_{2} + \text{NO} \longrightarrow \text{N}_{2}\text{O}_{3}}\) highlights that multigraphs with atom-type dependent degrees are not sufficient to represent all molecules of interest. To represent NO2, an unpaired electron (shown as a semi-edge ending in a small black ball), an N atom with vertex degree \(4<{{\,\mathrm{val}}}({N})=5\), and an oxygen atom with vertex degree \(7>{{\,\mathrm{val}}}({O})=6\) are required. Similarly, NO is a neutral stable radical, with an unpaired electron at N. The product N2O3 has no unpaired electrons, but exhibits an O and an N atom with a deviant vertex degree and thus a net charge. Differences between nominal valency and actual vertex degree are indicated by the charge symbols \(\oplus\) and \(\ominus\). In general, the net charge at a vertex v is given by \({{\,\mathrm{val}}}(\alpha (v))-\deg (v)\)
An advantage of considering the multigraphs specified in Def. 42 instead of the full range of Lewis structures is that a well-established mathematical theory is available. However, "multigraphs with semi-edges", which are essentially equivalent to Lewis structures of radicals, have been studied occasionally in recent years [98, 99] and may be an appealing framework, in particular, when restricted realizations are considered. The example of nitrogen oxides in Fig. 7 shows, however, that unpaired electrons (as in the Lewis structure of NO) are not the only issue. A complete implementation of Lewis structures also requires local net charges \({{\,\mathrm{val}}}(\alpha (v))-\deg (v)\) at vertices v, as a semi-edge-like annotation distinct from unpaired electrons, see e.g. [100].
Infinite RNs
Throughout this contribution, we have assumed that \((X,\mathscr {R})\) is finite. In general, however, chemical universes are infinite, at least in principle. The simplest example of infinite families are polymers. It is of interest, therefore, to develop a theory of infinite reaction networks. To this end, one could follow e.g. [101], where infinite directed hypergraphs are also considered, and further extend the literature on countably infinite undirected hypergraphs, see e.g. [102, 103] and the references therein. Most previous work pre-supposed k-uniformity, i.e., hyper-edges of (small) finite cardinality, matching well with the situation in chemical RNs. Every sub-RN of an infinite RN induced by a finite vertex set \(Y\subset X\) can be assumed to support only a finite number of reactions (directed hyperedges) \(\mathscr {R}_Y\subset \mathscr {R}\). This amounts to assuming that a sub-RN induced by a finite set of compounds Y is a finite RN. Every finite sub-RN of a "chemistry-like" infinite RN, furthermore, needs to be conservative and thermodynamically sound. Infinite RNs will not be locally finite, in general, since every compound \(x\in X\) may have infinitely many reaction partners, e.g., all members of a polymer family. Thus x may appear in an infinite number of reactions. These simple observations suggest that infinite "chemistry-like" RNs are non-trivial structures whose study may turn out to be a worthwhile mathematical endeavor.
The stoichiometric matrix of the complete formose RN, Fig. 4, is available as machine-readable Additional file 1.
Sandefur CI, Mincheva M, Schnell S (2013) Network representations and methods for the analysis of chemical and biochemical pathways. Mol Biosyst. 9:2189–2200
Alon U (2007) Network motifs: theory and experimental approaches. Nat Rev Genet. 8:450–461
Shellman ER, Burant CF, Schnell S (2013) Network motifs provide signatures that characterize metabolism. Mol Biosyst. 9:352–360
Soulé C (2003) Graphic requirements for multistationarity. ComplexUs. 1:123–133
Borenstein E, Kupiec M, Feldman MW, Ruppin E (2008) Large-scale reconstruction and phylogenetic analysis of metabolic environments. Proc Natl Acad Sci USA 105:14482–14487
Fagerberg R, Flamm C, Merkle D, Peters P, Stadler PF (2013) On the complexity of reconstructing chemical reaction networks. Math Comp Sci. 7:275–292
Horn FJM (1972) Necessary and sufficient conditions for complex balancing in chemical kinetics. Arch Rational Mech Anal. 49:172–186
Horn F, Jackson R (1972) General mass action kinetics. Arch Rational Mech Anal. 47:81–116
Feinberg M (1972) Complex balancing in general kinetic systems. Arch Rational Mech Anal. 49:187–194
Craciun G, Dickenstein A, Shiu A, Sturmfels B (2009) Toric dynamical systems. J Symb Comput. 44:1551–1565
Angeli D (2009) A tutorial on chemical reaction network dynamics. Eur J Control. 15:398–406
Craciun G, Feinberg M (2006) Multiple equilibria in complex chemical reaction networks: II. The species-reaction graph. SIAM J Appl Math. 66:1321–1338
Kaltenbach HM (2020) A unified view on bipartite species-reaction and interaction graphs for chemical reaction networks. Electronic Notes Theor Comp Sci. 350:79–90
Shinar G, Feinberg M (2013) Concordant chemical reaction networks and the Species-Reaction graph. Math Biosci. 241:1–23
Mincheva M, Roussel MR (2006) A graph-theoretic method for detecting potential Turing bifurcations. J Chem Phys. 125:204102
Zykov AA (1974) Hypergraphs. Usp Math Nauk. 6:89–154
Zhou W, Nakhleh L (2011) Properties of metabolic graphs: biological organization or representation artifacts? BMC Bioinform. 12:132
Santiago Arguello A, Stadler PF (2021) Whitney's Connectivity Inequalities for Directed Hypergraphs. Art Discr Appl Math. 5:P1.01
Klamt S, Haus UU, Theis F (2009) Hypergraphs and cellular networks. PLoS Comput Biol. 5:e1000385
Montañez R, Medina MA, Solé RV, Rodríguez-Caso C (2010) When metabolism meets topology: reconciling metabolite and reaction networks. BioEssays. 32:246–256
Andersen JL, Flamm C, Merkle D, Stadler PF (2019) Chemical transformation motifs—modelling pathways as integer hyperflows. IEEE/ACM Trans Comp Biol. 16:510–523
Wagner A, Fell DA (2001) The small world inside large metabolic networks. Proc R Soc Lond B. 268:1803–1810
Jeong H, Tombor B, Albert R, Oltvai ZN, Barabási AL (2000) The large-scale organization of metabolic networks. Nature. 407:651–654
Gleiss PM, Stadler PF, Wagner A, Fell DA (2001) Relevant cycles in chemical reaction network. Adv Complex Syst. 4:207–226
Fischer J, Kleidon A, Dittrich P (2015) Thermodynamics of random reaction networks. PLoS ONE. 10:e0117312
Schuster S, Höfer T (1991) Determining all extreme semi-positive conservation relations in chemical reaction systems: a test criterion for conservativity. J Chem Soc Faraday Trans. 87:2561–2566
Gadewar SB, Doherty MF, Malone MF (2001) A systematic method for reaction invariants and mole balances for complex chemistries. Comput Chem Eng. 25:1199–1217
Famili I, Palsson BØ (2003) The convex basis of the left null space of the stoichiometric matrix leads to the definition of metabolically meaningful pools. Biophys J. 85:16–26
Flockerzi D, Bohmann A, Kienle A (2007) On the existence and computation of reaction invariants. Chem Eng Sci. 62:4811–4816
Haraldsdóttir HS, Fleming RMT (2016) Identification of conserved moieties in metabolic networks by graph theoretical analysis of atom transition networks. PLoS Comput Biol. 12:e1004999
Fontana W (1991) Algorithmic chemistry. In: Langton CG, Taylor C, Farmer JD, Rasmussen S (eds) Artificial Life II. Addison-Wesley, pp 159–210
Dittrich P, Ziegler J, Banzhaf W (2001) Artificial chemistries—a review. Artificial life. 7:225–275
Benkö G, Flamm C, Stadler PF (2003) A graph-based toy model of chemistry. J Chem Inf Comput Sci. 43:1085–1093
Banzhaf W, Yamamoto L (2015) Artificial Chemistries. MIT Press, Cambridge
Berry G, Boudol G (1992) The chemical abstract machine. Theor Comp Sci. 96:217–248
Liekens AML, Fernando CT (2007) Turing complete catalytic particle computers. In: Almeida e Costa F, Rocha LM, Costa E, Harvey I, Coutinho A, editors. Proceedings of the 9th European Conference on Artificial Life. vol. 4648 of Lect. Notes Comp. Sci. Berlin: Springer, p. 1202–1211
Soloveichik D, Cook M, Winfree E, Bruck J (2008) Computation with finite stochastic chemical reaction networks. Natural Comput. 7:615–633
Dueñas-Díez M, Pérez-Mercader J (2021) Native chemical computation. A generic application of oscillating chemistry illustrated with the Belousov-Zhabotinsky reaction. A review. Front Chem. 9:611120
Soloveichik D, Seelig G, Winfree E (2010) DNA as a universal substrate for chemical kinetics. Proc Natl Acad Sci USA 107:5393–5398
Badelt S, Shin SW, Johnson RFJ, Dong Q, Thachuk C, Winfree E (2017) A General-Purpose CRN-to-DSD Compiler with Formal Verification, Optimization, and Simulation Capabilities. In: Brijder R, Qian L, editors. DNA Computing and Molecular Programming. vol. 10467 of Lect. Notes Comp. Sci. Cham: Springer. p. 232–248
Watts DJ, Strogatz SH (1998) Collective dynamics of 'small-world' networks. Nature. 393:440–442
Newman MEJ, Strogatz SH, Watts DJ (2001) Random graphs with arbitrary degree distributions and their applications. Phys Rev E. 64:026118
Ravasz E, Somera AL, Mongru DA, Oltvai ZN, Barabási AL (2002) Hierarchical organization of modularity in metabolic networks. Science. 297:1551–1555
Arita M (2004) The metabolic world of Escherichia coli is not small. Proc Natl Acad Sci USA 101:1543–1547
Azizi A, Dewar J, Wu T, Hyman JM (2017) Generating bipartite networks with a prescribed joint degree distribution. J Complex Netw. 5:839–857
Rao AR, Jana R, Bandyopadhyay S (1996) A Markov chain Monte Carlo method for generating random \((0,1)\)-matrices with given marginals. Indian J Statistics Ser A. 58:225–242
Hanhijärvi S, Garriga GC, Puolamäki K (2009) Randomization Techniques for Graphs. In: Proceedings of the 2009 SIAM International Conference on Data Mining. SIAM. p. 780–791
Strona G, Nappo D, Boccacci F, Fattorini S, San-Miguel-Ayanz J (2014) A fast and unbiased procedure to randomize ecological binary matrices with fixed row and column totals. Nat Comm. 5:4114
Saracco F, Di Clemente R, Gabrielli A, Squartini T (2015) Randomizing bipartite networks: the case of the World Trade Web. Sci Rep. 5:10595
de Panafieu É (2015) Phase transition of random non-uniform hypergraphs. J Discrete Alg. 31:26–39
Ghoshal G, Zlatić V, Caldarelli G, Newman MEJ (2009) Random hypergraphs and their applications. Phys Rev E. 79:066118
Sloan RH, Stasi D, Turán G (2012) Random horn formulas and propagation connectivity for directed hypergraphs. Discrete Math Theor Comp Sci. 14:29–36
Nakajima K, Shudo K, Masuda N (2021) Randomizing hypergraphs preserving degree correlation and local clustering. IEEE Trans Network Sci Eng
Braun P (2019) Randomization of chemical reaction networks based on a graph-language model [MSc thesis]. Universität Wien, Fakultät für Physik. https://othes.univie.ac.at/58106/
Samal A, Matias Rodrigues JF, Jost J, Martin OC, Wagner A (2010) Genotype networks in metabolic reaction spaces. BMC Syst Biol. 4:30
Kim H, Smith HB, Mathis C, Raymond J, Walker SI (2019) Universal scaling across biochemical networks on Earth. Sci Adv. 5:eaau0149
Matias Rodrigues JF, Wagner A (2009) Evolutionary plasticity and innovations in complex metabolic reaction networks. PLoS Comput Biol. 5:e1000613
Oró J, Kimball AP (1961) Synthesis of purines under possible primitive earth conditions. I. Adenine from hydrogen cyanide. Arch Biochem Biophys. 94:217–227
Andersen JL, Andersen T, Flamm C, Hanczyc M, Merkle D, Stadler PF (2013) Navigating the chemical space of HCN polymerization and hydrolysis: guiding graph grammars by mass spectrometry data. Entropy. 15:4066–4083
Tschoegl NW (2000) Fundamentals of equilibrium and steady-state thermodynamics. Elsevier, Amsterdam
Schilling CH, Letscher D, Palsson BØ (2000) Theory for the systemic definition of metabolic pathways and their use in interpreting metabolic function from a pathway-oriented perspective. J Theor Biol. 203(3):229–248
Beard DA, Liang S, Qian H (2002) Energy balance for analysis of complex metabolic networks. Biophys J. 83:79–86
Schwender J, Ohlrogge J, Shachar-Hill Y (2004) Understanding flux in plant metabolic networks. Curr Opin Plant Biol. 7:309–317
Qian H, Beard DA (2006) Metabolic futile cycles and their functions: a systems analysis of energy and control. IEE Proc Systems Biology. 153:192–200
Minty GJ (1974) A "from scratch'' proof of a theorem of Rockafellar and Fulkerson. Mathematical Programming. 7:368–375
Müller S, Hofbauer J, Regensburger G (2019) On the bijectivity of families of exponential/generalized polynomial maps. SIAM J Appl Algebra Geom. 3(3):412–438
Dondi D, Merli D, Albini A, Zeffiroa A, Serpone N (2012) Chemical reaction networks as a model to describe UVC- and radiolyticallyinduced reactions of simple compounds. Photochem Photobiol Sci. 11:835–842
Pekař M (2005) Thermodynamics and foundations of mass-action kinetics. Prog React Kinet Mech. 30:3–113
Polettini M, Esposito M (2014) Irreversible thermodynamics of open chemical networks. I. Emergent cycles and broken conservation laws. J Chem Phys. 141:024117
Gorban AN, Yablonsky GS (2011) Extended detailed balance for systems with irreversible reactions. Chem Eng Sci. 66(21):5388–5399
Gorban AN, Mirkes EM, Yablonsky GS (2013) Thermodynamics in the limit of irreversible reactions. Physica A: Stat Mech Appl. 392(6):1318–1335
Bigan E, Steyaert JM, Douady S (2013) Properties of Random Complex Chemical Reaction Networks and Their Relevance to Biological Toy Models. arXiv. 1303.7439
Rao R, Esposito M (2018) Conservation laws and work fluctuation relations in chemical reaction networks. J Chem Phys. 149:245101
Schuster S, Hilgetag C (1995) What information about the conserved-moiety structure of chemical reaction systems can be derived from their stoichiometry? J Phys Chem. 99:8017–8023
Müller S, Regensburger G (2016) Elementary vectors and conformal sums in polyhedral geometry and their relevance for metabolic pathway analysis. Front Genet. 7:1–11
De Martino A, De Martino D, Mulet R, Pagnani A (2014) Identifying all moiety conservation laws in genome-scale metabolic networks. PLoS ONE. 9:e100750
Graver JE (1975) On the foundations of linear and integer linear programming. I. Math Program. 9:207–226
Doty D, Zhu S (2018) Computational complexity of atomic chemical reaction networks. Natural Computing. 17:677–691
Benner SA, Kim HJ, Kim MJ, Ricardo A (2010) Planetary organic chemistry and the origins of biomolecules. Cold Spring Harb Perspect Biol. 2:a003467
Meléndez-Hevia E, Isidoro A (1985) The game of the pentose phosphate cycle. J Theor Biol. 117(2):251–263
Lewis GN (1916) The Atom and the Molecule. J Am Chem Soc. 38:762–785
Rossello F, Valiente G (2005) Chemical graphs, chemical reaction graphs, and chemical graph transformation. Electr Notes Theor Comp Sci. 127:157–166
Muller P (1994) Glossary of terms used in physical organic chemistry (IUPAC Recommendations 1994). Pure Appl Chem. 66:1077–1184
Kleitman DJ (1970) Proportions of irreducible diagrams. Studies Appl Math. 49:297–299
Stein PR, Waterman MS (1979) On some new sequences generalizing the Catalan and Motzkin numbers. Discr Math. 26:261–272
Waterman MS, Smith TF (1978) RNA secondary structure: a complete mathematical analysis. Math Biosci. 42:257–266
Cohen MB, Lee YT, Song Z (2021) Solving linear programs in the current matrix multiplication time. J ACM. 68:31–39
Newman M (1997) The Smith normal form. Lin Alg Appl. 254:367–381
Chubanov S (2015) A polynomial projection algorithm for linear feasibility problems. Mathematical Programming. 153:687–713
Root K (2018) An improved version of Chubanov's method for solving a homogeneous feasibility problem. Opt Methods Softw. 33:26–44
Feist AM, Palsson BØ (2010) The biomass objective function. Curr Opin Microbiol. 13:344–349
Orth JD, Thiele I, Palsson BØ (2010) What is flux balance analysis? Nature Biotech. 28:245–248
Barve A, Matias Rodrigues J, Wagner A (2012) Superessential reactions in metabolic networks. Proc Natl Acad Sci. 109:E1121–E1130
Edmonds J (1964) Existence of \(k\)-edge connected ordinary graphs with prescribed degrees. J Res Nat Bur Standards Sect B. 68:73–74
Meierling D, Volkmann L (2009) A remark on degree sequences of multigraphs. Math Methods Oper Res. 69:369–374
Sierksma G, Hoogeveen H (1991) Seven criteria for integer sequences being graphic. J Graph Th. 15:223–231
Andersen JL, Flamm C, Merkle D, Stadler PF (2013) Inferring chemical reaction patterns using graph grammar rule composition. J Syst Chem. 4:4
Getzler E, Kapranov MM (1998) Modular operads. Compositio Mathematica. 110:65–125
Mednykh AD, Nedela R (2015) Harmonic Morphisms of graphs: Part I: graph coverings. Vydavatelstvo Univerzity Mateja Bela, Banska Bystrica
Karen P, McArdle P, Takats J (2014) Toward a comprehensive definition of oxidation state. J Pure Appl Chem. 86:1017–1081. IUPAC Report
Ostermeier L, Hellmuth M, Stadler PF (2012) The Cartesian product of hypergraphs. J Graph Th. 70:180–196
Banakh T, van der Zypen D (2019) Minimal covers of infinite hypergraphs. Discr Math. 342:3043–3046
Bustamante S, Corsten J, Frankl N (2020) Partitioning Infinite Hypergraphs into Few Monochromatic Berge-Paths. Graphs Combinatorics. 36:437–444
Open Access funding enabled and organized by Projekt DEAL. This research was funded in part by the German Federal Ministry of Education and Research within the project Center for Scalable Data Analytics and Artificial Intelligence (ScaDS.AI.) Dresden/Leipzig (BMBF 01IS18026B), and the German Research Foundation DFG, grant no. STA 850/58-1. SM was supported by the Austrian Science Fund (FWF), project P33218.
Faculty of Mathematics, University of Vienna, Oskar-Morgenstern-Platz 1, 1090, Vienna, Austria
Department of Theoretical Chemistry, University of Vienna, Währinger Straße 17, 1090, Vienna, Austria
Christoph Flamm & Peter F. Stadler
Bioinformatics Group, Department of Computer Science, and Interdisciplinary Center for Bioinformatics, Universität Leipzig, Härtelstraße 16–18, 04107, Leipzig, Germany
Peter F. Stadler
German Centre for Integrative Biodiversity Research (iDiv) Halle-Jena-Leipzig & Competence Center for Scalable Data Services and Solutions Dresden-Leipzig & Leipzig Research Center for Civilization Diseases University Leipzig, 04107, Leipzig, Germany
Max Planck Institute for Mathematics in the Sciences, Inselstraße 22, 04103, Leipzig, Germany
Faculdad de Ciencias, Universidad Nacional de Colombia, Sede Bogotá, Ciudad Universitaria, Bogotá, 111321, Colombia
Santa Fe Institute, 1399 Hyde Park Rd., Santa Fe, NM87501, USA
Christoph Flamm
CF and PFS designed the study, SM and PFS proved the mathematical results, all authors contributed to the interpretation of the results and the writing of the manuscript. All authors read and approved the final manuscript.
Correspondence to Peter F. Stadler.
Stoichiometric matrix of the complete formose RN, Fig. 4, in machine-readable form.
Appendix: Mathematical notation
We consider matrices and vectors indexed by chemical species \(x\in X\) or chemical reactions \(r\in \mathscr {R}\). Hence, both species and reactions can be thought of as endowed with an arbitrary, but fixed order determining the order of rows and columns. Standard mathematical notation is used without further explanation in the main text. Notation that may be less familiar is summarized here:
\(\mathbb {N}\): positive integers
\(\mathbb {R}\): real numbers
\(\mathbf {A}^{\!\top }\): transpose of matrix \(\mathbf {A}\)
\(\ker \mathbf {A}\): kernel of matrix \(\mathbf {A}\), i.e., \(\ker \mathbf {A} = \{\mathbf {x} \mid \mathbf {A} \mathbf {x}= 0\}\)
\({{\,\mathrm{im}\,}}\mathbf {A}\): image of matrix \(\mathbf {A}\), i.e., \({{\,\mathrm{im}\,}}\mathbf {A} = \{\mathbf {y} \mid \mathbf {y} = \mathbf {A}\mathbf {x} \text { for some } \mathbf {x}\}\)
\({{\,\mathrm{cone}\,}}\mathbf {A}\): polyhedral cone induced by matrix \(\mathbf {A}\), i.e., \({{\,\mathrm{cone}\,}}\mathbf {A} = \{\mathbf {y} \mid \mathbf {y} = \mathbf {A}\mathbf {x} \text { for some } \mathbf {x}\ge 0\}\)
\(\mathbf {x}^{\!\top }\): row vector of column vector \(\mathbf {x}\)
\(\mathbf {x}_i\): component of vector \(\mathbf {x} \in \mathbb {R}^I\) (with \(i \in I\))
\({{\,\mathrm{supp}\,}}\mathbf {x}\): support of vector \(\mathbf {x} \in \mathbb {R}^I\), i.e., \({{\,\mathrm{supp}\,}}\mathbf {x} = \{ i \in I \mid \mathbf {x}_i \ne 0 \}\)
\(\dim V\): dimension of vector space V
\(V^\perp\): orthogonal complement of vector space V, i.e., \(V^\perp = \{\mathbf {y} \mid \mathbf {x}^{\!\top }\mathbf {y}=0 \text { for all } \mathbf {x}\in V\}\)
Müller, S., Flamm, C. & Stadler, P.F. What makes a reaction network "chemical"?. J Cheminform 14, 63 (2022). https://doi.org/10.1186/s13321-022-00621-8
Chemical reaction network
Directed hypergraph
Stoichiometric matrix
Futile cycle
Mass conservation
Reaction invariants
Null spaces
Sum formula
Multigraph
Lewis formula | CommonCrawl |
Modelling steel strip heating within an annealing furnace
Stephen W. Taylor ORCID: orcid.org/0000-0003-1633-3582 &
Shixiao Wang
Annealing furnaces are used to heat steel in order to change its chemical structure. In this paper we model an electric radiant furnace. One of the major defects in steel strips processed in such furnaces is a wave-like pattern near the edges of the strip, apparently due to extra heating near the edges. The aim of the paper is to model this effect and provide a way to calculate the elevated temperatures near the edges. We analyse two processes that are suspected to contribute to uneven heating. The modelling involves an asymptotic analysis of the effect of heat flux at the edges and a detailed analysis of the integral equations associated with radiant heat transfer in the furnace.
The high temperatures within a steel annealing furnace preclude any reliable way to take measurements of the temperature; hence the need for mathematical models so that the temperature can be computed. We model an electric radiant annealing furnace with length of order 100 metres through which strips of steel sheet pass at speeds of up to 130 metres per minute in order to achieve the strip temperatures required for annealing. A schematic diagram of the furnace is shown in Fig. 4. The temperature along the furnace is controlled by varying the power supplied to the heating elements and the line speed through the furnace is reduced for strips of large thickness and width in order to achieve the required temperatures within the steel strips. At the beginning of the annealing–coating line there is an automatic welding process which welds the beginning of a new coil of steel sheet to the end of its predecessor, allowing the line to run continuously.
Occasionally the edges of the strip may take a wave-like shape after passing through the furnace and this seems to be a result of extra heating at the edges of the strip. This hypothesis is supported by a COMSOL® model of the system [1, 2] which shows a trend of increasing steel strip temperatures closer to the edges. The goal of this paper is to gain a better understanding of the nonuniform heating of the strip across its width.
The furnace has already been modelled in a recent Mathematics-in-Industry Study Group (MISG) meeting [5]. However the model developed in that meeting was based on an assumption of uniform heating across the width of the strip and is thus unsuitable for explaining such defects. There is a very limited amount of modelling of such furnaces in the literature. Apart from the papers already cited, perhaps the closest work is [10], which also takes into account the radiative heat transfer within a multi-zone annealing furnace. However, although the model in [10] is more detailed than that given in [5], it also makes the approximation that the strip temperature does not vary across its width. Other related models concern an electric furnace model for crystal formation in the papers by Pérez-Grande et al. [7], Sauermann et al. [8], Teodorczyk and Januszkiewicz [9].
Because of the high temperatures within the furnace, radiant heat transfer is the primary mode of heat transfer. This is discussed briefly in the MISG paper [5], but for a complete discussion we cite some standard texts by Incropera and DeWitt [4], Modest [6], Siegel and Howell [3].
As a starting point to our modelling, we briefly summarise what was done in [5]. The temperature u of the steel strip is assumed to satisfy the heat equation
$$ \rho_{S} C_{S}\left(\frac{\partial u}{\partial t}+v \frac{\partial u}{\partial x}\right)=k_{S}\left(\frac{\partial^{2} u}{\partial x^{2}}+\frac{\partial^{2} u}{\partial y^{2}}+\frac{\partial^{2} u}{\partial z^{2}}\right), \quad t>0, \; (x,y,z)\in \mathcal{S}, $$
the region of space occupied by the strip is
$$ \mathcal{S}=\{(x,y,z): 0\le x\le L, \; -w/2\le y\le w/2, \; 0\le z\le h\}; $$
L is the length of the furnace;
x measures distance from the point of entry of the strip into the furnace, z is a distance coordinate in the vertical direction and y is a distance coordinate across the strip;
v is the velocity of the strip through the furnace;
w and h are respectively the width and thickness of the strip.
\(\rho_S\), \(C_S\) and \(k_S\) are the strip's density, specific heat capacity and heat conductivity, respectively.
The functions w and h are typically piecewise constant functions of x and t, and v can vary with time, but in this paper we limit our analysis to the desirable steady-state operation of the furnace, for which these variables are constant.
Equation 1 is supplemented by an initial condition
$$u(x,y,z,0)=u_{0}(x,y,z), \quad (x,y,z)\in \mathcal{S}, $$
and boundary conditions. It is assumed that the steel strip enters the furnace at x=0 at constant temperature \(T_0\), giving the Dirichlet boundary condition,
$$u(0,y,z,t)=T_{0}. $$
Mathematically, it is also appropriate to specify a boundary condition where the strip exits the furnace at x=L. One could propose a model leading to an appropriate boundary condition there. However we will see soon that heat conduction in the steel strip in the direction of the x-axis is very small, which means that the term involving \(\frac {\partial ^{2} u}{\partial x^{2}}\) can be neglected everywhere except in a small boundary layer near x=L. Physically, heat is not conducted quickly enough for the temperature of the part of the strip that has already left the furnace to affect the temperature within the furnace. Thus, any boundary effect at x=L is not expected to be significant and thus we do not attempt to model it.
Boundary conditions on the remaining parts of the boundary of \(\mathcal {S}\) arise from conservation of energy, which requires that we equate the normal component of the heat flux \(-k_S \nabla u\) to the flux of radiant energy leaving the surface. Thus we write
$$ k_{S} \frac{\partial u}{\partial z}(x,y,h)=\phi_{a}(x,y), \quad k_{S} \frac{\partial u}{\partial z}(x,y,0)=-\phi_{b}(x,y), $$
for (x,y)∈(0,L)×(−w/2,w/2), and
$$ k_{S} \frac{\partial u}{\partial y}(x,w/2,z)=\phi_{c}(x,z), \quad k_{S} \frac{\partial u}{\partial y}(x,-w/2,z)=-\phi_{d}(x,z), $$
for (x,z)∈(0,L)×(0,h). The incoming surface heat fluxes \(\phi_a\), \(\phi_b\), \(\phi_c\) and \(\phi_d\) are determined by considering an energy balance of the radiation within the furnace.
We can determine the relative importance of the different terms in Eq. (1) by using dimensionless coordinates \(\tilde {x}=x/L\), \(\tilde {y}=y/w\), \(\tilde {z}=z/h\), \(\tilde {t}=tv/L\), where h and w are typical values of the thickness and width of the strip. In terms of the dimensionless variables, the equation takes the form
$$\frac{\partial u}{\partial \tilde{t}}+ \frac{\partial u}{\partial \tilde{x}}=\delta \left(\frac{w^{2}}{L^{2}}\frac{\partial^{2} u}{\partial \tilde{x}^{2}}+\frac{\partial^{2} u}{\partial \tilde{y}^{2}}+\frac{w^{2}}{h^{2}}\frac{\partial^{2} u}{\partial \tilde{z}^{2}}\right), $$
where
$$ \delta=\frac{k_{S} L}{v \rho_{S} C_{S} w^{2}}. $$
Taking the typical values L=150 m, v=2 m s −1, w=0.5 m, h=0.5 mm, k S =50 W m −1 K −1, C S =500 J kg −1 K −1 and ρ S =7854 kg m −3 gives δ=3.8×10−3 and the equation
$$ \frac{\partial u}{\partial \tilde{t}}+ \frac{\partial u}{\partial \tilde{x}}=4.2 \times 10^{-8}\frac{\partial^{2} u}{\partial \tilde{x}^{2}}+3.8 \times 10^{-3} \frac{\partial^{2} u}{\partial \tilde{y}^{2}}+3.8 \times 10^{3} \frac{\partial^{2} u}{\partial \tilde{z}^{2}}, $$
which was the justification in [5] for neglecting the heat conduction terms in the x and y directions in (1). Note, however, that the boundary conditions must still be satisfied, so one expects boundary layers near y=±w/2. We wish to investigate these particular boundary layers to see how much they contribute to edge heating. Thus we depart from the analysis in [5] by retaining the terms involving \(\frac {\partial ^{2}u}{\partial y^{2}}\). However, as in [5], we neglect the term involving \(\frac {\partial ^{2}u}{\partial x^{2}}\) and any associated boundary layer, for the reasons discussed earlier.
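As a quick check, the following short Python sketch reproduces δ and the three dimensionless coefficients from the typical values quoted above (variable names are ours, not from the paper).

```python
# A minimal check of the dimensionless coefficients in the scaled heat
# equation, using the typical values quoted in the text.
L, v, w, h = 150.0, 2.0, 0.5, 0.5e-3   # m, m/s, m, m
k_S, C_S, rho_S = 50.0, 500.0, 7854.0  # W/m/K, J/kg/K, kg/m^3

delta = k_S * L / (v * rho_S * C_S * w**2)
coeff_x = delta * w**2 / L**2   # multiplies d^2u/dx~^2
coeff_y = delta                 # multiplies d^2u/dy~^2
coeff_z = delta * w**2 / h**2   # multiplies d^2u/dz~^2

print(f"delta   = {delta:.2e}")    # ~3.8e-03
print(f"coeff_x = {coeff_x:.2e}")  # ~4.2e-08
print(f"coeff_z = {coeff_z:.2e}")  # ~3.8e+03
```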
Thus (1) simplifies to
$$ \rho_{S} C_{S}\left(\frac{\partial u}{\partial t}+v \frac{\partial u}{\partial x}\right)=k_{S}\left(\frac{\partial^{2} u}{\partial y^{2}}+\frac{\partial^{2} u}{\partial z^{2}}\right), \quad t>0, \,(x,y,z)\in \mathcal{S}. $$
A further simplification results by considering the temperature of the strip averaged over the z-direction:
$$T(x,y,t)=\frac{1}{h}\int_{0}^{h} u(x,y,z,t)\,dz. $$
Equation (7) then leads to
$$ \rho_{S} C_{S}\left(\frac{\partial T}{\partial t}+v \frac{\partial T}{\partial x}\right) = k_{S}\frac{\partial^{2} T}{\partial y^{2}}+\frac{2\Phi_{S}}{h}, $$
where Φ S is the average of the fluxes of heat entering the upper and lower surfaces of the strip:
$$\Phi_{S}=\frac{1}{2}\left[k_{S}\frac{\partial u}{\partial z}\right]_{z=0}^{z=h}=\frac{1}{2}(\phi_{a}+\phi_{b}). $$
Φ S is calculated by considering the energy balance of the radiation within the furnace.
The relatively large coefficient of \(\frac {\partial ^{2} u}{\partial \tilde {z}^{2}}\) in (6) indicates that we can use the approximation u(x,y,z,t)≈T(x,y,t) in our calculations. Thus we consider a model consisting of Eq. (8) with an initial condition
$$ T(x,y,0)=g(x,y), \quad 0<x<L, \quad -w/2<y<w/2, $$
and boundary conditions
$$\begin{array}{*{20}l} &T(0,y,t)=T_{0}, \quad t>0, \quad -w/2<y<w/2, \end{array} $$
$$\begin{array}{*{20}l} &\pm k_{S} \frac{\partial T}{\partial y}(x,\pm w/2)=\Phi_{E}^{\pm}, \quad 0<x<L, \end{array} $$
where \(\Phi _{E}^{+}\) and \(\Phi _{E}^{-}\) are the average radiant heat fluxes arriving at the edges of the strip.
$$\Phi_{E}^{+}(x)=\frac{1}{h}\int_{0}^{h} \phi_{c}(x,z)\,dz, \quad \Phi_{E}^{-}(x)=\frac{1}{h}\int_{0}^{h} \phi_{d}(x,z)\,dz. $$
The physical problem has reflectional symmetry through the x-z plane, so we assume that \(\Phi _{E}^{+}=\Phi _{E}^{-}=\Phi _{E}.\)
We wish to investigate two effects that could lead to edge heating of the strip. The first is the creation of a boundary layer near the edges y=±w/2 due to the boundary condition (11) there. We do this in Section 2. The second effect is a variation of Φ S in the direction of the y-axis that might explain extra heating near the edges. This requires a detailed analysis of the radiation heat transfer problem to calculate Φ S . We do this in Section 3. For the boundary layer analysis of Section 2 we use a simple approximation for Φ S that is independent of y.
2 Analytical treatment of edge heating
The following approximate expression for Φ S , the heat flux entering the upper and lower surfaces of the strip, was derived in [5]:
$$\begin{array}{*{20}l} \Phi_{S}&= \frac{\epsilon_{S}\sigma\left(T_{W}^{4}-T_{1}^{4}\right)}{1+\frac{\epsilon_{S}(1-\epsilon_{W})}{\epsilon_{W}}\frac{w}{p}}, \end{array} $$
where ε S ≈0.2 and ε W ≈0.9 are the emissivities of the strip and furnace materials respectively, σ=5.670×10−8 W m −2 K −4 is the Stefan–Boltzmann constant, and p is the sum of the height and width of a cross-section of the space inside the furnace. The temperature of the furnace walls and heating elements is assumed to be the same and is denoted by T W . We note that this flux does not vary across the width of the strip. Smaller emissivities correspond to more reflective surfaces, which reflect a greater proportion of the radiant energy arriving at them.
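A minimal sketch of this flux formula follows; the numerical values of T_W, T_1 and the perimeter parameter p here are illustrative assumptions, not values from the paper.

```python
import math

# Sketch of the approximate flux formula for Phi_S derived in [5].
sigma = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
eps_S, eps_W = 0.2, 0.9   # emissivities of strip and furnace wall
w, p = 0.5, 3.0           # strip width (m) and perimeter parameter p (m, assumed)

def phi_S(T_W, T_1):
    """Average heat flux into the strip's upper/lower surfaces, W m^-2."""
    num = eps_S * sigma * (T_W**4 - T_1**4)
    den = 1.0 + (eps_S * (1.0 - eps_W) / eps_W) * (w / p)
    return num / den

print(phi_S(T_W=1173.0, T_1=773.0))  # e.g. T_W = 900 C, T_1 = 500 C
```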
Φ E , the heat flux absorbed at the edges, is expected to be greater than Φ S because the strips are formed by cold rolling of steel, which results in rougher, less reflective surfaces at the edges.
We limit our analysis to the steady state operation of the furnace. This simplifies the analysis because it allows us to approximate the heat flux Φ S using the power supplied to the heating elements. For the non-steady state operation, one needs to take into account the heat dynamics that occur near the inner surface of the furnace walls, which are coupled to the dynamics of radiant heat transfer and heat transfer within the steel strip. For steady state operation, one can simply use the fact that the furnace walls are very good insulators and neglect the heat lost through them.
We thus seek steady state solutions of Eqs. (8), (10) and (11). In order to obtain a closed-form expression for the solution, we assume in Section 2.1 that ρ S , C S , k S , Φ S and Φ E are all constant. We analyse the more general case, for which these quantities are not constant, in Section 2.2.
2.1 The case of constant ρ S , C S , k S , Φ S and Φ E
In terms of the dimensionless variables \(\tilde {x}=x/L\), \(\tilde {y}=y/w\), \(\tilde {T}=T \frac {hv \rho _{S} C_{S}}{\Phi _{S} L}\), Eqs. (8), (10) and (11) become
$$\begin{array}{*{20}l} &\frac{\partial \tilde{T}}{\partial \tilde{x}}=\delta \frac{\partial^{2} \tilde{T}}{\partial \tilde{y}^{2}}+2, \end{array} $$
$$\begin{array}{*{20}l} &\tilde{T}(0,\tilde{y})=\tilde{T}_{0}, \end{array} $$
$$\begin{array}{*{20}l} &\frac{\partial \tilde{T}}{\partial \tilde{y}}(\tilde{x}, \pm 1/2)=\pm \frac{h}{w} \frac{\Phi_{E}}{ \Phi_{S}}\frac{1}{\delta}. \end{array} $$
Here, \(\tilde {T}_{0}=T_{0} \frac {hv \rho _{S} C_{S}}{\Phi _{S} L}\) and δ is given by (5). In these equations, \(0<\tilde {x}<1\) and \(-1/2<\tilde {y}<1/2\).
We note that h/w and δ happen to be of the same order of magnitude for this industrial application, so the non-dimensional flux term in (15) is of order 1. This indicates that edge heating is significant. However, δ is small, so we expect that the temperature of parts of the strip not close to the edges satisfies
$$\begin{array}{*{20}l} &\frac{\partial \tilde{T}_{1}}{\partial \tilde{x}}=2, \end{array} $$
$$\begin{array}{*{20}l} &\tilde{T}_{1}(0,\tilde{y})=\tilde{T}_{0}, \end{array} $$
which immediately gives
$$\tilde{T}_{1}(\tilde{x},\tilde{y})=\tilde{T}_{0}+2\tilde{x}. $$
We seek the steady state solution of the whole system (13)–(15), so we set \(\tilde {T}=\tilde {T}_{0}+2\tilde {x}+\tilde {T}_{2}\) and see that \(\tilde {T}_{2}\) must satisfy
$$\begin{array}{*{20}l} &\frac{\partial \tilde{T}_{2}}{\partial \tilde{x}}=\delta \frac{\partial^{2} \tilde{T}_{2}}{\partial \tilde{y}^{2}}, \end{array} $$
$$\begin{array}{*{20}l} &\tilde{T}_{2}(0,\tilde{y})=0, \end{array} $$
$$\begin{array}{*{20}l} &\frac{\partial \tilde{T}_{2}}{\partial \tilde{y}}(\tilde{x}, \pm 1/2)=\pm \frac{h}{w} \frac{\Phi_{E}}{ \Phi_{S}}\frac{1}{\delta}. \end{array} $$
We expect \(\tilde {T}_{2}\) to remain close to zero, except in boundary layers near \(\tilde {y}=\pm 1/2\), so we write \(\tilde {y}=\delta ^{1/2}\zeta -1/2\). The scale factor δ 1/2 for this inner variable ζ is chosen so that heating near the edge \(\tilde {y}=-1/2\) is given by the equations
$$\begin{array}{*{20}l} &\frac{\partial \tilde{T}_{2}}{\partial \tilde{x}}=\frac{\partial^{2} \tilde{T}_{2}}{\partial \zeta^{2}}, \end{array} $$
$$\begin{array}{*{20}l} &\tilde{T}_{2}=0, \quad \text{for } \tilde{x}=0, \end{array} $$
$$\begin{array}{*{20}l} &\left.\frac{\partial \tilde{T}_{2}}{\partial \zeta}\right|_{\zeta=0}=-\frac{h}{w} \frac{\Phi_{E}}{ \Phi_{S}}\frac{1}{\delta^{1/2}}. \end{array} $$
Outside this boundary layer, the solution must match the outer solution; for this we use the simple matching condition \(\tilde {T}_{2}\to 0\) as ζ→∞. The solution is easily obtained by taking the Laplace transform with respect to the \(\tilde {x}\) variable,
$$F(s,\zeta)=\int_{0}^{\infty} \tilde{T}_{2}(\tilde{x},\tilde{y})e^{-s\tilde{x}}\,d\tilde{x}, \quad \text{with } \tilde{y}=\delta^{1/2}\zeta-1/2. $$
Equation (21) then gives \(sF=F_{\zeta\zeta}\), from which we find that
$$F(s,\zeta)=\frac{h}{w} \frac{\Phi_{E}}{ \Phi_{S}}\frac{1}{\sqrt{s^{3} \delta}}e^{-\sqrt{s} \zeta}. $$
This gives
$$ \tilde{T_{2}}= \frac{1}{\sqrt{\delta}}\frac{h}{w} \frac{\Phi_{E}}{ \Phi_{S}}\psi(\zeta,\tilde{x})=\frac{1}{\sqrt{\delta}}\frac{h}{w} \frac{\Phi_{E}}{ \Phi_{S}}\psi\left(\frac{\tilde{y}+1/2}{\sqrt{\delta}},\tilde{x}\right), $$
where
$$ \psi(\zeta,\tilde{x})= \zeta\left(\text{erf}\left(\frac{\zeta}{2\sqrt{\tilde{x}}}\right)-1\right) + 2\sqrt{\frac{\tilde{x}}{\pi}}\exp\left(\frac{-\zeta^{2}}{4\tilde{x}}\right), $$
and erf represents the error function,
$$\text{erf}(z)=\frac{2}{\sqrt{\pi}}\int_{0}^{z} e^{-s^{2}}\,ds. $$
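The profile ψ is simple to evaluate numerically; the following is a direct transcription of the formula above (a sketch, with variable names ours).

```python
import math

# The boundary-layer profile psi(zeta, x~) from the inverse Laplace
# transform above.
def psi(zeta, x):
    """psi = zeta*(erf(zeta/(2*sqrt(x))) - 1) + 2*sqrt(x/pi)*exp(-zeta^2/(4x))."""
    if x <= 0.0:
        return 0.0
    return (zeta * (math.erf(zeta / (2.0 * math.sqrt(x))) - 1.0)
            + 2.0 * math.sqrt(x / math.pi) * math.exp(-zeta**2 / (4.0 * x)))

# At the edge (zeta = 0) the profile reduces to 2*sqrt(x~/pi):
assert abs(psi(0.0, 0.25) - 2.0 * math.sqrt(0.25 / math.pi)) < 1e-12
```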
A similar expression approximates the boundary layer near \(\tilde {y}=1/2\). One can combine the inner and outer solutions to obtain a composite approximation for the steady state solution of (13)–(15), valid for \(-1/2\le \tilde{y}\le 1/2\):
$$\tilde{T}=\tilde{T}_{0}+2\tilde{x}+\frac{1}{\sqrt{\delta}}\frac{h}{w} \frac{\Phi_{E}}{\Phi_{S}}\left(\psi\left(\frac{\tilde{y}+1/2}{\sqrt{\delta}},\tilde{x}\right)+\psi\left(\frac{1/2-\tilde{y}}{\sqrt{\delta}},\tilde{x}\right)\right). $$
In terms of the original variables, one finds that the boundary layer penetrates to a distance
$$2\sqrt{\frac{k_{S} x}{v \rho_{S} C_{S}}} $$
and the increased temperature at the edge is
$$\Phi_{E}\sqrt{\frac{x}{\pi k_{S} v \rho_{S} C_{S}}}. $$
We use these equations to plot an example of increased strip temperature near an edge in Fig. 1. The calculations for the figure use Φ E =Φ S , but in fact we expect Φ E >Φ S because the upper and lower surfaces are very smooth and are thus expected to have a lower emissivity than the edge surfaces. Thus we expect the edge temperatures to be greater than those shown in the graphs. Also used in the calculations are the values k S =50 W m −1 K −1, C S =500 J kg −1 K −1, T 0=573 K, ρ S =7854 kg m −3, h=0.5 mm, w=0.5 m, v=2 m s −1, L=100 m, Φ E =Φ S =1500 W m −2.
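A short sketch evaluating the penetration depth and edge temperature rise with these parameter values follows (names ours; it simply substitutes into the two expressions above).

```python
import math

# Boundary-layer penetration depth and edge temperature rise with the
# parameter values used for Fig. 1.
k_S, C_S, rho_S = 50.0, 500.0, 7854.0
h, w, v, L = 0.5e-3, 0.5, 2.0, 100.0
Phi_E = 1500.0  # W m^-2, taken equal to Phi_S as in the figure

for x in (10.0, 50.0, 100.0):  # distance along the furnace, m
    depth = 2.0 * math.sqrt(k_S * x / (v * rho_S * C_S))
    rise = Phi_E * math.sqrt(x / (math.pi * k_S * v * rho_S * C_S))
    print(f"x = {x:5.1f} m: layer ~ {depth * 100:.2f} cm, edge rise ~ {rise:.2f} K")
```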
Fig. 1 The effect of radiation arriving at an edge. The actual effect is expected to be greater than this because of the higher emissivity of the edges
2.2 The case of variable ρ S , C S , k, Φ S and Φ E
In this section we follow the analysis of Section 2.1, but now in the more realistic setting of variable ρ S , C S , k S , Φ S and Φ E . In practice, the large temperature variation within annealing furnaces requires that we take into account the temperature dependence of the material properties, especially of C S and k S and, to a lesser extent, ρ S . The variation of C S and k S with temperature, shown in Figs. 2 and 3, is taken from data in [4]. Figure 3 shows that the heat conductivity is approximated well by linear regression:
$$ k_{S}=73.9823-0.0437 T, $$
Fig. 2 Variation of C p (J/kg.K) for steel with absolute temperature T in Kelvin
Fig. 3 Variation of the heat conductivity k (W/m.K) for steel with absolute temperature T in Kelvin
and Fig. 2 shows the C p data approximated by an interpolating quartic:
$$ \begin{aligned} C_{S} &=345 - 0.504333T + 0.004895T^{2} \\ &- 9.06667{\times10}^{-6}T^{3} + 5.5{\times10}^{-9}T^{4}. \end{aligned} $$
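The two fits transcribe directly into code; the following sketch defines them as functions of the absolute temperature (names ours).

```python
# Fitted temperature dependence of the strip properties (T in kelvin),
# transcribed from the linear regression and interpolating quartic above.
def k_S(T):
    """Heat conductivity of steel, W m^-1 K^-1 (linear fit)."""
    return 73.9823 - 0.0437 * T

def C_S(T):
    """Specific heat capacity of steel, J kg^-1 K^-1 (quartic fit)."""
    return (345.0 - 0.504333 * T + 0.004895 * T**2
            - 9.06667e-6 * T**3 + 5.5e-9 * T**4)

print(k_S(573.0), C_S(573.0))  # values at the entry temperature T_0 = 573 K
```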
To allow for such variations, we assume that ρ S , C S and k S are known functions of the strip's temperature. Further, because the system is in steady state, Φ S and Φ E are assumed to be known functions of x, which can be calculated by measuring the power supplied to the heating elements in the vicinity of a distance x along the furnace.
The form of Eq. (1) is only valid for constant conductivity k S . With variable k S , we must instead write
$$ \rho_{S} C_{S}\left(\frac{\partial u}{\partial t}+v \frac{\partial u}{\partial x}\right)=\nabla \cdot (k_{S} \nabla u) \quad \text{in } \mathcal{S}\times(0,\infty). $$
Consequently, instead of (8), we have
$$ \rho_{S} C_{S}\left(\frac{\partial T}{\partial t}+v \frac{\partial T}{\partial x}\right) = \frac{\partial}{\partial y}\left(k_{S}\frac{\partial T}{\partial y}\right)+\frac{2\Phi_{S}}{h}. $$
Hence the steady state temperature must satisfy
$$\begin{array}{*{20}l} &\rho_{S} C_{S} v \frac{\partial T}{\partial x} = k_{S}(T)\frac{\partial^{2} T}{\partial y^{2}}+k_{S}'(T)\left(\frac{\partial T}{\partial y}\right)^{2}+\frac{2\Phi_{S}}{h}, \end{array} $$
$$\begin{array}{*{20}l} &T(0,y)=T_{0}, \quad -w/2<y<w/2, \end{array} $$
$$\begin{array}{*{20}l} &\pm k_{S} \frac{\partial T}{\partial y}(x,\pm w/2)=\Phi_{E}, \quad 0<x<L. \end{array} $$
As before, the small diffusivity indicates that for y not close to ±w/2, T(x,y)≈T 1(x), where T 1 satisfies the ordinary differential equation
$$ \rho_{S}(T_{1})C_{S}(T_{1})v T_{1}'(x)=\frac{2\Phi_{S}(x)}{h},\quad T_{1}(0)=T_{0}. $$
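The outer ODE is easy to integrate numerically. The following forward-Euler sketch assumes, for brevity, constant ρ S and C S and a uniform Φ S ; all names and default values here are ours.

```python
# A forward-Euler sketch of the outer ODE
#   rho_S(T1) C_S(T1) v T1'(x) = 2 Phi_S(x) / h,  T1(0) = T0.
def solve_T1(Phi_S, L=150.0, h=0.5e-3, v=2.0, T0=573.0, n=1000,
             rho_S=lambda T: 7854.0, C_S=lambda T: 500.0):
    dx, T1 = L / n, T0
    profile = [T0]
    for i in range(n):
        x = i * dx
        T1 += dx * 2.0 * Phi_S(x) / (h * v * rho_S(T1) * C_S(T1))
        profile.append(T1)
    return profile

T1 = solve_T1(Phi_S=lambda x: 1500.0)
print(T1[-1])  # strip temperature at the furnace exit, K
```

Temperature-dependent property functions (such as the fits above) can be passed in place of the constant lambdas.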
As before, it is useful to consider a new dimensionless variable \(\tilde {x}\), this time chosen to make the coefficients of Eq. (32) more similar to those of the constant coefficient case. We do this by choosing \(\tilde {x}\) to be the solution to
$$L\frac{d\tilde{x}}{dx}= \frac{k_{S}(T_{1})\rho_{S}(T_{0})C_{S}(T_{0})}{k_{S}(T_{0})\rho_{S}(T_{1})C_{S}(T_{1})}, \quad \tilde{x}(0)=0, $$
where T 1=T 1(x). We also let \(\tilde {y}=y/w\).
In terms of these variables, Eq. (30) takes the form
$$ \frac{k_{S}(T_{1})\rho_{S}(T)C_{S}(T)}{k_{S}(T)\rho_{S}(T_{1})C_{S}(T_{1})}\frac{\partial T}{\partial \tilde{x}}=\delta\frac{\partial^{2} T}{\partial \tilde{y}^{2}}+\delta \frac{k_{S}'(T)}{k_{S}(T)}\left(\frac{\partial T}{\partial \tilde{y}}\right)^{2}+\frac{2\Phi_{S} k_{S}(T_{0})L}{h v \rho_{S}(T_{0})C_{S}(T_{0})k_{S}(T)}, $$
where δ is again given by Eq. (5), but with k S , ρ S and C S evaluated at temperature T 0. Finally, we choose a dimensionless temperature
$$\tilde{T}=T\frac{h v \rho_{S}(T_{0})C_{S}(T_{0})}{\overline{\Phi}_{S} L}, $$
where \(\overline {\Phi }_{S}\) is the average of Φ S ,
$$\overline{\Phi}_{S}=\frac{1}{L}\int_{0}^{L} \Phi_{S}(x)\,dx. $$
Equation 34 becomes
$$ \frac{k_{S}(T_{1})\rho_{S}(T)C_{S}(T)}{k_{S}(T_{0})\rho_{S}(T_{1})C_{S}(T_{1})}\frac{\partial \tilde{T}}{\partial \tilde{x}}= \frac{k_{S}(T)}{k_{S}(T_{0})}\delta\frac{\partial^{2} \tilde{T}}{\partial \tilde{y}^{2}}+\delta \frac{\overline{\Phi}_{S} L}{h v \rho_{S}(T_{0})C_{S}(T_{0})} \frac{k_{S}'(T)}{k_{S}(T_{0})}\left(\frac{\partial \tilde{T}}{\partial \tilde{y}}\right)^{2}+\frac{2\Phi_{S}}{\overline{\Phi}_{S}}. $$
The dimensionless solution of Eq. (35) corresponding to the solution T 1 of Eq. (33) is given by
$$\tilde{T}_{1}(\tilde{x})=\frac{T_{1}(x)}{\overline{T}}, \quad \text{where} \quad \overline{T}=\frac{\overline{\Phi}_{S} L}{h v \rho_{S}(T_{0})C_{S}(T_{0})}, $$
and this corresponds to a solution outside boundary layers.
As in Section 2.1, we set \(\tilde {y}=\delta ^{1/2}\zeta -1/2\); ζ is our boundary layer variable near the edge \(\tilde {y}=-1/2\). We also write \(\tilde {T}=\tilde {T}_{1}(\tilde {x})+\tilde {T}_{2}(\tilde {x},\zeta)\). Rewriting the boundary condition (32) at this edge in terms of the new variables gives
$$ \left.\frac{\partial \tilde{T}_{2}}{\partial \zeta}\right|_{\zeta=0}=-\frac{h}{w} \frac{\Phi_{E} k_{S}(T_{0})}{ \overline{\Phi}_{S} k_{S}(T)}\frac{1}{\delta^{1/2}}. $$
For the industrial application it is very desirable that edge heating be small. This is consistent with our observation that the physical parameters happen to be such that h/w=O(δ), and thus the right-hand side of (36) is O(δ 1/2). In any case, we assume that
$$\frac{h}{w}\delta^{-1/2}=\epsilon $$
is small and we write \(\tilde {T}_{2}(\tilde {x},\zeta)=\epsilon \theta (\tilde {x},\zeta)+O(\epsilon ^{2})\). This allows us to use a first order Taylor approximation to ρ S (T), expanded about the point T=T 1,
$$\rho_{S}(T)=\rho_{S}(T_{1})+\epsilon \rho_{S}'(T_{1})\overline{T}\theta(\tilde{x},\zeta)+O(\epsilon^{2}). $$
We use similar approximations for C S (T) and k S (T).
Recalling that \(\partial /\partial \tilde {y}=\delta ^{-1/2} \partial /\partial \zeta \), we expand Eq. (35) up to O(ε) to find that θ must satisfy
$$ \left(\frac{\rho_{S}'(T_{1})}{\rho_{S}(T_{1})}+\frac{C_{S}'(T_{1})}{C_{S}(T_{1})}\right)\frac{\partial T_{1}}{\partial \tilde{x}}\theta+\frac{\partial \theta}{\partial \tilde{x}}=\frac{\partial^{2} \theta}{\partial \zeta^{2}}. $$
Equation 37 may be simplified by setting
$$ \chi=\frac{\rho_{S}(T_{1})C_{S}(T_{1})}{\rho_{S}(T_{0})C_{S}(T_{0})}\theta, $$
and we find that χ satisfies
$$ \frac{\partial \chi}{\partial \tilde{x}}=\frac{\partial^{2} \chi}{\partial \zeta^{2}}. $$
The flux boundary condition (36) translates to
$$ \left.\frac{\partial \chi}{\partial \zeta}\right|_{\zeta=0}=-\frac{\Phi_{E}}{\overline{\Phi}_{S}}\frac{k_{S}(T_{0})\rho_{S}(T_{1})C_{S}(T_{1})}{k_{S}(T_{1})\rho_{S}(T_{0})C_{S}(T_{0})}=-f(\tilde{x}), $$
where f denotes the (positive) magnitude of this flux term.
χ must also satisfy an "initial" condition, χ(0,ζ)=0, and a matching condition, \(\chi (\tilde {x},\zeta)\to 0\) as ζ→∞.
The solution of this system, readily found by use of the Laplace transform, is
$$ \chi(\tilde{x},\zeta)=\int_{0}^{\tilde{x}} g(\zeta,\sigma)f(\tilde{x}-\sigma)\,d\sigma, $$
where \(g(\zeta,\tilde {x})=\frac {\partial \psi }{\partial \tilde {x}}=e^{-\zeta ^{2}/4\tilde {x}}/\sqrt {\pi \tilde {x}}\) and ψ is given by Eq. (25).
In summary, we have found that there is a boundary layer near the edges of the strip. Outside the boundary layer, the temperature T 1 of the strip, at a distance x along the furnace, may be found by solving the ordinary differential equation (33). With T 1(x) known, we may then calculate \(f(\tilde {x})\) from (40) and then χ from (41). This gives us θ from (38). The actual perturbation to the temperature near the edge y=−w/2 is given by
$$T_{2}=\overline{T}\tilde{T_{2}}=\overline{T}\epsilon \theta(\tilde{x},\zeta). $$
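The convolution (41) has an integrable 1/√σ singularity at the edge (ζ=0); substituting σ=u² removes it. The following sketch evaluates the edge value of χ under that substitution; the flux function f is assumed known (here a placeholder), and all names are ours.

```python
import math

# Duhamel convolution (41) evaluated at the strip edge (zeta = 0), where
# g(0, sigma) = 1/sqrt(pi*sigma); the substitution sigma = u^2 removes
# the integrable singularity.
def chi_edge(f, x, n=2000):
    """chi(x~, 0) = int_0^x~ f(x~ - sigma) / sqrt(pi*sigma) dsigma."""
    du = math.sqrt(x) / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * du  # midpoint rule in the substituted variable u
        total += f(x - u * u) * 2.0 * du / math.sqrt(math.pi)
    return total

# With constant f = 1 the exact edge value is 2*sqrt(x~/pi):
x = 0.7
assert abs(chi_edge(lambda s: 1.0, x) - 2.0 * math.sqrt(x / math.pi)) < 1e-9
```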
3 Furnace radiative heat transfer analysis
The radiative heat exchange between the furnace inner surface and the strip is considered in this section. The schematic geometry of the problem is presented in Fig. 4 which shows the cross section and the side view of the furnace and the strip. The furnace is modelled as a hollow rectangular box with length much larger than the dimensions of the cross section. The heating elements are assembled on the top and bottom inner surfaces of the furnace. We make the following assumptions about the radiative heat transfer within the furnace.
Fig. 4 Sketch of the furnace; a cross-section, b side view
3.1 Assumptions

1. The heating elements are distributed uniformly over the top and bottom inner surfaces, and the density of the input electric power is specified as a constant.
2. All surfaces are considered opaque and gray, and all surfaces emit and reflect radiation diffusely. The typical emissivity of the furnace wall surface and the heating elements is ε W =ε E =0.9, and that of the strip is ε S =0.2. For an opaque gray surface, the reflectivity ρ and emissivity ε are related by ρ=1−ε.
3. Temperature changes within the furnace are gradual, and thus radiative and convective heat transfer along the length of the furnace can be ignored. The strip enters the furnace at room temperature, and it can reach up to 700 °C at the last heating stage, which is still significantly lower than the temperature of the wall surface and the heating elements. Considering that the radiative power is proportional to the fourth power of the temperature, the dominant radiation is from the wall surfaces and the heating elements.
These assumptions simplify the analysis and are reasonable for a furnace with brick-covered walls and a steel strip with a rough surface finish. For a steel strip with a smooth surface finish, a partly specular reflection model should be considered.
We use these assumptions to develop a two dimensional model of the temperature distribution within the furnace. The model is two dimensional only in the sense that it relies on the approximation that there is only a gradual variation of temperature in the direction of the moving strip.
We are interested in temperature variations across the strip and for this we must solve a system of integral equations for the radiative and reflective heat exchange between surfaces within the furnace.
3.2 Mathematical model
For a diffuse surface, it is well known that the net radiation method can be used to analyse the heat transfer. This method is discussed in many texts on thermal radiation such as the works of Modest [6] and of Siegel and Howell [3]. The method, which involves an energy conservation argument for the absorption, emission and reflection of radiation inside an enclosure, results in an integral equation.
Let q(x) be the outgoing heat flux at the location x, which comprises both emitted and reflected heat fluxes. The governing integral equation in terms of q(x) takes the form
$$\begin{array}{@{}rcl@{}} q(\mathbf{x})=\epsilon (\mathbf{x})\sigma T^{4}(\mathbf{x})+\rho(\mathbf{x})\int_{A} q(\mathbf{x'})dF_{d \mathbf{x} - d \mathbf{x'}}, \end{array} $$
for surfaces where the temperature T(x) is given, or
$$\begin{array}{@{}rcl@{}} q(\mathbf{x})=p(\mathbf{x})+ \int_{A} q(\mathbf{x'})dF_{d \mathbf{x} - d \mathbf{x'}}, \end{array} $$
for the surfaces where the input power flux p(x) is specified. In the integral equations, ρ(x) is the reflectance of the surface at x and \(dF_{d \mathbf {x} - d \mathbf {x}'}\) is the exchange view factor between two surface elements d x and d x ′, which is defined as
$$ dF_{d\mathbf{x}-d\mathbf{x}'} \equiv \frac{\text{diffuse energy leaving } d\mathbf{x} \text{ directly toward and intercepted by } d\mathbf{x}'}{\text{total diffuse energy leaving } d\mathbf{x}}. $$
Note that d x denotes the differential strip element which, due to the longitudinal symmetry, is infinite in the x 3 direction.
The diffuse view factor between two infinitesimal strip elements d x, dx ′ located at x and x ′ respectively, as shown in Fig. 5, is given by
$$ dF_{d \mathbf{x} - d \mathbf{x'}} = \frac{\cos \beta \sin \beta \, d \mathbf{x'}}{2r} = \frac{|(x_{1} -x_{1}')(x_{2} -x_{2}')| \, d \mathbf{x'}}{2 \left((x_{1} -x_{1}')^{2} + (x_{2} -x_{2}')^{2}\right)^{3/2}} $$
Fig. 5 Diagram for calculation of view factors
for perpendicular elements and
$$ dF_{d \mathbf{x} - d \mathbf{x'}}= \frac{\sin^{2} \beta \, d \mathbf{x'}}{2r} = \frac{d^{2} \, d \mathbf{x'}}{2\left((x_{1} -x_{1}')^{2} + (x_{2} -x_{2}')^{2}\right)^{3/2}} $$
for parallel elements, where β denotes the angle shown in Fig. 5 and d is the perpendicular distance between the two parallel elements; see [3].
We define the kernel k(x,x ′) to be zero if the points x and x ′ are shielded from each other by another surface, otherwise it is given by
$$ k(\mathbf{x},\mathbf{x'})=\left\{ \begin{array}{ll} \dfrac{|(x_{1}-x_{1}')(x_{2}-x_{2}')|}{2\left((x_{1}-x_{1}')^{2}+(x_{2}-x_{2}')^{2}\right)^{3/2}}, & \text{if } \mathbf{x} \text{ and } \mathbf{x'} \text{ are on perpendicular elements,} \\[2ex] \dfrac{d^{2}}{2\left((x_{1}-x_{1}')^{2}+(x_{2}-x_{2}')^{2}\right)^{3/2}}, & \text{if } \mathbf{x} \text{ and } \mathbf{x'} \text{ are on parallel elements.} \end{array} \right. $$
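The two cases of the kernel translate directly into code; the following sketch evaluates k(x,x′) for points given as (x₁,x₂) pairs (names ours; the visibility test for shadowing by the strip must be supplied separately).

```python
# The kernel k(x, x') for the two-dimensional view factors; each point
# is an (x1, x2) coordinate pair on a wall or strip surface.
def kernel(p, q, perpendicular, d=None):
    dx1, dx2 = p[0] - q[0], p[1] - q[1]
    r3 = (dx1**2 + dx2**2) ** 1.5
    if perpendicular:
        return abs(dx1 * dx2) / (2.0 * r3)
    return d**2 / (2.0 * r3)  # d = distance between the parallel elements

# Two parallel horizontal elements a distance 1 apart, offset 1 horizontally:
print(kernel((0.0, 0.0), (1.0, 1.0), perpendicular=False, d=1.0))
```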
The integral equations can then be written in terms of the kernel k(x,x ′):
$$\begin{array}{@{}rcl@{}} q(\mathbf{x})=\epsilon(\mathbf{x}) \sigma T^{4}(\mathbf{x})+\rho(\mathbf{x})\int_{A}q(\mathbf{x'}) k(\mathbf{x},\mathbf{x'}) d\mathbf{x'} \end{array} $$
for a surface where the temperature is given, or
$$\begin{array}{@{}rcl@{}} q(\mathbf{x})=p(\mathbf{x})+\int_{A} q(\mathbf{x'})k(\mathbf{x},\mathbf{x'}) d\mathbf{x'} \end{array} $$
for a surface where the input power flux is given. The integral equation uniquely determines the outgoing heat flux q(x).
3.3 Numerical procedure
In general such integral equations do not have a closed-form solution, so a numerical method is needed to find an approximate solution. The integral equation is linear, so the discretised equations form a linear system, which can be solved by a standard LU decomposition. To solve (48) and (49) numerically, all of the surfaces (which, because of the assumed longitudinal symmetry of the problem, are one-dimensional domains) are divided into sufficiently small intervals of equal length, and q(x) is assigned at the nodes, q(x i ). A standard trapezoidal rule is applied to integrate (48) and (49) numerically, resulting in a linear system with the q(x i ) as unknowns.
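A schematic of the resulting assembly is sketched below. With quadrature weights w_j, a kernel matrix K_ij = k(x_i, x_j), a diagonal factor holding ρ(x_i) on temperature-specified nodes and 1 on power-specified nodes, and a right-hand side holding εσT⁴ or p, equations (48)-(49) become (I − D K W)q = b. The inputs here are toy placeholders, not furnace geometry.

```python
import numpy as np

# Assemble and solve the discretised net-radiation system (I - D K W) q = b.
def solve_net_radiation(K, weights, diag, b):
    n = len(b)
    A = np.eye(n) - diag[:, None] * K * weights[None, :]
    return np.linalg.solve(A, b)  # LU decomposition under the hood

n = 4
K = np.full((n, n), 0.1) - 0.1 * np.eye(n)    # toy kernel with zero self-view
weights = np.full(n, 1.0 / n)                 # trapezoidal weights, unit surface
diag = np.array([0.8, 0.8, 1.0, 1.0])         # rho on walls; 1 on powered nodes
b = np.array([1000.0, 1000.0, 500.0, 500.0])  # eps*sigma*T^4 or p at each node
print(solve_net_radiation(K, weights, diag, b))
```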
In the actual numerical procedure, however, care must be taken in the treatment of the discontinuities and singularities of the kernel k(x,x ′). Let us consider why these arise.
The function k(x,x ′) has a discontinuity at the corner points of the wall, arising from the two different formulas for the parallel and perpendicular elements. Thus, the numerical integration is performed on the individual planar surfaces. This gives a total of six planar surfaces: four wall surfaces and two strip surfaces.
For a node x located on a furnace wall, the kernel k(x,x ′) is only a piecewise smooth function of x ′ due to the presence of the steel strip and its shadow effect.
This can be observed from Fig. 6, which shows a node x exposed to only part of the heat flux emitted from another wall surface. The kernel k(x,x ′) has jumps at x 1′ and x 2′, each of which lies between two neighbouring nodes. The exact positions of x 1′ and x 2′ can be found from the geometric relations. The numerical integration is performed only on the viewable portion of the relevant subintervals bounded by x 1′ (or x 2′) and one of the two neighbouring nodes.
Due to the singularity of the kernel k(x,x ′), when two nodes on neighbouring walls are sufficiently close, the variation of k(x,x ′) over a single element is significant, however small the step size. To overcome this singularity, the following approximation is applied to these nodes.
Fig. 6 Shadow effect of the strip
Assuming a linear distribution of q(x) on the element (x i ′,x i+1 ′), we may estimate the integrals in (48) and (49) as
$$ \int_{\mathbf{x}_{i}'}^{\mathbf{x}_{i+1}'} q(\mathbf{x'}) k(\mathbf{x},\mathbf{x'}) \,d\mathbf{x'} \approx \frac{1}{h} \int_{\mathbf{x}_{i}'}^{\mathbf{x}_{i+1}'} \left((q(\mathbf{x}_{i+1}')-q(\mathbf{x}_{i}'))\,| \mathbf{x'} - \mathbf{x}_{i}'|+ q(\mathbf{x}_{i}')\,h\right) k(\mathbf{x},\mathbf{x'}) \,d\mathbf{x'}, $$
where h=|x i ′−x i+1 ′| is the step size. This can be written in terms of q(x i ′) and q(x i+1 ′) as
$$\begin{array}{@{}rcl@{}} \int_{\mathbf{x}_{i}'}^{\mathbf{x}_{i+1}'} q(\mathbf{x'}) k(\mathbf{x},\mathbf{x'}) d\mathbf{x'} \approx C_{i}^{(1)} q(\mathbf{x}_{i}') + C_{i}^{(2)} q(\mathbf{x}_{i+1}') \end{array} $$
where the coefficients \(C_{i}^{(1)}\) and \(C_{i}^{(2)}\) are determined by
$$\begin{array}{@{}rcl@{}} C_{i}^{(1)}= \frac{1}{h}\int_{\mathbf{x}_{i}'}^{\mathbf{x}_{i+1}'}(h- | \mathbf{x}'-\mathbf{x}_{i}'|) k(\mathbf{x},\mathbf{x'}) d\mathbf{x'}, \end{array} $$
$$\begin{array}{@{}rcl@{}} C_{i}^{(2)}= \frac{1}{ h}\int_{\mathbf{x_{i}'}}^{\mathbf{x_{i+1}'}} | \mathbf{x'-x_{i}'}| k(\mathbf{x},\mathbf{x'}) d\mathbf{x'}, \end{array} $$
respectively. Adding the l corner elements leads to
$$ \begin{aligned} \sum_{i=1}^{l} \frac{1}{h} \int_{\mathbf{x}_{i}'}^{\mathbf{x}_{i+1}'} ((q(\mathbf{x}_{i+1}')-q(\mathbf{x}_{i}'))| \mathbf{x'} - \mathbf{x}_{i}'|+ q(\mathbf{x}_{i}')h) k(\mathbf{x},\mathbf{x'}) d\mathbf{x'} \\ = C_{1}^{(1)}q(\mathbf{x}_{1}') +C_{l}^{(2)}q(\mathbf{x}_{l+1}') + \sum_{i=2}^{l} (C_{i}^{(1)}+C_{i-1}^{(2)})q(\mathbf{x}_{i}'). \end{aligned} $$
The number l is chosen so as to cover all elements affected by the kernel singularity.
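A quadrature sketch for the coefficients (52)-(53) follows: the kernel is integrated against the two linear hat weights over one element, with a fine midpoint rule standing in for an exact integral near the singularity. The kernel `kern` and the geometric set-up in the example are ours, for illustration only.

```python
# Corner coefficients C_i^(1), C_i^(2): integrals of the kernel weighted
# by (h - s)/h and s/h over one element, parametrised by arc length s.
def corner_coeffs(kern, h, m=10_000):
    ds = h / m
    C1 = C2 = 0.0
    for j in range(m):
        s = (j + 0.5) * ds
        C1 += (h - s) / h * kern(s) * ds
        C2 += s / h * kern(s) * ds
    return C1, C2

# Example: perpendicular-element kernel seen from a node a distance
# a = 0.01 along the other wall from the shared corner.
a = 0.01
C1, C2 = corner_coeffs(lambda s: a * s / (2.0 * (a * a + s * s) ** 1.5), h=0.05)
print(C1, C2)
```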
Consider two nodes belonging to two neighbouring walls that intersect at a corner point, denoted by \( \mathbf {x}_{S_{1}} \) and \(\mathbf {x}_{S_{2}}\), where S 1 and S 2 indicate the surfaces to which the nodes belong. The integration with respect to x ′ over the surface S 2 for \( \mathbf {x} = \mathbf {x}_{S_{1}} \) should be estimated by calculating its value at a nearby point x ε ∈S 1 and then passing to the limit as \(\mathbf {x}_{\epsilon } \rightarrow \mathbf {x}_{S_{1}} \). It can be shown that
$$ \frac{1}{2}q(\mathbf{x}_{S_{1}}) ={\lim}_{\mathbf{x}_{\epsilon} \rightarrow \mathbf{x}_{S_{1}}}\int_{S_{2}} q(\mathbf{x'})k(\mathbf{x}_{\epsilon},\mathbf{x'}) d\mathbf{x'}. $$
Remark: Equation (54) has a clear physical meaning: a differential element at node \(\mathbf {x}_{S_{1}}\) on surface S 1 receives half of the total heat flux emitted from a neighbourhood of \(\mathbf {x}_{S_{2}}\) on surface S 2.
3.4 The isothermal surface case
The numerical model we have developed is used to re-examine the isothermal surface case, for which the wall temperature is T W =900°C and the strip temperature is T S =500°C. The heat power absorbed per metre length of strip can be found from the incident heat flux Φ S as Q=2w Φ S . From (12), one finds
$$\begin{array}{@{}rcl@{}} Q&= \frac{2w \epsilon_{S}\sigma(T_{W}^{4}-T_{S}^{4})}{1+\frac{\epsilon_{S}(1-\epsilon_{W})}{\epsilon_{W}}\frac{w}{p}}. \end{array} $$
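Formula (55) is straightforward to evaluate; the following sketch uses the benchmark temperatures just stated, with the perimeter parameter p an assumed value.

```python
# Evaluating formula (55) for the isothermal benchmark.
sigma, eps_S, eps_W = 5.670e-8, 0.2, 0.9
w, p = 0.5, 3.0                    # m; p assumed for illustration
T_W, T_S = 900.0 + 273.15, 500.0 + 273.15  # wall and strip temperatures, K

Q = (2.0 * w * eps_S * sigma * (T_W**4 - T_S**4)
     / (1.0 + eps_S * (1.0 - eps_W) / eps_W * w / p))
print(f"Q = {Q:.0f} W per metre of strip")
```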
One notices that the above formula is derived under the assumption that the temperature and the outgoing heat flux q(x) are constant on each surface. This can be observed from the integral equations (48) and (49): when q(x) is constant, one may move the term q(x ′) out of the integral and leave only the geometric term in the integrand,
$$\int_{A} q(\mathbf{x'})dF_{d \mathbf{x} - d \mathbf{x'}}=q(\mathbf{x'})\int_{A} dF_{d \mathbf{x} - d \mathbf{x'}}. $$
Thus, the heat transfer between the surfaces can be calculated using the view factors. By solving this problem with the numerical method developed in this article, we found that q(x) does in fact vary with location. However, the variations of q(x) across the wall or strip are not significant. This is particularly true for small ε S : in such cases the strip reflectance ρ S ≈1, the radiant energy from the wall is well reflected by the strip, and thus q(x) becomes less dependent on location.
Figure 7 shows the comparison of the numerical results and formula (55).
Fig. 7 The strip's total heat influx calculated by the numerical method and formula (55)
It was found that in the case with wall emissivity ε W =0.5, the difference between the numerical result and (55) is more significant than in the case with ε W =0.9. This indicates that the non-uniformity of q(x) is larger for ε W =0.5 than for ε W =0.9. This is confirmed by Fig. 8, which shows the distribution of q(x) on the top (or bottom) wall. In the limiting case ε W =1, formula (55) becomes exact, simply because the reflectance of the wall vanishes and the integral equations reduce to the algebraic equations from which (55) is derived.
Fig. 8 Distributions of q(x) on the top (or bottom) wall. Top plot: ε W =0.9; bottom plot: ε W =0.5
It was also found that for the case ε W =0.9 and ε S =0.2, which is typical in this application, formula (55) is fairly accurate. The total outgoing heat flux q(x) is close to uniform on each surface. However, one notices that, in contrast to the outgoing heat flux, which is dominated by the heat radiation generated by the uniform wall temperature, the net heat flux varies significantly across the wall surface. Figure 9 shows the numerical result for the case ε S =0.2 and ε W =ε E =0.9.
Fig. 9 Net heat influx on the strip and the wall surfaces
This is not a surprising result, because the view factors from the strip to the various wall locations are very different, and the portion of the heat energy emitted from the wall that is eventually absorbed by the strip is largely determined by the relative geometric position, in other words, by the view factor.
Through these comparisons, we found that the numerical results correctly reflect the geometrical effects due to the view factors and match formula (55) very well in the limiting case ε S ≈0, as expected. This validates the numerical method.
3.5 The strip temperature distribution
We now consider the problem of the temperature distribution of the strip under conditions close to those of the real furnace. The isothermal model considered in the previous section shows that the net emitted heat from the wall varies significantly across the wall surface. In the actual furnace, the electric heating elements, which are assembled at the top and bottom walls, are usually equally powered. Consequently, the temperature distribution of the heating elements must vary. Thus, we consider the modelling problem stated as follows:
The power input density of the top and bottom heating elements is specified as a constant p=1.294×104 W m −2, which is typical in this application.
The side wall surfaces are treated as perfectly insulated.
ε W =ε E =0.9 and ε S =0.2, where ε E denotes the emissivity of the heating elements.
Find the strip surface temperature distribution.
It will be shown that the temperature variation along the width of the strip is small, less than two percent. Together with the fact, found in [5], that the coefficient of the y-diffusive term is very small, this allows us to assume that the strip temperature rise is proportional to the net heat influx. One also notices that the strip temperature T S is significantly below the wall temperature and has only a very minor influence on the internal heat transfer. Thus the variations of the strip heat influx along the width at different longitudinal locations are expected to be similar. To confirm this, we examine two locations where the strip temperatures are very different, namely T S =500°C and T S =20°C.
Case 1: strip temperature T S =500°C.
The numerical results are shown in Fig. 10. It was found that
The temperature of the electric heating elements varies from 949°C to 983°C. The temperature variation is about 3%.
The temperature of the side wall varies from 949°C to 956°C. The temperature variation is about 0.7%.
The strip net power influx varies from 2.186×104 Wm −2 to 2.216×104 Wm −2. The net influx variation is about 1.3%.
Case 2: strip temperature T S =20°C.
The temperature distributions of the heating elements and side wall are similar to the case T S =500°C, with a slightly lower temperature range.
The net heat influx of the strip is indeed very similar to the case T S =500°C. Figure 12 shows the comparison of the two results.
Fig. 10 Case 1: T S =500°C
Fig. 11 Case 2: T S =20°C
Fig. 12 The comparison of the net heat influx of the strip for T S =500°C and T S =20°C
Based on these results, we may calculate the strip temperature along the width by using the strip heat influx at T S =500°C. The result is shown in Fig. 13. The temperature of the strip is seen to vary smoothly across the width, with a variation of about 7°C in magnitude. The temperature variation would be about 1.2°C if the calculation were based on the isothermal model discussed in Section 3.4.
Fig. 13 The predicted temperature variation T S across the strip width
Conclusions

We have analysed in detail two effects that contribute to extra heating of the steel strip at its edges.
The numerical results of Section 3 indicate that the geometrical effect due to view factors would account for an elevated temperature at the edges of about 7°C.
The analysis of Section 2 took into account the fact that the edges of the strip are really surfaces themselves. Although these surfaces are small, they contribute significantly to temperature increases at the edges because the rate of heat conduction away from the edges is slow. If one assumes that the edges are smooth and have an emissivity of about 0.2, the same as the larger surfaces of the steel, then this effect would result in temperature elevations of about 9°C at the edges. In reality the edges are much rougher than the rest of the strip's surface. The actual temperature elevation is proportional to the emissivity, so an emissivity of 0.5, for example, would contribute a temperature elevation of about 22°C near the edges. Moreover, these elevated temperatures occur within about 1 cm of the edge of the strip, resulting in potentially damaging high temperature gradients.
References

1. Depree, N, Sneyd, J, Taylor, S, Taylor, MP, Chen, JJJ, Wang, S, O'Connor, M: Development and validation of models for annealing furnace control from heat transfer fundamentals. Comput. Chem. Eng. 34(11), 1849–1853 (2010).
2. Depree, N, Taylor, MP, Chen, JJJ, Sneyd, J, Taylor, S, Wang, S: Development of a three-dimensional heat transfer model for continuous annealing of steel strip. Ind. Eng. Chem. Res. 51(4), 1790–1795 (2012).
3. Howell, JR, Siegel, R: Thermal Radiation Heat Transfer. 5th edn. CRC Press, Boca Raton, FL (2011).
4. Incropera, FP, DeWitt, DP: Introduction to Heat Transfer. 4th edn. John Wiley and Sons, New York (2002).
5. McGuinness, M, Taylor, SW: Strip temperature in a metal coating line annealing furnace. In: Proceedings of the 2004 Mathematics-in-Industry Study Group (2004). http://www.maths-in-industry.org/miis/41/.
6. Modest, MF: Radiative Heat Transfer. 2nd edn. Academic Press, San Diego, CA (2003).
7. Pérez-Grande, I, Rivas, D, de Pablo, V: A global thermal analysis of multizone resistance furnaces with specular and diffuse samples. J. Crystal Growth 246, 37–54 (2002).
8. Sauermann, H, Stenzel, CH, Keesmann, S, Bonduelle, B: High-stability control of multizone furnaces using optical fibre thermometers. Cryst. Res. Technol. 36(12), 1329–1343 (2001).
9. Teodorczyk, T, Januszkiewicz, KT: Computer simulation of electric multizone tube furnaces. Adv. Eng. Softw. 30, 121–126 (1999).
10. Zareba, S, Wolff, A, Jelali, M: Mathematical modelling and parameter identification of a stainless steel annealing furnace. Simul. Model. Pract. Theory 60, 15–39 (2016).
SWT was responsible for Section 2, SW was responsible for Section 3. Both authors read and approved the final manuscript.
Mathematics Department, University of Auckland, Private Bag 92019, Auckland, New Zealand
Stephen W. Taylor & Shixiao Wang
Correspondence to Stephen W. Taylor.
Taylor, S.W., Wang, S. Modelling steel strip heating within an annealing furnace. Pac. J. Math. Ind. 9, 5 (2017) doi:10.1186/s40736-017-0030-7
Accepted: 12 April 2017
Keywords: Radiant heat transfer; Annealing furnace
Results for 'Knowability Paradox'
The Knowability Paradox. Jonathan L. Kvanvig - 2006 - Oxford, England: Oxford University Press UK.
The paradox of knowability poses real difficulties to our understanding of truth. It does so by claiming that if we assume a truth is knowable, we can demonstrate that it is known. This demonstration threatens our understanding of truth in two quite different ways, only one of which has been recognized to this point in the literature on the paradox. Jonathan Kvanvig first unearths the ways in which the paradox is threatening, and then delineates an approach to the paradox that solves both of the problems raised by the paradox for our understanding of truth. His book will be of interest throughout philosophy, but especially to logicians and epistemologists.
The Knowability Paradox in the light of a Logic for Pragmatics. Massimiliano Carrara & Daniele Chiffi - 2014 - In R. Ciuni, H. Wansing & C. Willkommen (eds.), Recent Trends in Philosophical Logic (Proceedings of Trends in Logic XI). Berlin: Springer. pp. 47-58.
The Knowability Paradox is a logical argument showing that if all truths are knowable in principle, then all truths are, in fact, known. Many strategies have been suggested in order to avoid the paradoxical conclusion. A family of solutions, called logical revision, has been proposed to solve the paradox by revising the logic underneath, with an intuitionistic revision included. In this paper, we focus on so-called revisionary solutions to the paradox, solutions that put the blame on the underlying logic. Specifically, we analyse a possible translation of the paradox into a modified intuitionistic fragment of a logic for pragmatics (KILP) inspired by Dalla Pozza and Garola in 1995. Our aim is to understand if KILP is a candidate for the logical revision of the paradox and to compare it with the standard intuitionistic solution to the paradox.
From the Knowability Paradox to the existence of proofs. W. Dean & H. Kurokawa - 2010 - Synthese 176 (2):177-225.
The Knowability Paradox purports to show that the controversial but not patently absurd hypothesis that all truths are knowable entails the implausible conclusion that all truths are known. The notoriety of this argument owes to the negative light it appears to cast on the view that there can be no verification-transcendent truths. We argue that it is overly simplistic to formalize the views of contemporary verificationists like Dummett, Prawitz or Martin-Löf using the sort of propositional modal operators which are employed in the original derivation of the Paradox. Instead we propose that the central tenet of verificationism is most accurately formulated as follows: if φ is true, then there exists a proof of φ. Building on the work of Artemov (Bull Symb Log 7(1):1-36, 2001), a system of explicit modal logic with proof quantifiers is introduced to reason about such statements. When the original reasoning of the Paradox is developed in this setting, we reach not a contradiction, but rather the conclusion that there must exist non-constructed proofs. This outcome is evaluated relative to the controversy between Dummett and Prawitz about proof existence and bivalence.
The knowability paradox and the prospects for anti-realism. Jonathan Kvanvig - 1995 - Noûs 29 (4):481-500.
How to solve the knowability paradox with transcendental epistemology. Andrew Stephenson - 2018 - Synthese 198 (Suppl 13):3253-3278.
A novel solution to the knowability paradox is proposed based on Kant's transcendental epistemology. The 'paradox' refers to a simple argument from the moderate claim that all truths are knowable to the extreme claim that all truths are known. It is significant because anti-realists have wanted to maintain knowability but reject omniscience. The core of the proposed solution is to concede realism about epistemic statements while maintaining anti-realism about non-epistemic statements. Transcendental epistemology supports such a view by providing for a sharp distinction between how we come to understand and apply epistemic versus non-epistemic concepts, the former through our capacity for a special kind of reflective self-knowledge Kant calls 'transcendental apperception'. The proposal is a version of restriction strategy: it solves the paradox by restricting the anti-realist's knowability principle. Restriction strategies have been a common response to the paradox but previous versions face serious difficulties: either they result in a knowability principle too weak to do the work anti-realists want it to, or they succumb to modified forms of the paradox, or they are ad hoc. It is argued that restricting knowability to non-epistemic statements by conceding realism about epistemic statements avoids all versions of the paradox, leaves enough for the anti-realist attack on classical logic, and, with the help of transcendental epistemology, is principled in a way that remains compatible with a thoroughly anti-realist outlook.
Knowability Paradox. Jonathan L. Kvanvig - 2006 - Oxford, England: Oxford University Press UK.
The paradox of knowability, derived from a proof by Frederic Fitch in 1963, is one of the deepest paradoxes concerning the nature of truth. Jonathan Kvanvig argues that the depth of the paradox has not been adequately appreciated. It has long been known that the paradox threatens antirealist conceptions of truth according to which truth is epistemic. If truth is epistemic, what better way to express that idea than to maintain that all truths are knowable? In the face of the paradox, however, such a characterization threatens to undermine antirealism. If Fitch's proof is valid, then one can be an antirealist of this sort only by endorsing the conclusion of the proof that all truths are known. Realists about truth have tended to stand on the sidelines and cheer the difficulties faced by their opponents from Fitch's proof. Kvanvig argues that this perspective is wholly unwarranted. He argues that there are two problems raised by the paradox, one that threatens antirealism about truth and the other that threatens everybody's view about truth, realist or antirealist. The problem facing antirealism has had a number of proposed solutions over the past 40 years, and the results have not been especially promising with regard to the first problem. The second problem has not even been acknowledged, however, and the proposals regarding the first problem are irrelevant to the second problem. This book thus provides a thorough investigation of the literature on the paradox, and also proposes a solution to the deeper of the two problems raised by Fitch's proof. It provides a complete picture of the paradoxicality that results from Fitch's proof, and presents a solution to the paradox that claims to address both problems raised by the original proof.
The Knowability Paradox and Unsuccessful Updates. Arkadiusz Wójcik - 2020 - Studies in Logic, Grammar and Rhetoric 62 (1):53-71.
In this paper we undertake an analysis of the knowability paradox in the light of modal epistemic logics and of the phenomenon of unsuccessful updates. The knowability paradox stems from the Church-Fitch observation that the plausible knowability principle, according to which all truths are knowable, yields the unacceptable conclusion that all truths are known. We show that the phenomenon of an unsuccessful update is the reason for the paradox arising. Based on this diagnosis, we propose a restriction on the knowability principle which resolves the paradox.
New Essays on the Knowability Paradox. Joe Salerno (ed.) - 2008 - Oxford, England and New York, NY, USA: Oxford University Press.
This collection assembles Church's referee reports, Fitch's 1963 paper, and nineteen new papers on the knowability paradox.
The Knowability Paradox, perfectibility of science and reductionism. Massimiliano Carrara & Davide Fassio - unknown.
A logical argument known as Fitch's Paradox of Knowability, starting from the assumption that every truth is knowable, leads to the consequence that every truth is also actually known. Then, given the ordinary fact that some true propositions are not actually known, it concludes, by modus tollens, that there are unknowable truths. The main literature on the topic has been focusing on the threat the argument poses to the so called semantic anti-realist theories, which aim to epistemically characterize the notion of truth; according to those theories, every true proposition must be knowable. But the paradox seems to be a problem also for epistemology and philosophy of science: the conclusion of the paradox – the claim that there are unknowable truths – seems to seriously narrow our epistemic possibilities and to constitute a limit for knowledge. This fact contrasts with certain views in philosophy of science according to which every scientific truth is in principle knowable and, at least at an ideal level, a perfected, "all-embracing", omniscient science is possible. The main strategies proposed in order to avoid the paradoxical conclusion, given their effectiveness, are able to address only semantic problems, not epistemological ones. However, recently Bernard Linsky (2008) proposed a solution to the paradox that seems to be effective also for the epistemological problems. In particular, he suggested a possible way to block the argument employing a type-distinction of knowledge. In the present paper, firstly, we introduce the paradox and the threat it represents for certain views in epistemology and philosophy of science; secondly, we show Linsky's solution; thirdly, we argue that this solution, in order to be effective, needs a certain kind of justification, and we suggest a way of justifying it in the scientific field; fourthly, we show that the effectiveness of our proposal depends on the degree of reductionism adopted in science: it is available only if we do not adopt a complete reductionism.
The knowability paradox – by Jonathan Kvanvig. Fredrik Stjernberg - 2008 - Theoria 74 (3):255-262.
The Church–Fitch knowability paradox in the light of structural proof theory. Paolo Maffezioli, Alberto Naibo & Sara Negri - 2012 - Synthese 190 (14):2677-2716.
Anti-realist epistemic conceptions of truth imply what is called the knowability principle: All truths are possibly known. The principle can be formalized in a bimodal propositional logic, with an alethic modality ${\diamondsuit}$ and an epistemic modality ${\mathcal{K}}$, by the axiom scheme ${A \supset \diamondsuit \mathcal{K} A}$. The use of classical logic and minimal assumptions about the two modalities lead to the paradoxical conclusion that all truths are known, ${A \supset \mathcal{K} A}$. A Gentzen-style reconstruction of the Church–Fitch paradox is presented following a labelled approach to sequent calculi. First, a cut-free system for classical bimodal logic is introduced as the logical basis for the Church–Fitch paradox and the relationships between ${\mathcal {K}}$ and ${\diamondsuit}$ are taken into account. Afterwards, by exploiting the structural properties of the system, in particular cut elimination, the semantic frame conditions that correspond to KP are determined and added in the form of a block of nonlogical inference rules. Within this new system for classical and intuitionistic "knowability logic", it is possible to give a satisfactory cut-free reconstruction of the Church–Fitch derivation and to confirm that OP is only classically derivable, but neither intuitionistically derivable nor intuitionistically admissible. Finally, it is shown that in classical knowability logic, the Church–Fitch derivation is nothing else but a fallacy and does not represent a real threat for anti-realism.
The Knowability Paradox, by Jonathan L. Kvanvig. [REVIEW] Igor Douven - 2007 - Ars Disputandi 7.
The incarnation and the knowability paradox. Jonathan Kvanvig - 2010 - Synthese 173 (1):89-105.
The best defense of the doctrine of the Incarnation implies that traditional Christianity has a special stake in the knowability paradox, a stake not shared by other theistic perspectives or by non-traditional accounts of the Incarnation. Perhaps this stake is not even shared by antirealism, the view most obviously threatened by the paradox. I argue for these points, concluding that these results put traditional Christianity at a disadvantage compared to other viewpoints, and I close with some comments about the extent of the burden incurred.
Perfected Science and the Knowability Paradox. Massimiliano Carrara & Davide Fassio - 2010 - In M. M. D'Agostino, G. Giorello, F. Laudisa, T. Pievani & C. Sinigaglia (eds.), New Essays in Logic and Philosophy of Science. London: College Publications.
In "The Limits of Science" N. Rescher introduces a logical argument known as the Knowability Paradox, according to which, if every true proposition is knowable, then every true proposition is known, i.e. if there are unknown truths, there are unknowable truths. Rescher argues that the Knowability Paradox, giving evidence to a limit of our knowledge (the existence of unknowable truths) could be used for arguing against perfected science. In this article we present two criticisms against Rescher's (...) argument. (shrink)
New Essays on the Knowability Paradox. [REVIEW] Jens Christian Bjerring - 2012 - History and Philosophy of Logic 33 (1):101-104.
History and Philosophy of Logic, Volume 33, Issue 1, Page 101-104, February 2012.
Why Knowledge Should Not Be Typed: An Argument against the Type Solution to the Knowability Paradox. Massimiliano Carrara & Davide Fassio - 2011 - Theoria 77 (2):180-193.
The Knowability Paradox is a logical argument to the effect that, if there are truths not actually known, then there are unknowable truths. Recently, Alexander Paseau and Bernard Linsky have independently suggested a possible way to counter this argument by typing knowledge. In this article, we argue against their proposal that if one abstracts from other possible independent considerations supporting reasons for typing knowledge and considers the motivation for a type-theoretic approach with respect to the Knowability Paradox alone, there is no substantive philosophical motivation to type knowledge, except that of solving the paradox. Every attempt to independently justify the typing of knowledge is doomed to failure.
Jonathan Kvanvig, The Knowability Paradox. [REVIEW] Manuel Bremer - 2007 - Philosophy in Review 27:415-416.
A Multimodal Pragmatic Treatment of the Knowability Paradox. Massimiliano Carrara, Daniele Chiffi & Davide Sergio - 2017 - In Gillman Payette & Rafal Urbaniak (eds.), Applications of Formal Philosophy. The Road Less Travelled. Berlin: Springer International Publishing AG. pp. 195-209.
Review: The Knowability Paradox. [REVIEW] C. S. Jenkins - 2006 - Mind 115 (460):1141-1147.
New Essays on the Knowability Paradox. Alexandre Costa-Leite - 2011 - International Studies in the Philosophy of Science 25 (2):194-196.
International Studies in the Philosophy of Science, Volume 25, Issue 2, Page 194-196, June 2011.
New Essays on the Knowability Paradox – Edited by Joe Salerno. Hans van Ditmarsch - 2010 - Theoria 76 (3):270-273.
Fitch's problem and the knowability paradox: Logical and philosophical remarks. Concha Martinez, Jose-Miguel Sagüillo & Javier Vilanova - 1997 - Logica Trianguli 1:73-91.
Fitch's problem and the "knowability paradox" involve a couple of argumentations that are to each other in the same relation as Cantor's uncollected multitudes theorem and Russell's paradox. The authors exhibit the logical nature of the theorem and of the paradox and show their philosophical import, both from an anti-realist and from a realist perspective. In particular, the authors discuss an anti-realist solution to Fitch's problem and provide an anti-realist interpretation of the problematic statement "It is knowable that r is known and yet unknown". Then, it is argued that the knowability paradox has a solution even if one adopts a realist point of view. The authors provide a solution that takes into account the ambiguity of the term 'knowability' by deploying a temporal possible world semantics for epistemic modalities.
Phenomenology, anti-realism, and the knowability paradox. James Kinkaid - 2022 - European Journal of Philosophy 30 (3):1010-1027.
Husserl endorses ideal verificationism, the claim that there is a necessary correlation between truth and the ideal possibility of experience. This puts him in the company of semantic anti-realists like Dummett, Tennant, and Wright who endorse the knowability thesis that all truths are knowable. Unfortunately, there is a simple, seductive, and troubling argument due to Alonzo Church and Frederic Fitch that the knowability thesis collapses into the omniscience thesis that all truths are known. Phenomenologists should be worried. I assess the damage by surveying responses that may be open to Husserl. In particular, I explore whether Husserl ought to have adopted intuitionistic logic and motivate a restriction of ideal verificationism on phenomenological grounds.
Joe Salerno (ed): New essays on the knowability paradox. [REVIEW] Mark Jago - 2010 - Journal of Logic, Language and Information 19 (3):383-387.
Joe Salerno, ed. New Essays on the Knowability Paradox. Reviewed by Sam Cowling - 2010 - Philosophy in Review 30 (3):220-222.
Review of Jonathan L. Kvanvig, The Knowability Paradox. [REVIEW] Philip Percival - 2007 - Notre Dame Philosophical Reviews 2007 (3).
Knowability and bivalence: intuitionistic solutions to the Paradox of Knowability. Julien Murzi - 2010 - Philosophical Studies 149 (2):269-281.
In this paper, I focus on some intuitionistic solutions to the Paradox of Knowability. I first consider the relatively little discussed idea that, on an intuitionistic interpretation of the conditional, there is no paradox to start with. I show that this proposal only works if proofs are thought of as tokens, and suggest that anti-realists themselves have good reasons for thinking of proofs as types. I then turn to more standard intuitionistic treatments, as proposed by Timothy Williamson and, most recently, Michael Dummett. Intuitionists can either point out the intuitionistic invalidity of the inference from the claim that all truths are knowable to the insane conclusion that all truths are known, or they can outright demur from asserting the existence of forever-unknown truths, perhaps questioning—as Dummett now suggests—the applicability of the Principle of Bivalence to a certain class of empirical statements. I argue that if intuitionists reject strict finitism—the view that all truths are knowable by beings just like us—the prospects for either proposal look bleak.
The Paradox of Knowability and Factivity. Michael Shaffer - 2014 - Polish Journal of Philosophy 8 (1):85-91.
This paper shows that the knowability paradox isn't a paradox because the derivation of the paradox is faulty. This is explained by showing that the K operator employed in generating the paradox is used equivocally and when the equivocation is eliminated the derivation fails.
Knowability, possibility and paradox. Berit Brogaard & Joe Salerno - 2008 - In Vincent Hendricks (ed.), New Waves in Epistemology. Palgrave-Macmillan.
The paradox of knowability threatens to draw a logical equivalence between the believable claim that all truths are knowable and the obviously false claim that all truths are known. In this paper we evaluate prominent proposals for resolving the paradox of knowability. For instance, we argue that Neil Tennant's restriction strategy, which aims principally to restrict the main quantifier in 'all truths are knowable', does not get to the heart of the problem since there are knowability paradoxes that the restriction does nothing to thwart. We argue that Jon Kvanvig's strategy, which aims to block the paradox by appealing to the special role of quantified epistemic expressions in modal contexts, has grave errors. We offer here a new proposal founded on Kvanvig's insight that quantified expressions play a special role in modal contexts. On an articulation of this special role provided by Stanley and Szabo, we propose a solution to the knowability paradoxes.
Fitchův paradox poznatelnosti a rozvětvená teorie typů [Fitch's Paradox of Knowability and Ramified Theory of Types]. Jiri Raclavsky - 2013 - Organon F: Medzinárodný Časopis Pre Analytickú Filozofiu 20:144-165.
It is already known that Fitch's knowability paradox can be solved by typing knowledge within ramified theory of types. One of the aims of this paper is to provide a greater defence of the approach against recently raised criticism. My second goal is to provide sufficient support for an assumption which is needed for this particular application of typing knowledge but which is not inherent to ramified theory of types as such.
Knowability and Other Onto-theological Paradoxes. Franca D'Agostini - 2019 - Logica Universalis 13 (4):577-586.
In virtue of the Fitch-Church proof, also known as the knowability paradox, we are able to prove that if everything is knowable, then everything is known. I present two 'onto-theological' versions of the proof, one concerning collective omniscience and another concerning omnificence. I claim these arguments suggest new ways of exploring the intersection between logical and ontological givens that is a grounding theme of religious thought. What is more, they are good examples of what I call semi-paradoxes: apparently sound arguments whose conclusion is not properly unacceptable, but simply arguable.
The Paradox of Knowability and Epistemic Theories of Truth. Boris Rähme - manuscript
The article suggests a reading of the term 'epistemic account of truth' which runs contrary to a widespread consensus with regard to what epistemic accounts are meant to provide, namely a definition of truth in epistemic terms. Section 1 introduces a variety of possible epistemic accounts that differ with regard to the strength of the epistemic constraints they impose on truth. Section 2 introduces the paradox of knowability and presents a slightly reconstructed version of a related argument brought forward by Wolfgang Künne. I accept the paradox and Künne's argument as sound objections to all the different epistemic accounts which are committed to one of the various constraints on truth introduced in section 1. Section 3 offers a modified epistemic constraint which, or so I argue, is immune to the paradox of knowability and plausible on independent grounds.
The Paradox of Knowability from a Russellian Perspective. Pierdaniele Giaretta - 2009 - Prolegomena 8 (2):141-158.
The paradox of knowability and the debate about it are shortly presented. Some assumptions which appear more or less tacitly involved in its discussion are made explicit. They are embedded and integrated in a Russellian framework, where a formal paradox, very similar to the Russell-Myhill paradox, is derived. Its solution is provided within a Russellian formal logic introduced by A. Church. It follows that knowledge should be typed. Some relevant aspects of the typing of knowledge are pointed out.
The paradox of knowability. Dorothy Edgington - 1985 - Mind 94 (376):557-568.
Fitch's Paradox of Knowability. Berit Brogaard & Joe Salerno - 2010 - The Stanford Encyclopedia of Philosophy.
The paradox of knowability is a logical result suggesting that, necessarily, if all truths are knowable in principle then all truths are in fact known. The contrapositive of the result says, necessarily, if in fact there is an unknown truth, then there is a truth that couldn't possibly be known. More specifically, if p is a truth that is never known then it is unknowable that p is a truth that is never known. The proof has been used to argue against versions of anti-realism committed to the thesis that all truths are knowable. For clearly there are unknown truths; individually and collectively we are non-omniscient. So, by the main result, it is false that all truths are knowable. The result has also been used to draw more general lessons about the limits of human knowledge. Still others have taken the proof to be fallacious, since it collapses an apparently moderate brand of anti-realism into an obviously implausible and naive idealism.
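For reference, the proof these entries keep returning to is short. In outline, writing $K$ for "it is known (by someone at some time) that" and assuming only that knowledge is factive and distributes over conjunction:

$$
\begin{aligned}
&(1)\ p \rightarrow \Diamond Kp &&\text{knowability: all truths are knowable}\\
&(2)\ (p \wedge \neg Kp) \rightarrow \Diamond K(p \wedge \neg Kp) &&\text{instance of (1)}\\
&(3)\ K(p \wedge \neg Kp) \rightarrow (Kp \wedge K\neg Kp) \rightarrow (Kp \wedge \neg Kp) &&\text{distribution, then factivity}\\
&(4)\ \neg\Diamond K(p \wedge \neg Kp) &&\text{(3) is contradictory, so necessarily false}\\
&(5)\ \neg(p \wedge \neg Kp),\ \text{i.e.}\ p \rightarrow Kp &&\text{from (2) and (4): all truths are known}
\end{aligned}
$$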
Knowability and a New Paradox of Happiness. Joe Salerno - 2018 - In Hans van Ditmarsch & Gabriel Sandu (eds.), Jaakko Hintikka on Knowledge and Game Theoretical Semantics. Springer. pp. 457-474.
The paper examines the logic of the knowability paradox and a structural analogue, a new paradox of happiness. We develop a general understanding of what it is to be a Fitch paradox, and follow a natural thread in the literature that attempts to block or resolve Fitch paradoxes. We conclude that, in the case of the attitude of happiness, the new paradox remains even if one finds the knowability analogue non-threatening.
Knowability as De Re Modality: A Certain Solution to the Fitch Paradox. Tomasz Jarmużek, Krzysztof Krawczyk & Rafał Palczewski - 2020 - Roczniki Filozoficzne 68 (4):291-313.
In this article we try to find a new, intuitive solution to Fitch's paradox. We claim that the traditional formulation of the knowability principle rests on a mistaken understanding of knowability as a de dicto modality. Instead, we propose to understand knowability as a de re modality. We present a minimal logic of knowability in which the knowability principle is valid but Fitch's paradox no longer holds. We characterize the logic semantically as well as by axiomatic and tableau approaches.
Distributed Knowability and Fitch's Paradox. Rafał Palczewski - 2007 - Studia Logica 86 (3):455-478.
Recently predominant forms of anti-realism claim that all truths are knowable. We argue that in a logical explanation of the notion of knowability more attention should be paid to its epistemic part. Notions of group knowledge are especially useful in such an explanation. In this paper we examine mainly the notion of distributed knowability and show its effectiveness in the case of Fitch's paradox. The proposed approach raises some philosophical questions to which we try to respond. We also show how we can combine our point of view on Fitch's paradox with the others. Next we give an answer to the question: is distributed knowability factive? At the end, we present some details concerning a construction of anti-realist modal epistemic logic.
Kant, the Paradox of Knowability, and the Meaning of 'Experience'. Andrew Stephenson - 2015 - Philosophers' Imprint 15 (27):1-19.
It is often claimed that anti-realism is a form of transcendental idealism or that Kant is an anti-realist. It is also often claimed that anti-realists are committed to some form of knowability principle and that such principles have problematic consequences. It is therefore natural to ask whether Kant is so committed, and if he is, whether this leads him into difficulties. I argue that a standard reading of Kant does indeed have him committed to the claim that all empirical truths are knowable and that this claim entails that there is no empirical truth that is never known. I extend the result to a priori truths and draw some general philosophical lessons from this extension. However, I then propose a re-examination of Kant's notion of experience according to which he carefully eschews any commitment to empirical knowability. Finally I respond to a remaining problem that stems from a weaker, justified believability principle.
The paradox of knowability and the mapping objection. Stig Alstrup Rasmussen - 2009 - In Joe Salerno (ed.), New Essays on the Knowability Paradox. Oxford University Press.
Discovering knowability: a semantic analysis. Sergei Artemov & Tudor Protopopescu - 2013 - Synthese 190 (16):3349-3376.
In this paper, we provide a semantic analysis of the well-known knowability paradox stemming from the Church–Fitch observation that the meaningful knowability principle 'all truths are knowable', when expressed as a bi-modal principle F → ◊KF, yields an unacceptable omniscience property 'all truths are known'. We offer an alternative semantic proof of this fact independent of the Church–Fitch argument. This shows that the knowability paradox is not intrinsically related to the Church–Fitch proof, nor to the Moore sentence upon which it relies, but rather to the knowability principle itself. Further, we show that, from a verifiability perspective, the knowability principle fails in the classical logic setting because it is missing the explicit incorporation of a hidden assumption of 'stability': 'the proposition in question does not change from true to false in the process of discovery.' Once stability is taken into account, the resulting 'stable knowability principle' and its nuanced versions more accurately represent verification-based knowability and do not yield omniscience.
On the paradox of knowability. Timothy Williamson - 1987 - Mind 96 (382):256-261.
Fitch's Paradox of Knowability and the Knower Paradox: Against a Proposed Dialetheist Unified Solution. Ricardo Santos - 2017 - Revista Portuguesa de Filosofia 73 (3-4):1001-1020.
After introducing Fitch's paradox of knowability and the knower paradox, the paper critically discusses the dialetheist unified solution to both problems that Beall and Priest have proposed. It is first argued that the dialetheist approach to the knower paradox can withstand the main objections against it, these being that the approach entails an understanding of negation that is intolerably weak and that it commits dialetheists to jointly accept and reject the same thing. The lesson of the knower paradox, according to dialetheism, is that human knowledge is inconsistent. The paper also argues that this inconsistency has not been shown by dialetheists to be wide enough in its scope to justify their approach to Fitch's problem. The connection between the two problems is superficial and therefore the proposed unified solution fails.
Fitch's paradox of knowability. Michael Dummett - 2009 - In Joe Salerno (ed.), New Essays on the Knowability Paradox. Oxford University Press.
Clues to the paradoxes of knowability: Reply to Dummett and Tennant. Berit Brogaard & Joe Salerno - 2002 - Analysis 62 (2):143-150.
Tr(A) iff ◊K(A). To remedy the error, Dummett proposes the following inductive characterization of truth: (i) Tr(A) iff ◊K(A), if A is a basic statement; (ii) Tr(A and B) iff Tr(A) & Tr(B); (iii) Tr(A or B) iff Tr(A) v Tr(B); (iv) Tr(if A, then B) iff (Tr(A) → Tr(B)); (v) Tr(it is not the case that A) iff ¬Tr(A), where the logical constant on the right-hand side of each biconditional clause is understood as subject to the laws of intuitionistic logic. The only other principle in play in Dummett's discussion is (+) A iff Tr(A), which, as he notes, the anti-realist is likely to accept.
Situations, Truth and Knowability: A Situation-Theoretic Analysis of a Paradox by Fitch. Sten Lindström - 1997 - In Eva Ejerhed & Sten Lindström (eds.), Logic, Action and Cognition: Essays in Philosophical Logic. Kluwer Academic Publishers.
Chapter 12 - Parametric Equations, Polar Coordinates, and Conic Sections - 12.5 Conic Sections - Exercises - Page 636: 56
The conic section is an ellipse and the directrix is $x=8$.
Converting the given equation to the standard form $$ r=\frac{e d}{1+e \cos \theta}, $$ we get $$ r=\frac{8}{4+ \cos \theta}=\frac{2}{1+(1/4) \cos \theta}. $$ Thus $ e=\frac{1}{4}$. Since $e \lt 1$, the conic section is an ellipse. To find the directrix, we first find $d$. Since $ed=2$ and $e=1/4$, we have $d=8$, and hence the directrix is $x=8$.
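A quick numeric sanity check of that algebra (a throwaway sketch; the variable names are arbitrary):

```python
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 9)
e, d = 0.25, 8.0

given = 8.0 / (4.0 + np.cos(theta))            # the equation as stated
standard = e * d / (1.0 + e * np.cos(theta))   # focus-directrix form with e = 1/4, d = 8

print(np.allclose(given, standard))            # True, consistent with an ellipse (e < 1)
```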
Search results for: Y. Chen
An empirical study of computer science majors' intentions in Vietnamese higher education
Tim Chen, J.C.‐Y. Chen
Computer Applications in Engineering Education > 27 > 4 > 814 - 820
The study aimed to explore Vietnamese computer science (CS) majors' intentions of entering computer science professions in the future and to examine the factors influencing those intentions, applying the theory of planned behavior as the framework. Questionnaire surveys of 725 Vietnamese CS majors were conducted and analyzed using path analysis in SPSS Statistics 17.0.1. We found that the CS majors'...
Transmittance Analysis for the Translucent White Porcelain of the Xing Kiln, China
W. X. Wang, Y. Chen, Z. Z. Zhang, C. S. Wang
Archaeometry > 61 > 4 > 828 - 836
The Xing kiln, which is located in Xingtai, Hebei Province, is famous for white porcelain production, especially translucent white porcelain. However, the translucency, as the most important characteristic of translucent white porcelain, is rarely mentioned. In this study, eight samples including translucent white porcelain and non‐translucent white porcelain excavated from the Xing kiln are quantitatively...
Search for a heavy pseudoscalar boson decaying to a Z and a Higgs boson at $\sqrt{s} = 13$ TeV
A. M. Sirunyan, A. Tumasyan, W. Adam, F. Ambrogi, et al.
A search is presented for a heavy pseudoscalar boson A decaying to a Z boson and a Higgs boson with mass of 125 GeV. In the final state considered, the Higgs boson decays to a bottom quark and antiquark, and the Z boson decays either into a pair of electrons, muons, or neutrinos. The analysis is performed using a data sample corresponding to an integrated luminosity...
Genome‐wide association study reveals candidate genes associated with body measurement traits in Chinese Wagyu beef cattle
B. An, J. Xia, T. Chang, X. Wang, et al.
Animal Genetics > 50 > 4 > 386 - 390
We performed a genome‐wide association study to identify candidate genes for body measurement traits in 463 Wagyu beef cattle typed with the Illumina Bovine HD 770K SNP array. At the genome‐wide level, we detected 18, five and one SNPs associated with hip height, body height and body length respectively. In total, these SNPs are within or near 11 genes, six of which (PENK, XKR4, IMPAD1, PLAG1, CCND2...
A splicing mutation in PHKG1 decreased its expression in skeletal muscle and caused PSE meat in Duroc × Luchuan crossbred pigs
Y. Liu, Y. Liu, T. Ma, H. Long, et al.
In recent years, Luchuan pigs in southern China have been used to produce high‐quality meat by crossbreeding them with Duroc boars; however, PSE (pale, soft and exudative) meat was frequently reported in the crossbred pigs, and the underlying reason remains unknown. We excluded the possibility of the well‐known causative mutations in RYR1 and PRKAG3 but identified the existence of an unfavorable allele...
Search for supersymmetry in final states with photons and missing transverse momentum in proton-proton collisions at 13 TeV
The CMS collaboration, A. M. Sirunyan, A. Tumasyan, W. Adam, et al.
Abstract Results are reported of a search for supersymmetry in final states with photons and missing transverse momentum in proton-proton collisions at the LHC. The data sample corresponds to an integrated luminosity of 35.9 fb−1 collected at a center-of-mass energy of 13 TeV using the CMS detector. The results are interpreted in the context of models of gauge-mediated supersymmetry breaking. Production...
Morphology Analysis and Characteristics Evaluation of Typical Super Abrasive Grits in Micron Scale
Y. Chen, X. Chen, L. AIOuarab, T. Opoz, et al.
Journal of Superhard Materials > 2019 > 41 > 3 > 189-200
Distribution characterization of geometry shape and size of abrasive grits with high quality in tight size band and exact pattern is crucial for modern tool manufacturer to make fine powder abrasive tool and other powder tools, but complex to be classified and evaluated accurately due to the lack of scientific method. In contrast to industrial methods with sieving mesh size or simplified projection...
Search for the associated production of the Higgs boson and a vector boson in proton-proton collisions at $\sqrt{s} = 13$ TeV via Higgs boson decays to τ leptons
Abstract A search for the standard model Higgs boson produced in association with a W or a Z boson and decaying to a pair of τ leptons is performed. A data sample of proton-proton collisions collected at $\sqrt{s} = 13$ TeV by the CMS experiment at the CERN LHC is used, corresponding to an integrated luminosity of 35.9 fb−1. The signal strength is measured relative to the expectation...
Regulation of m6A RNA Methylation and Its Effect on Myogenic Differentiation in Murine Myoblasts
J. N. Chen, Y. Chen, Y. Y. Wei, M. A. Raza, et al.
Molecular Biology > 2019 > 53 > 3 > 384-392
N6-methyladenosine (m6A) has been identified as a conserved epitranscriptomic modification of eukaryotic mRNAs, and plays important biological roles in the regulation of cellular metabolic processes. However, its role in myogenic differentiation is unclear. Here, we altered the m6A RNA methylation level by overexpression of METTL3, and explored the effect of m6A RNA methylation on myogenic differentiation...
Effects of colchicine in adults with metabolic syndrome: A pilot randomized controlled trial
Andrew P. Demidowich, Jordan A. Levine, Ginikanwa I. Onyekaba, Shahzaib M. Khan, et al.
Diabetes, Obesity and Metabolism > 21 > 7 > 1642 - 1651
Aim To evaluate the efficacy and safety of colchicine for improving metabolic and inflammatory outcomes in people with obesity and metabolic syndrome (MetS). Materials and methods Adults with obesity and MetS, but who did not have diabetes, were randomized to colchicine 0.6 mg or placebo capsules twice daily for 3 months. The primary outcome was change in insulin sensitivity (SI) as estimated by...
Alien chromosome segment from Aegilops speltoides and Dasypyrum villosum increases drought tolerance in wheat via profuse and deep root system
M. Djanaguiraman, P. V. V. Prasad, J. Kumari, S. K. Sehgal, et al.
BMC Plant Biology > 2019 > 19 > 1 > 1-15
Background Recurrent drought associated with climate change is a major constraint to wheat (Triticum aestivum L.) productivity. This study aimed to (i) quantify the effects of addition/substitution/translocation of chromosome segments from wild relatives of wheat on the root, physiological and yield traits of hexaploid wheat under drought, and (ii) understand the mechanism(s) associated with drought...
Structural Characterization and Expression Analysis of SmCSD1 Gene in Eggplant (Solanum melongena)
L. Zhou, L. Xu, M. M. Jiang, Y. Liu, et al.
Russian Journal of Plant Physiology > 2019 > 66 > 3 > 461-468
Superoxide dismutases (SODs) are crucial for plants for stress tolerance. They convert the superoxide anion into oxygen and hydrogen peroxide, providing the first line of defense against reactive oxygen species (ROS). SmCSD1 isolated from Solanum melongena L. belongs to the plant Cu/ZnSOD superfamily and shares high homology with Cu/ZnSODs in potato and tomato by protein sequence analysis. It was...
Search for a low-mass τ−τ+ resonance in association with a bottom quark in proton-proton collisions at $\sqrt{s} = 13$ TeV
Abstract A general search is presented for a low-mass τ−τ+ resonance produced in association with a bottom quark. The search is based on proton-proton collision data at a center-of-mass energy of 13 TeV collected by the CMS experiment at the LHC, corresponding to an integrated luminosity of 35.9 fb−1. The data are consistent with the standard model expectation. Upper limits at 95% confidence level...
Search for supersymmetry in events with a photon, jets, b-jets, and missing transverse momentum in proton–proton collisions at 13 TeV
A search for supersymmetry is presented based on events with at least one photon, jets, and large missing transverse momentum produced in proton–proton collisions at a center-of-mass energy of 13 TeV. The data correspond to an integrated luminosity of 35.9 fb−1 and were recorded at the LHC with the CMS detector in 2016. The analysis characterizes signal-like...
Systemic disruption of the homeostasis of transfer RNA isopentenyltransferase causes growth and development abnormalities in Bombyx mori
Y. Chen, B. Bai, H. Yan, F. Wen, et al.
Insect Molecular Biology > 28 > 3 > 380 - 391
Isopentenylation at A37 (i6A37) of some transfer RNAs (tRNAs) plays a vital role in regulating the efficiency and fidelity of protein synthesis. However, whether insects, which are well known for their highly efficient protein synthesis machinery, employ this regulatory mechanism remains uninvestigated. In the current study, a candidate tRNA isopentenyltransferase (IPT) gene with three alternative...
Subarray Beam-space Adaptive Beamforming Combined with Array Shape Estimation based on Non-Acoustic Sensor
Q. Wang, B. Zhou, Y. Chen, H. Quan
Acoustical Physics > 2019 > 65 > 2 > 226-233
To address the issue of serious decline in performance of the array signal processing caused by the towed array shape distortion during maneuvering, this paper presents a new method of subarray beam-space adaptive beamforming combined with an array shape estimation method based on non-acoustic sensor. Firstly, the array shape through the approximate circular arc structure of the array segment between...
Logistic equation - with recent commentary on Corona virus
January 2011 edited March 2020 in Epidemiology
I created a tiny boring stub:
Logistic equation
Staffan Liljegren
and I added a link to what we have to deal with soon: Population growth
WebHubTel
There is some push to use the Logistic equation solution to model oil depletion. It's one of those ideas that draws from questionable premises: "do oil molecules multiply?", "is there a carrying capacity for oil?", etc. It would be interesting to either counter or support this claim by deconstructing the equation for oil. (I have my own theories that I can contribute).
The other interesting feature of the Logistic solution is that it has the same structure as the Fermi-Dirac distribution function, although the derivation is completely different.
This leads one to think that the Logistic can have other origins, not related to solutions of the Logistic equation. I have my own derivation that I can contribute, drawing from dispersion mechanisms.
I have my own derivation that I can contribute, drawing from dispersion mechanisms.
Please do! You can just create a separate section on that page and give your own derivation.
Comment Source:> I have my own derivation that I can contribute, drawing from dispersion mechanisms. Please do! You can just create a separate section on that page and give your own derivation.
I added to the boring stub a discussion of how it can be chaotic.
So far I've developed the discrete case.
The continuous case displays chaos if the logistic growth model is given delayed feedback.
As for how the logistic function describes oil depletion, see The Derivation of "Logistic-shaped" Discovery in The Oil Drum. The point is that the logistic function describes discovery. The rate of discovery of new oil fields grows with the rate of search and decreases with the fraction of fields left to discover.
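A minimal sketch of the discrete case, assuming the standard parameterization $x_{n+1} = r x_n (1 - x_n)$ (the specific $r$ values are just illustrative):

```python
import numpy as np

def logistic_map(r, x0=0.2, n=50):
    """Iterate x_{k+1} = r * x_k * (1 - x_k) and return the whole orbit."""
    xs = np.empty(n)
    xs[0] = x0
    for k in range(n - 1):
        xs[k + 1] = r * xs[k] * (1.0 - xs[k])
    return xs

print(logistic_map(2.8)[-3:])   # r = 2.8: orbit settles onto a fixed point
print(logistic_map(4.0)[-3:])   # r = 4.0: orbit is aperiodic (chaotic)

# Sensitive dependence on initial conditions at r = 4:
a = logistic_map(4.0, 0.2)
b = logistic_map(4.0, 0.2 + 1e-9)
print(abs(a[-1] - b[-1]))       # O(1) separation after only 50 iterations
```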
I added the redirect from logistic map, in order to resolve the link to logistic map on Nonlinear science. Cool with collaboration :-)
I will make a simple " Experiments in chaotic logistic map" in Sage, similar to Experiments in predator-prey in Sage
The Corona Virus pandemic appears to be a relevant example of logistic growth. It grows exponentially at first but then tends to level out, as in China.
As mentioned in comment #2 above, I have a novel mathematical derivation of this logistic sigmoid which has absolutely nothing to do with the logistic equation, but instead uses stochastic principles of the competing processes of a dispersive exponential growth and a range of limiting populations in which to draw from -- this is on p.85 of our book Mathematical Geoenergy.
Just because a sigmoid-shaped curve follows a shape such as 1/(1+A exp(-t)) doesn't mean that it comes solely from the logistic equation. As noted in #2, consider that just as the logistic sigmoid also maps to the Fermi-Dirac distribution, the heuristic logistic equation derivation also appears to be just a quirky coincidence.
As an exercise amongst the mathematicians, can anyone else derive the logistic sigmoid function without relying on the logistic equation?
EDIT: This YouTube was recently posted and goes through the conventional derivation https://youtu.be/Kas0tIxDvrg
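In outline, that conventional derivation just separates variables in the logistic equation:

$$
\frac{dN}{dt} = rN\left(1-\frac{N}{K}\right)
\;\Longrightarrow\;
\int\!\frac{dN}{N(1-N/K)} = \int\! r\,dt
\;\Longrightarrow\;
\ln\frac{N}{K-N} = rt + c,
$$

which inverts to the sigmoid

$$
N(t) = \frac{K}{1 + A\,e^{-rt}}, \qquad A = \frac{K-N_0}{N_0}.
$$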
John has a Twitter thread on the virus here : https://twitter.com/WHUT/status/1238148317739089920
The reason why the original logistic equation suffers in its predictive power is that it assumes that the exponential growth coefficient includes an asymptotic limiting factor incorporating an effective population "carrying capacity" or "herd immunity", but this must be known a priori, before the process's initiation. In other words, how would the initial dynamics of an epidemic's growth know anything about the ultimate carrying capacity? It can't, and because of this conflation between growth and decline in the logistic equation's formulation, it makes no sense to apply it over the entire time interval. An alternative formulation is needed that separates the growth dynamics from the carrying capacity, and this is the context in which a more general dispersive growth model is derived.
For the virus contagion, the "flattening of the growth curve" is important as one can see in the China situation, growth initially exploded but it nowhere near reached the potential offered by China's total population. More info is needed to understand the limiting factor in the contagion.
John wrote up the SIR model in the Azimuth library and his blog on network theory. This has Recovered and Resistant in the Petri net, but it would be interesting to add the SEIR model to the page, which adds Exposure to the SIR model, as in a Covid-19 SEIR paper I read the other day; I'll post a link to it when I can find it.
Jim, Thanks. Good start, the links are very useful. This one has some charted stochastic dynamics
https://journals.aps.org/pre/abstract/10.1103/PhysRevE.78.061132
The time derivative of the logistic sigmoid is this bell-shaped curve that Obama tweeted. It's all about flattening this curve as he links to
If you're wondering whether it's an overreaction to cancel large gatherings and public events (and I love basketball), here's a useful primer as to why these measures can slow the spread of the virus and save lives. We have to look out for each other. Barack Obama (@BarackObama) March 12, 2020
This is how the stochastic derivation of the logistic sigmoid function is simulated via Monte Carlo. The averaged ensemble of the samples is multiplied by 10x to stand out.
From Chapter 7 of the book (this was a proofed version)
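A rough, self-contained version of that kind of ensemble simulation (the seed value and the rate/limit distributions below are illustrative stand-ins, not the book's exact construction):

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0.0, 25.0, 500)
n = 500                                            # number of sample paths

x0 = 0.01                                          # small common seed
r = rng.lognormal(mean=-0.7, sigma=0.4, size=n)    # dispersed growth rates
L = rng.exponential(1.0, size=n)                   # dispersed carrying limits

# Each sample grows exponentially until it saturates at its own random limit
paths = np.minimum(x0 * np.exp(np.outer(t, r)), L)

ensemble = paths.mean(axis=1)                      # averaged ensemble: an S-shaped rise
print(np.round(ensemble[::100], 3))                # monotone climb toward E[L] = 1
```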
Someone on Twitter pointed out that the growth may also be parabolic, referencing this news item:
"There was a decision today [Wednesday], and air carriers have already started implementing it, this is actually about cancellations of flights to and from Italy so far. Later, we'll see about those countries where this virus will develop in a parabolic way, so we will also make relevant decisions, recommend that air carriers halt their flights. Forty-nine out of 219 border crossing points will operate starting tomorrow [Thursday, March 12]. The rest will be closed for citizens and vehicles," he said at a briefing on Wednesday following a meeting of the Cabinet of Ministers, according to an UNIAN correspondent.
The stochastic approach to producing a logistic function curve is flexible in that the growth and volume parameters are independently adjustable. Here are some variations:
It's not difficult to add parabolic growth to the mix. But not really sure what parabolic means in the context of this chart, with the caption : "China Cases Leveled Off - Rest of World Parabolic"
So "parabolic" could be a euphemism for accelerating or concave up. China definitely has clamped down, reaching some type of herd-immunity decelerating limit, whereas the rest is still accelerating.
What's troubling about the classical logistic equation formulation with the herd immunity inflection point and asymptotic leveling off is that it requires a significant proportion of the population to kick in -- since the negative feedback is a (1 - N/population) factor where N is the number infected. British PM Boris Johnson and his chief scientific adviser got into hot water for suggesting that herd immunity, on the scale of 40 million people (60% of the population) infected, is the leveling-off mechanism which will ultimately keep it under control (inflected=infected as a mnemonic).
So is herd immunity an oxymoron in this case? It appears that avoiding a herd of people is what's keeping the pandemic under control at the moment in places like China and South Korea. The classic logistic contagion model dictates that the disease stops spreading after enough people are infected, but what happens if re-infections are possible? -- as reports from China and now Japan indicate that re-infection is occurring (also found multiple opinions such as "there is no herd immunity if re-infection is possible" a la possible mutations and potential vaccine ineffectiveness)
Take a look at comment #11 above and you can see how the stochastic dispersive model takes into account sub-populations that individually reach an asymptote and that these can be superposed to create an inflection point at much less than 60% of the total population. Isolation of these sub-populations is the key factor to model the growth in China. The bigger question is whether this can continue until a vaccine is developed. Keep our fingers crossed.
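For what it's worth, the 60% figure is just the classical herd-immunity threshold: with basic reproduction number $R_0$, growth stops once the susceptible fraction falls below $1/R_0$, so the infected-or-immune fraction must reach

$$p_{\text{herd}} = 1 - \frac{1}{R_0},$$

and $R_0 \approx 2.5$ (a commonly quoted early estimate for this virus) gives $p_{\text{herd}} \approx 0.6$. The sub-population picture above is a way of reaching a plateau without any single herd paying that price.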
From Twitter : https://twitter.com/AdamJKucharski/status/1238821515526897664
"I am deeply uncomfortable with the message that UK is actively pursuing 'herd immunity' as the main COVID-19 strategy. Our group's scenario modelling has focused on reducing two main things: peak healthcare demand and deaths..."
Consider dominos as a propagating contagion. A break in the dominos limits the propagation but occasionally one can slip through. That's the importance of strict quarantining.
https://youtu.be/066TQoTqkxQ
Subvolume and social distancing simulations in the WaPo https://www.washingtonpost.com/graphics/2020/world/corona-simulator/?itid=hp_hp-top-table-main_virus-simulator520pm:homepage/story-ans
If someone wants to know how to predict the cumulative of a logistic curve when the data is still around the inflection point but before the asymptote is reached, there's a technique one can borrow from the fossil fuel resource world called Hubbert linearization. This is a screen grab from the book that shows how the HL is graphically constructed. The value of dU/U is plotted against U (as in eq 7.8) and the x-intercept gives the asymptotic limiting value. Wikipedia explanation : https://en.wikipedia.org/wiki/Hubbert_linearization
This only works for the case of the perfect logistic, and any other (non-exponential) growth law won't linearize in the same way, as is seen in a power-law growth model.
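A minimal sketch of the construction on synthetic logistic data, truncated just past the inflection point (all parameters illustrative):

```python
import numpy as np

# Synthetic cumulative logistic U(t) = K / (1 + A exp(-r t)), sampled yearly
K, A, r = 100.0, 50.0, 0.3
t = np.arange(0, 16)                     # stop shortly after the inflection
U = K / (1.0 + A * np.exp(-r * t))

dU = np.diff(U)                          # yearly increments ("production")
y = dU / U[1:]                           # Hubbert-linearization ordinate dU/U

# For a logistic, dU/U is (nearly) linear in U; its x-intercept estimates K
slope, intercept = np.polyfit(U[1:], y, 1)
print("asymptote estimate:", -intercept / slope)   # close to K = 100
```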
ICYMI, @JacobBiamonte started a draft blog article on creation-annihilation operators in the SIR model and @NathanUrban commented on discrete event simulations, which links to EpSimS epidemiology simulation software.
Thanks, Jim. These fall into the category known as compartment models (wikipedia), which are essentially stochastic models of data flow along a directed graph. SIR models give a transient notion of patients that recover, which is also an important consideration.
Blue=Susceptible, Green=Infected, and Red=Recovered, for the directed graph shown below the chart
From our book, a short appendix on compartment models is available for free (another medical application is on pharmacokinetics, which is how one models drug delivery) : https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1002/9781119434351.app5
Compartment modeling relies extensively on the concept of convolution, which can be calculated easily via a scripted software algorithm, described in another appendix https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1002/9781119434351.app2
There's unfortunately no convolution operator built into the Excel spreadsheet, but there's a nifty array-syntax trick that allows you to calculate a convolution between two ranges very compactly. If anyone is interested, I can describe exactly how to formulate an Excel convolution.
BTW, compartment modeling is the basis for our comprehensive Oil Shock Model, which I think will also be an important model in the near future, as it will help to understand the sharp disruption in global oil production that will eventually impact the world, see our blog https://peakOilBarrel.com for up-to-date projections. The Oil Shock Model is described in Chapter 5 of the book (which is behind a firewall): https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch5
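A minimal numerical sketch of the classic SIR flow in that directed graph, using scipy's integrator (the rate constants are illustrative):

```python
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    """S -> I at rate beta*S*I; I -> R at rate gamma*I (fractions of population)."""
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

beta, gamma = 0.3, 0.1                    # basic reproduction number R0 = 3
t = np.linspace(0.0, 160.0, 400)
S, I, R = odeint(sir, [0.999, 0.001, 0.0], t, args=(beta, gamma)).T

print("peak infected fraction:", round(float(I.max()), 3))
print("never-infected fraction:", round(float(S[-1]), 3))   # the epidemic ends with S > 0
```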
Comment Source:Thanks, Jim. These fall into the category known as compartment models ([wikipedia](https://en.wikipedia.org/wiki/Compartmental_models_in_epidemiology)), which are essentially stochastic models of data flow along a directed graph. SIR models give a transient notion of patients that recover, which is also an important consideration. >  >  >Blue=Susceptible, Green=Infected, and Red=Recovered, for the directed graph shown below the chart From our book, a short appendix on compartment models is available for free (another medical application is on pharmacokinetics, which is how one models drug delivery) : https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1002/9781119434351.app5 Compartment modeling relies extensively on the concept of convolution, which can be calculated easily via a scripted software algorithm, described in another appendix https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1002/9781119434351.app2 There's unfortunately no convolution operator built into the Excel spreadsheet, but there's a nifty array-syntax trick that allows you to calculate a convolution between two ranges very compactly. If anyone is interested, I can describe exactly how to formulate an Excel convolution. BTW, compartment modeling is the basis for our comprehensive Oil Shock Model, which I think will also be an important model in the near future, as it will help to understand the sharp disruption in global oil production that will eventually impact the world, see our blog https://peakOilBarrel.com for up-to-date projections. The Oil Shock Model is described in Chapter 5 of the book (which is behind a firewall): https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch5
Here's the Zang et al. paper: Prediction of New Coronavirus Infection Based on a Modified SEIR Model (2020)
Comment Source:Here's the Zang et al. paper: [Prediction of New Coronavirus Infection Based on a Modified SEIR Model (2020) ](https://www.medrxiv.org/content/10.1101/2020.03.03.20030858v1.full.pdf)
I'd like to see the algorithm for the "nifty array-syntax trick that allows you to calculate a convolution between two ranges very compactly." Tnx for the offer. :)
Comment Source:I'd like to see the algorithm for the "nifty array-syntax trick that allows you to calculate a convolution between two ranges very compactly." Tnx for the offer. :)
Jim, Are you aware of the Excel syntax for range-based calculations? The dot product in Excel is =SUM(A1:A10 * B1:B10) for two ranges 1..10 but you need to do a Shift-Ctrl-Enter to invoke it (same as =SUMPRODUCT(A1:A10, B1:B10) but w/o the Shift-Ctrl-Enter). A convolution is a running dot product on two ranges with the range endpoints shifting along the timeline, but with one of the ranges reversed in direction. The rest follows from this, as it will depend on how your data is arranged.
edit: As convolution has been given greater awareness via NN and machine learning applications, you can find other algorithms, see https://towardsdatascience.com/convolution-a-journey-through-a-familiar-operators-deeper-roots-2e3311f23379
Comment Source:Jim, Are you aware of the Excel syntax for range-based calculations? The dot product in Excel is `=SUM(A1:A10 * B1:B10)` for two ranges 1..10 but you need to do a Shift-Ctrl-Enter to invoke it (same as `=SUMPRODUCT(A1:A10, B1:B10)` but w/o the Shift-Ctrl-Enter). A convolution is a running dot product on two ranges with the range endpoints shifting along the timeline, but with one of the ranges reversed in direction. The rest follows from this, as it will depend on how your data is arranged. edit: As convolution has been given greater awareness via NN and machine learning applications, you can find other algorithms, see https://towardsdatascience.com/convolution-a-journey-through-a-familiar-operators-deeper-roots-2e3311f23379
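For readers outside Excel, here's a minimal Python sketch of the same idea, writing the convolution explicitly as a running dot product with one range reversed, and checking it against numpy's built-in:

```python
import numpy as np

# Convolution as a running dot product: reverse one range, slide it
# along the other, and sum the elementwise products at each offset.
def running_dot_convolve(a, b):
    rev = b[::-1]                           # reverse one of the ranges
    n = len(a) + len(rev) - 1
    out = np.zeros(n)
    for k in range(n):                      # shift the endpoints along the timeline
        for i, r in enumerate(rev):
            j = k - (len(rev) - 1) + i
            if 0 <= j < len(a):
                out[k] += a[j] * r
    return out

a = np.array([1.0, 2.0, 3.0])
b = np.array([0.5, 0.5])
print(running_dot_convolve(a, b))           # [0.5 1.5 2.5 1.5]
print(np.convolve(a, b))                    # identical result
```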
This is an SEIR compartment model
I'm sharing a simulation tool I put together for studying COVID19 dynamics and generating visualizations without having to do a bunch of coding yourself. Hoping it can help with your coronavirus-related research and teaching. https://t.co/GV5ErDkfE9 (1/7)
— Alison Lynn Hill (@alison_l_hill)
Comment Source:This is an SEIR compartment model <blockquote class="twitter-tweet"><p lang="en" dir="ltr">I'm sharing a simulation tool I put together for studying COVID19 dynamics and generating visualizations without having to do a bunch of coding yourself. Hoping it can help with your coronavirus-related research and teaching. <a href="https://t.co/GV5ErDkfE9">https://t.co/GV5ErDkfE9</a> (1/7)</p>— Alison Lynn Hill (@alison_l_hill) <a href="https://twitter.com/alison_l_hill/status/1239072817678823425?ref_src=twsrc%5Etfw">March 15, 2020</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script> 
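For reference, an SEIR compartment model of the kind behind such tools reduces to four coupled rate equations; a minimal sketch with illustrative parameter values (these are assumptions, not the values used in Hill's tool):

```python
# Minimal SEIR sketch, forward-Euler integration. The rates below
# (transmission, incubation, recovery) are illustrative assumptions.
N = 1_000_000.0                      # population size
beta, sigma, gamma = 0.5, 1/5.2, 1/10
S, E, I, R = N - 1.0, 0.0, 1.0, 0.0
dt = 0.1
for step in range(int(300 / dt)):    # simulate 300 days
    new_exposed   = beta * S * I / N
    new_infective = sigma * E
    new_recovered = gamma * I
    S -= new_exposed * dt
    E += (new_exposed - new_infective) * dt
    I += (new_infective - new_recovered) * dt
    R += new_recovered * dt
print("final attack rate:", R / N)
```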
From what I am trying to understand, a feature of the early growth acceleration is that it is independent of the total population size of the country, indicating that it really depends on the initial hot spot cells and the typical human interaction within a sub-population. In other words, these are not per capita numbers. So while Switzerland & Sweden have populations of ~10 million, the UK has a population of 66 million and so I think the eventual cumulative for the UK will rise above that of Switzerland & Sweden. In other words the individual country curves will eventually diverge depending on how effectively each country can sustain social distancing + quarantining. Yet, given the huge population of China of 1400 million, it is at least heartening that they could limit this so far to 180,000 cases, which on an exclusively per capita basis would put them at the level of Switzerland or Sweden as of today.
Remember that the goal is to bend the cumulative curve so it reaches an asymptote sooner and flatten the daily curve so the cumulative doesn't rise as fast, which some media people are mixing up.
Comment Source:From what I am trying to understand, a feature of the early growth acceleration is that it is independent of the total population size of the country, indicating that it really depends on the initial hot spot cells and the typical human interaction within a sub-population. In other words, these are not *per capita* numbers. So while Switzerland & Sweden have populations of ~10 million, the UK has a population of 66 million and so *I think* the eventual cumulative for the UK will rise above that of Switzerland & Sweden. In other words the individual country curves will eventually diverge depending on how effectively each country can sustain social distancing + quarantining. Yet, given the huge population of China of 1400 million, it is at least heartening that they could limit this so far to 180,000 cases, which on an exclusively **per capita basis** would put them at the level of Switzerland or Sweden as of today.  Remember that the goal is to *bend* the cumulative curve so it reaches an asymptote sooner and *flatten* the daily curve so the cumulative doesn't rise as fast, which some [media people](https://twitter.com/julesbell27/status/1239694655094009856?s=20) are mixing up.
From the initial comment I contributed @ #2, one application of logistic-like formulations is to model oil depletion from a finite population of reserves. Now, with the awareness brought on by the coronavirus crisis, one can also see how the "flattening of the curve" will impact future oil production.
Goldman Sachs: current consumption decrease 8 mmb/d. Brent will average $20 a barrel during the second quarter
Trafigura: daily demand loss 10 mmb/d.https://t.co/6brRUTayuQ#OOTT #oilandgas #oil #WTI #CrudeOil #fintwit #OPEC
— Art Berman (@aeberman12)
This is at least a 10% reduction in production, which will flatten the current consumption curve and prolong the duration to the asymptotic limit in cumulative production (the URR shown in #15). Of course, with this kind of non-stationarity in the flow, the logistic function by itself no longer works, so we need to apply a stochastic model that provides the possibility for perturbations. This is the Oil Shock Model, derived as a directed graph analogous to a compartmental flow, described here: https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch5
I'm predicting that at some point John will comment on the success (so far) of Singapore in almost completely flattening the curve
Life in Singapore (243 cases, 0 deaths) has pretty much returned to normal.
People are walking around, mostly without masks. Shops & restaurants are open.
Big events & school activities such as tournaments were canceled, but schools remained open unless there was a case. pic.twitter.com/MIObq8HWRu
— Melissa Chen (@MsMelChen)
John responded here : https://twitter.com/johncarlosbaez/status/1240023988878725120
Comment Source:From the initial comment I contributed @ [#2](https://forum.azimuthproject.org/discussion/comment/2160/#Comment_2160), one application of logistic-like formulations is to model oil depletion from a finite population of reserves. Now, with the awareness brought on by the coronavirus crisis, one can also see how the "flattening of the curve" will impact future oil production. <blockquote class="twitter-tweet"><p lang="en" dir="ltr">Goldman Sachs: current consumption decrease 8 mmb/d. Brent will average $20 a barrel during the second quarter<br><br>Trafigura: daily demand loss 10 mmb/d.<a href="https://t.co/6brRUTayuQ">https://t.co/6brRUTayuQ</a><a href="https://twitter.com/hashtag/OOTT?src=hash&ref_src=twsrc%5Etfw">#OOTT</a> <a href="https://twitter.com/hashtag/oilandgas?src=hash&ref_src=twsrc%5Etfw">#oilandgas</a> <a href="https://twitter.com/hashtag/oil?src=hash&ref_src=twsrc%5Etfw">#oil</a> <a href="https://twitter.com/hashtag/WTI?src=hash&ref_src=twsrc%5Etfw">#WTI</a> <a href="https://twitter.com/hashtag/CrudeOil?src=hash&ref_src=twsrc%5Etfw">#CrudeOil</a> <a href="https://twitter.com/hashtag/fintwit?src=hash&ref_src=twsrc%5Etfw">#fintwit</a> <a href="https://twitter.com/hashtag/OPEC?src=hash&ref_src=twsrc%5Etfw">#OPEC</a></p>— Art Berman (@aeberman12) <a href="https://twitter.com/aeberman12/status/1239901824590757888?ref_src=twsrc%5Etfw">March 17, 2020</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script> This is at least a 10% reduction in production, which will flatten the current consumption curve and prolong the duration to the asymptotic limit in cumulative production (the URR shown in #15). Of course, with this kind of non-stationarity in the flow, the logistic function by itself no longer works, so we need to apply a stochastic model that provides the possibility for perturbations. This is the Oil Shock Model, derived as a directed graph analogous to a compartmental flow, described here: https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch5 --- I'm predicting that at some point John will comment on the success (so far) of Singapore in almost completely flattening the curve <blockquote class="twitter-tweet"><p lang="en" dir="ltr">Life in Singapore (243 cases, 0 deaths) has pretty much returned to normal. <br><br>People are walking around, mostly without masks. Shops & restaurants are open. <br><br>Big events & school activities such as tournaments were canceled, but schools remained open unless there was a case. <a href="https://t.co/MIObq8HWRu">pic.twitter.com/MIObq8HWRu</a></p>— Melissa Chen (@MsMelChen) <a href="https://twitter.com/MsMelChen/status/1239604019460558848?ref_src=twsrc%5Etfw">March 16, 2020</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script> John responded here : https://twitter.com/johncarlosbaez/status/1240023988878725120
DavidTanzer
@WebHubTel Thanks for keeping the torch burning! I gotta read up on this stuff...
Comment Source:@WebHubTel Thanks for keeping the torch burning! I gotta read up on this stuff...
A follow-on from comment #14, Laherrere calculated a Hubbert Linearization on COVID-19 numbers from several countries here:
https://aspofrance.files.wordpress.com/2020/03/hlcovid19-16mars.pdf
As with using the technique for oil, a set of numbers from the curve can linearize but if another growth acceleration occurs after it settles down, it's clear that it's not predicting an ultimate herd immunity level. It's just temporarily keeping it at bay.
And if the curve follows the "Abroad" trajectory as shown below, then the HL will flatline toward an infinite x-intercept since the inflection point has yet to be reached and there's no sign of bending in the cumulative (on a semi-log plot).
Comment Source:A follow-on from comment #14, Laherrere calculated a Hubbert Linearization on COVID-19 numbers from several countries here: https://aspofrance.files.wordpress.com/2020/03/hlcovid19-16mars.pdf As with using the technique for oil, a set of numbers from the curve can linearize but if another growth acceleration occurs after it settles down, it's clear that it's not predicting an ultimate herd immunity level. It's just temporarily keeping it at bay. And if the curve follows the "Abroad" trajectory as shown below, then the HL will flatline toward an infinite x-intercept since the inflection point has yet to be reached and there's no sign of bending in the cumulative (on a semi-log plot). 
Here's Neil Ferguson et al., Impact of non-pharmaceutical interventions (NPIs) to reduce COVID19 mortality and healthcare demand (2020). This has a description and the parameters of the Imperial College task-force model which has changed the UK gov's policy.
Comment Source:Here's Neil Ferguson et al., [Impact of non-pharmaceutical interventions (NPIs) to reduce COVID19 mortality and healthcare demand (2020)](http://bit.ly/2IYkIgO). This has a description and the parameters of the Imperial College task-force model which has changed the UK gov's policy.
Are Ferguson et al. using a convolution approach with exponentials in the compartmental model, based on this statement:
"Individual infectiousness is assumed to be variable, described by a gamma distribution with mean 1 and shape parameter ⍺=0.25. "
A multiple convolution of damped exponentials results in a gamma with each exponential narrowing the gamma. https://en.wikipedia.org/wiki/Gamma_distribution
Maybe not enough info to figure out the model? Probably more in these papers
Ferguson NM, Cummings DAT, Fraser C, Cajka JC, Cooley PC, Burke DS. Strategies for mitigating an influenza pandemic. Nature 2006;442(7101):448–52.
Halloran ME, Ferguson NM, Eubank S, et al. Modeling targeted layered containment of an influenza pandemic in the United States. Proc Natl Acad Sci U S A 2008;105(12):4639–44.
Added 3/20 :The lead author Neil Ferguson apparently was infected https://twitter.com/leahmcelrath/status/1240848615255560193
Comment Source:Are Ferguson et al. using a convolution approach with exponentials in the compartmental model, based on this statement: > "Individual infectiousness is assumed to be variable, described by a gamma distribution with mean 1 and shape parameter ⍺=0.25. " A multiple convolution of damped exponentials results in a gamma with each exponential narrowing the gamma. https://en.wikipedia.org/wiki/Gamma_distribution  Maybe not enough info to figure out the model? Probably more in these papers > Ferguson NM, Cummings DAT, Fraser C, Cajka JC, Cooley PC, Burke DS. Strategies for mitigating an influenza pandemic. Nature 2006;442(7101):448–52. > Halloran ME, Ferguson NM, Eubank S, et al. Modeling targeted layered containment of an influenza pandemic in the United States. Proc Natl Acad Sci U S A 2008;105(12):4639–44. --- Added 3/20 :The lead author Neil Ferguson apparently was infected https://twitter.com/leahmcelrath/status/1240848615255560193
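The gamma-from-exponentials claim is easy to check numerically for integer shape; a minimal sketch (the rate and the number of stages here are illustrative assumptions):

```python
import numpy as np
from scipy import stats

# Convolve k identical exponential pdfs on a grid and compare the
# result against the gamma(k) pdf; k and lam are illustrative.
lam, k, dt = 1.0, 4, 0.001
t = np.arange(0.0, 30.0, dt)
expo = lam * np.exp(-lam * t)

pdf = expo.copy()
for _ in range(k - 1):
    pdf = np.convolve(pdf, expo)[:len(t)] * dt   # discrete convolution, rescaled by dt

gamma_pdf = stats.gamma.pdf(t, a=k, scale=1.0 / lam)
print("max |difference|:", np.abs(pdf - gamma_pdf).max())   # small (discretization error)
```

Note the quoted shape parameter ⍺=0.25 is below 1, so it can't arise from simple chained exponentials (each convolution pushes the shape up by one); it presumably parameterizes individual-level variability directly.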
I find it reassuring that a sample of 2 sims: https://aspofrance.files.wordpress.com/2020/03/hlcovid19-16mars.pdf and Alison_L_Hill's https://t.co/GV5ErDkfE9 both show peaks not later than 100 days.
Comment Source:I find it reassuring that a sample of 2 sims: https://aspofrance.files.wordpress.com/2020/03/hlcovid19-16mars.pdf and Alison_L_Hill's https://t.co/GV5ErDkfE9 both show peaks not later than 100 days.
This is another paper coming out of Imperial College but not from Ferguson:
Li et al, "Substantial undocumented infection facilitates the rapid dissemination of novel coronavirus (SARS-CoV2)"
https://science.sciencemag.org/content/early/2020/03/13/science.abb3221
They use a similar Gamma for the compartmental model as Ferguson, with supplementary material here
https://science.sciencemag.org/cgi/content/full/science.abb3221/DC1
Comment Source:This is another paper coming out of Imperial College but not from Ferguson: Li et al, "Substantial undocumented infection facilitates the rapid dissemination of novel coronavirus (SARS-CoV2)" https://science.sciencemag.org/content/early/2020/03/13/science.abb3221 They use a similar Gamma for the compartmental model as Ferguson, with supplementary material here https://science.sciencemag.org/cgi/content/full/science.abb3221/DC1
The compartmental models of oil production & contagion growth show intuitive parallels
Discovery of an oil reservoir is analogous to the start of infection -- the leading indicator.
Extraction from that reservoir to depletion is analogous to death -- the lagging indicator.
Keeping oil in the ground is the equivalent of recovering from the infection.
Comment Source:The compartmental models of oil production & contagion growth show intuitive parallels Discovery of an oil reservoir is analogous to the start of infection -- the leading indicator. Extraction from that reservoir to depletion is analogous to death -- the lagging indicator. Keeping oil in the ground is the equivalent of recovering from the infection.
Links: Halloran ME et al., Modeling targeted layered containment of an influenza pandemic in the United States Ferguson NM, Strategies for mitigating an influenza pandemic (2006)
Comment Source:Links: Halloran ME et al., [Modeling targeted layered containment of an influenza pandemic in the United States](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2290797/) Ferguson NM, [Strategies for mitigating an influenza pandemic (2006)](https://www.researchgate.net/publication/7139153_Strategies_for_mitigating_an_Influenza_pandemic)
https://www.gov.uk/government/groups/scientific-advisory-group-for-emergencies-sage-coronavirus-covid-19-response
From that site: https://www.medrxiv.org/content/10.1101/2020.01.31.20019901v2.full.pdf+html
Referencing "Capturing the time-varying drivers of an epidemic using stochastic dynamical systems." https://academic.oup.com/biostatistics/article/14/3/541/259859
Looking as if the dispersive approach in comment #11 is describing the current situation
As these dispersive waves that hit a (perhaps temporary) herd immunity/quarantine ceiling aggregate, they create an envelope that continues to grow. So each of the subvolume curves shown in the figure above may represent a country, with a large country such as the USA with an initially slow growth adding a lagged response. The Maximum Entropy dispersion formulation of the logistic function sigmoid is simply a mechanism to provide variability to the mix, which thus emulates a global spread of growth.
Comment Source:https://www.gov.uk/government/groups/scientific-advisory-group-for-emergencies-sage-coronavirus-covid-19-response From that site: https://www.medrxiv.org/content/10.1101/2020.01.31.20019901v2.full.pdf+html Referencing "Capturing the time-varying drivers of an epidemic using stochastic dynamical systems." https://academic.oup.com/biostatistics/article/14/3/541/259859 Looking as if the dispersive approach in [comment #11](https://forum.azimuthproject.org/discussion/comment/21938/#Comment_21938) is describing the current situation  As these dispersive waves that hit a (perhaps temporary) herd immunity/quarantine ceiling aggregate, they create an envelope that continues to grow. So each of the subvolume curves shown in the figure above may represent a country, with a large country such as the USA with an initially slow growth adding a lagged response. The Maximum Entropy dispersion formulation of the logistic function sigmoid is simply a mechanism to provide variability to the mix, which thus emulates a global spread of growth.
Here is a stochastic analogy to epidemic growth -- the rate at which popcorn pops follows a logistic sigmoid similar to an ensemble of contagions. This is the familiar slow initial popping of a few kernels leading up to a maximum popping rate followed by a decline of popping as the population of kernels impacted saturates.
Click on the PDF on the link below and go to section C.4:
https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.app3
https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1002/9781119434351.app3
The rationale for including this example in our book is that it describes the Hubbert logistic curve in a setting not directly related to resource extraction, yet provides a real world analogy that can be easily set up as a controlled lab experiment.
It also doesn't hold as a perfect analogy to contagion, as the accelerated growth is controlled as an Arrhenius rate activated by temperature instead of as a multiplicative contagion, i.e. an individual kernel does not pop because its neighbor pops but because of the temperature of the medium. In the figure below, the fraction unpopped is simply the complement of the fraction popped to convert to the familiar S-curve.
The usual observation in virus epidemic growth is that the contagiousness decreases with increased temperature instead of what might be expected as an increase if it was a thermally activated complex obeying the laws of statistical mechanics. This behavior apparently is not completely understood but there is some thought it might be related to the increased intensity of UV light during the summer months killing any airborne virus : https://www.webmd.com/cold-and-flu/news/20180212/can-uv-light-be-used-to-kill-airborne-flu-virus-#1 or that buildings have more air circulation and people tend to congregate less indoors during the summer. Considering that humans have a thermally stabilized environment controlled by their regulated body temperature, it would be hard to make sense of a thermally activated mechanism once the virus enters the body.
This is a recent paper on COVID-19 based on available geographic/climate correlation High Temperature and High Humidity Reduce the Transmission of COVID-19
PS: The popcorn experiment is interesting in that the controlled conditions were set up as a single isolated popcorn kernel was monitored as it was heated up, not by monitoring an aggregation of kernels. The statistical distribution resembling a logistic sigmoid was only found by compiling the results of thousands of individual kernel measurements. So this is essentially characterizing the stochastic uncertainty of a single kernel.
Comment Source:Here is a stochastic analogy to epidemic growth -- the rate at which popcorn pops follows a logistic sigmoid similar to an ensemble of contagions. This is the familiar slow initial popping of a few kernels leading up to a maximum popping rate followed by a decline of popping as the population of kernels impacted saturates. Click on the PDF on the link below and go to section C.4: https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.app3 https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1002/9781119434351.app3 The rationale for including this example in our book is that it describes the Hubbert logistic curve in a setting not directly related to resource extraction, yet provides a real world analogy that can be easily set up as a controlled lab experiment. It also doesn't hold as a perfect analogy to contagion, as the accelerated growth is controlled as an Arrhenius rate activated by temperature instead of as a multiplicative contagion, i.e. an individual kernel does not pop because its neighbor pops but because of the temperature of the medium. In the figure below, the fraction unpopped is simply the complement of the fraction popped to convert to the familiar S-curve.  The usual observation in virus epidemic growth is that the contagiousness *decreases* with increased temperature instead of what might be expected as an increase if it was a thermally activated complex obeying the laws of statistical mechanics. This behavior apparently is not completely understood but there is some thought it might be related to the increased intensity of UV light during the summer months killing any airborne virus : https://www.webmd.com/cold-and-flu/news/20180212/can-uv-light-be-used-to-kill-airborne-flu-virus-#1 or that buildings have more air circulation and people tend to congregate less indoors during the summer. Considering that humans have a thermally stabilized environment controlled by their regulated body temperature, it would be hard to make sense of a thermally activated mechanism once the virus enters the body. This is a recent paper on COVID-19 based on available geographic/climate correlation [High Temperature and High Humidity Reduce the Transmission of COVID-19](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3551767) PS: The popcorn experiment is interesting in that the controlled conditions were set up as a *single* isolated popcorn kernel was monitored as it was heated up, not by monitoring an aggregation of kernels. The statistical distribution resembling a logistic sigmoid was only found by compiling the results of thousands of individual kernel measurements. So this is essentially characterizing the stochastic uncertainty of a single kernel.
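A Monte Carlo sketch of the point in the PS: give each kernel its own randomly drawn popping time and the ensemble traces out an S-curve. The log-normal spread used here is an illustrative assumption, not the book's fitted distribution:

```python
import numpy as np

# Each kernel pops at a random time reflecting kernel-to-kernel
# variability; the ensemble fraction popped vs time forms a sigmoid.
rng = np.random.default_rng(0)
n_kernels = 100_000
pop_times = rng.lognormal(mean=np.log(120.0), sigma=0.25, size=n_kernels)

t = np.arange(0, 301)                      # seconds
popped = np.searchsorted(np.sort(pop_times), t) / n_kernels
for ti in (60, 120, 180, 240):
    print(f"t = {ti:3d} s: fraction popped = {popped[ti]:.3f}")
```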
@WebHubTel wrote:
Not sure what you mean by the heuristic logistic equation derivation, and by it being a coincidence.
Also not sure how this relates to your point, but I see that in the Petri net analysis for SI compartmental, the logistic equation is a result not a premise of analyzing the rate equation for a highly simplistic stochastic process model that is motivated by empirical considerations.
Comment Source:@WebHubTel wrote: > Just because a sigmoid-shaped curve follows a shape such as 1/(1+A exp(-t)) doesn't mean that it comes solely from the logistic equation. As noted in #2, consider that just as the logistic sigmoid also maps to the [Fermi-Dirac distribution](https://en.wikipedia.org/wiki/Fermi%E2%80%93Dirac_statistics#Fermi%E2%80%93Dirac_distribution), the heuristic logistic equation derivation also appears to be just a quirky coincidence. Not sure what you mean by the heuristic logistic equation derivation, and by it being a coincidence. Also not sure how this relates to your point, but I see that in the Petri net analysis for SI compartmental, the logistic equation is a result not a premise of analyzing the rate equation for a highly simplistic stochastic process model that is motivated by empirical considerations.
"Note that In the Petri net model for SI, the logistic equation is a result of the analysis of the rate equation not a premise."
The classic logistic equation is not strictly a stochastic derivation, and at best assumes a mean value for the measure of interest, with no uncertainty in the outcome. In any realistic situation there would be a spread in rates and constraints and so that's what my derivation calculates.
In the comment before yours at #32, I described an experiment and model for a process that is purely stochastic, the popping of a popcorn kernel. As an exercise, see if you can describe the behavior of the amount popped as a function of time just by assuming a mean value of one kernel popping. I originally tried to fit a mean-value model and it didn't come close to Figure C.8 above. That's because the variability in popcorn kernel characteristics was large enough to skew the expected temporal behavior away from that assuming a single mean value.
I just looked up the stochastic logistic equation and a recent paper on that is here: https://www.sciencedirect.com/science/article/pii/S0893965913000050 This uses an Ito calculus formulation which is a noise perturbation on the mean value approach.
I can elaborate more about the "quirky coincidence" and "heuristic" aspects with another example that I will place in another comment.
Comment Source:> "Note that In the Petri net model for SI, the logistic equation is a result of the analysis of the rate equation not a premise." The classic logistic equation is not strictly a stochastic derivation, and at best assumes a mean value for the measure of interest, with no uncertainty in the outcome. In any realistic situation there would be a spread in rates and constraints and so that's what my derivation calculates. In the comment before yours at #32, I described an experiment and model for a process that is purely stochastic, the popping of a popcorn kernel. As an exercise, see if you can describe the behavior of the amount popped as a function of time just by assuming a mean value of one kernel popping. I originally tried to fit a mean-value model and it didn't come close to Figure C.8 above. That's because the variability in popcorn kernel characteristics was large enough to skew the expected temporal behavior away from that assuming a single mean value. I just looked up the stochastic logistic equation and a recent paper on that is here: https://www.sciencedirect.com/science/article/pii/S0893965913000050 This uses an Ito calculus formulation which is a noise perturbation on the mean value approach. I can elaborate more about the "quirky coincidence" and "heuristic" aspects with another example that I will place in another comment.
There is a purely stochastic model - a stochastic Petri net - for the SI process, which leads to the logistic equation as a result. See for example page 24 from:
John C. Baez and Jacob Biamonte, Quantum Techniques for Stochastic Mechanics, arXiv:1209.3632 [quant-ph]. Text includes treatment from the ground up of Petri nets, both stochastic and deterministic. Clear introduction to SI, SIR and SIRS models.
Comment Source:There is a purely stochastic model - a stochastic Petri net - for the SI process, which leads to the logistic equation as a result. See for example page 24 from: * John C. Baez and Jacob Biamonte, [Quantum Techniques for Stochastic Mechanics](https://arxiv.org/abs/1209.3632), arXiv:1209.3632 [quant-ph]. Text includes treatment from the ground up of Petri nets, both stochastic and deterministic. Clear introduction to SI, SIR and SIRS models.
There, it's as simple as it gets: infection is modeled as a stochastic process in which one susceptible plus one infected person are transformed into two infected people. In the large number limit, using the regular "mass action" kinetics, the rate equation for this process turns out to be the logistic equation.
Comment Source:There, it's as simple as it gets: infection is modeled as a stochastic process in which one susceptible plus one infected person are transformed into two infected people. In the large number limit, using the regular "mass action" kinetics, the rate equation for this process turns out to be the logistic equation.
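For reference, a one-line version of that result: with mass-action kinetics and S = N − I, the SI rate equation is already the logistic equation, with the familiar sigmoid as its solution:

\[ \frac{dI}{dt} = \beta \, \frac{S I}{N} = \beta I \left(1 - \frac{I}{N}\right), \qquad I(t) = \frac{N}{1 + A e^{-\beta t}}, \quad A = \frac{N - I_0}{I_0}. \]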
David, Yes that's the correct way to derive the logistic sigmoid function from the logistic equation, but it's not really considered a stochastic model. It's generally classified as a mean-value model or as a classic deterministic SI model.
I gave a ref to a recent paper on the stochastic logistic equation in comment #34, and the equation for this model is available from the Wolfram math site: https://reference.wolfram.com/language/example/StochasticLogisticGrowthModel.html
And the following paper provides an even better description of the distinction between the two:
Liu&Wang (2011), "Asymptotic properties and simulations of a stochastic logistic model under regime switching"
"On the other hand, in the real world, population system is inevitably affected by the environmental noise which is an important component in an ecosystem (see e.g. [7], [8], [9], [10]). The deterministic systems assume that parameters in the models are all deterministic irrespective of environmental fluctuations. Hence, they have some limitations in mathematical modeling of ecological systems, besides they are quite difficult to fitting data perfectly and to predict the future dynamics of the system accurately [11]. May [1] pointed out the fact that due to environmental noise, the birth rate, carrying capacity, competition coefficient and other parameters involved in the system exhibit random fluctuation to a greater or lesser extent."
So the logistic sigmoid function is a result of solving the classic logistic equation. OTOH, solving the stochastic logistic equation will give something that may look like the logistic sigmoid function but obviously can't match it exactly.
What I did with the dispersive approach described in comment #11 is to apply a spread in the growth parameters and self-limiting factors that does generate precisely a logistic sigmoid function. This can be tested to work by drawing from a population with a specific distribution via a Monte Carlo simulation, and verifying that the statistical aggregate approaches the logistic sigmoid function as shown in comment #31. This may be a better representation of an actual evolving epidemic since it considers the variation implicit over a set of sub-populations. I'm not suggesting that it is in any way equivalent to the stochastic compartmental simulations that e.g. Ferguson et al are doing to model the COVID-19 epidemic, but it's the approach I am using for my resource depletion models and so I thought I would introduce them into the discussion.
This is a good discussion because I think it illuminates the distinction between the simple models used for logistic growth and the more elaborate considerations that must be occurring in the compartmental models of Ferguson et al. There are plenty of references to stochastic simulations in their articles, and so this gives an idea of what they might mean by that. If you have a different interpretation, that would also be good to know.
Comment Source:David, Yes that's the correct way to derive the logistic sigmoid function from the logistic equation, but it's not really considered a stochastic model. It's generally classified as a mean-value model or as a [classic *deterministic* SI model](http://idmod.org/docs/general/model-si.html). I gave a ref to a recent paper on the *stochastic logistic equation* in comment #34, and the equation for this model is available from the Wolfram math site: https://reference.wolfram.com/language/example/StochasticLogisticGrowthModel.html  And the following paper provides an even better description of the distinction between the two: > [Liu&Wang (2011), "Asymptotic properties and simulations of a stochastic logistic model under regime switching"](https://www.sciencedirect.com/science/article/pii/S0895717711002937) > "On the other hand, in the real world, population system is inevitably affected by the environmental noise which is an important component in an ecosystem (see e.g. [7], [8], [9], [10]). The deterministic systems assume that parameters in the models are all deterministic irrespective of environmental fluctuations. Hence, they have some limitations in mathematical modeling of ecological systems, besides they are quite difficult to fitting data perfectly and to predict the future dynamics of the system accurately [11]. May [1] pointed out the fact that due to environmental noise, the birth rate, carrying capacity, competition coefficient and other parameters involved in the system exhibit random fluctuation to a greater or lesser extent." So the logistic sigmoid function is a result of solving the classic logistic equation. OTOH, solving the stochastic logistic equation will give something that may look like the logistic sigmoid function but obviously can't match it exactly. What I did with the dispersive approach described in comment #11 is to apply a spread in the growth parameters and self-limiting factors that does generate precisely a logistic sigmoid function. This can be tested to work by drawing from a population with a specific distribution via a Monte Carlo simulation, and verifying that the statistical aggregate approaches the logistic sigmoid function as shown in comment #31. This may be a better representation of an actual evolving epidemic since it considers the variation implicit over a set of sub-populations. I'm not suggesting that it is in any way equivalent to the stochastic compartmental simulations that e.g. Ferguson et al are doing to model the COVID-19 epidemic, but it's the approach I am using for my resource depletion models and so I thought I would introduce them into the discussion. This is a good discussion because I think it illuminates the distinction between the simple models used for logistic growth and the more elaborate considerations that must be occurring in the compartmental models of Ferguson et al. There are plenty of references to stochastic simulations in their articles, and so this gives an idea of what they might mean by that. If you have a different interpretation, that would also be good to know.
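The Monte Carlo check described above can be sketched in a few lines; one concrete construction that lands exactly on the logistic sigmoid is to draw each sub-population's saturation time from a logistic distribution (this distributional choice is an assumption standing in for the Maximum Entropy dispersion argument, which isn't reproduced here):

```python
import numpy as np

# Draw a saturation time per sub-population, aggregate the step
# responses, and compare the ensemble with the logistic sigmoid.
rng = np.random.default_rng(1)
t0, s, n = 10.0, 1.5, 200_000
sat_times = np.sort(rng.logistic(loc=t0, scale=s, size=n))

t = np.linspace(0.0, 20.0, 201)
ensemble = np.searchsorted(sat_times, t) / n      # fraction saturated
sigmoid = 1.0 / (1.0 + np.exp(-(t - t0) / s))
print("max |ensemble - sigmoid|:", np.abs(ensemble - sigmoid).max())  # ~ sampling noise
```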
Yes that's the correct way to derive the logistic sigmoid function from the logistic equation, but it's not really considered a stochastic model. It's generally classified as a mean-value model or as a classic deterministic SI model.
True the page I referred you to talks about the deterministic interpretation of the Petri net SI model. Yet this occurs in the broader context of the stochastic interpretation of Petri nets - which is very much a discrete popcorn-like process - and that is what I meant to be talking about.
When simulated you will get variation, which will be especially pronounced when run on smaller populations like neighborhoods.
For reference, in a separate thread intended for the general Azimuth Forum community, I will summarize some of the key ideas of stochastic Petri nets.
Comment Source:@WebHubTel wrote: > Yes that's the correct way to derive the logistic sigmoid function from the logistic equation, but it's not really considered a stochastic model. It's generally classified as a mean-value model or as a [classic *deterministic* SI model](http://idmod.org/docs/general/model-si.html). True the page I referred you to talks about the deterministic interpretation of the Petri net SI model. Yet this occurs in the broader context of the stochastic interpretation of Petri nets - which is very much a discrete popcorn-like process - and that is what I _meant_ to be talking about. When simulated you will get variation, which will be especially pronounced when run on smaller populations like neighborhoods. For reference, in a separate thread intended for the general Azimuth Forum community, I will summarize some of the key ideas of stochastic Petri nets.
The Lotka-Volterra equation is closely related to the logistic equation via a growth term, with a feedback term set so that a predatory species can further accelerate the prey toward a limiting value. But since the disappearance of prey will also cause the disappearance of the predator, a cyclic pattern can develop based on this coupled feedback. It's possible that this is a real mechanism in actual ecological predator/prey relationships but it's difficult to verify. Any stochastic perturbation will likely knock the cycle off its current period.
One of the famous predator/prey behaviors in the Arctic latitudes is the Lemming/Arctic Fox cycle (or Snowy Owl). Over a long time interval this cycle has been estimated to have a period of 3.8 years. A wildlife ecologist working on the topic for 40+ years finally seems to have pattern-matched to a plausible model -- published last year :
Archibald, H. L. Relating the 4-year lemming ( Lemmus spp. and Dicrostonyx spp.) population cycle to a 3.8-year lunar cycle and ENSO. Can. J. Zool. 97, 1054–1063 (2019).
What he noted is that the lemming cycle happens to match a spring tide cycle, implying more of a climate related mechanism controlling the population. After coming across this paper I noted that the cycle appeared suspiciously close to the tidal forcing that I am using in the ENSO model. The vertical dotted lines indicate the alignment (the inset is the cross-correlation between ENSO and PDO).
Why it follows the more predictable tidal forcing rather than the more erratic ENSO response, I don't have an answer. But this does look more plausible than a predator-prey cycle. The environment is so harsh in the Arctic that climate factors likely control the health of the lemming population, and the predators then follow that cycle as well since that is their food supply. This may be a classic common-mode mechanism instead of a mutual resonance set by the eigenvalue or chaotic attractor of a differential equation.
Comment Source:The [Lotka-Volterra equation](https://forum.azimuthproject.org/discussion/967/lotka-volterra-equation) is closely related to the logistic equation via a growth term, with a feedback term set so that a predatory species can further accelerate the prey toward a limiting value. But since the disappearance of prey will also cause the disappearance of the predator, a cyclic pattern can develop based on this coupled feedback. It's possible that this is a real mechanism in actual ecological predator/prey relationships but it's difficult to verify. Any stochastic perturbation will likely knock the cycle off its current period. One of the famous predator/prey behaviors in the Arctic latitudes is the Lemming/Arctic Fox cycle (or Snowy Owl). Over a long time interval this cycle has been estimated to have a period of 3.8 years. A wildlife ecologist working on the topic for **40+ years** finally seems to have pattern-matched to a plausible model -- published last year : > Archibald, H. L. [Relating the 4-year lemming ( Lemmus spp. and Dicrostonyx spp.) population cycle to a 3.8-year lunar cycle and ENSO](https://tspace.library.utoronto.ca/bitstream/1807/97104/1/cjz-2018-0266.pdf). Can. J. Zool. 97, 1054–1063 (2019). What he noted is that the lemming cycle happens to match a spring tide cycle, implying more of a climate related mechanism controlling the population. After coming across this paper I noted that the cycle appeared suspiciously close to the tidal forcing that I am using in the [ENSO model](https://forum.azimuthproject.org/discussion/comment/21894/#Comment_21894). The vertical dotted lines indicate the alignment (the inset is the cross-correlation between ENSO and PDO).  Why it follows the more predictable tidal forcing rather than the more erratic ENSO response, I don't have an answer. But this does look more plausible than a predator-prey cycle. The environment is so harsh in the Arctic that climate factors likely control the health of the lemming population, and the predators then follow that cycle as well since that is their food supply. This may be a classic common-mode mechanism instead of a mutual resonance set by the eigenvalue or chaotic attractor of a differential equation.
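For concreteness, here is the coupled feedback described above as a minimal integration sketch (all parameter values are illustrative assumptions; forward Euler is fine for a sketch but drifts slowly on long runs):

```python
# Minimal Lotka-Volterra sketch: prey x grows, predator y feeds on it,
# and the coupled feedback produces the cyclic pattern.
a, b = 1.0, 0.1      # prey growth rate, predation rate
c, d = 1.5, 0.075    # predator death rate, conversion efficiency
x, y, dt = 10.0, 5.0, 0.001
for step in range(int(50.0 / dt) + 1):
    if step % int(5.0 / dt) == 0:
        print(f"t = {step*dt:5.1f}  prey = {x:8.2f}  predator = {y:8.2f}")
    dx = a * x - b * x * y
    dy = d * x * y - c * y
    x, y = x + dx * dt, y + dy * dt
```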
"True the page I referred you to talks about the deterministic interpretation of the Petri net SI model. Yet this occurs in the broader context of the stochastic interpretation of Petri nets - which is very much a discrete popcorn-like process - and that is what I meant to be talking about."
I generally agree with your argument as that is basically what is involved when developing a state diagram that represents probability flow -- for example when Markov modeling a system for stochastic reliability analysis, c.f. top citations https://scholar.google.com/scholar?q=Markov+modeling+for+reliability+analysis
The mean value flow in this case is describing the probability of a fault-tolerant system existing in a particular state, with the Petri net providing a more concise representation than the expanded state diagram, due to the extra logic in the bar symbols. See this paper for how to create rewrite rules for transforming between Petri net and pure Markov state diagram representations:
Pukite, P. (1995). Intelligent reliability analysis tool for fault-tolerant system design. In 10th Computing in Aerospace Conference https://www.researchgate.net/publication/269227210_Intelligent_reliability_analysis_tool_for_fault-tolerant_system_design/figures
So the point is that there is a distinction between how one understands the system under study versus the preferred vocabulary used by the practitioners. I certainly wouldn't have a problem calling these stochastic models, but that isn't the standard practice in epidemiology, where "stochastic" usually refers to a larger spread in the model's parameterization via noise or variability.
I am not sure where this started but since you mentioned the mass-action law of chemistry, consider that a typical reaction is so well mixed and uniform that the fluctuations are not considered that important compared to the mean value of the reagent constituents. Consider also solid-state electronics, where the law of mass action for electrons and holes is \(n p = n_i ^2\). For modeling semiconductors, the stochastic variability is not typically required unless one is interested in modeling shot noise or other carrier fluctuations. Another situation where an extra level of stochastic variability would be applied is in the analysis of amorphous materials, where for example the photovoltaic characteristics show fat tails indicating that the material has a significant spread in electrical properties. I have written about this here: https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch18
This discussion is in the weeds but important when placed in context. For a reliability analyst, the stochastic part is everything but for a chemist or semiconductor designer, it's a secondary aspect of their models. For epidemiology, both the mean value determinism and stochastic fluctuations are apparently important.
Comment Source:David said: > "True the page I referred you to talks about the deterministic interpretation of the Petri net SI model. Yet this occurs in the broader context of the stochastic interpretation of Petri nets - which is very much a discrete popcorn-like process - and that is what I meant to be talking about." I generally agree with your argument as that is basically what is involved when developing a state diagram that represents probability flow -- for example when Markov modeling a system for stochastic reliability analysis, c.f. top citations https://scholar.google.com/scholar?q=Markov+modeling+for+reliability+analysis  The mean value flow in this case is describing the probability of a fault-tolerant system existing in a particular state, with the Petri net providing a more concise representation than the expanded state diagram, due to the extra logic in the bar symbols. See this paper for how to create rewrite rules for transforming between Petri net and pure Markov state diagram representations: > Pukite, P. (1995). Intelligent reliability analysis tool for fault-tolerant system design. In 10th Computing in Aerospace Conference https://www.researchgate.net/publication/269227210_Intelligent_reliability_analysis_tool_for_fault-tolerant_system_design/figures So the point is that there is a distinction between how one understands the system under study versus the preferred vocabulary used by the practitioners. I certainly wouldn't have a problem calling these stochastic models, but that isn't the standard practice in epidemiology, where "stochastic" usually refers to a larger spread in the model's parameterization via noise or variability. I am not sure where this started but since you mentioned the mass-action law of chemistry, consider that a typical reaction is so well mixed and uniform that the fluctuations are not considered that important compared to the mean value of the reagent constituents. Consider also solid-state electronics, where the law of mass action for electrons and holes is \\(n p = n_i ^2\\). For modeling semiconductors, the stochastic variability is not typically required unless one is interested in modeling shot noise or other carrier fluctuations. Another situation where an extra level of stochastic variability would be applied is in the analysis of amorphous materials, where for example the photovoltaic characteristics show fat tails indicating that the material has a significant spread in electrical properties. I have written about this here: https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch18 This discussion is in the weeds but important when placed in context. For a reliability analyst, the stochastic part is *everything* but for a chemist or semiconductor designer, it's a secondary aspect of their models. For epidemiology, both the mean value determinism and stochastic fluctuations are apparently important.
Thanks for putting all this into context!
Comment Source:Thanks for putting all this into context!
Here's a Medium.com primer on how to do a Hubbert Linearization of the Logistic as first noted in comment #14
https://medium.com/@puk_54065/how-to-linearize-the-logistic-d8143bfe33be
Comment Source:Here's a Medium.com primer on how to do a Hubbert Linearization of the Logistic as first noted in comment #14 https://medium.com/@puk_54065/how-to-linearize-the-logistic-d8143bfe33be
The power of "stochastic thinking"
Why was the wearing of face masks by doctors in the USA not encouraged? Every doctor interviewed by the media said they were not that effective. How can they not be effective? Even if they reduced transmission of droplets by 30% that would be effective in reducing overall R0, and therefore growth in the logistic function. So why were they not recommended? Could it be the doctors realized people would start hoarding surgical masks in an emergency, thus reducing the amount available at hospitals?
Why not tell people how to make their own mask? Even if it was only 20% effective instead of 30% effective it would be good. Maybe this isn't done because western people don't understand stochastic thinking, but perhaps the people of the far east do?
From 2009 : https://gigazine.net/gsc_news/en/20090521_papermask/
How to make an anti-infection mask easily from paper towel
Swine flu is spreading all over the world and the number of patients is growing every day.
Maybe it's not familiar to those who are out of Japan, but in Japan, when you catch a cold, it's very common to wear a mask covering mouth and nose. The flu mainly passes on by droplet transmission caused by coughs and sneezes. So the mask may not protect you from the virus but will help prevent further infection from you to others.
Here's the step-by-step instruction of how to make a transmission preventing mask from paper towel, written by a doctor of Katakai Hospital in Niigata pref.
(JP)Making masks from Paper Towel
What you need is a sheet of paper towel, 2 rubber bands, and a stapler.
1. Fold a paper towel in two. You may want to pile up 2 or 3 towels in one.
2. Fold to the center.
3. Fold the shown part in half.
4. Flip it back and fold to the center.
5. Fold at the center to make an accordion-like shape.
6. Place rubber bands on both sides and fix them with the stapler. Complete.
7. Change the position of the rubber bands to adjust the size.
We should warn you that it will not be a perfect defense against viruses. But if you need a mask in a hurry, this tip will do.
Comment Source:The power of "stochastic thinking" Why was the wearing of face masks by doctors in the USA not encouraged? Every doctor interviewed by the media said they were not that effective. How can they not be effective? Even if they reduced transmission of droplets by 30% that would be effective in reducing overall R0, and therefore growth in the [logistic function](https://forum.azimuthproject.org/discussion/377/logistic-equation#latest). So why were they not recommended? Could it be the doctors realized people would start hoarding surgical masks in an emergency, thus reducing the amount available at hospitals? Why not tell people how to make their own mask? Even if it was only 20% effective instead of 30% effective it would be good. Maybe this isn't done because western people don't understand stochastic thinking, but perhaps the people of the far east do? --- From 2009 : https://gigazine.net/gsc_news/en/20090521_papermask/ **How to make an anti-infection mask easily from paper towel** [Swine flu](http://en.wikipedia.org/wiki/Swine_influenza) is spreading all over the world and the number of patients is growing every day. Maybe it's not familiar to those who are out of Japan, but in Japan, when you catch a cold, it's very common to wear a mask covering mouth and nose. The flu mainly passes on by droplet transmission caused by coughs and sneezes. So the mask may not protect you from the virus but will help prevent further infection from you to others. Here's the step-by-step instruction of how to make a transmission preventing mask from paper towel, written by a doctor of [Katakai Hospital](http://comet.endless.ne.jp/users/katacli/index.html) in Niigata pref. (JP) [Making masks from Paper Towel](http://comet.endless.ne.jp/users/katacli/mask/papermask.html) What you need is a sheet of paper towel, 2 rubber bands, and a stapler. 1. Fold a paper towel in two; you may want to pile up 2 or 3 towels in one. 2. Fold to the center. 3. Fold the shown part in half. 4. Flip it back and fold to the center. 5. Fold at the center to make an accordion-like shape. 6. Place rubber bands on both sides and fix them with the stapler. Complete. 7. Change the position of the rubber bands to adjust the size. We should warn you that it will not be a perfect defense against viruses. But if you need a mask in a hurry, this tip will do.
What does it imply if this Oxford-based study of the UK indicates that half the population has already been infected with COVID-19?
https://amp.ft.com/content/5ff6469a-6dd8-11ea-89df-41bea055720b
https://www.dropbox.com/s/oxmu2rwsnhi9j9c/Draft-COVID-19-Model (13).pdf?dl=0
The new coronavirus may already have infected far more people in the UK than scientists had previously estimated — perhaps as much as half the population — according to modelling by researchers at the University of Oxford.
If the results are confirmed, they imply that fewer than one in a thousand of those infected with Covid-19 become ill enough to need hospital treatment, said Sunetra Gupta, professor of theoretical epidemiology, who led the study. The vast majority develop very mild symptoms or none at all.
If 1/2 are infected then herd immunity is already reached and they only have to wait for the progression of the illness to play itself out. The document in the dropbox link shows concise details of the parameters of the stochastic compartmental model they are using
The study mentioned above has issues. It may take a while to shake out, so look at the following Twitter thread and follow up to see what the eventual interpretation settles down to
https://twitter.com/WHUT/status/1242517559762735104
In the UK, it's a battle between the Imperial College model and the Oxford University model. Essentially, the Imperial model says that we are at the initial acceleration of the logistic function with a high potential lethality, while the Oxford model says that we are in the last few doublings of the logistic function as it nears the 50% inflection point, and we are only seeing a critical care increase because the overall lethality is very low. So criticality is \( N \times p \) -- Imperial says N is still low while p may be large; Oxford says it is just the reverse.
Comment Source:What does it imply if this Oxford-based study of the UK indicates that half the population has already been infected with COVID-19? > https://amp.ft.com/content/5ff6469a-6dd8-11ea-89df-41bea055720b > https://www.dropbox.com/s/oxmu2rwsnhi9j9c/Draft-COVID-19-Model%20%2813%29.pdf?dl=0 > The new coronavirus may already have infected far more people in the UK than scientists had previously estimated — perhaps as much as half the population — according to modelling by researchers at the University of Oxford. > If the results are confirmed, they imply that fewer than one in a thousand of those infected with Covid-19 become ill enough to need hospital treatment, said Sunetra Gupta, professor of theoretical epidemiology, who led the study. The vast majority develop very mild symptoms or none at all. If 1/2 are infected then herd immunity is already reached and they only have to wait for the progression of the illness to play itself out. The document in the dropbox link shows concise details of the parameters of the stochastic compartmental model they are using  --- The study mentioned above has issues. It may take a while to shake out, so look at the following Twitter thread and follow up to see what the eventual interpretation settles down to https://twitter.com/WHUT/status/1242517559762735104 In the UK, it's a battle between the Imperial College model and the Oxford University model. Essentially, the Imperial model says that we are at the initial acceleration of the logistic function with a high potential lethality, while the Oxford model says that we are in the last few doublings of the logistic function as it nears the 50% inflection point, and we are only seeing a critical care increase because the overall lethality is very low. So criticality is \\( N \times p \\) -- Imperial says *N* is still low while *p* may be large; Oxford says it is just the reverse.
Following up comment #2 above, even though the idea of compartmental modeling is well-known in epidemiology, here's evidence of how little it is applied to fossil fuel depletion.
Google Scholar citations for "compartmental models" & "oil depletion"
https://scholar.google.com/scholar?q="compartmental+model"+"oil+depletion"
https://scholar.google.com/scholar?q="compartmental+model"+"peak+oil"
https://scholar.google.com/scholar?q="compartmental+model"+"oil+reserves"
The only hit shown refers to our work, and by expanding the keyword search, Google returns this:
Herrero, C., García-Olivares, A. and Pelegrí, J.L., 2014. Impact of anthropogenic CO2 on the next glacial cycle. Climatic change, 122(1-2), pp.283-298.
https://www.researchgate.net/profile/Carmen_Herrero5/publication/259148288_Impact_of_anthropogenic_CO2_on_the_next_glacial_cycle/links/02e7e52a5d76fc196e000000.pdf
The way that Herrero et al. apply a compartmental model is to model the compartments as oil, atmospheric CO2 from combustion of the oil, and sequestering of that CO2 in the ocean. We also model this compartmental flow in Chap 9 (https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch9).
So it's an interesting intersection of a model used both for Green Math (epidemiology) and for earth/climate sciences.
This is data from https://data.humdata.org/dataset/novel-coronavirus-2019-ncov-cases
Scroll down the page to where the data link is, and connect to or download the confirmed-cases data set.
This is a Hubbert Linearization for Italy as described in comment #42 (https://forum.azimuthproject.org/discussion/comment/21972/#Comment_21972).
This is a purely mechanical fit: plot n/N versus N and then select Add Trendline in Excel with a forecast interval long enough to intercept the x-axis. A code sketch of the same procedure follows.
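For those without Excel, here is a minimal sketch of the same fit (assuming only NumPy and a 1-D array of positive cumulative counts; for a logistic, n/N versus N falls on the line \(r - (r/K)N\), so the x-intercept estimates the ultimate total K):

```python
import numpy as np

def hubbert_linearization(cumulative):
    """Fit a line to n/N vs N; its x-intercept estimates the ultimate total K.

    cumulative: 1-D array of positive cumulative counts (e.g. cases by day).
    Returns (slope, intercept, K_estimate).
    """
    N = np.asarray(cumulative, dtype=float)
    n = np.diff(N)                   # daily increments
    N_mid = N[1:]                    # cumulative count paired with each increment
    ratio = n / N_mid                # the "n/N" axis
    slope, intercept = np.polyfit(N_mid, ratio, 1)
    K = -intercept / slope           # x-intercept: where n/N reaches zero
    return slope, intercept, K
```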
Example of stochastic thinking applied to testing to better estimate infection levels. Tests are pooled (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3500568/) in larger groups so that aggregate positives can be determined at a higher throughput. If a pooled test comes back positive, then the group members are tested individually to identify the positives. This divide-and-conquer strategy is only efficient for populations at low infection levels, though (a back-of-the-envelope sketch follows the next paragraph).
What's disturbing about the Italy example above is that if the number of confirmed cases levels off at 130,000, then the status of the rest of Italy's population of 60 million is really unknown. So aggregate testing can more quickly estimate how many more people are infected without symptoms or have antibodies (via a different test).
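A minimal sketch of the expected tests per person under this pooling scheme (the classic Dorfman calculation; the prevalences and the 2-100 group-size search are arbitrary choices):

```python
def tests_per_person(p, g):
    """Expected tests per person under Dorfman pooling.

    One test per pool of size g, plus g individual tests whenever the
    pool is positive, which happens with probability 1 - (1-p)^g.
    """
    return 1.0 / g + 1.0 - (1.0 - p) ** g

# Pooling only pays off at low prevalence:
for p in (0.01, 0.05, 0.3):
    best_g = min(range(2, 101), key=lambda g: tests_per_person(p, g))
    print(p, best_g, round(tests_per_person(p, best_g), 3))
# ~0.2 tests/person at 1% prevalence, but ~1 test/person at 30%.
```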
This morning someone tweeted:
"The natural & life sciences (1) may use the same words as humanities & social sciences (2) but they often use them differently.
We need to get comfortable w the uncertainty of 21stC-life. We need to use language more precisely & showing clearly where we use it imprecisely
https://twitter.com/_ppmv/status/1243144909735055361
I responded that he should look into category theory. Computational structures can be categorized according to computational flow, and thus structures & algorithms used in different disciplines but with idiosyncratic names can be pattern-matched according to their structure and data flow and then reapplied elsewhere. This can potentially benefit from the work that went into the original model and so the new algorithms don't have to be reinvented.
So it's not simply about defining our terminology, as one commenter recommended, but about defining the model unambiguously.
I bring this up because I see this happening in this thread with the cross-disciplinary use of compartmental models in resource depletion and in contagion modeling, with ideas shared both ways.
Example 1: No one (except for moi) seems to mention compartmental models in resource depletion, but they are well-known in epidemiology.
Example 2: In resource depletion the idea of linearizing the logistic function (via Hubbert Linearization) is well known, but I have no idea whether it even exists in epidemiology.
The approach is to apply category theory to describe the compartmental model (https://forum.azimuthproject.org/discussion/2499/tutorial-on-stochastic-petri-nets-with-sir-disease-model-as-example) in each case and then pattern match. The equivalence in the structure at the category-theory level will root out the commonality independent of the naming of the model.
This is the pattern recognition application of category theory that seems to be eluding everyone, IMO. But it is straightforward when we place it into this context, where "roughly speaking, category theory is graph theory with additional structure to represent composition".
There is perhaps a way to make the Hubbert Linearization of the logistic more general. This is an excerpt from our Mathematical GeoEnergy book
This formulation has at least some resemblance to path integral transforms that many people on this forum are likely familiar with. So perhaps we can leverage some other ideas on this front.
The fact that the time-dependent aspect is missing from Hubbert Linearization is perhaps a result of the distinction between autonomous (which describes the logistic) and non-autonomous differential equations. I don't think that this topic has been covered anywhere on this forum so it may be worth a new category.
Some are plotting the progression this way (again removing time as with Hubbert Linearization):
This is from https://aatishb.com/covidtrends/
https://youtu.be/ZWYY1LiuHUk
The intuition behind this chart is that while the growth is exponential, the incremental increase (i.e., the daily-to-weekly count) stays in proportion to the cumulative total up to that point. That's why the curve linearizes in the chart, at least until the logistic inflection point is reached, as indicated by the divergence of China and South Korea.
In contrast to this projection, the Hubbert Linearization accounts for the logistic divergence and generates a linear fit over the entire range.
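A minimal sketch of this kind of plot (assuming matplotlib and a 1-D array of cumulative counts; the 7-day trailing window matches the chart on the site above):

```python
import numpy as np
import matplotlib.pyplot as plt

def covidtrends_plot(cumulative, label, window=7):
    """Plot recent new cases against cumulative cases on log-log axes.

    During pure exponential growth the points fall on a straight line;
    a drop below the line signals departure from exponential growth.
    """
    N = np.asarray(cumulative, dtype=float)
    new = N[window:] - N[:-window]          # trailing-window new cases
    plt.loglog(N[window:], new, label=label)
    plt.xlabel("cumulative confirmed cases")
    plt.ylabel(f"new cases in past {window} days")
```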
July 2016, 36(7): 3651-3675. doi: 10.3934/dcds.2016.36.3651
Spectral properties of renormalization for area-preserving maps
Denis Gaidashev 1 and Tomas Johnson 2
Department of Mathematics, Uppsala University, Uppsala, Sweden
Fraunhofer-Chalmers Research Centre for Industrial Mathematics, SE-412 88 Gothenburg, Sweden
Received December 2014 Revised November 2015 Published March 2016
Area-preserving maps have been observed to undergo a universal period-doubling cascade, analogous to the famous Feigenbaum-Coullet-Tresser period doubling cascade in one-dimensional dynamics. A renormalization approach has been used by Eckmann, Koch and Wittwer in a computer-assisted proof of existence of a conservative renormalization fixed point.
Furthermore, it has been shown by Gaidashev, Johnson and Martens that infinitely renormalizable maps in a neighborhood of this fixed point admit invariant Cantor sets with vanishing Lyapunov exponents on which dynamics for any two maps is smoothly conjugate.
This rigidity is a consequence of an interplay between the decay of geometry and the convergence rate of renormalization towards the fixed point.
In this paper we prove a result which is crucial for a demonstration of rigidity: that an upper bound on this convergence rate of renormalizations of infinitely renormalizable maps is sufficiently small.
Keywords: hyperbolicity, period-doubling, renormalization, computer-assisted proof, rigidity, area-preserving maps.
Mathematics Subject Classification: 37E20, 37E4.
Citation: Denis Gaidashev, Tomas Johnson. Spectral properties of renormalization for area-preserving maps. Discrete & Continuous Dynamical Systems - A, 2016, 36 (7) : 3651-3675. doi: 10.3934/dcds.2016.36.3651
J. J. Abad and H. Koch, Renormalization and periodic orbits for Hamiltonian flows, Comm. Math. Phys., 212 (2000), 371. doi: 10.1007/s002200000218.
J. J. Abad, H. Koch and P. Wittwer, A renormalization group for Hamiltonians: Numerical results, Nonlinearity, 11 (1998), 1185. doi: 10.1088/0951-7715/11/5/001.
G. Benettin et al., Universal properties in conservative dynamical systems, Lettere al Nuovo Cimento, 28 (1980), 1.
T. Bountis, Period doubling bifurcations and universality in conservative systems, Physica, 3 (1981), 577. doi: 10.1016/0167-2789(81)90041-5.
A. de Carvalho, M. Lyubich and M. Martens, Renormalization in the Hénon family, I: Universality but non-rigidity, J. Stat. Phys., 121 (2005), 611. doi: 10.1007/s10955-005-8668-4.
P. Collet, J.-P. Eckmann and H. Koch, Period doubling bifurcations for families of maps on $\mathbb{R}^n$, J. Stat. Phys., 3D (1980).
P. Collet, J.-P. Eckmann and H. Koch, On universality for area-preserving maps of the plane, Physica D, 3 (1981), 457. doi: 10.1016/0167-2789(81)90033-6.
B. Derrida and Y. Pomeau, Feigenbaum's ratios of two dimensional area preserving maps, Phys. Lett. A, 80 (1980), 217. doi: 10.1016/0375-9601(80)90003-1.
J.-P. Eckmann, H. Koch and P. Wittwer, Existence of a fixed point of the doubling transformation for area-preserving maps of the plane, Phys. Rev. A, 26 (1982), 720. doi: 10.1103/PhysRevA.26.720.
J.-P. Eckmann, H. Koch and P. Wittwer, A computer-assisted proof of universality for area-preserving maps, Memoirs of the American Mathematical Society, 47 (1984). doi: 10.1090/memo/0289.
H. Epstein, New proofs of the existence of the Feigenbaum functions, Commun. Math. Phys., 106 (1986), 395. doi: 10.1007/BF01207254.
D. Gaidashev, Renormalization of isoenergetically degenerate Hamiltonian flows and associated bifurcations of invariant tori, Discrete Contin. Dyn. Syst., 13 (2005), 63. doi: 10.3934/dcds.2005.13.63.
D. Gaidashev, Period doubling renormalization for area-preserving maps and mild computer assistance in contraction mapping principle, Int. Journal of Bifurcations and Chaos, 21 (2011), 3217. doi: 10.1142/S0218127411030477.
D. Gaidashev and T. Johnson, Dynamics of the universal area-preserving map associated with period doubling: Hyperbolic sets, Nonlinearity, 22 (2009), 2487. doi: 10.1088/0951-7715/22/10/010.
D. Gaidashev and T. Johnson, Dynamics of the universal area-preserving map associated with period doubling: Stable sets, J. Mod. Dyn., 3 (2009), 555. doi: 10.3934/jmd.2009.3.555.
D. Gaidashev, T. Johnson and M. Martens, Rigidity for infinitely renormalizable area-preserving maps, Duke Mathematical Journal, 165 (2016), 129. doi: 10.1215/00127094-3165327.
D. Gaidashev and H. Koch, Renormalization and shearless invariant tori: Numerical results, Nonlinearity, 17 (2004), 1713. doi: 10.1088/0951-7715/17/5/008.
D. Gaidashev and H. Koch, Period doubling in area-preserving maps: An associated one-dimensional problem, Ergod. Th. & Dyn. Sys., 31 (2011), 1193. doi: 10.1017/S0143385710000283.
P. Hazard, Hénon-like maps with arbitrary stationary combinatorics, Ergod. Th. & Dynam. Sys., 31 (2011), 1391. doi: 10.1017/S0143385710000398.
P. E. Hazard, M. Lyubich and M. Martens, Renormalisable Henon-like maps and unbounded geometry, Nonlinearity, 25 (2012), 397. doi: 10.1088/0951-7715/25/2/397.
R. H. G. Helleman, Self-generated chaotic behavior in nonlinear mechanics, in Fundamental Problems in Statistical Mechanics (ed. E. G. D. Cohen), (1980), 165.
T. Johnson, No elliptic islands for the universal area-preserving map, Nonlinearity, 24 (2011), 2063. doi: 10.1088/0951-7715/24/7/008.
K. Khanin, J. Lopes Dias and J. Marklof, Multidimensional continued fractions, dynamic renormalization and KAM theory, Comm. Math. Phys., 270 (2007), 197. doi: 10.1007/s00220-006-0125-y.
H. Koch, On the renormalization of Hamiltonian flows, and critical invariant tori, Discrete Contin. Dyn. Syst., 8 (2002), 633. doi: 10.3934/dcds.2002.8.633.
H. Koch, A renormalization group fixed point associated with the breakup of golden invariant tori, Discrete Contin. Dyn. Syst., 11 (2004), 881. doi: 10.3934/dcds.2004.11.881.
H. Koch, Existence of critical invariant tori, Ergod. Th. & Dynam. Sys., 28 (2008), 1879. doi: 10.1017/S0143385708000199.
S. Kocić, Renormalization of Hamiltonians for Diophantine frequency vectors and KAM tori, Nonlinearity, 18 (2005), 2513. doi: 10.1088/0951-7715/18/6/006.
M. Lyubich and M. Martens, Renormalization in the Hénon family, II: Homoclinic tangle, Invent. Math., 186 (2011), 115. doi: 10.1007/s00222-011-0316-9.
M. Lyubich and M. Martens, Probabilistic universality in two-dimensional dynamics, e-print (2011).
Y. W. Nam, Renormalization for three-dimensional Hénon-like maps, e-print (2014).
Opposites Attract: A Review of Basic Magnetic Theories
June 08, 2015 by Editorial Team
Electric machines are based on the basic principles of electromechanical conversion. They use either the electrostatic or the electromagnetic principle. This technical article deals with the magnetic circuit theory underlying the conversion of one form of energy to another.
A static device such as a transformer converts electrical energy to electrical energy (at a different voltage and current level), while rotating devices such as a DC machine, induction machine, or synchronous machine convert mechanical or electrical energy into electrical or mechanical energy. Actuators, solenoids, and relays are also based on this conversion process. The conversion happens in a magnetic material inside these machines. The magnetic material provides the high flux density that allows high torque and high output per unit volume of the machine. This article is dedicated to the properties of these magnetic materials. We will see the basic methodology for the analysis of these machines by using their magnetic circuits.
A Review on Basic Magnetics
If we use a permanent magnet or let electric current flow through a coil, a magnetic field is produced. The direction of the magnetic field can be found using the right-hand rule, which says that if the conductor is held in the right hand in such a way that the thumb indicates the direction of current, then the fingertips will indicate the direction of the magnetic field.
The basic laws related to magnetics are given below.
Faraday's Law
The EMF (or voltage) produced around a closed-loop coil is directly proportional to the rate of change of the (time-varying) magnetic flux passing through that loop.
$$EMF \propto \frac{\partial\Phi}{\partial t}$$
Figure 1. Magnetic Field (Varying with Time)
Lenz's Law
According to this law, the direction of the electromagnetically induced current is such that its magnetic field opposes the change in the magnetic flux that created the induced current. This is shown in the figure below.
Figure 2. Direction for the Induced Current
As a result, the basic equation of Faraday's law of electromagnetic induction will have a negative sign.
$$EMF=-\frac{\partial\Phi}{\partial t}$$
Ampere's Law
This law has its roots in the observation that a compass needle is deflected by a nearby current-carrying wire. We know that a current-carrying conductor produces a magnetic field, and the lines of the magnetic field form closed paths around the wire. The magnitude of the magnetic flux density, B, is the same at every point of a circular path centered on the wire. B is directly proportional to the current and inversely proportional to the distance of a point on the closed path from the wire.
Since B is everywhere tangent to such a circular path, for the vector B and a small element dS of the circular path,
$$\mathbf{B}\cdot d\boldsymbol{S} = B\,dS$$
$$B=\frac{µ_{0}I}{2πr}$$
where the value of B is constant around the closed path. Here, µ0 is the permeability of air.
The sum of the products over all such dS elements is given as
$$\oint B\,dS=B\,\oint\,dS=\frac{µ_{0}I}{2πr}\,\oint dS$$
Consider a circular path, then
$$\oint dS = 2πr$$
We now have,
$$\oint B\,dS=µ_{0}I$$
Ampere's circuital law states that the line integral of $$\mathbf{B} \cdot d\mathbf{S}$$ around any closed path is $$µ_{0}I$$. Here, $$I$$ is the total continuous current passing through any surface bounded by the closed path.
In terms of the magnetic field intensity, this reduces to:
$$\oint \mathbf{H}\cdot d\mathbf{l}=I_{Enclosed\,by\,path}$$
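As a quick numerical check of the \(B=\frac{µ_{0}I}{2πr}\) relation above (a minimal sketch; the 10 A current and 5 cm distance are arbitrary illustration values):

```python
import numpy as np

MU0 = 4 * np.pi * 1e-7  # permeability of free space, H/m

def B_wire(i, r):
    """Flux density (tesla) at distance r (m) from a long straight wire
    carrying current i (A), from Ampere's law: B = mu0 * i / (2*pi*r)."""
    return MU0 * i / (2 * np.pi * r)

print(B_wire(10.0, 0.05))  # ~4e-5 T at 5 cm from a 10 A wire
```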
Parameters and Terminologies in Magnetics
Now, the basic terminologies are given for the construction of magnetic circuit. Later, we will see the process to form the magnetic equivalent circuit for a given machine.
Magnetic Field Intensity (H)
If V is the magnetic scalar potential at a point, then the electric current produces a magnetic field of intensity H = -∇V.
The ampere-turns (mmf) in a magnetic circuit are analogous to the EMF in an electric circuit. The relationship between the current and the field intensity is given by Ampere's circuital law mentioned above.
$$\oint \mathbf{H}\cdot d\mathbf{l}=\sum{i}$$
$$\mathbf{H}$$ = magnetic field intensity at any point on the closed path of any shape
$$d\mathbf{l}$$ = incremental length at the chosen point
Let the angle between H and dl be θ. Then
$$\oint H\, dl\,\cos\theta=\sum{i}$$
For a circular path centered on the conductor, θ = 0°. Thus, for a circular path of radius r we have
$$H\, (2πr) = i_{T}$$
where iT = i1 + i2 - i3 for Figure 3. For a coil of N turns, each carrying current i, iT = Ni.
Figure 3. Image to Illustrate Ampere Circuital Law
Magnetic Flux Density (B)
It is the flux per unit area. The flux in a magnetic circuit is analogous to the current in an electric circuit. It is related to the flux density by the surface integral shown in the following equation.
$$\Phi=\int_{S}\mathbf{B}\cdot d\mathbf{S}$$
Here, $$\Phi$$ is the flux expressed in Wb (weber), measured over the surface area S. The unit of B is Wb/m2 or tesla.
Reluctance (R)
This is analogous to the electric resistance in the electric circuit, but it is not necessarily a loss component in the magnetic circuit. The equation for the reluctance is given below, which uses Ohm's law with the equivalent magnetic-circuit variables.
$$Reluctance,\,R=\frac{Magnetomotive\,Force}{Flux}$$
$$\Rightarrow R=\frac{NI}{\Phi} \text{ (At/Wb)}$$
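A minimal code sketch of this magnetic Ohm's law (the 30 cm path length, 1 cm² area, µr = 2000, and 100-turn coil are arbitrary illustration values):

```python
import numpy as np

MU0 = 4 * np.pi * 1e-7  # permeability of free space, H/m

def reluctance(length, area, mu_r=1.0):
    """Reluctance R = l / (mu * A) of a magnetic path, in At/Wb."""
    return length / (mu_r * MU0 * area)

def flux(turns, current, R):
    """Magnetic Ohm's law: flux = mmf / reluctance = N*i / R, in Wb."""
    return turns * current / R

R_core = reluctance(length=0.3, area=1e-4, mu_r=2000)  # 30 cm iron path
print(flux(turns=100, current=0.5, R=R_core))          # flux in Wb
```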
Permeance (ᵱ)
It is the inverse of reluctance. It is used to describe the geometrical characteristics of the magnetic path, i.e., how readily the path carries flux.
$$ᵱ=\frac{1}{R}$$
Leakage Flux
The magnetic flux does not entirely pass through the low-reluctance path of the core; part of the flux also takes the high-reluctance path through the surrounding air, i.e., it leaks out from the core.
Just as electric current favors the path of least resistance, magnetic flux favors the path of least reluctance, yet some of it still leaks into the surrounding air. There is no magnetic insulator available to eliminate leakage, but magnetic shielding using DC or low-frequency AC can reduce it to some level. In coupled circuits having more than one winding, the leakage flux is the flux that links one coil without interlinking the others.
Fringing
This term describes the bulging of the flux lines in the air gap of a magnetic machine. Fringing occurs because flux spreads out more readily in air than in iron. It increases the effective area of the gap and is proportional to the length of the air gap.
Absolute and Relative Permeability (µ0, µr )
Permeability describes the capability of a magnetic substance to support the formation of a magnetic field within itself. Absolute permeability is the ratio of the magnetic flux density to the magnetic field intensity in a given medium, given by
$$µ=\frac{B}{H}$$
Thus, the absolute permeability of a material is given by the slope of the curve of flux density versus field intensity, evaluated at a particular value of the field intensity.
The permeability changes with the flux intensity and with the material, as shown in the figure below. Different materials require different values of current to establish a particular level of flux density.
Figure 4. B-H Curve for Magnetic Materials
The flux density B increases linearly when the value of the magnetic field intensity is low. However, as the intensity is increased, the flux density increases in a non-linear manner, showing the effect of saturation. Hence, the reluctance of the magnetic path depends on the value of the flux density, as shown in the figure below.
Figure 5. B-H Magnetization Curve
Relative permeability is a dimensionless quantity given by the ratio of the permeability of a particular magnetic substance to the permeability of air.
$$µ_{r}=\frac{µ}{µ_{0}}$$
where µ0 is the permeability of air, 4π × 10−7 henry/meter.
Table 1. Relative Permeability of Few Materials
Inductance (L)
A coil is usually wound on the magnetic core to generate the flux. This coil may be represented by the ideal element known as inductance, represented by the symbol L shown below. Inductance is the flux linkage of the coil per ampere of the current flowing through it.
Figure 6. (a) Basic Magnetic Circuit (b) Equivalent Inductance for a Coil
$$L=\frac{N\Phi}{i}=\frac{N(BA)}{i}=Nµ\frac{HA}{i}=Nµ\frac{HA}{\frac{Hl}{N}}=\frac{{N}^{2}}{\frac{l}{µA}}=\frac{{N}^{2}}{R}$$
Note that inductance is proportional to the square of the number of turns on the coil.
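A minimal sketch of the L = N²/R relation derived above (the core dimensions and turn counts are arbitrary illustration values):

```python
import numpy as np

MU0 = 4 * np.pi * 1e-7  # permeability of free space, H/m

def inductance(turns, length, area, mu_r):
    """Inductance of a coil on a core: L = N^2 / R, with R = l / (mu * A)."""
    R = length / (mu_r * MU0 * area)
    return turns**2 / R

# Doubling the turns quadruples the inductance:
print(inductance(100, 0.3, 1e-4, 2000), inductance(200, 0.3, 1e-4, 2000))
```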
Before starting the construction of the equivalent magnetic circuit, let us review the basic analogy between the magnetic and electric circuits shown below.
Table 2. Electric and Magnetic Circuits Analogy
Equivalent Magnetic Circuit
The usefulness of the equivalent magnetic circuit lies in finding the proper size of the magnetic parts of an electric device during the design process, i.e., in finding parameters such as inductance and the air-gap flux density needed for the calculation of power and torque.
The flux density in the core increases with the presence of ferromagnetic material or current-carrying coils. This in turn affects the inductance of the coil.
Although the magnetic field is a distributed-parameter phenomenon, we can use lumped-parameter analysis for a definite class of magnetic materials, as is done in electric circuit analysis. However, the accuracy and precision of such analysis is lower than that of electric circuit analysis.
Consider a simple magnetic circuit having a ring-shaped magnetic core, known as a toroid, as shown in the figure below.
Figure 7. A Toroid
The coil is wrapped around the entire circumference and is carrying the current i through a coil making N turns.
Assume the leakage flux is negligible, as the flux is mostly confined within the core material.
From Ampere's circuital law, we have,
$$\oint \mathbf{H}\cdot d\mathbf{l}=Ni$$
$$\Rightarrow H\,l=Ni=\text{magnetomotive force} = F \text{ (At)}$$
$$(l=2\,π\,r)$$
$$B = µ\, H \Rightarrow B=µ\frac{Ni}{l} \text{ (Tesla)}$$
As there is no leakage flux, the flux through the cross section of the toroid is given by
$$\Phi=\int_{S}\mathbf{B}\cdot d\mathbf{A}=B\,A=µ\frac{Ni}{l}A=\frac{Ni}{\frac{l}{µA}}=\frac{Ni}{R}$$
Here, $$R$$ is the reluctance of the magnetic path, given by
$$R=\frac{l}{µA}=\frac{1}{ᵱ}$$
The magnetic equivalent circuit for a toroid can be represented as shown below, which is basically derived from the analogous electric circuit. In this example, we have considered a circular core, but it can also take other forms, such as a rectangular one.
Figure 8. (a) Equivalent Magnetic Circuit for a Toroid (b) Equivalent Electric circuit
Equivalent Magnetic Circuit for a Core with Multiple Excitations
Consider the magnetic device shown in Fig. 9 consisting of three coils for excitation carrying the currents i1, i2, i3.
Let the mean length of this magnetic circuit be L. The coils have N1, N2, and N3 turns. The first two coils produce the magnetic fluxes $$\Phi_{1}$$, $$\Phi_{2}$$ in the same direction, while the direction of the current in the third coil is such that it produces the flux $$\Phi_{3}$$ in the opposite direction.
Figure 9. Magnetic Device having Multiple Excitations
According to Ampere's circuital law, the integral of the magnetic field intensity around any closed path is equal to the algebraic sum of the electric currents enclosed by that path.
$$\oint H\,dl=Ni=F=R\Phi=\frac{l}{µA}\Phi$$
Magnetic flux through surface S is given by
$$\Phi=\int_{S}\mathbf{B} \cdot d\mathbf{S}$$
If there is no saturation, then the magnetic flux density B varies linearly with the magnetic field intensity H.
The net magnetomotive force (mmf) is given by
$$\oint H\,dl=\int_{a}^{b}H_{K}dl\,+\,\int_{b}^{c}H_{K}dl\,+\,\int_{c}^{d}H_{K}dl\,+\,\int_{d}^{a}H_{K}dl$$
$$\Rightarrow \oint H\,dl=H_{K}L_{ab}\,+\,H_{K}L_{bc}\,+\,H_{K}L_{cd}\,+\,H_{K}L_{da}=H_{K}L=\Phi R=F=N_{1}I_{1}\,+\,N_{2}I_{2}\,-\,N_{3}I_{3}$$
Thus, the algebraic sum of the magnetic potential differences around the closed path is zero (the analog of KVL in an electric circuit).
Thus, the magnetic circuit representation will be:
Figure 10. Equivalent Circuit Representation for the Multiple Excitation System
Equivalent Magnetic Circuit with an Air Gap
In an electric machine, the input and output sides of the magnetic system are separated by an air gap. Practically, the same flux is required in the magnetic core and the air gap. Thus, the air gap requires more mmf than the core due to its high reluctance. If the value of flux density is high, the core will exhibit saturation. However, the air gap will not saturate, because the B-H curve of air is linear (µ is constant for air).
Consider the magnetic structure with a single coil and an air gap having mean length L as shown below.
Figure 11. Magnetic Core having an Air Gap
$$mmf,\;F = Ni$$
Let the core have the reluctance RC and air gap have the reluctance Rg which is given by the following equations as follows:
$$R_{C}=\frac{l_{c}}{µ_{c}A_{C}}$$ and $$R_{g}=\frac{l_{g}}{µ_{0}A_{g}}$$
$$Flux,\, \Phi=\frac{Ni}{R_{C}+R_{g}}$$
$$mmf, \;Ni = H_{C}\,l_{c}+H_{g}\,l_{g}$$
Assume that the air gap is small (the usual case), so the fringing effect can be ignored. Also, assume there is no saturation. Then,
$$Ni = H_{C}\,l_{c} + H_{g}\,l_{g} = H_{K}L$$
$$A_{g} = A_{C}$$
The flux density will be the same in both the core and the air gap, given by the ratio of the flux to the cross-sectional area of the core. The equivalent circuit in this case is shown below, and a short numerical sketch follows the figure.
Figure 12. Equivalent Circuit for Magnetic Core with an Air Gap
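A minimal sketch of the series-reluctance calculation above (the core and gap dimensions are arbitrary illustration values):

```python
import numpy as np

MU0 = 4 * np.pi * 1e-7  # permeability of free space, H/m

def reluctance(length, area, mu_r=1.0):
    """R = l / (mu * A), in At/Wb."""
    return length / (mu_r * MU0 * area)

# Illustrative values: 30 cm iron path (mu_r = 2000), 1 mm air gap, 1 cm^2 area.
R_c = reluctance(0.3, 1e-4, mu_r=2000)
R_g = reluctance(1e-3, 1e-4)            # mu_r = 1 for air
phi = 100 * 0.5 / (R_c + R_g)           # flux = Ni / (Rc + Rg)
print(R_g / R_c)                        # the tiny gap dominates the reluctance
print(phi / 1e-4)                       # flux density B = phi / A, in tesla
```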
September 2016, 5(3): 367-381. doi: 10.3934/eect.2016009
Oscillating nonlinear acoustic shock waves
Yuri Gaididei 1, Anders Rønne Rasmussen 2, Peter Leth Christiansen 3 and Mads Peter Sørensen 4
Bogolyubov Institute for Theoretical Physics, 03143 Kiev, Ukraine
GreenHydrogen, DK-6000 Kolding, Denmark
Department of Physics and Department of Applied Mathematics and Computer Science, Technical University of Denmark, DK-2800 Kongens Lyngby, Denmark
Department of Applied Mathematics and Computer Science, Technical University of Denmark, DK-2800 Kongens Lyngby, Denmark
Received January 2016 Revised March 2016 Published August 2016
We investigate oscillating shock waves in a tube using a higher-order weakly nonlinear acoustic model. The model includes thermoviscous effects and is non-isentropic. The oscillating shock waves are generated at one end of the tube by a sinusoidal driver. Numerical simulations show that at resonance a stationary state arises, consisting of multiple oscillating shock waves. Off-resonance driving leads to a nearly linear oscillating ground state, superimposed with bursts of a fast oscillating shock wave. Based on a travelling-wave ansatz for the fluid velocity potential with an added second-order polynomial in the space and time variables, we find analytical approximations to the observed single shock waves in an infinitely long tube. Using perturbation theory for the driven acoustic system, approximate analytical solutions for the off-resonance case are determined.
Keywords: Nonlinear acoustics, nonlinear waves, shock waves.
Mathematics Subject Classification: Primary: 35Q35, 74J40; Secondary: 74J30, 74J3.
Citation: Yuri Gaididei, Anders Rønne Rasmussen, Peter Leth Christiansen, Mads Peter Sørensen. Oscillating nonlinear acoustic shock waves. Evolution Equations & Control Theory, 2016, 5 (3) : 367-381. doi: 10.3934/eect.2016009
The well-posedness of stochastic Kawahara equation: fixed point argument and Fourier restriction method
Abd-Allah Hyder (ORCID: 0000-0001-9273-9512) 1,2 &
M. Zakarya 1,3
Journal of the Egyptian Mathematical Society volume 27, Article number: 5 (2019) Cite this article
In this paper, we investigate the Cauchy problem for the stochastic Kawahara equation, which is a fifth-order shallow water wave equation. We prove local well-posedness for data in \(H^{s}(\mathbb {R})\), s>−7/4. Moreover, we get global existence for \(L^{2}(\mathbb {R})\) solutions. Due to the non-zero singularity of the phase function, a fixed point argument and Fourier restriction method are proposed.
In this paper, we consider the Cauchy problem for the stochastic Kawahara equation:
$$ u_{t}+\alpha u_{5x}+\beta u_{3x}+\gamma u_{x}+\mu {uu}_{x}=\Phi\frac{\partial^{2}B}{\partial t\partial x}, $$
where α≠0, β, and γ are real numbers; μ is a complex number; u is a stochastic process defined on \((x,t)\in \mathbb {R}\times \mathbb {R_{+}}\); Φ is a linear operator; and B is a two-parameter Brownian motion on \(\mathbb {R}\times \mathbb {R_{+}}\), that is, a zero mean Gaussian process whose correlation function is given by:
$$ \mathbb{E}\left(B(x,t)B(y,s)\right)=(x\wedge y)(t\wedge s),\ \ \ t,s\geq0,\ x,y\in\mathbb{R}. $$
In general, the covariance operator Φ can be described by a kernel \(\mathcal {K}(x,y).\) The correlation function of the noise is then given by
$$\mathbb{E}\left(\Phi\frac{\partial^{2}B}{\partial t\partial x}(x,t)\Phi\frac{\partial^{2}B}{\partial t\partial x}(y,s)\right)=c(x,y)\delta_{t-s}, $$
where \(t,s\geq 0,\ x,y\in \mathbb {R}\), δ is the Dirac function and
$$c(x,y)=\int_{\mathbb{R}}\mathcal{K}(x,z)\mathcal{K}(y,z)dz. $$
Consider a fixed probability space \((\Omega,\mathcal {F},P)\) adapted to a filtration \((\mathcal {F}_{t})_{t\geq 0}\). As usual, we can rewrite the right hand side of Eq. (1) as the time derivative of a cylindrical Wiener process on \(L^{2}(\mathbb {R})\) by setting:
$$ W(t)=\frac{\partial B}{\partial x}=\sum_{i\in\mathbb{N}}\beta_{i}(t)e_{i}, $$
where \((e_{i})_{i\in \mathbb {N}}\) is an orthonormal basis of \(L^{2}(\mathbb {R})\) and \((\beta _{i})_{i\in \mathbb {N}}\) is a sequence of mutually independent real Brownian motions in \((\Omega,\mathcal {F},P)\). Let us rewrite Eq. (1) in its Itô form as follows:
$$ \left\{\begin{array}{l} du+\left(\alpha u_{5x}+\beta u_{3x}+\gamma u_{x}+\mu {uu}_{x}\right)dt=\Phi dW(t),\\ u(x,0)=u_{0}(x) \end{array}\right. $$
In order to obtain local well-posedness of Eq. (1), we mainly work on the general mild formulation of Cauchy problem (4) as below:
$$ {\begin{aligned} u(t)=U(t)u_{0}+\int_{0}^{t}U(t-s)\left(\mu {uu}_{x}\right)ds+\int_{0}^{t}U(t-s)\Phi dW(s). \end{aligned}} $$
Here, \(U(t)=\mathfrak {F}_{x}^{-1}\text {exp}\left (-it\phi (\xi)\right)\mathfrak {F}_{x}\) is the unitary group of operators related to the linearized equation:
$$ u_{t}+\alpha u_{5x}+\beta u_{3x}+\gamma u_{x}=0,\ \ \ \ (x,t)\in\mathbb{R}\times\mathbb{R_{+}}, $$
where \(\phi(\xi)=\alpha\xi^{5}-\beta\xi^{3}+\gamma\xi\) is the phase function and \(\mathfrak {F}_{x}\) (or \(\hat{\cdot}\)) is the usual Fourier transform in the x variable. We note that the phase function ϕ has non-zero singularity. This differs from the phase function of the linear Korteweg-de Vries (KdV) equation (see [1]) and causes some difficulties in the problem. To avoid these difficulties, we eliminate the singularity of the phase function ϕ by using the Fourier restriction operators [2]:
$${\begin{aligned} P^{N}f=\int_{|\xi|\geq N}e^{ix\xi}\hat{f}(\xi)d\xi,\ \ \ \ \ \ P_{N}f=\int_{|\xi|\leq N}e^{ix\xi}\hat{f}(\xi)d\xi,\ \ \forall N>0. \end{aligned}} $$
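To make the roles of the phase function and the restriction operators concrete, here is a minimal numerical sketch (an illustration under stated assumptions, not part of the paper: it uses a periodic grid of length L as a stand-in for the line, and the coefficients α = β = γ = 1 are arbitrary):

```python
import numpy as np

alpha, beta, gamma = 1.0, 1.0, 1.0   # illustrative coefficients

def phi(xi):
    """Phase function of the linearized Kawahara equation."""
    return alpha * xi**5 - beta * xi**3 + gamma * xi

def U(t, u, L=100.0):
    """Free evolution U(t) = F_x^{-1} exp(-i t phi(xi)) F_x on n grid samples."""
    n = len(u)
    xi = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # discrete frequencies
    return np.fft.ifft(np.exp(-1j * t * phi(xi)) * np.fft.fft(u))

def P_high(u, N, L=100.0):
    """Fourier restriction P^N: keep only frequencies with |xi| >= N."""
    n = len(u)
    xi = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    return np.fft.ifft(np.where(np.abs(xi) >= N, np.fft.fft(u), 0))

def P_low(u, N, L=100.0):
    """The complementary low-frequency part: u = P_high + P_low."""
    return u - P_high(u, N, L)
```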
In the case of Φ≡0 (effect of the noise does not exist), Eq. (1) is reduced to the deterministic Kawahara equation:
$$ u_{t}+\alpha u_{5x}+\beta u_{3x}+\gamma u_{x}+\mu {uu}_{x}=0,\ \ \ \ (x,t)\in\mathbb{R}\times\mathbb{R_{+}}. $$
As noted in [3–5], Eq. (7) is a fifth-order shallow water wave equation. It arises in the study of water waves with surface tension in which the Bond number takes on the critical value, the Bond number being a dimensionless measure of surface tension in the shallow water regime. In a realistic situation, in which a non-constant pressure acts on the surface of the fluid or the bottom of the layer is not flat, it is meaningful to add a forcing term to Eq. (7). This term can be given by the gradient of the exterior pressure or of the function whose graph defines the bottom [6, 7]. This paper focuses on the case when the forcing term is of additive white noise type. This leads us to study the stochastic fifth-order shallow water wave Eq. (1). By means of white noise functional analysis, analytical white noise functional solutions for nonlinear stochastic partial differential equations (SPDEs) can be investigated. This subject is attracting more and more attention [8–15].
It is well known that the Cauchy problem (4) is locally well-posed for data in \(H^{s}(\mathbb {R}),\ s\in \mathbb {R}\), if for any finite time T, there exists a locally continuous mapping that transfers \(u_{0}\in H^{s}(\mathbb {R})\) to a unique solution \(u\in C\left ([0,T];H^{s}(\mathbb {R})\right)\). If the solution mapping exists for all time, we say that the Cauchy problem (4) is globally well-posed [16].
In [17], Huo obtained a local well-posedness result in \(H^{s}(\mathbb {R})(s>-11/8)\) for the Kawahara equation. Moreover, Jia and Huo [18] proved the local well-posedness of the Kawahara and modified Kawahara equations for data in \(H^{s}(\mathbb {R})\) with s>−7/4 and s≥−1/4 respectively. The first well-posedness result for the Kaup-Kupershmidt equations was presented by Tao and Cui [19]. They proved that their Cauchy problems are locally well-posed in \(H^{s}(\mathbb {R})\) for s>5/4 and s>301/108, respectively. Thereafter, Zhao and Gu [20] lowered the regularity of the initial data space to s>9/8 and improved the preceding result in [19]. Also, using a Fourier restriction method, a local well-posedness result for the Kaup-Kupershmidt equations was established in [18] for data in \(H^{s}(\mathbb {R})\) with s>0 and s>−1/4, respectively.
If α=γ=0, the model (7) is minified to the famous KdV equation:
$$ u_{t}+\beta u_{3x}+\mu {uu}_{x}=0,\ \ \ \ (x,t)\in\mathbb{R}\times\mathbb{R_{+}}. $$
The well-posedness of Eq. (8) was studied by Kenig, Ponce, and Vega [21]. They proved that its Cauchy problem is locally well-posed in \(H^{s}(\mathbb {R})\) for s>−3/4. Also, Ponce [1] discussed the general fifth-order shallow water wave equation:
$$ {\begin{aligned} u_{t}+u_{x}+c_{1}u u_{x}+c_{2}u_{3x}+c_{3} u_{x} u_{xx}+c_{4}u u_{3x}+c_{5} u_{5x}=0 \ \ \ (x,t)\in\mathbb{R}\times\mathbb{R_{+}} \end{aligned}} $$
and gave a global well-posedness result of its Cauchy problem for data in \(H^{4}(\mathbb {R})\). The well-posedness of the SPDEs has been the subject of a large amount of work. de Bouard and Debussche [22] considered the stochastic KdV equation forced by a random term of white noise type. They proved existence and uniqueness of solutions in \(H^{1}(\mathbb {R})\) and existence of martingale solutions in \(L^{2}(\mathbb {R})\) in the case of additive and multiplicative noise, respectively. Since that time, many researchers paid more attention to investigate the Cauchy problems for some SPDEs and have obtained a number of local and global well-posedness results [23–25].
The goal of this paper is to investigate the Cauchy problem of the stochastic Kawahara Eq. (1), where the random force is of additive white noise type. By employing a Fourier restriction method, a Banach fixed point theorem, and some basic inequalities, we show that Eq. (1) is locally well-posed for data in \(H^{s}(\mathbb {R}),\ s>-7/4\). Also, we give global existence for \(L^{2}(\mathbb {R})\) solutions. An outline of this paper is as follows. The "Main results" section contains precise statement of our new results and some important function spaces. In the section "The stochastic convolution estimate", we give an estimation of the stochastic convolution term via a Fourier restriction method and some basic inequalities. In the section "Local well-posedness: proof of Theorem 1", we use the stochastic estimation proved in the section "The stochastic convolution estimate" and the Banach fixed point theorem to obtain a local well-posedness result of Eq. (1). In the section "Global well-posedness: proof of Theorem 2", we extend our technique and show global well-posedness result of Eq. (1). The "Summary and discussion" section is devoted to the summary and discussion.
Before giving the precise statement of our main results, we introduce some notations and assumptions.
For \(s,b\in \mathbb {R}\) the space \(\mathfrak {X}_{s,b}\) is defined to be the completion of the Schwartz function space \(\mathcal {S}\left (\mathbb {R}^{2}\right)\) with respect to the norm:
$$ \|u\|_{\mathfrak{X}_{s,b}}=\|U(-t)u\|_{H_{x}^{s}H_{t}^{b}}=\|\langle\xi\rangle^{s}\langle\tau+\phi(\xi)\rangle^{b}\mathfrak{F}u\|_{L^{2}_{\xi}L^{2}_{\tau}}, $$
where 〈·〉=1+|·|.
For T>0, \(\mathfrak {X}_{s,b}^{T}\) is the space of restrictions to [0,T] of functions in \(\mathfrak {X}_{s,b}\) endowed with the norm:
$$ \|u\|_{\mathfrak{X}_{s,b}^{T}}=\inf\{\|\tilde{u}\|_{\mathfrak{X}_{s,b}}:\tilde{u}\in \mathfrak{X}_{s,b}, u=\tilde{u}|_{[0,T]}\}. $$
Assume that \(s>-\frac {7}{4}\), \(\Phi \in L_{2}^{0,s}\), \(b\in \left (0,\frac {1}{2}\right)\), and b is close enough to \(\frac {1}{2}\). Let \(u_{0}\in H^{s}(\mathbb {R})\) for almost every ω∈Ω, and let u0 be \(\mathcal {F}_{0}\)-measurable. Then for almost every ω∈Ω, there exists a constant Tω>0 and a unique solution u of the Cauchy problem (4) on [0,Tω] which satisfies:
$$u\in C\left([0,T_{\omega}];H^{s}(\mathbb{R})\right)\cap\mathfrak{X}_{s,b}^{T_{\omega}}. $$
In fact, the L2-norm is preserved for a solution of the Kawahara equation [4]. Therefore, in the case s=0, we can obtain a global existence result for Eq. (1). Precisely, we have:
Let \(u_{0}\in L^{2}\left (\Omega,L^{2}(\mathbb {R})\right)\) be \(\mathcal {F}_{0}\)-measurable initial data, and let \(\Phi \in L_{2}^{0,0}\). Then, the solution u given by Theorem 1 is global and satisfies:
$$u\in L^{2}\left(\Omega;C\left([0,T_{0}];H^{s}(\mathbb{R})\right)\right)\ \ \text{for any}\ \ T_{0}>0. $$
The stochastic convolution estimate
In this section, using the Fourier restriction method, the properties of Itô stochastic integral and some basic inequalities, we give an estimation for the last term in Eq. (5), which is the stochastic convolution:
$$ u_{l}(t):=\int_{0}^{t} U(t-s)\Phi dW(s). $$
Choose \(\chi \in C_{0}^{\infty }\left (\mathbb {R}_{+}\right)\) such that χ(t)=0 for t<0, χ(t)=1 for 0<t<1 and χ(t)=0 for t≥2. Hence, \(\chi \in H^{b}(\mathbb {R})\) for any \(b<\frac {1}{2}\). Let \(H_{t}^{b}:=H^{b}\left ([0,T];\mathbb {R}\right)\) be the Sobolev space in the time variable t with the norm:
$$ {\begin{aligned}\|\psi\|^{2}_{H_{t}^{b}}:=\|\psi\|^{2}_{L^{2}(\mathbb{R})}+\int_{\mathbb{R}}\int_{\mathbb{R}}\frac{|\psi(t_{1})-\psi(t_{2})|^{2}}{|t_{1}-t_{2}|^{1+2b}}{dt}_{1}{dt}_{2},\ \ \ \ \psi\in H_{t}^{b}. \end{aligned}} $$
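The double integral in this norm can be approximated directly on a grid. The sketch below is our illustration only; it uses a plain Riemann sum and skips the singular diagonal t1=t2.

```python
import numpy as np

def hb_norm_sq(psi, t, b):
    """Approximate ||psi||_{H^b_t}^2 for samples psi at uniform grid points t."""
    dt = t[1] - t[0]
    l2_part = np.sum(np.abs(psi)**2) * dt
    T1, T2 = np.meshgrid(t, t, indexing="ij")
    P1, P2 = np.meshgrid(psi, psi, indexing="ij")
    off_diag = np.abs(T1 - T2) > 1e-12               # avoid the singular diagonal
    gagliardo = np.sum(
        np.abs(P1 - P2)[off_diag]**2 / np.abs(T1 - T2)[off_diag]**(1 + 2 * b)
    ) * dt * dt
    return l2_part + gagliardo
```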
Now, we state and prove the following estimate for the stochastic convolution (12):
Assume that \(s,b\in \mathbb {R}\) with \(b\in \left (0,\frac {1}{2}\right)\), and let \(\Phi \in L_{2}^{0,s}\). Then, ul defined by (12) satisfies:
$$\chi u_{l}\in L^{2}\left(\Omega,\mathfrak{X}_{s,b}\right) $$
$$ \mathbb{E}\left(\|\chi u_{l}\|^{2}_{\mathfrak{X}_{s,b}}\right)\leq N(b,\chi)\|\Phi\|^{2}_{L_{2}^{0,s}}, $$
where N(b,χ) is a constant that depends on b, \(\|\chi \|_{H^{b}_{t}}\), \(\||t|^{\frac {1}{2}}\chi \|_{L^{2}_{t}}\), and \(\||t|^{\frac {1}{2}}\chi \|_{L^{\infty }_{t}}\).
Let us introduce the function
$$ w(t,.)=\chi(t)\int_{0}^{t} U(-s)\Phi dW(s),\ \ \ \ t\in \mathbb{R}_{+}. $$
This implies that U(t)w(t,.)=χ(t)ul(t). Thus, by Eq. (10), we have:
$$ {\begin{aligned} \mathbb{E}\left(\|\chi u_{l}\|^{2}_{\mathfrak{X}_{s,b}}\right)&=\mathbb{E}\left(\int_{\mathbb{R}}\int_{\mathbb{R}}\left(1+|\xi|\right)^{2s}\left(1+|\tau|\right)^{2b}|\mathfrak{F} w(\tau,\xi)|^{2}d\tau d\xi\right)\\ &=\int_{\mathbb{R}}\left(1+|\xi|\right)^{2s}\mathbb{E}\left(\left\|\mathfrak{F}_{x} w(.,\xi)\right\|^{2}_{H_{t}^{b}}\right)d\xi. \end{aligned}} $$
According to the expansion (3) of the cylindrical Wiener process and Eq. (13), we have:
$$ \mathbb{E}\left(\|\mathfrak{F}_{x} w(.,\xi)\|^{2}_{H_{t}^{b}}\right)=S_{1}+S_{2}, $$
$$ S_{1}=\sum\limits_{i\in\mathbb{N}}|\hat{\Phi e_{i}}|^{2}\left[\mathbb{E}\left(\left\|\chi(t)\int_{0}^{t} e^{i s\phi(\xi)}d\beta_{i}(s)\right\|^{2}_{L^{2}(\mathbb{R})}\right)\right], $$
$$ {\begin{aligned} S_{2}=\sum\limits_{i\in\mathbb{N}}|\hat{\Phi e_{i}}|^{2}\left[\mathbb{E}\left(\int_{\mathbb{R}}\int_{\mathbb{R}}\frac{\left|\begin{array}{c}\chi(t_{1})\int_{0}^{t_{1}}e^{i s\phi(\xi)}d\beta_{i}(s)\\ -\chi(t_{2})\int_{0}^{t_{2}}e^{i s\phi(\xi)}d\beta_{i}(s)\end{array}\right|^{2}}{|t_{1}-t_{2}|^{1+2b}}{dt}_{1}{dt}_{2}\right)\right]. \end{aligned}} $$
From the Itô isometry formula, we get:
$$\begin{array}{@{}rcl@{}} S_{1}&=&\sum\limits_{i\in\mathbb{N}}|\hat{\Phi e_{i}}|^{2}\int_{0}^{2}|\chi(t)|^{2}\ \mathbb{E}\left(\left|\int_{0}^{t} e^{i s\phi(\xi)}d\beta_{i}(s)\right|^{2}\right)dt\\ &=&\left\||t|^{\frac{1}{2}}\chi\right\|^{2}_{L^{2}_{t}}\sum\limits_{i\in\mathbb{N}}|\hat{\Phi e_{i}}|^{2}. \end{array} $$
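The computation of S1 rests on the Itô isometry, which gives \(\mathbb {E}\left (\left |\int _{0}^{t} e^{i s\phi (\xi)}d\beta _{i}(s)\right |^{2}\right)=t\). This identity is easy to verify by Monte Carlo simulation; the sketch below is purely illustrative, and the value phi = 1.0 is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, T, phi = 20_000, 1_000, 1.5, 1.0
ds = T / n_steps
s = np.arange(n_steps) * ds                             # left endpoints (Ito convention)
dB = rng.normal(0.0, np.sqrt(ds), (n_paths, n_steps))   # Brownian increments
I = (np.exp(1j * s * phi) * dB).sum(axis=1)             # int_0^T e^{i s phi} dB(s), per path
print(np.mean(np.abs(I)**2), "~", T)                    # Ito isometry predicts exactly T
```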
To estimate S2, we have:
$$ {\begin{aligned} S_{2}=&\sum_{i\in\mathbb{N}}|\hat{\Phi e_{i}}|^{2}\left[\mathbb{E}\left(\int_{\mathbb{R}}\int_{\mathbb{R}}\frac{\left| \begin{array}{c} \chi(t_{1})\int_{0}^{t_{1}}e^{i s\phi(\xi)}d\beta_{i}(s)\\ -\chi(t_{2})\int_{0}^{t_{2}}e^{i s\phi(\xi)}d\beta_{i}(s)\end{array}\right|^{2}}{|t_{1}-t_{2}|^{1+2b}}{dt}_{1}{dt}_{2}\right)\right]\\ =&2\sum_{i\in\mathbb{N}}|\hat{\Phi e_{i}}|^{2}\int_{t_{2}>0}\int_{t_{1}< t_{2}}\frac{\mathbb{E}\left(\left| \begin{array}{c} \chi(t_{1})\int_{0}^{t_{1}}e^{i s\phi(\xi)}d\beta_{i}(s)\\ -\chi(t_{2})\int_{0}^{t_{2}}e^{i s\phi(\xi)}d\beta_{i}(s)\end{array}\right|^{2}\right)}{|t_{1}-t_{2}|^{1+2b}}{dt}_{1}{dt}_{2}\\ \leq&\sum_{i\in\mathbb{N}}|\hat{\Phi e_{i}}|^{2}\left[2\int_{t_{2}>0}\int_{t_{1}<0}\frac{|\chi(t_{2})|^{2}\mathbb{E}\left(\left|\int_{0}^{t_{2}}e^{i s\phi(\xi)}d\beta_{i}(s)\right|^{2}\right)}{|t_{1}-t_{2}|^{1+2b}}{dt}_{1}{dt}_{2}\right.\\ &\left. + 2\int_{t_{2}>0}\int_{0< t_{1}< t_{2}}\frac{\mathbb{E}\left(\left| \begin{array}{c} \chi(t_{1})\int_{0}^{t_{1}}e^{i s\phi(\xi)}d\beta_{i}(s)\\ -\chi(t_{2})\int_{0}^{t_{1}}e^{i s\phi(\xi)}d\beta_{i}(s)\\ +\chi(t_{2})\int_{t_{1}}^{t_{2}}e^{i s\phi(\xi)}d\beta_{i}(s) \end{array}\right|^{2}\right)}{|t_{1}-t_{2}|^{1+2b}}{dt}_{1}{dt}_{2}\right]\\ \leq&\sum_{i\in\mathbb{N}}|\hat{\Phi e_{i}}|^{2}\left[2\int_{t_{2}>0}\int_{t_{1}<0}\frac{|\chi(t_{2})|^{2}\mathbb{E}\left(\left|\int_{0}^{t_{2}}e^{i s\phi(\xi)}d\beta_{i}(s)\right|^{2}\right)}{|t_{1}-t_{2}|^{1+2b}}{dt}_{1}{dt}_{2} \right.\\ +&4\int_{t_{2}>0}\int_{0< t_{1}< t_{2}}\frac{|\chi(t_{1})-\chi(t_{2})|^{2}\mathbb{E}\left(\left|\int_{0}^{t_{1}}e^{i s\phi(\xi)}d\beta_{i}(s)\right|^{2}\right)}{|t_{1}-t_{2}|^{1+2b}}{dt}_{1}{dt}_{2}\\ & \left. +4\int_{t_{2}>0}\int_{0< t_{1}< t_{2}}\frac{|\chi(t_{2})|^{2}\mathbb{E}\left(\left|\int_{t_{1}}^{t_{2}}e^{i s\phi(\xi)}d\beta_{i}(s)\right|^{2}\right)}{|t_{1}-t_{2}|^{1+2b}}{dt}_{1}{dt}_{2}\right]\\ =&\sum_{i\in\mathbb{N}}|\hat{\Phi e_{i}}|^{2}\left[I_{1}+I_{2}+I_{3}\right]. \end{aligned}} $$
Now, we estimate I1, I2, and I3 separately. For I1, we have
$$ {\begin{aligned} I_{1}\leq2\int_{0}^{2}t_{2}|\chi(t_{2})|^{2}\int_{t_{1}<0}\frac{1}{|t_{1}-t_{2}|^{1+2b}}{dt}_{1}{dt}_{2}\leq M_{b}\left\||t|^{\frac{1}{2}-b}\chi\right\|^{2}_{L_{t}^{2}}. \end{aligned}} $$
Using Eq. (15) and the assumption that 2b∈(0,1), we have
$$\begin{array}{@{}rcl@{}} I_{2}&\leq&4\int_{0}^{\infty}\int_{0}^{t_{2}}\frac{t_{1}|\chi(t_{1})-\chi(t_{2})|^{2}}{|t_{1}-t_{2}|^{1+2b}}{dt}_{1}{dt}_{2}\\ &\leq&4\int_{0}^{2}\int_{0}^{t_{2}}\frac{t_{1}|\chi(t_{1})-\chi(t_{2})|^{2}}{|t_{1}-t_{2}|^{1+2b}}{dt}_{1}{dt}_{2}\\ &+&4\int_{2}^{\infty}\int_{0}^{2}\frac{t_{1}|\chi(t_{1})|^{2}}{|t_{1}-t_{2}|^{1+2b}}{dt}_{1}{dt}_{2}\\ &\leq&8\|\chi\|^{2}_{H_{t}^{b}}+4\left\||t|^{\frac{1}{2}}\chi\right\|^{2}_{L_{t}^{\infty}}\int_{0}^{\infty}\int_{0}^{2}\frac{1}{|t_{1}-t_{2}|^{1+2b}}{dt}_{1}{dt}_{2}\\ &\leq&8\|\chi\|^{2}_{H_{t}^{b}}+M_{b}\left\||t|^{\frac{1}{2}}\chi\right\|^{2}_{L_{t}^{\infty}}. \end{array} $$
$$ I_{3}\leq4\int_{0}^{2}\int_{0}^{t_{2}}\frac{|\chi(t_{2})|^{2}}{|t_{1}-t_{2}|^{2b}}{dt}_{1}{dt}_{2}\leq M_{b}\left\||t|^{\frac{1}{2}-b}\chi\right\|^{2}_{L_{t}^{2}}. $$
Combining (20)–(24) with (17), we get
$$ \mathbb{E}\left(\|\mathfrak{F}_{x} w(.,\xi)\|^{2}_{H_{t}^{b}}\right)\leq N(b,\chi)\sum_{i\in\mathbb{N}}|\hat{\Phi e_{i}}|^{2} $$
where \(N(b,\chi)=M_{b}\left (\|\chi \|_{H^{b}_{t}}+\||t|^{\frac {1}{2}}\chi \|_{L^{2}_{t}}+\||t|^{\frac {1}{2}}\chi \|_{L^{\infty }_{t}}\right)\). Hence, the estimate (14) follows from (16) and (25).
Local well-posedness: proof of Theorem 1
Using the stochastic estimate proved in the previous section and the Banach fixed point theorem, we deduce a local well-posedness result for Eq. (1); that is, this section is devoted to the proof of Theorem 1. Let v(t)=U(t)u0 and \(\bar {u}=u(t)-v(t)-u_{l}(t)\); then Eq. (5) is equivalent to
$$ {\begin{aligned}\bar{u}(t)=\mathcal{A}\bar{u}(t):=\frac{1}{2}\int_{0}^{t} U(t-s)\frac{\partial}{\partial x}\left(\bar{u}^{2}+u_{l}^{2}+v^{2}+2\left(\bar{u}u_{l}+\bar{u}v+v u_{l}\right)\right)(s)ds. \end{aligned}} $$
Therefore, the goal of this section is to prove that \(\mathcal {A}\) is a contraction mapping in
$$\mathfrak{Y}_{R}^{T}=\left\{\bar{u}\in\mathfrak{X}_{s,b}^{T}:\|\bar{u}\|_{\mathfrak{X}_{s,b}^{T}}\leq R\right\},\ \ \ \ \ \ \ R>0,\ \ T>0, $$
where R and T are sufficiently large and small, respectively. Before doing this, we recall some previous results on the linear and bilinear estimates.
[23] Assume that a>0, \(b<\frac {1}{2}\) and b is close enough to \(\frac {1}{2}\). For \(s\in \mathbb {R}\), \(u_{0}\in H^{s}(\mathbb {R})\), and \(f\in \mathfrak {X}_{s,-a}^{T}\), we have:
$$ \left\|\int_{0}^{t} U(t-\tau)f(\tau)d\tau\right\|_{\mathfrak{X}_{s,b}^{T}}\leq CT^{1-a-b}\|f\|_{\mathfrak{X}_{s,-a}^{T}} $$
$$ \|v\|_{\mathfrak{X}_{s,b}^{T}}\leq\|u_{0}\|_{H^{s}}. $$
[18] Assume that a>0, \(b<\frac {1}{2}\), and b is close enough to \(\frac {1}{2}\). For \(b'>\frac {1}{2}\), \(s>-\frac {7}{4}\), and \(u_{1},u_{2}\in \mathcal {S}(\mathbb {R}^{2})\), we have:
$$ \left\|\frac{\partial}{\partial x}(u_{1}u_{2})\right\|_{\mathfrak{X}_{s,-a}}\leq C\|u_{1}\|_{\mathfrak{X}_{s,b}}\|u_{2}\|_{\mathfrak{X}_{s,b'}} $$
provided that the right hand side is finite.
According to Lemmas 1, 2, and 3, we obtain
$$ \left\|\mathcal{A}\bar{u}\right\|_{\mathfrak{X}^{T}_{s,b}}\leq C^{\prime} T^{1-a-b}\left(R^{2}+\left\|u_{l}\right\|_{\mathfrak{X}^{T}_{s,b}}+\left\|u_{0}\right\|_{H^{s}}\right). $$
Therefore, for \(\bar {u}_{1},\bar {u}_{2}\in \mathfrak {Y}_{R}^{T}\), we get
$$ {\begin{aligned} \left\|\mathcal{A}\bar{u}_{1}-\mathcal{A}\bar{u}_{2}\right\|_{\mathfrak{X}^{T}_{s,b}}\leq C^{\prime} T^{1-a-b}\left(R^{2}+\left\|u_{l}\right\|_{\mathfrak{X}^{T}_{s,b}}+\left\|u_{0}\right\|_{H^{s}}\right)\left\|\bar{u_{1}}-\bar{u_{2}}\right\|_{\mathfrak{X}^{T}_{s,b}}. \end{aligned}} $$
Now, define the stopping time Tω by:
$$ T_{\omega}=\inf\left\{t>0:4C^{\prime} t^{1-a-b}R_{\omega}^{T}\geq1\right\}, $$
where \(R_{\omega }^{T}=\left \|u_{l}\right \|_{\mathfrak {X}^{T}_{s,b}}+\left \|u_{0}\right \|_{H^{s}}\). Then, \(\mathcal {A}\) maps the ball with center zero and radius \(R_{\omega }^{T}\) in \(\mathfrak {X}_{s,b}^{T_{\omega }}\) into itself, and
$$ \left\|\mathcal{A}\bar{u_{1}}-\mathcal{A}\bar{u_{2}}\right\|_{\mathfrak{X}^{T_{\omega}}_{s,b}}\leq\frac{3}{4}\left\|\bar{u_{1}}-\bar{u_{2}}\right\|_{\mathfrak{X}^{T_{\omega}}_{s,b}}. $$
By the fixed point theorem, \(\mathcal {A}\) has a unique fixed point, which is the solution of (5) in \(\mathfrak {X}_{s,b}^{T_{\omega }}\). Observe that \(u=v+\bar {u}+u_{l}\in \mathfrak {X}^{T_{\omega }}_{s,b^{\prime }}+\mathfrak {X}^{T_{\omega }}_{s,b}\).
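To see the contraction argument at work numerically, consider the toy scalar analogue ū(t)=g(t)+(1/2)∫_0^t ū(s)^2 ds, where g stands in for the contributions of v and u_l. The Picard iteration below is our simplified illustration, not the actual \(\mathfrak {X}_{s,b}\) machinery; the shortness of the interval plays the role of the small factor T^{1-a-b} above.

```python
import numpy as np

T, n = 0.5, 500                       # a short interval keeps the map contractive
t = np.linspace(0.0, T, n)
g = np.sin(t)                         # stand-in forcing (illustrative choice)
u = np.zeros(n)
for k in range(50):                   # Picard / Banach fixed-point iteration
    f = 0.5 * u**2
    # cumulative trapezoidal rule for int_0^t f(s) ds
    integral = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(t))))
    u_new = g + integral
    if np.max(np.abs(u_new - u)) < 1e-12:
        break
    u = u_new
print(f"converged after {k + 1} iterations")
```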
In the remaining part of this section, we complete the proof by showing that \(u\in C([0,T_{\omega }],H^{s}(\mathbb {R}))\). Recall that \(b<\frac {1}{2}\) and \(b^{\prime } >\frac {1}{2}\). By virtue of the Sobolev embedding theorem, we have \(v\in C\left ([0,T_{\omega }],H^{s}(\mathbb {R})\right)\). Under the condition that \(\Phi \in L_{2}^{0,s}\) and the fact that U(t) is a unitary group in \(H^{s}(\mathbb {R})\), an application of Theorem 6.10 in [16] implies that \(u_{l}\in C\left ([0,T_{\omega }];H^{s}(\mathbb {R})\right)\).
Now, choose a cutoff function \(\chi _{T}\in C_{0}^{\infty }(\mathbb {R})\) such that χT(t)=1 on [0,1], supp χT⊂[−1,2], and χT(t)=0 on (−∞,−1]∪[2,∞). Denote \(\chi _{q}(\cdot)=\chi (q^{-1}\cdot)\) for some \(q\in \mathbb {R}\). By Lemma 3, we have \(\tilde {u}\tilde {u}_{x}\in \mathfrak {X}_{s,-a}\) for any prolongation \(\tilde {u}\) of u in \(\mathfrak {X}_{s,b^{\prime }}+\mathfrak {X}_{s,b}\). Therefore,
$$ \left\|\chi_{T}\int_{0}^{t} U(t-s)\left(\tilde{u}(s)\tilde{u}_{x}(s)\right)\right\|_{\mathfrak{X}_{s,1-a}}\leq C\left\|\tilde{u}(s)\tilde{u}_{x}(s)\right\|_{\mathfrak{X}_{s,-a}}. $$
Since \(1-a>\frac {1}{2}\), we have \(\mathfrak {X}_{s,1-a}\subset C\left ([0,T_{\omega }];H^{s}(\mathbb {R})\right)\), and hence \(u\in C\left ([0,T_{\omega }];H^{s}(\mathbb {R})\right)\). This completes the proof of Theorem 1.
Global well-posedness: proof of Theorem 2
Fix T0>0 and assume that u0 satisfies the conditions of Theorem 1. In this section, we present a proof of Theorem 2; that is, we show that the solution u can be extended to the whole interval [0,T0]. Let \(\left (\Phi _{n}\right)_{n\in \mathbb {N}}\) be a sequence in \(L_{2}^{0,4}\) such that
$$ {\lim}_{n\rightarrow\infty}\Phi_{n}=\Phi\ \ \ \ \ \text{in}\ \ L_{2}^{0,0}, $$
and let \(\left (u_{0,n}\right)_{n\in \mathbb {N}}\) be another sequence in \(L^{2}\left (\Omega,H^{s}(\mathbb {R})\right)\) such that
$$ {\lim}_{n\rightarrow\infty}u_{0,n}=u_{0}\ \ \ \ \ \text{in}\ \ L^{2}\left(\Omega,L^{2}(\mathbb{R})\right). $$
By using reasoning similar to that in [23], we can find a unique solution un in \(C\left ([0,T_{0}],H^{3}(\mathbb {R})\right)\) for
$$ {\begin{aligned} u_{n}=U(t)u_{0,n}+\int_{0}^{t} U(t-s)\left(u_{n}(s)\frac{\partial u_{n}}{\partial x}(s)\right)ds+\int_{0}^{t} U(t-s)\Phi_{n} dW(s). \end{aligned}} $$
By applying the Itô formula to \(\|u_{n}\|^{2}_{L^{2}(\mathbb {R})}\) and using the martingale inequality (see [16]), we have
$$ \mathbb{E}\left(\sup_{t\in[0,T_{0}]}\|u_{n}\|^{2}_{L^{2}_{x}}\right)\leq\mathbb{E}\left(\|u_{0,n}\|^{2}_{L^{2}_{x}}\right)+C\|\Phi_{n}\|^{2}_{L_{2}^{0,0}}. $$
Therefore, the sequence \((u_{n})_{n\in \mathbb {N}}\) is bounded and weak-star convergent to a function \(u^{\ast }\in L^{2}\left (\Omega ;L^{\infty }\left (\left [0,T_{0}\right ];L^{2}(\mathbb {R})\right)\right)\), which satisfies
$$ \mathbb{E}\left(\sup_{t\in[0,T_{0}]}\|u^{\ast}\|^{2}_{L^{2}_{x}}\right)\leq\mathbb{E}\left(\|u_{0}\|^{2}_{L^{2}_{x}}\right)+C\|\Phi\|^{2}_{L_{2}^{0,0}}. $$
In the same way as \(\mathcal {A}\), define the mapping \(\mathcal {A}_{n}\). It is easy to show that \(\mathcal {A}_{n}\) is a uniform strict contraction on \(\mathfrak {Y}_{r(\omega)}^{t(\omega)}\) in \(\mathfrak {X}_{s,b}^{T_{\omega }}\). According to the fixed point theorem, there exists a unique function \(u\in \mathfrak {X}_{s,b}^{T_{\omega }}\) such that
$$ u=u^{\ast}={\lim}_{n\rightarrow\infty}u_{n}\ \ \ \ \ \text{a.s. in}\ \ [0,T_{\omega}], $$
where un is the unique fixed point of \(\mathcal {A}_{n}\). Also, we have
$$ \|u(t(\omega))\|_{L^{2}(\mathbb{R})}\leq\|u^{\ast}\|_{L^{\infty}\left([0,T_{0}];L^{2}(\mathbb{R})\right)}. $$
Thus, we can construct a solution on [Tω,2Tω]. Hence, by iterating this procedure, the solution u can be extended to [0,T0] almost surely. This completes the proof of Theorem 2.
Summary and discussion
This paper employs the Fourier restriction method, the Banach contraction principle, and some basic inequalities to investigate nonlinear SPDEs and to prove local and global well-posedness results for their solutions in suitable function spaces. Our attention is focused on the stochastic Kawahara Eq. (1), which is a fifth-order shallow water wave equation considered in a random environment. We prove that Eq. (1) is locally well-posed for data in \(H^{s}(\mathbb {R})\), s>−7/4, and that its solution can be extended to a global one on [0,T0]. The Fourier restriction method is proposed due to the non-zero singularity of the phase function ϕ.
The deterministic Kawahara Eq. (7) was discussed by Jia and Huo in [18]. They proved a local well-posedness result for data in \(H^{s}(\mathbb {R})\), s>−7/4. In this paper, we extend their result and handle the stochastic version of the Kawahara equation by choosing new appropriate stochastic function spaces (such as the space \(\mathfrak {X}_{s,b}^{T}\)) and estimating the stochastic convolution (12) in these spaces. That is, we consider a realistic situation for the fifth-order shallow water wave equations. We believe that the ideas suggested in this paper can also be applied to a wide class of stochastic nonlinear evolution equations in the field of mathematical physics, for instance, the stochastic modified Kawahara, generalized KdV, Hirota-Satsuma coupled KdV, and Sawada-Kotera equations.
KdV:
Korteweg-de Vries
SPDEs:
Stochastic partial differential equations
Ponce, G.: Lax pairs and higher order models for water waves. J. Differ. Equat. 102, 360–381 (1993).
Bourgain, J.: Fourier restriction phenomena for certain lattice subsets and applications to nonlinear evolution equations, part I: Schrödinger equation, part II: the KdV equation. Geom. Funct. Anal. 2(107-156), 209–262 (1993).
Bona, J. L., Smith, R. S.: A model for the two-ways propagation of water waves in a channel. Math. Proc. Cambridge Philos. Soc. 79, 167–182 (1976).
Kawahara, T.: Oscillatory solitary waves in dispersive media. J. Phys. Soc. Jpn. 33, 260–264 (1972).
Kichenassamy, S., Olver, P. J.: Existence and nonexistence of solitary wave solutions to higher-order model evolution equations. SIAM J. Math. Anal. 23, 1141–1166 (1992).
Akylas, T. R.: On the excitation of long nonlinear water waves by a moving pressure distribution. J. Fluid Mech. 141, 455–466 (1984).
Wu, T. Y.: Generation of upstream advancing solitons by moving disturbances. J. Fluid Mech. 184, 75–99 (1987).
Ghany, H. A., Hyder, A.: White noise functional solutions for the Wick-type two-dimensional stochastic Zakharov-Kuznetsov equations. Int. Rev. Phys. 6, 153–157 (2012).
Ghany, H. A., Okb El Bab, A. S., Zabal, A. M., Hyder, A.: The fractional coupled KdV equations: exact solutions and white noise functional approach, Vol. 22 (2013).
Ghany, H. A., Hyder, A.: Exact solutions for the Wick-type stochastic time-fractional KdV equations. Kuwait J. Sci. 41, 75–84 (2014).
Ghany, H. A., Hyder, A.: Abundant solutions of Wick-type stochastic fractional 2D KdV equations, Vol. 23 (2014).
Ghany, H. A., Elagan, S. K., Hyder, A.: Exact travelling wave solutions for stochastic fractional Hirota-Satsuma coupled KdV equations. Chin. J. Phys. 53, 1–14 (2015).
Ghany, H. A., Hyder, A., Zakarya, M.: Non-Gaussian white noise functional solutions of χ-Wick-type stochastic KdV equations. Appl. Math. Inf. Sci. 11, 915–924 (2017).
Hyder, A., Zakarya, M.: Non-Gaussian Wick calculus based on hypercomplex systems. Int. J. Pure Appl. Math. 109, 539–556 (2016).
Ghany, H. A., Zakarya, M.: Generalized solutions of Wick-type stochastic KdV-Burgers equations using exp-function method. Int. Rev. Phys. 8, 38–46 (2014).
Da Prato, G., Zabczyk, J.: Stochastic equations in infinite dimensions. Cambridge University Press, Cambridge (1992).
Huo, Z.: The Cauchy problem for the fifth-order shallow water equation. Acta Math. Appl. Sin. Engl. Ser. 21, 441–454 (2005).
Jia, Y., Huo, Z.: Well-posedness for the fifth-order shallow water equations. J. Diff. Equat. 246, 2448–2467 (2009).
Tao, S. P., Cui, S. B.: Local and global existence of solutions to initial value problems of nonlinear Kaup-Kupershmidt equations. J. Acta Math. Sin. Engl. Ser. 21, 881–892 (2005).
Zhao, X. Q., Gu, S. M.: Local solvability of Cauchy problem for Kaup-Kupershmidt equation. J. Math. Res. Exposition. 30, 543–551 (2010).
Kenig, C. E., Ponce, G., Vega, L.: A bilinear estimate with applications to the KdV equation. J. Amer. Math. Soc. 9, 573–603 (1996).
de Bouard, A., Debussche, A.: On the stochastic Korteweg-de Vries equation. J. Funct. Anal. 154, 215–251 (1998).
de Bouard, A.: White noise driven Korteweg-de Vries equation. J. Funct. Anal. 169, 532–558 (1999).
Ghany, H. A., Hyder, A.: Local and global well-posedness of stochastic Zakharov- Kuznetsov equation. J. Comput. Anal. Appl. 15, 1332–1343 (2013).
Printems, J.: The stochastic Korteweg-de Vries equation in \(L^{2}(\mathbb {R})\). J. Diff. Equat. 153, 338–373 (1999).
The authors are very thankful to the editor and referees for their valuable comments and suggestions.
Department of Mathematics, College of Science, King Khalid University, Abha, P.O. Box 9004, Saudi Arabia
Abd-Allah Hyder & M. Zakarya
Department of Engineering Mathematics and Physics, Faculty of Engineering, Al-Azhar University, Cairo, 11371, Egypt
Abd-Allah Hyder
Department of Mathematics, Faculty of Science, Al-Azhar University, Assiut, 71524, Egypt
M. Zakarya
All authors jointly worked on the results and they read and approved the final manuscript.
Correspondence to Abd-Allah Hyder.
Hyder, AA., Zakarya, M. The well-posedness of stochastic Kawahara equation: fixed point argument and Fourier restriction method. J Egypt Math Soc 27, 5 (2019). https://doi.org/10.1186/s42787-019-0006-0
Kawahara equation
Well-posedness
Wiener process
Fixed point theorem
Fourier restriction method
nature human behaviour
Assessing the risks of 'infodemics' in response to COVID-19 epidemics
Riccardo Gallotti ORCID: orcid.org/0000-0002-8088-19731,
Francesco Valle1,
Nicola Castaldo ORCID: orcid.org/0000-0002-2034-40331,
Pierluigi Sacco ORCID: orcid.org/0000-0002-5559-28892,3 &
Manlio De Domenico ORCID: orcid.org/0000-0001-5158-85941
Nature Human Behaviour volume 4, pages 1285–1293 (2020)
Complex networks
Science, technology and society
During COVID-19, governments and the public are fighting not only a pandemic but also a co-evolving infodemic—the rapid and far-reaching spread of information of questionable quality. We analysed more than 100 million Twitter messages posted worldwide during the early stages of epidemic spread across countries (from 22 January to 10 March 2020) and classified the reliability of the news being circulated. We developed an Infodemic Risk Index to capture the magnitude of exposure to unreliable news across countries. We found that measurable waves of potentially unreliable information preceded the rise of COVID-19 infections, exposing entire countries to falsehoods that pose a serious threat to public health. As infections started to rise, reliable information quickly became more dominant, and Twitter content shifted towards more credible informational sources. Infodemic early-warning signals provide important cues for misinformation mitigation by means of adequate communication strategies.
The recent explosion of publicly shared, decentralized information production that characterizes digital societies1 and in particular social media activity2 provides an exceptional laboratory for the observation and study of complex social dynamics3, and potentially functions as a laboratory to understand, test and validate possible solutions to large-scale crises4. Pandemics are an instance of such crises, and the current outbreak of COVID-19 may therefore be thought of as a natural experiment to observe social responses to a major threat that may escalate to catastrophic levels and has already managed to seriously affect levels of economic activity and radically alter human social behaviours across the globe. In this study, we show that information dynamics tailored to alter individuals' perceptions, and potentially their behavioural responses, is associated with a shift of collective attention5 towards false6,7 or inflammatory8 content, a phenomenon named infodemic (that is, an epidemic of information)9,10,11,12, sharing similarities with more traditional epidemics and spreading phenomena13,14,15.
Contrary to what could be expected in principle, this natural experiment reveals that, on the verge of a threatening global pandemic emergency due to SARS-CoV-2 (refs. 16,17,18), human communication activity is largely characterized by the production of informational noise and even of misleading or false information19. This generates waves of unreliable and low-quality information with potentially dangerous impacts on society's capacity to respond adaptively at all scales by rapidly adopting those norms and behaviours that may effectively contain the propagation of the pandemic20. Spreading false or misleading information may prevent the timely and effective adoption of appropriate behaviours and of public health recommendations or measures21. Therefore, on the one hand, we face the threats of a pandemic, which spreads in the absence of effective therapies and valid countermeasures and calls for major efforts to model and anticipate the time course of its diffusion18. On the other hand, we can speak of an infodemic threat22, which proliferates when credible information sources fail to capture the attention and trust of some parts of the public, for whom alternative, low-quality sources are more appealing as they capture more social attention23, better match their own beliefs or prejudices24, or sound more convincing, thanks to their typically straightforward messages25.
The appeal of low-quality, misleading or manipulative information relies on simple, effective psychological mechanisms, such as curbing anxiety by denying or minimizing the seriousness of the threat; controlling fear and anger by targeting scapegoat individuals, groups or institutions as the ones responsible for the crisis; and delivering an illusory sense of control through the provision of 'miracle' remedies. Similarly to epidemics, infodemics could be thought of as outbreaks of false rumours and unreliable news26,27 with unexpected effects on social dynamics (Fig. 1), which can substantially increase epidemic spread. Infodemics call for suitable policy interventions built on state-of-the-art social and behavioural research28.
Fig. 1: How infodemics work.
Human (circles) and non-human (squares) accounts participate in the spread of news across a social network. Some users (A and B) create unreliable content, such as false or untrustworthy news or unsupported claims, while others (C) create content informed by reliable sources. When the topic attracts worldwide attention as in the case of COVID-19, the volume of information circulating makes it difficult to orientate oneself and to identify reliable sources. Indeed, some users (D) might be exposed to unreliable information only, while others (E and F) might receive contradictory information and become uncertain as to what information to trust. This is exacerbated when multiple spreading processes co-occur, and some users might be exposed multiple times to the same content or to different contents generated by distinct accounts.
As shown in Fig. 1, an infodemic is the result of the simultaneous action of multiple human and non-human sources of unreliable or misleading news in times of great abundance of circulating information. Note that, although this study does not directly deal with non-human accounts and their role in (mis-)information diffusion, we include them in the figure because they are known to be important contributors of noise in online social media7,8,29,30,31. As users are repeatedly hit by a given message from different sources, this works as an indirect validation of its reliability and relevance, leading the user to spread it in turn and to become a vector of dangerously misleading information.
The COVID-19 crisis allows us to provide an evidence-based assessment of such risks and of the real-time interaction of infodemic and epidemic spread14. We focus our attention on the analysis of messages posted on Twitter32, an online social network characterized by heterogeneous connectivity33 and topological shortcuts typical of small-world systems34. Information spread on this type of network is well understood in terms of global cascades in a population of individuals who have to choose between complementary alternatives, while accounting for the behaviour and the relative size of the individuals' social neighbourhoods35, as well as for factors that characterize the popularity of specific content, such as the memory time of users and the underlying connectivity structure36. However, the exact mechanisms responsible for the spread of false information and inflammatory content, for example during political events8,30,37,38, remain fundamentally unknown. Recently, it has been suggested that this challenging phenomenon might exist because, at a population level, the dynamics of multiple interacting contagions are indistinguishable from social reinforcement39.
This feature reinforces the increasing consensus around the idea that infodemics of news consumption should be analysed through the lens of epidemiology9,40 to gain insights about the role of online activities in spreading reliable as well as unreliable news. To this end, we monitored Twitter activity and collected more than 112 million messages using a selection of words commonly used in the medical discourse about COVID-19, between 22 January and 10 March 2020 (see Methods for the details). The messages were in 64 languages from around the world, but because of our data filtering and enrichment procedures, the largest fraction of analysed messages point to English-language sources. As a result, the findings reported in this study mostly capture the behaviour of the English-speaking portion of Twitter users, while in the majority of countries included in our analysis, English is not an officially spoken language. Additionally, Twitter demographics are not representative of the general population—there is overrepresentation of the highly educated, working-age male population. Moreover, limiting the focus to medical terminology clearly narrows the scope of our search and is a further limitation of our work. However, it allows us use terms such as 'coronavirus' and 'covid19' that are interculturally consistent and used in several languages not depending on local idiomatic usages and variants. We describe in detail the limitations of our dataset in the Discussion and Methods.
Where available, we extracted URLs from messages, collecting approximately 20.7 million links (3.3 million unique) pointing to websites external to the platform. Each URL was then subjected to our source reliability rating method, inheriting the reliability of its source (Methods, Table 1 and Supplementary Fig. 1). We successfully associated approximately 50% of URLs with a reliability rating by screening almost 4,000 expert-curated web domains; the remaining corpus pointed to disappeared web pages or to content not classifiable automatically (for example, videos on YouTube) and rarely shared sources.
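To make this step concrete, here is a minimal sketch of a URL-to-rating lookup. The regular expression, the example domains, the category labels and the coarse reliable/unreliable split are all our illustrative assumptions, not the curated database used in the paper.

```python
import re
from urllib.parse import urlparse

# Hypothetical excerpt of an expert-curated domain -> category table
DOMAIN_RATING = {
    "nature.com": "SCIENCE",
    "nytimes.com": "MSM",
    "examplehoax.net": "FAKE/HOAX",       # made-up domain, for illustration only
}
RELIABLE = {"SCIENCE", "MSM"}             # assumed coarse-grained split

URL_RE = re.compile(r"https?://\S+")

def rate_message(text):
    """Return r_m in {0, 1}, or None if no classifiable link is found."""
    for url in URL_RE.findall(text):
        host = urlparse(url).netloc.lower()
        domain = ".".join(host.split(".")[-2:])   # crude registered-domain guess
        if domain in DOMAIN_RATING:
            return int(DOMAIN_RATING[domain] in RELIABLE)
    return None

print(rate_message("read this: https://www.nytimes.com/2020/03/story.html"))  # -> 1
```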
Table 1 Description of the nine categories of news in our classification
Our method allowed us to overcome the limitations due to text mining of different languages for the analysis of narratives. However, this step in our analysis is predominantly based on sources in English, and this prevents us from covering and representing local discourses that mostly use local languages.
To better understand the diffusion of these messages across countries, we filtered messages that included geographic information. Approximately 0.84% of the collected posts were geotagged by the user, providing highly accurate information about their geographic location. By geocoding the information available in users' profiles, we were able to extend the corpus of geolocated messages to approximately 50% of the total observed volume (Fig. 2 and Methods). We therefore analysed more than 60 million geolocated messages, containing more than 9 million news links.
Fig. 2: The evolution of Twitter activity about the COVID-19 pandemic.
We observe a first increase in collective attention after the outbreak in Wuhan, China (between 24 January and 2 February 2020), and a second strong rise after the epidemics began to spread in northern Italy (20 February 2020 onwards). The fraction of geolocated messages (messages with shared locations, or geonamed, indicated in green) is constantly approximately 50% of the total volume recorded (indicated in blue). From 26 February, we reached the limit of the fraction of data shared by Twitter (Methods), missing an increasing fraction of Tweets (indicated in red).
For each message, we applied a distinction between verified and unverified users. Usually, verification is performed by the social platform to clearly identify accounts of public interest and certify that they are authentic. The number of followers Ku of a single user u defines the exposure (see Supplementary Note 1 for further details), in terms of potential visualizations at first-order approximation, of a single message m posted by user u at time t. Let Mu(t,t + Δt) indicate the set of messages posted by user u in a time window of length Δt. Since there are two different classes of users—verified (V) and unverified (U) accounts—we define the partial exposure (E) due to a single class Ci (i = V,U) as
$$E_i\left( t,t + \Delta t \right) = \sum\limits_{u \in C_i} \; \sum\limits_{m \in M_u\left( t,t + \Delta t \right)} K_u$$
Note that different users of the same class might have overlapping social neighbourhoods: those neighbours might be reached multiple times by the messages coming from distinct users of the same class; therefore, our measure of exposure accounts for this effect. Note also that our measure provides a lower bound to the number of exposed users, because we do not track higher-order transmission pathways: a user might read a news item included in a message but not share it further. There is no way to account for such users.
The assumption that all followers of a specific user u will be reached by posted messages is clearly unrealistic. In Supplementary Note 1, we provide a mathematical extension of the definition of exposure from equation (1), which allows one to relax this assumption on the basis of a recent study7 and a mean-field model, without altering the quantitative analysis presented in this study.
Finally, for each message, we identified the presence of links pointing to external websites, and for each link, we verified whether it came from a trustworthy source or not (Methods). The reliability rm of a single message m is either 0 or 1, because we discarded all web links that could not be easily assessed (such as ones shortened by third-party services) or that pointed to external platforms (such as YouTube) where it is not possible to automatically classify the reliability of the content. The news reliability of messages produced by a specific class of users (Ri) is therefore defined as
$$R_i\left( t,t + \Delta t \right) = \sum\limits_{u \in C_i} \; \sum\limits_{m \in M_u\left( t,t + \Delta t \right)} r_m$$
Unreliability can be defined similarly by replacing rm with 1 − rm. Exposure and reliability are useful descriptors but do not fully suffice to assess the risk of infodemics. For this reason, we developed an Infodemic Risk Index (IRI), which quantifies the rate at which a generic user is exposed to unreliable news produced by a specific class of users (partial IRI, equation (3)) or by any class of users (IRI, equation (4)):
$$p{\mathrm{IRI}}_i\left( t,t + \Delta t \right) = \frac{\sum\limits_{u \in C_i} \sum\limits_{m \in M_u\left( t,t + \Delta t \right)} K_u\left( 1 - r_m \right)}{\sum\limits_{j} E_j\left( t,t + \Delta t \right)}$$
$${\mathrm{IRI}}\left( t,t + \Delta t \right) = \sum\limits_i p{\mathrm{IRI}}_i\left( t,t + \Delta t \right)$$
Both indices are well defined and range from 0 (no infodemic risk) to 1 (maximum infodemic risk). Note that we can calculate all the infodemic descriptors introduced above at a desired level of spatial and temporal resolution.
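As a minimal sketch of how equations (1)–(4) translate into code, consider the following; the field names and the toy numbers are our own assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Message:
    followers: int    # K_u of the posting user
    verified: bool    # class C_V (True) or C_U (False)
    reliable: int     # r_m in {0, 1}, from the source-rating step

def infodemic_indices(messages):
    exposure = {True: 0, False: 0}              # E_V, E_U as in equation (1)
    unreliable_exposure = {True: 0, False: 0}   # numerators of equation (3)
    for m in messages:
        exposure[m.verified] += m.followers
        unreliable_exposure[m.verified] += m.followers * (1 - m.reliable)
    total_exposure = (exposure[True] + exposure[False]) or 1   # guard empty windows
    p_iri = {c: unreliable_exposure[c] / total_exposure for c in (True, False)}
    iri = p_iri[True] + p_iri[False]            # equation (4)
    return p_iri, iri

msgs = [Message(10_000, True, 1), Message(500, False, 0), Message(2_000, False, 1)]
print(infodemic_indices(msgs))   # IRI = 500 / 12_500 = 0.04
```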
Figure 3 shows how countries characterized by different levels of infodemic risk present very different profiles of news sources, which appear not to be strictly correlated with the level of socio-economic development (Supplementary Fig. 2). In low-risk countries such as Canada and South Korea, the level of infodemic risk remains small throughout the period of study, apart from isolated spikes mostly associated with unverified sources. As the epidemic spreads to important levels, infodemic risk further decreases, signalling an increasing focus of the public towards reliable news sources. By contrast, in a high-risk country such as Venezuela, the infodemic is pronounced throughout the period of observation, and in addition to the expected activity from unverified sources, even verified sources contribute to a large extent to the infodemic. Finally, in a relatively high-risk country such as Russia, infodemic risk is erratic with sudden, very pronounced spikes, and again verified sources play a major role. Here too, information about the epidemic is fragmented and mostly unreliable.
Fig. 3: Mapping infodemic risk worldwide.
The infodemic risk of each country, aggregated over time, is colour-coded on the map. The panels show the evolution of risk over time for a sample of countries; the bars indicate the partial contributions of verified and unverified users to the overall risk and the dashed lines represent the cumulative mean of the IRI at a given day d (computed as the ratio between the cumulative sum of the daily IRI in the days between 22 January and d, and the number of days between these two dates). Risk evolution for the whole world is also shown, demonstrating an overall decrease of risk over time (bottom middle panel, where the grey line represents a LOESS regression with R2 = 0.29). The markers horizontally aligned at the top of each panel indicate the daily confirmed epidemiological cases, with their number encoded by the markers' sizes (Venezuela does not contain epidemiological markers as no confirmed cases were reported at the time of the analysis). Map made with public domain Natural Earth data.
Overall, the global level of infodemic risk tends to decrease as COVID-19 spreads globally, suggesting that epidemic spread leads people to look for relatively more reliable sources. It also suggests that verified influencers with many followers started to point to more reliable news (Supplementary Figs. 3 and 4 and Supplementary Note 2), possibly shifting the state of the infodemic towards a clearer information landscape where it is easier to orientate and to identify unreliable facts.
In the case of Italy, where the epidemic struck the country heavily within the window of observation of the current study, we observe in coincidence with the first verified domestic contagions a sudden, clear increase in national Google searches for the best-known Italian virologists as they gained substantial visibility on national mainstream media (Supplementary Note 2). Our data do not allow us to establish a causal relationship between the sudden increase in popularity and media exposure of such experts and the shift in focus from unreliable to reliable sources in online social media conversations. However, it is likely that a spillover effect has occurred, contributing at least partly to this shift, as Italian Twitter is known to be very reflective of trending personalities and topics from the mainstream media41. This overall pattern, linking the local spread of the epidemics to the diffusion of more reliable information, is confirmed in terms of measures of infodemic risk aggregated daily and at the country level (Fig. 4 and Supplementary Figs. 5 and 6). This pattern is particularly pronounced with the escalation of the epidemic, suggesting that the effect could be mediated by levels of perceived social alarm.
Fig. 4: Reduction of infodemic risk after COVID-19 reaches countries.
Aggregated view of the evolution of the IRI for increasing numbers of reported cases. For each day and each of the 162 countries considered in our analysis, we compute the cumulative mean of the IRI at a given day d (computed as the ratio between the cumulative sum of the daily IRI in the days between 22 January and d, and the number of days between these two dates). We aggregate days and countries with a similar cumulative number of reported cases, using bins of increasing size to compensate for the limited number of countries that reached high levels of contagion at the time of the analysis and reporting the average value on the x axis. This allows us to describe the drop in IRI as the number of cases grows in a country using box plots. In box plots, the centre lines represent the medians, the boxes the range between the 25th and 75th percentile, and the whiskers the range between the smallest and largest data point, excluding outliers, which are represented as circles. Therefore, the difference between two boxes is statistically significant when each middle line lies outside of the other box. On the basis of the results of both a one-way ANOVA (F statistic (degrees of freedom), 18.86 (5); P < 0.001; effect size, F = 0.05; 95% confidence interval, (0.03, 0.06); the data distributions were assumed to be normal, but this was not formally tested) and a Kruskal–Wallis rank sum test (test statistic (degrees of freedom), 137.14 (5); P < 0.001; effect size, F = 0.0677; 95% confidence interval, (0.0501, 0.0918); no assumptions are needed to use this non-parametric test), there is evidence of a statistically significant effect (P < 0.001 for both tests) of the number of reported cases on the IRI cumulative mean. In Supplementary Fig. 4, we provide further tests illustrating the significant difference between each pair of boxes except pairs 3–7 with 1–2 and with 8–15 and pair 16–50 with 51–9,999, where the differences are not statistically significant.
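In code, the binning behind this figure amounts to a cut-and-group operation over (country, day) records. The sketch below uses our own column names and the bin edges quoted in the caption, so it is indicative only.

```python
import pandas as pd

def iri_by_case_bins(df):
    """df: one row per (country, day), with columns 'cum_cases' (cumulative
    reported cases) and 'iri_cum_mean' (cumulative mean of the daily IRI)."""
    bins = [0, 2, 7, 15, 50, 9_999, float("inf")]
    labels = ["1-2", "3-7", "8-15", "16-50", "51-9999", ">9999"]
    binned = df.assign(case_bin=pd.cut(df["cum_cases"], bins=bins, labels=labels))
    return binned.groupby("case_bin")["iri_cum_mean"].describe()
```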
In principle, countries with high infodemic risk could also present more reliability issues in terms of reporting of epidemic data, thus altering the perceptions of the public and indirectly misleading them in their search for reliable information. In fact, there have been cases of countries with high infodemic risk where political leaders have actively spread misleading information and openly questioned the necessity to accurately track and measure the development of the epidemic diffusion, as well as the reliability of fact-checking sources42,43,44,45. Our results, though, do not provide direct supporting evidence for this possibility, and this remains an open question for future research.
The dynamic profiles of infodemic risk in countries with similar risk levels may also be very different. Figure 5 compares Italy with the United States. In the case of Italy, the risk is mostly due to the activity of unverified sources, but we notice that with the outbreak of the epidemic, the production of misinformation collapses, and there is a sudden shift to reliable sources. In the United States, misinformation is mainly driven by verified sources, and it remains essentially constant even after the epidemic outbreak. Notice also how infodemic risk varies substantially across US states. As the United States lagged far behind Italy in terms of epidemic progression during our time window, it remains to be seen whether a similar readjustment can be observed for the United States later on. Figure 5 shows, however, that the relationship between the reduction of infodemic risk and the spread of the epidemic seems to be a rather general trend, as the relationship between the number of confirmed cases and infodemic risk is (nonlinearly) negative, confirming the result shown in Fig. 4. Figure 5 also shows how the evolution of infodemic risk among countries with both high message volume and considerable epidemic contagion tends to be very different. The IRI maintained its relatively high level not only in countries such as Iran but also in the United States, Germany, the Netherlands, Sweden and Norway. Conversely, in other countries such as Italy, South Korea and Japan, the IRI substantially dropped with the progression of the epidemics.
Fig. 5: Infodemic evolution is country dependent.
a, As in Fig. 3, for the European Union and the United States at a finer resolution, with a detailed map for Italy (regional resolution). Areas with fewer than ten messages were excluded from the analysis and are colour-coded in grey. Note the striking drop in the Italian IRI coinciding with the first official report of non-imported epidemiological cases. b, Risk evolution for countries characterized by a high volume of messages per day (at least one day with more than 2,000) and a high number of epidemiological cases (at least one day with more than 100). This picture illustrates, with the same colour legend as in the maps, how the temporal pattern of the infodemic is strongly localized and depends on the online discourse of each country. c, The number of epidemiological cases is shown against the IRI for all countries with at least one confirmed COVID-19 case. The countries are coloured according to their continent, with dot sizes proportional to the daily volume of messages generated. The black dashed curve encodes a local polynomial regression fit, here shown as a guide for the eye to highlight the highly nonlinear pattern relating epidemic and infodemic indices, while the shaded area and the solid red line encode a simple linear regression fit with a 95% confidence interval illustrating an anticorrelation (Spearman's r, −0.42; confidence interval, (−0.60, −0.24)). China is an outlier due to its role in the global epidemic in terms of the timing and size of the contagion, which makes it difficult to compare it with other countries; it has therefore been removed from this analysis. Maps made with public domain Natural Earth data, which also define the country abbreviation codes used in b and c.
Our findings show that, in a highly digital society, the epidemic and the infodemic dimensions of COVID-19 co-evolve. The infodemic dimension is driven by a heterogeneous set of actors who pursue largely undisclosed goals.
Given the lack of pharmacological interventions to combat COVID-19, responsible behaviours driven by reliable information at all scales are key for the mitigation of adverse effects. It may therefore be important to develop integrated public health approaches, where the biological and informational dimensions of an epidemic are equally recognized, taken into account and managed through careful policy design.
Here, we have shown that in the context of the COVID-19 crisis, complex infodemic effects are indeed at work, with remarkable variations across countries, and the level of socio-economic development is not the key discriminant to separate countries with high versus low infodemic risk. In fact, we find that there are G8 countries with remarkable infodemic risk (for example, Russia and Germany) and developing countries with far lower risk levels (for example, Thailand and the Philippines). This means that, especially in countries where infodemic risk is high, the eventual speed and effectiveness of the containment of COVID-19 could depend on a prompt policy switch in communication strategies and in the effective countervailing of the most active sources of unreliable news. The escalation of the epidemics leads people to progressively pay attention to more reliable sources, thus potentially limiting the impact of infodemics, but the actual speed of adjustment may make a major difference in determining the social outcome (and in particular between a controlled epidemic and a global pandemic).
Our study is characterized by important limitations. A key limitation of any data collection from social media content is that each social medium has a specific demographic that is not representative of the whole population, so that different social media are biased in different directions46. However, social media platforms offer unique opportunities to collect very large volumes of data in real time on key social phenomena, and currently there are no viable alternatives for the collection of similar amounts of data in an equally timely way from other sources. There is currently no means of obtaining representative data worldwide relying only on online sources, yet the collection of offline sources presents other substantial limitations. In fact, before the advent of social media, it would have been unthinkable to carry out analyses of social phenomena at this scale in real time. Our focus on Twitter means that our reference population tends to be highly educated, working age and male, and our filter selection and source reliability database exacerbate this bias towards English-speaking users. One way to tackle this problem in future research is to extend data collection to several social media platforms at once, but there is a clear trade-off between intensively collecting large volumes of data on a single platform and extensively collecting data from multiple platforms with smaller volumes for each. Moreover, joint collection from multiple biased sources remains biased in principle, although the overall bias becomes less controllable. We consider our approach as a first step, with clear limitations, which may provide a benchmark for more comprehensive future approaches.
There are several important questions and goals for future research. We highlight four: (1) a better understanding of the role of artificial agents (bots) in infodemics, (2) the development of truly multilingual corpora and source reliability databases, (3) the extension of text mining to multiple social media platforms while maintaining the highest possible volumes of mined content from each source, and (4) building a representative sample of the global population through a suitable integration of online and offline sources. These are formidable challenges, but their urgency and relevance do not need much argumentation. We look forward to the future developments of what promises to be an emerging discipline with key theoretical and policy implications.
We followed a methodology for collecting social media data consolidated over the years. We focused on Twitter, which is well known for providing access to publicly available messages upon specific requests through their application programming interface (API). We identified a set of hashtags and keywords gaining collective attention since the first recorded cases of COVID-19: coronavirus, ncov, #Wuhan, covid19, covid-19, sarscov2 and covid. This set includes the official names of the virus and the disease, including the early tentative ones, as well as the name of the city where the first cases of COVID-19 were recorded. We estimate the recall rate for these keywords to be higher than 16% and probably in the 40%–60% range at the time of recording (see Supplementary Note 3 for more details). We used the Filter API to collect the data in real time from 24 January 2020 to 10 March 2020 and the Search API to collect the data between 21 January 2020 and 24 January 2020. Our choice allowed us to monitor, without interruptions and regardless of the language, all the tweets posted about COVID-19 since 22 January 2020, when China reported more than 6,000 cases, calling for the attention of the international community. The Stream API has the advantage of providing all the messages satisfying our selection criteria and posted to the platform in the period of observation, provided that their volume is not larger than 1% of the overall (unfiltered) volume of posted messages. Above 1% of the overall flow of information, the Filter API provides a sample of filtered tweets and communicates an estimate of the number of lost messages. Note that this choice is the most reliable to date: in fact, it was recently shown that biases affecting the Sample API (which samples data on the basis of rate limits), for instance, are not found in the REST and Filter APIs47. In Supplementary Note 4, we show how this problem does not affect our data.
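As a rough, client-side stand-in for the server-side filtering, the snippet below reproduces the tracking logic only approximately (the real Filter API matches tokenized keywords, while this sketch uses case-insensitive substring matching):

```python
KEYWORDS = ["coronavirus", "ncov", "#wuhan", "covid19", "covid-19", "sarscov2", "covid"]

def matches_track(text):
    """Approximate stand-in for the Filter API 'track' parameter."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in KEYWORDS)

print(matches_track("New #Wuhan update on the 2019-nCoV outbreak"))  # True
```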
We estimate that until 24 February 2020, we lost approximately 60,000 tweets out of millions, capturing more than 99.5% of all messages posted (Fig. 2). The global attention towards COVID-19 increased the volume of messages after 25 February 2020; however, Twitter restrictions allowed us to get no more than 4.5 million messages per day, on average. We have estimated a total of 161.2 million tweets posted until 10 March 2020; we have successfully collected 112.6 million of them.
The user's self-declared location field was used for geocoding with the ArcGIS API. For approximately 56% of users, we had a response in terms of latitude and longitude. However, a large portion of these answers (about 10%) were associated with a small number (~1,600) of wrongly attributed locations that were removed (reaching the 50% ratio indicated in the main text). These errors were mostly caused by the use of non-toponyms in the location field such as 'Home' or 'Somewhere', or other pieces of information (such as Instagram and website URLs), which were wrongly associated with real locations. We identified these errors by isolating single locations associated with a large number of different unique user-defined location strings. Finally, we also filtered out names of continents that were correctly geocoded but do not match the country-based granularity we set for our analysis. The reliability of our method was tested by comparing geocoded and georeferenced data for the United States (Supplementary Note 5).
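The pruning heuristic just described, namely flagging coordinates that absorb implausibly many distinct self-declared strings, can be sketched as follows (the column names and the threshold are our assumptions):

```python
import pandas as pd

def prune_geocoding(df, max_unique_strings=50):
    """df columns: 'user_location' (raw profile string), 'lat', 'lon'.

    Drop coordinates onto which too many distinct self-declared strings
    were geocoded (e.g. 'Home' or 'Somewhere' all mapped to one point).
    """
    counts = df.groupby(["lat", "lon"])["user_location"].nunique()
    bad_coords = counts[counts > max_unique_strings].index
    is_bad = df.set_index(["lat", "lon"]).index.isin(bad_coords)
    return df[~is_bad]
```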
Source reliability rating
We collected manually checked web domains from multiple publicly available databases, including scientific and journalistic ones. Specifically, we considered data shared by the sources listed in refs. 48,49,50,51,52,53,54,55,56.
The databases adopted different labelling schemes to classify web domains. We therefore first had to develop a unifying classification scheme, reported in Table 1, and map all existing categories into a unique set of categories. Note that we have also mapped those categories into a coarse-grained classification scheme, distinguishing between reliable and unreliable.
We found a total of 4,988 domains, reduced to 4,417 after removing hard duplicates across databases. Note that a domain is considered a hard duplicate if its name and its classification coincide across databases.
A second level of filtering was applied to domains that are classified differently across databases (for example, xyz.com might be classified as FAKE/HOAX in a database and as SATIRE in another database). To deal with these cases, we adopted our own classification method by assigning to each category a Harm Score (HS) between 1 and 9. When two or more domains were soft duplicates, we kept the classification with the highest HS, as a conservative choice. This phase of processing reduced the overall database to 3,920 unique domains.
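A conservative merge of this kind can be sketched as follows; the numeric scores simply encode the ordering described in the next paragraph, and the category labels are abbreviations of Table 1, so the snippet is indicative rather than the authors' exact procedure.

```python
HARM_SCORE = {   # 1-9 ordering as described in the text
    "SCIENCE": 1, "MSM": 2, "SATIRE": 3, "CLICKBAIT": 4, "OTHER": 5,
    "SHADOW": 6, "POLITICAL": 7, "FAKE/HOAX": 8, "CONSPIRACY/JUNKSCI": 9,
}

def merge_databases(databases):
    """databases: list of dicts mapping domain -> category.

    Hard duplicates (same domain and category) collapse silently; soft
    duplicates keep the category with the highest Harm Score.
    """
    merged = {}
    for db in databases:
        for domain, category in db.items():
            if domain not in merged or HARM_SCORE[category] > HARM_SCORE[merged[domain]]:
                merged[domain] = category
    return merged

db_a = {"xyz.com": "SATIRE"}
db_b = {"xyz.com": "FAKE/HOAX"}          # soft duplicate; HS 8 beats HS 3
print(merge_databases([db_a, db_b]))     # {'xyz.com': 'FAKE/HOAX'}
```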
The HS classifies sources in terms of their potential contribution to the manipulative and misinformative character of an infodemic. As a general principle, the more systematic and intentionally harmful the knowledge manipulation and data fabrication, the higher the HS. "Science" or "Scientific" content has the lowest level of HS due to the rigorous process of validation carried out through scientific methods. "Mainstream media" content has the second lowest level of HS due to its constant scrutiny in terms of fact checking and media accountability. "Satire" is an unreliable source of news, but due to its explicit goal of distorting or misrepresenting information according to specific cultural codes of humour and social critique, it is generally identified with ease as an unreliable source. "Clickbait" is a more dangerous source (and thus ranks higher in HS) due to its intent to pass fabricated or misrepresented information for facts, with the main purpose of attracting attention and online traffic (that is, for mostly commercial purposes), but without a clear ideological intent. "Other" is a general-purpose category that contains diverse forms of (possibly) misleading or fabricated content, not easily classifiable but probably including bits of ideologically characterized content pursuing systematic goals of social manipulation, and thus ranking higher in HS. "Shadow" is a similar category to the previous one, where links are anonymized and often temporary (for example, bit.ly and dlvr.it), thereby adding an extra element of unaccountability and manipulation that translates into a higher level of HS. Known vanity URL shorteners such as nyti.ms for the New York Times and wpo.st for the Washington Post are automatically associated with the source. "Political" is a category where we find an ample spectrum of content with varying levels of distortion and manipulation of information, also including mere selective reporting and omission, whose goal is to build consensus on a polarized political position against others; this category therefore directly aims at conditioning the public discourse and opinion making, with a higher HS than the previous categories. The majority of web domains listed in this category overlap with 'left' and 'right' categories as defined by the MediaBiasFactCheck source, while domains labelled as left-centre and right-centre are considered Mainstream media. "Fake or hoax" contains entirely manipulated or fabricated inflammatory content that is intended to be perceived as realistic and reliable and whose goal may also be political but fails to meet the basic rules of plausibility and accountability, thus reaching an even higher level of HS. Finally, the highest level of HS is associated with "Conspiracy and junk science"—that is, with strongly ideological, inflammatory content that aims at building conceptual paradigms that are entirely alternative and oppositional to tested and accountable knowledge and information, with the intent of building self-referential bubbles where loyal audiences simply refuse a priori any kind of knowledge or information that is not legitimized by the alternative source itself or by recognized affiliates, as is typical of sects of a religious or other nature.
A third level of filtering concerned poorly defined domains—for example, the ones explicitly missing top-level domain names (such as ".com" or ".org")—as well as the domains not classifiable by means of our proposed scheme. This action reduced the database to the final number of 3,892 entries (Table 1 and Supplementary Fig. 1).
Finally, in Supplementary Note 6 we also provide quantitative results excluding effects due to the shift of misinformation towards untracked domains during the time frame of our analysis. In Supplementary Note 7, we further provide a comparison between MediaBiasFactCheck and other databases.
Data limitations and possible selection biases
The process of gathering and integrating vast sources of user-generated data provides us with the opportunity of analysing complex collective phenomena in almost real time. At the same time, it is subject to a number of limitations inherent in user-generated content data45 and to selection biases that might influence the analysis at different levels. In this section, we discuss these limitations in detail, as well as how they affect our results.
Use of Twitter as a data source (population bias)
All Twitter-based research has to cope with the intrinsic demographic limitations of Twitter's penetration: our results apply mostly to well-educated males (65% of Twitter users57) between the ages of 18 and 34 (58% of Twitter users, according to Statista GmbH58). Although our results must be interpreted in the light of these demographic limitations, we believe that our work represents a first step in establishing a robust research agenda for the study of infodemic risk. Future research should expand our knowledge by working on different demographics from different data sources.
Furthermore, as the COVID-19 public health emergency spread and raised international concern, Twitter (as well as Facebook and Google) took actions against the diffusion of unreliable/misleading news by attempting to prioritize reliable sources over unreliable ones. In Supplementary Note 8, we show how this action seems not to have influenced our measures.
Use of words written with Latin characters in the Twitter Filter API (data filtering bias)
Latin characters, and particularly English, are widespread and often used for hashtags in messages in languages not using the same alphabet. However, the fact that we used a set of terms shared by Western languages (including English, Spanish, French, Portuguese, German, Italian and others) to select tweets in the Filter API may exacerbate the Twitter bias towards highly educated individuals in countries where local languages do not use Latin characters.
Use of a limited and static list of words in the Twitter Filter API (data filtering bias)
As discussed above, our analyses do not focus on reconstructing the whole communication network related to the topic; instead, they focus on estimating the fraction and impact of unreliable news. Therefore, our rationale behind the word choice was to include the most commonly used keywords to ensure that, if the discourse abruptly changed its key terms, we were still tracking them. This might lower the recall rate, as new terms might be progressively emerging. In particular, our dataset only partially includes '#stayathome' or '#staystrong' messages, but ultimately our focus is on understanding whether news related to key medical pandemic hashtags is reliable or not, and to what extent this news reliability correlates with the epidemic wave. For this reason, we chose a set of words commonly used in medical discourse, using query expansion when it was crucial for collecting medical-related data (for example, when the name of the virus and of the disease changed to SARS-CoV-2 and COVID-19, respectively, from the previous 2019-nCoV).
An alternative would have been to use automated query expansion techniques to enlarge the set of terms used for filtering. Unfortunately, there is not yet an agreement on a standard methodology, as each design leads to a different source of bias. For example, a possible method would have been to build a hashtag co-occurrence network periodically and to expand the list using more central nodes in such networks. However, query expansions might have increased the sample at the expense of introducing further bias in our analysis, as it would have been done, day by day, on a considerably different user base. While our choice does not provide a complete picture of the social dynamics during the pandemic, it was specifically designed for the task of gathering tweets containing links to medically related news sources, reliable or not, which is the focus of our paper.
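For concreteness, the hashtag co-occurrence alternative described above could look roughly like the following sketch, where the tweets, the seed terms and the centrality criterion are all hypothetical stand-ins (this illustrates the rejected alternative, not our pipeline):

from itertools import combinations
import networkx as nx

def expand_keywords(tweets, seed_terms, top_k=5):
    """Build a hashtag co-occurrence network and return the top_k most
    central hashtags not already in the seed list."""
    g = nx.Graph()
    for tags in tweets:
        for a, b in combinations(sorted(set(tags)), 2):
            weight = g.get_edge_data(a, b, {"weight": 0})["weight"]
            g.add_edge(a, b, weight=weight + 1)
    centrality = nx.degree_centrality(g)
    candidates = sorted(
        (tag for tag in centrality if tag not in seed_terms),
        key=lambda tag: centrality[tag], reverse=True)
    return candidates[:top_k]

tweets = [["coronavirus", "covid19", "stayathome"],
          ["covid19", "pandemic"],
          ["stayathome", "covid19"]]
print(expand_keywords(tweets, {"coronavirus", "covid19"}))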
Use of Western-centric fact-checking sites (data enrichment bias)
To enhance the specificity and robustness of our multilanguage Twitter dataset sample, we collected fact-checking information data from several different and independent sources. Since the World Wide Web is strongly English centric, this collection of sources provides an overabundance of information about content in English. The English-centric nature of the resources helping us identify unreliable news sources probably exacerbates the intrinsic Twitter demographic limitations towards well-educated English-speaking users, a bias that could not be amended by any more complete database.
To assess this limitation, we collected statistics from Amazon Alexa (www.alexa.com/topsites/countries) about web traffic (the top 50 most visited websites) for all countries across the globe, matching these lists with the list of domains we used to classify reliable and unreliable sources. Remarkably, for 127 countries at least one of the top-50 websites is a domain we classified as a reliable news source, and for 21 countries (iso2 codes: AE, AR, BB, BE, CA, DK, FR, KE, MX, NG, PA, PE, PH, PR, PT, QA, SD, SE, TT, US and VE) at least one of the top-50 websites is a domain labelled as unreliable (split equally between politically biased and fake or hoax websites). In fact, this is a lower bound, because Alexa provides only major domains, disregarding subdomains that we instead classified as well. This large presence among the very top tier of websites suggests that our results are robust for multilanguage/multicultural analysis.
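The matching step amounts to a per-country set intersection; a minimal sketch, where the Alexa lists and the labelled domain database are hypothetical placeholders:

# Count, for each country, how many top-50 Alexa domains appear in the
# labelled source list. All data below are illustrative placeholders.
alexa_top50 = {
    "US": ["nytimes.com", "infowars.com", "google.com"],
    "FR": ["lemonde.fr", "google.fr"],
}
domain_labels = {"nytimes.com": "msm", "lemonde.fr": "msm",
                 "infowars.com": "conspiracy"}
RELIABLE = {"science", "msm"}

for country, domains in alexa_top50.items():
    labels = [domain_labels.get(d) for d in domains]
    n_reliable = sum(lab in RELIABLE for lab in labels)
    n_unreliable = sum(lab is not None and lab not in RELIABLE for lab in labels)
    print(country, n_reliable, n_unreliable)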
In our opinion, however, it is not entirely correct to say that fact-checking sites suffer from a Western-centric bias. It is the very notion of institutional fact-checking and certification of media bias that is today still largely Western centric. An eloquent picture is provided by Reporters Without Borders' Press Freedom Index (https://rsf.org/en/ranking), which clearly shows that today, apart from the Western world and a few isolated non-Western countries (South Korea, Costa Rica, Jamaica, Uruguay, South Africa, a few small West African states and micro states), the media environment of all other countries cannot be considered free, and in such conditions thorough, transparent fact-checking is basically impossible. So, whereas we acknowledge that our study suffers from other sources of bias, we are not sure that this particular source should be classified as such: we are simply relying on the only functioning, relatively reliable sources of fact-checking available.
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Data availability

The datasets generated during the current study are available from the corresponding author on reasonable request. The aggregated information, compliant with all privacy regulations, is publicly available online at the Infodemics Observatory (http://covid19obs.fbk.eu/) and at OSF (https://doi.org/10.17605/OSF.IO/N6UPX).
Code availability
The custom code that supports the findings of this study is available from the corresponding author upon request and available alongside the data in the permanent repository indicated above.
Benkler, Y. The Wealth of Networks: How Social Production Transforms Markets and Freedom (Yale Univ. Press, 2006).
Fuchs, C. Social Media: A Critical Introduction (SAGE, 2014).
Giglietto, F., Rossi, L. & Bennato, D. The open laboratory: limits and possibilities of using Facebook, Twitter, and YouTube as a research data source. J. Technol. Hum. Serv. 30, 145–159 (2012).
Ojo, A. & Mellouli, S. Deploying governance networks for societal challenges. Gov. Inf. Q. https://doi.org/10.1016/j.giq.2016.04.001 (2016).
De Domenico, M. & Altmann, E. G. Unraveling the origin of social bursts in collective attention. Sci. Rep. 10, 4629 (2020).
Vosoughi, S., Roy, D. & Aral, S. The spread of true and false news online. Science 359, 1146–1151 (2018).
Shao, C. et al. The spread of low-credibility content by social bots. Nat. Commun. 9, 4787 (2018).
Stella, M., Ferrara, E. & De Domenico, M. Bots increase exposure to negative and inflammatory content in online social systems. Proc. Natl Acad. Sci. USA 115, 12435–12440 (2018).
Eysenbach, G. Infodemiology: the epidemiology of (mis)information. Am. J. Med. 113, 763–765 (2002).
Eysenbach, G. Infodemiology and infoveillance: framework for an emerging set of public health informatics methods to analyze search, communication and publication behavior on the Internet. J. Med. Internet Res. 11, e11 (2009).
Eysenbach, G. Infodemiology and infoveillance tracking online health information and cyberbehavior for public health. Am. J. Prev. Med. 40, S154–S158 (2011).
Zarocostas, J. How to fight an infodemic. Lancet 395, 676 (2020).
Pastor-Satorras, R., Castellano, C., Van Mieghem, P. & Vespignani, A. Epidemic processes in complex networks. Rev. Mod. Phys. 87, 925–979 (2015).
De Domenico, M., Granell, C., Porter, M. A. & Arenas, A. The physics of spreading processes in multilayer networks. Nat. Phys. 12, 901–906 (2016).
Brockmann, D. & Helbing, D. The hidden geometry of complex, network-driven contagion phenomena. Science 342, 1337–1342 (2013).
Huang, C. et al. Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China. Lancet 395, 497–506 (2020).
Zhu, N. et al. A novel coronavirus from patients with pneumonia in China, 2019. N. Engl. J. Med. 382, 727–733 (2020).
Chinazzi, M. et al. The effect of travel restrictions on the spread of the 2019 novel coronavirus (COVID-19) outbreak. Science 368, 395–400 (2020).
Lazer, D. M. J. et al. The science of fake news. Science 359, 1094–1096 (2018).
Rapp, D. N. & Salovich, N. A. Can't we just disregard fake news? The consequences of exposure to inaccurate information. Policy Insights Behav. Brain Sci. 5, 232–239 (2018).
Waszak, P. M., Kasprzycka-Waszak, W. & Kubanek, A. The spread of medical fake news in social media—the pilot quantitative study. Health Policy Technol. 7, 115–118 (2018).
Leung, G. M. & Leung, K. Crowdsourcing data to mitigate epidemics. Lancet Digit. Health https://doi.org/10.1016/S2589-7500(20)30055-8 (2020).
Altay, S., de Araujo, E. & Mercier, H. 'If this account is true, it is most enormously wonderful': interestingness-if-true and the sharing of true and false news. Preprint at PsyArXiv https://doi.org/10.31234/osf.io/tdfh5 (2020).
Vicario, M. D., Quattrociocchi, W., Scala, A. & Zollo, F. Polarization and fake news. ACM Trans. Web 13, 10 (2019).
Britt, M. A., Rouet, J.-F., Blaum, D. & Millis, K. A reasoned approach to dealing with fake news. Policy Insights Behav. Brain Sci. 6, 94–101 (2019).
Weekly Epidemiological Record Vol. 95, 16 (WHO, 2020); https://www.who.int/wer/2020/wer9516/en/
Tangcharoensathien, V. et al. A framework for managing the COVID-19 infodemic: methods and results of an online, crowdsourced WHO technical consultation. J. Med. Internet Res. https://doi.org/10.2196/19659 (2020).
Lunn, P. D. et al. Using behavioral science to help fight the Coronavirus. J. Behav. Public Adm. https://doi.org/10.30636/jbpa.31.147 (2020).
Ferrara, E., Varol, O., Davis, C., Menczer, F. & Flammini, A. The rise of social bots. Commun. ACM 59, 96–104 (2016).
Bessi, A. & Ferrara, E. Social bots distort the 2016 U.S. Presidential election online discussion. First Monday https://doi.org/10.5210/fm.v21i11.7090 (2016).
Ferrara, E. Disinformation and social bot operations in the run up to the 2017 French presidential election. First Monday https://doi.org/10.5210/fm.v22i8.8005 (2017).
Kwak, H., Lee, C., Park, H. & Moon, S. What is Twitter, a social network or a news media? In Proc. 19th International Conference on World Wide Web 591 (ACM, 2010).
Barabasi, A. L. & Albert, R. Emergence of scaling in random networks. Science 286, 509–512 (1999).
Watts, D. J. & Strogatz, S. H. Collective dynamics of 'small-world' networks. Nature 393, 440–442 (1998).
Watts, D. J. A simple model of global cascades on random networks. Proc. Natl Acad. Sci. USA 99, 5766–5771 (2002).
Gleeson, J. P., O'Sullivan, K. P., Baños, R. A. & Moreno, Y. Effects of network structure, competition and memory time on social spreading phenomena. Phys. Rev. X 6, 021019 (2016).
Aral, S. & Eckles, D. Protecting elections from social media manipulation. Science 365, 858–861 (2019).
Stella, M., Cristoforetti, M. & De Domenico, M. Influence of augmented humans in online interactions during voting events. PLoS ONE 14, e0214210 (2019).
Hébert-Dufresne, L., Scarpino, S. V. & Young, J.-G. Macroscopic patterns of interacting contagions are indistinguishable from social reinforcement. Nat. Phys. https://doi.org/10.1038/s41567-020-0791-2 (2020).
Eysenbach, G. How to fight an infodemic: the four pillars of infodemic management. J. Med. Internet Res. 22, e21820 (2020).
Marchetti, R. & Ceccobelli, D. Twitter and television in a hybrid media system. Journalism Pract. 10, 626–644 (2016).
Yen, H., Braun, S. & Woodward, C. AP fact check: Trump's alternate reality on COVID-19 threat. Associated Press https://apnews.com/0aa783aa734b2ac3d984c5116b3e8039 (20 July 2020).
Broad, W. J. Putin's long war against American science. The New York Times https://www.nytimes.com/2020/04/13/science/putin-russia-disinformation-health-coronavirus.html (13 April 2020).
Iran's reaction to coronavirus has become a danger for the world. The Washington Post https://www.washingtonpost.com/opinions/global-opinions/irans-moment-of-truth-on-coronavirus/2020/03/03/f82548fe-5cca-11ea-b29b-9db42f7803a7_story.html (3 March 2020).
Coronavirus: world leaders' posts deleted over fake news. BBC News https://www.bbc.com/news/technology-52106321 (31 March 2020).
Olteanu, A. et al. Social data: biases, methodological pitfalls, and ethical boundaries. Front. Big Data 2, 13 (2019).
Pfeffer, J., Mayer, K. & Morstatter, F. Tampering with Twitter's Sample API. EPJ Data Sci. 7, 50 (2018).
Zimdar, M. My fake news list went viral but made up stories are only part of the problem. The Washington Post https://www.washingtonpost.com/posteverything/wp/2016/11/18/my-fake-news-list-went-viral-but-made-up-stories-are-only-part-of-the-problem/ (18 November 2016).
Silverman, C. Inside the partisan fight for your news feed. BuzzFeed News https://www.buzzfeednews.com/article/craigsilverman/inside-the-partisan-fight-for-your-news-feed (8 August 2017).
Fake News Watch (2015); https://web.archive.org/web/20180213181029/http://www.fakenewswatch.com/
PolitiFact's guide to fake news websites and what they peddle. Politifact.com https://www.politifact.com/article/2017/apr/20/politifacts-guide-fake-news-websites-and-what-they/ (20 April 2017).
The black list. La lista nera del web. Bufale.net https://www.bufale.net/the-black-list-la-lista-nera-del-web/ (2018).
Starbird, K. et al. Ecosystem or echo-system? Exploring content sharing across alternative media domains. In 12th International AAAI Conference on Web and Social Media 365–374 (AAAI, 2018).
Fletcher, R. et al. Measuring the Reach of 'Fake News' and Online Disinformation in Europe (Reuters Institute, 2018); https://reutersinstitute.politics.ox.ac.uk/our-research/measuring-reach-fake-news-and-online-disinformation-europe
Grinberg, N. et al. Fake news on Twitter during the 2016 US presidential election. Science 363, 374–378 (2019).
MediaBiasFactCheck (2020); https://mediabiasfactcheck.com/
Distribution of Twitter Users Worldwide as of July 2020, by Gender (Statista, 2020); https://www.statista.com/statistics/828092/distribution-of-users-on-twitter-worldwide-gender/
Distribution of Twitter Users Worldwide as of July 2020, by Age Group (Statista, 2020); https://www.statista.com/statistics/283119/age-distribution-of-global-twitter-users/
We received no specific funding for this work. We acknowledge the support of the FBK's Digital Society Department and the FBK's Flagship Project CHUB (Computational Human Behavior). We thank all FBK's Research Units for granting us privileged access to extraordinarily high-performance computing for the analysis of massive infodemic data. We thank J. Baumgartner for sharing data between 21 January and 24 January 2020. We acknowledge the WHO Information Network for Epidemics (WHO EPI-WIN) for useful discussions and the scientific members of the WHO ad hoc online consultation on managing the COVID-19 infodemic for very inspiring insights and conversations.
CoMuNe Lab, Fondazione Bruno Kessler, Trento, Italy
Riccardo Gallotti, Francesco Valle, Nicola Castaldo & Manlio De Domenico
IULM University, Milan, Italy
Pierluigi Sacco
M.D.D. conceived the study. M.D.D. and F.V. collected the data. R.G., N.C. and F.V. analysed the data. M.D.D., P.S. and R.G. interpreted the data and wrote the manuscript.
Correspondence to Pierluigi Sacco or Manlio De Domenico.
Peer review information Primary handling editor: Stavroula Kousta.
Supplementary Figs. 1–13, Supplementary Notes 1–8 and Supplementary Tables 1–3.
Gallotti, R., Valle, F., Castaldo, N. et al. Assessing the risks of 'infodemics' in response to COVID-19 epidemics. Nat Hum Behav 4, 1285–1293 (2020). https://doi.org/10.1038/s41562-020-00994-6
Affect and gaze responses during an Emotion-Evoking Task in infants at an increased likelihood for autism spectrum disorder
Lori-Ann R. Sacrey, Lonnie Zwaigenbaum, Jessica A. Brian, Isabel M. Smith, Vickie Armstrong, Sarah Raza, Tracy Vaillancourt & Louis A. Schmidt
The majority of research examining emotional difficulties in autism spectrum disorder (ASD) prior to age 2 relies on parent report.
We examined behavioral responses (affect and gaze) during emotionally salient tasks designed to elicit mildly positive and negative emotional states in infants. At 12 and 18 months, infants at an increased likelihood for an ASD diagnosis (IL; having an older sibling with ASD; n = 60) and at a low likelihood (LL; no family history of ASD; n = 21) completed the Emotion-Evoking (EE) Task, and parents completed the Infant Behavior Questionnaire-Revised (IBQ-R). All children received an Autism Diagnostic Observation Schedule, Second Edition (ADOS-2) assessment for ASD symptomatology at 24 months.
The main findings were (1) the IL group displayed higher rates of negative affect and spent less time looking at the task objects compared to the LL group, and (2) affect and gaze scores at 12 and 18 months, but not scores on the IBQ-R, predicted ASD symptoms at 24 months.
The data were drawn from an IL sample and may not be generalizable to the general ASD population, and the children were not followed to determine a diagnosis of ASD.
These results suggest that behavioral responses can provide important information that complements parent reports of emotional regulation in IL infants as early as 12 months of age.
Emotional regulation (ER) begins to appear in the first year of life and refers to the ability to modulate the occurrence, intensity, and valence of emotional reactions through intrinsic (learned with experience) and extrinsic (with assistance from others) strategies [9, 26, 46, 59]. Depending on context, ER can be unconscious or conscious, controlled or automatic, and extrinsic (e.g., a parent regulating the child's emotions) or intrinsic (the child regulating their own emotions) [27]. Emotional regulation is predictive of several domains of development in childhood, including behavioral problems (e.g., externalizing behaviors [49, 63]), social skills [14, 15], and academic skills [5, 65]. Difficulties in ER also show high concordance with the core features of autism spectrum disorder (ASD) [43, 50, 61]. For example, impairments in communication, affective expression, and reciprocal play are often associated with emotional dysregulation [10]. Although neurotypical children make developmental strides in learning to regulate their emotions during their early school years, many children with neurodevelopmental disorders, including those on the autism spectrum, continue to struggle with ER into adolescence and adulthood [45].
Emotional regulation can be measured during childhood using questionnaires, direct observation, and physiological measurement, such as heart rate [64]. Studies of ER in individuals with ASD suggest that they experience increased negative emotions and reduced positive emotions [3, 7, 30, 56]. Most previous research examining ER in very young children (2 years and under) has used parent questionnaires [44] that assess temperament, that is, individual differences in reactivity and self-regulation of emotion, attention, and activity [53], rather than direct (i.e., physiological) measures. For example, Capps et al. [7] compared ratings on the parent-rated Emotion Behavior Checklist [33] between children with ASD and neurotypical children who were matched on mental age (24 months). Parents of children with ASD rated their children as showing more sadness and fear, as well as less joy, than did parents of neurotypical children. Similarly, Garon et al. [20] examined parent ratings on the Infant Behavior Questionnaire-Revised (IBQ-R [52]) at 12 months and the Toddler Behavior Assessment Questionnaire-Revised [54] at 24 months and found that parents of infants at an increased likelihood of an ASD diagnosis (IL; younger siblings of children diagnosed with ASD) rated their children as showing higher levels of fear, sadness, and anger, and lower inhibitory control, soothability, attention focus, high pleasure, and low pleasure, compared to typically developing peers. Furthermore, IL infant siblings who were later diagnosed with ASD at age 3 showed lower levels of positive affect at 12 and 24 months and lower effortful control at 24 months, compared to IL infant siblings who were not diagnosed with ASD at age 3 [20]. Most recently, Ersoy et al. [16] asked parents of IL children and of children without a family history of ASD (low likelihood, LL) to complete the IBQ-R at 9 and 15 months of age, at which ages no group differences emerged on the sadness scale. However, the Early Childhood Behavior Questionnaire [51] administered at 24 months yielded higher levels of sadness in the IL group than in the LL group.
The earliest age at which the emotional expressivity of children with ASD has been directly observed during emotionally valenced tasks is 2 years. Macari et al. [39] found that 2-year-olds with ASD displayed lower-intensity fear, but no differences in anger or joy, when compared to age-matched neurotypical children. In the only other study to examine observed emotion, videos taken at 12 months during toy play (not designed as an emotionally salient task) showed that children later diagnosed with ASD had lower rates of positive affect (i.e., smiling) compared to children who were not diagnosed with ASD [17]. Thus, further examination of positive and negative emotional responses early in life in relation to ASD is warranted.
In the present study, we examined behavioral responses to emotionally salient stimuli at 12 and 18 months of age in children who were at a low likelihood (LL; no family history of ASD) or an increased likelihood (IL; infant sibling of a child with ASD) for ASD. Predictions were informed by previous studies of ER in older children with ASD [3, 7, 30, 56]. Specifically, we predicted that (1) children in the IL group would display higher levels of negative affect and lower levels of positive affect during the Emotion-Evoking (EE) Task, which was adapted from the Laboratory Temperament Assessment Battery (Lab-TAB; Goldsmith and Rothbart 1996), compared to children in the LL group at 12 and 18 months; and (2) affect and gaze at 12 and 18 months would predict ASD symptoms at 24 months. To test the assumption that our EE Task was a valid measure of ER, we predicted that affect and gaze would be associated with concurrent ratings on the IBQ-R at 12 and 18 months.
Infant siblings of children with ASD were recruited between the ages of 6 and 12 months from families attending one of three multidisciplinary ASD clinical centers and surrounding communities [locations blinded]. Participants were assessed at 12, 18, and 24 months of age. The research ethics board at each institution approved this study, and all families gave written informed consent prior to study enrollment.
For the IL group, diagnosis of ASD in the older sibling (i.e., proband) was confirmed by a review of diagnostic records, using DSM-5 [1] criteria. The IL infants did not have identifiable neurological or genetic conditions, nor severe sensory or motor impairments. LL infants were recruited from the same communities and had at least one older sibling, but no reported first- or second-degree relatives with an ASD diagnosis. All participants were born at 36–42 weeks of gestation, with birth weight greater than 2500 g.
Emotion-Evoking (EE) Task
Positive and negative affect, as well as gaze, were measured using tasks adapted from the Laboratory Temperament Assessment Battery (Lab-TAB; [24]), a comprehensive temperament assessment that includes episodes designed to elicit behavior related to differing dimensions of temperament, including smiling, reaching, crying, touching, or changes in facial expression. The EE Task was completed at 12 and 18 months of age.
EE task set-up
Children were seated at a height-adjustable table in a high-chair with their parent seated to their right. As there are no general instructions regarding where the parent should be seated with respect to the child, we used the parent location guidelines for the mask and toy removal tasks in the Lab-TAB manual [24]. All phases of the EE Task, including the Baseline video, occurred with the child seated in the high-chair. The Baseline video was shown on a laptop or computer monitor, which was placed on the table in front of the child (see Fig. 1). Once the video ended, the computer/monitor was placed on the floor next to the examiner and out of sight of the child. The objects used for each task were held in an opaque bin next to the examiner and out of the child's sight. The phases included within our EE Task are shown in Fig. 1:
Baseline 1 phase: The child was shown a 2-min video comprising 15-s clips of intermixed screensaver images and 'Baby Einstein' clips accompanied by instrumental music, to allow an opportunity to acclimate to the research setting (neutral task).
Bubbles phase: The experimenter blew bubbles towards the child and directed the child's attention toward the bubbles for 90 s (positive task).
Baseline 2 phase: The child was shown the same 2-min video from Baseline 1 to allow an opportunity to return to baseline (neutral task).
Toy Play phase: The child was given a toy that lights up and makes musical noise when its buttons are pushed, for 30 s (positive task).
Toy Removal phase: The appealing toy (used in Toy Play) was placed out of reach, but within sight of the child, for 30 s (negative task).
Masks phase: The experimenter wore a blank mask on their face and sat still and quiet for 15 s, followed by wearing a cow mask and sitting quietly for 15 s (negative task).
Hair Brushing phase: The experimenter brushed the child's hair with a comb or soft brush for 15 s (negative task).
Face Washing phase: The experimenter gently washed the child's face (forehead, cheeks, chin, nose) with a baby wipe for 15 s (negative task).
Baseline 3 phase: The child watched the same 2-min video from Baselines 1 and 2 to allow an opportunity to return to baseline following the negative tasks.
Fig. 1 Emotion-Evoking Task
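For reference, the scripted schedule above can be summarized as a small data structure (the tuple encoding is our own convenience form, not part of the published protocol):

# (phase, duration in seconds, intended valence), following the list above.
EE_SCHEDULE = [
    ("baseline_1",   120, "neutral"),
    ("bubbles",       90, "positive"),
    ("baseline_2",   120, "neutral"),
    ("toy_play",      30, "positive"),
    ("toy_removal",   30, "negative"),
    ("masks",         30, "negative"),   # 15 s blank mask + 15 s cow mask
    ("hair_brushing", 15, "negative"),
    ("face_washing",  15, "negative"),
    ("baseline_3",   120, "neutral"),
]

total_minutes = sum(seconds for _, seconds, _ in EE_SCHEDULE) / 60
print(f"Total scripted time: {total_minutes:.1f} min")  # 9.5 min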
Affect and gaze coding
The EE Task was video-recorded, and affect and gaze were coded off-line from the video-recordings using Noldus Observer 13 XT behavioral coding software (see Additional file 1: Table S1 for a brief coding scheme). Coding was completed in two separate runs/viewings of the entire video-recording for each participant: once for phase (onset and offset) and affect, and once for gaze. Videos were played at real time for coding. Phases were coded continuously, and codes were mutually exclusive and exhaustive, such that one code ended the previous code. Periods between phases were coded as 'transition' episodes and were not coded for behavior or included in analyses.
Affect was coded in 5-s intervals as either negative, neutral, or positive on a 5-point scale from − 2 to + 2, based on both facial and vocal cues. Periods during which the face was not visible and vocal cues for affect were absent were coded as 'not codable' (for definitions associated with use of facial or vocal cues alone to code affect, see Additional file 1). Interval coding was selected because onset and offset of affect intensity were difficult to define and facial affect cues can change rapidly. The variable for mean affect was calculated for each phase of the EE Task by taking the mean of the 5-s interval codes. For example, the Masks phase was 30 s and comprised 6 coded intervals (each interval was 5 s); its mean affect was calculated as the sum of the codes for the 6 intervals divided by 6.
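A minimal sketch of this mean-of-intervals computation (the interval codes below are invented for illustration):

def mean_affect(interval_codes):
    """Mean affect for one phase: interval_codes is the list of 5-s affect
    codes (-2..+2), with 'not codable' intervals already removed."""
    return sum(interval_codes) / len(interval_codes)

# Example: a 30-s Masks phase comprises 6 five-second intervals.
masks_intervals = [0, -1, -1, 0, -2, -1]
print(mean_affect(masks_intervals))  # -0.833...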
Gaze was coded continuously (as opposed to interval coding), and codes were mutually exclusive and exhaustive. The types of behavior of interest included infant looking at the 'on-task' object, 'off-task' objects, the experimenter conducting the task, the parent sitting beside the child, and gaze aversion. Off-task objects included objects that were proximal to the infant that the infant manipulated or interacted with (e.g., sensors and cables, as well as objects that parents may have given their children unexpectedly, such as toys or sippy cups, which were removed as quickly as possible). 'Other' was used to code any other looking behavior (e.g., scanning the room). The data included in this paper assessed the on-task gaze behavior only. The on-task gaze objects were the computer monitor for the baseline phases, bubbles or bubble wand for Bubbles phase, the toy used for the Toy Play and Toy Removal phases (same toy), the two masks used in Masks phase, the comb/brush used in Hair Brushing phase, and the baby wipe used in Face Washing phase. The variable for percentage of time spent on the "on-task" object was calculated for each phase of the EE Task using the following formula:
$$\left[ \frac{\text{time spent looking at ``on-task'' object}}{\text{length of phase}} \right] \times 100$$
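Because gaze was coded continuously rather than in intervals, the same quantity follows from event durations; a minimal sketch (the event tuples are invented for illustration):

def on_task_percentage(gaze_events, phase_length_s):
    """gaze_events: list of (label, duration in s) covering one phase."""
    on_task = sum(d for label, d in gaze_events if label == "on_task")
    return 100.0 * on_task / phase_length_s

events = [("on_task", 18.0), ("parent", 4.0), ("on_task", 5.5), ("other", 2.5)]
print(on_task_percentage(events, 30.0))  # 78.33...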
Inter-rater reliability
Two raters coded 20% of the videos to assess reliability. Inter-rater reliability was assessed using Cohen's kappa (κ), with 0.01–0.20 representing no to slight agreement, 0.21–0.40 fair agreement, 0.41–0.60 moderate agreement, 0.61–0.80 substantial agreement, and 0.81–1.00 almost perfect agreement [41]. The formula is
$$\kappa = \frac{p_{\text{o}} - p_{\text{c}}}{1 - p_{\text{c}}}$$
where po is the observed proportion of agreements and pc is the proportion of agreements expected by chance [8]. For affect, κ = 81% when assessing for no differences in code value (both raters gave the same code). When reliability was assessed using a modifier margin of 1 (codes were within ± 1 point), κ = 95% was achieved. For gaze, κ = 89% was achieved when calculating the percentage agreement for duration of gaze codes for the two raters. The raters were blind to group membership, with the exception that the reliability rater was involved in study visits at one site but remained blind to enrollment group (IL vs. LL) and ASD symptom history.
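The kappa computation maps directly to code; a minimal sketch with invented rating vectors:

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa from two equal-length lists of categorical codes."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # observed
    marg_a, marg_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement from the raters' marginal distributions.
    p_c = sum(marg_a[k] * marg_b.get(k, 0) for k in marg_a) / n ** 2
    return (p_o - p_c) / (1 - p_c)

a = [1, 0, -1, 0, 1, 1, 0, -1]
b = [1, 0, -1, 1, 1, 0, 0, -1]
print(round(cohens_kappa(a, b), 3))  # ~0.619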
Infant behavior questionnaire-revised (IBQ-R)
The IBQ-R [52] was designed to assess temperament in children aged 3–12 months and has fourteen subscales: activity level, smiling and laughing, fear, distress to limitations, high pleasure, low pleasure, soothability, falling reactivity, cuddliness, sadness, approach, vocal reactivity, perceptual sensitivity, and duration of orienting. Items are rated on a 7-point scale ranging from 1 (never) to 7 (always), with an 8th option for 'does not apply'. Calculation of the mean ratings on all items in a particular scale, minus the 'does not apply' items, yields scaled scores. The IBQ-R can be completed by parents within 15 min and is well-validated and has excellent test-retest reliability [23]. Cronbach's alpha for the 14 subscales of the IBQ-R ranged from .76 to .93 at 12 months and .71 to .91 at 18 months for our sample (see Additional file 1: Table S2).
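The Cronbach's alpha values reported above follow the standard item-variance formula; a minimal sketch with an invented item matrix:

import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = scale items.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

ratings = np.array([[5, 6, 5], [3, 3, 4], [6, 7, 6], [2, 3, 2]])
print(round(cronbach_alpha(ratings), 2))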
We chose to have parents complete the IBQ-R at both the 12- and 18-month visits, rather than the Early Childhood Behavior Questionnaire (ECBQ; for children between 18 and 36 months [51]) at the 18-month visit, for three reasons. First, we wanted to use the same measure at both 12 and 18 months of age to compare to the EE Task. Second, social-emotional development follows an expected trajectory in the first 12–18 months of life [40], which can be influenced by ASD [38]. Third, many children with ASD have lower mental ages than their typically developing counterparts, which can affect performance on behavioral assessments and questionnaires [29]. Developmental age equivalencies in our sample were assessed using the Mullen Scales of Early Learning [47], and scores on the IBQ-R subscales were correlated to determine relatedness in scoring.
Mullen scales of early learning (Mullen)
The Mullen [47] is a developmental measure that assesses Visual Reception, Receptive Language, Expressive Language, Fine Motor and Gross Motor abilities and has an Early Learning Composite comprising the first four scales. We administered the Mullen at 12 and 18 months to assess developmental age equivalencies in our sample.
Autism diagnostic observation schedule -2nd edition (ADOS-2)
The ADOS-2 [37] was administered by a research-reliable examiner; it includes standardized activities and 'presses' intended to elicit communication, social interaction, imaginative use of play materials, and repetitive behavior. The Toddler module was administered at the 24-month assessment, and Social Affect (SA), Restricted and Repetitive Behavior (RRB), and Total algorithm scores were derived. Cronbach's alpha was .92 for the SA score and .61 for the RRB score; the lower alpha for RRB was likely due to the high proportion of '0' and '1' scores (26.15% and 23.08%, respectively).
Analyses were run in the Statistical Package for the Social Sciences (version 24, IBM). First, two multi-level repeated measures ANOVAs were run to assess mean affect and gaze separately during the baseline phases, with age (12 months, 18 months) and baseline phase (baseline 1, baseline 2, baseline 3) as the embedded repeated factors, and enrollment group (LL, IL) and sex (boy, girl) as the independent between-group variables. Second, we calculated affect change scores by subtracting the affect score during baseline phase 1 (before exposure to the EE Task) from the affect score for each phase of the EE Task; we did not calculate change scores for gaze. We then ran two multi-level repeated measures ANOVAs to assess mean affect and gaze separately during the phases of the EE Task, with age (12 months, 18 months) and phase (bubbles, toy play, toy removal, mask 1, mask 2, hair brushing, face washing) as the embedded repeated factors, and enrollment group (LL, IL) and sex (boy, girl) as the independent between-group variables. We also completed exploratory analyses on the congruence and incongruence of the emotion expressed, using a repeated measures ANOVA with phase of the EE Task (bubbles, toy play, toy removal, mask 1, mask 2, hair brushing, face washing), age (12 months, 18 months), and evoked emotion (positive, negative, neutral) as the embedded repeated factors, and enrollment group (LL, IL) and sex (boy, girl) as the independent between-group variables. Third, we used Pearson's r correlations to examine the concurrent associations between the different measures of ER (IBQ-R and EE Task at 12 and 18 months). Finally, multiple linear regressions were used to examine the utility of baseline, EE Task, and parent-reported measures for predicting later ASD symptoms (ADOS-2 Total score).
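The analyses were run in SPSS; for readers without SPSS, a simplified open-source analogue using pingouin is sketched below. Note that pingouin's mixed_anova handles one within-subject and one between-subject factor, so this collapses the design to phase × group, and the long-format data frame is an invented stand-in:

import pandas as pd
import pingouin as pg

# Long format: one row per child per phase; toy data for illustration only.
df = pd.DataFrame({
    "id":     [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "group":  ["LL"] * 6 + ["IL"] * 6,
    "phase":  ["bubbles", "face_washing"] * 6,
    "affect": [0.4, -0.2, 0.3, -0.1, 0.5, -0.3,
               0.2, -0.5, 0.1, -0.6, 0.3, -0.4],
})

aov = pg.mixed_anova(data=df, dv="affect", within="phase",
                     subject="id", between="group")
print(aov[["Source", "F", "p-unc"]])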
Participant characteristics
As displayed in Table 1, data from 21 LL (14 boys and 7 girls) and 60 IL (34 boys and 26 girls) children were included in this study. There were no differences between the groups for sex, race/ethnicity, parental marital status, household income, or age for assessments at 12, 18, or 24 months (all ps > .05).
Table 1 Participant characteristics by enrollment group
Preliminary analyses
Developmental age equivalents at 12 months
Group differences were explored between the children who were identified as 'at risk' for ASD based on ADOS-2 scores (score ≥ 8; n = 10). One-way ANOVAs on age equivalencies for the Mullen subscales (except Gross Motor) resulted in significant effects for the Visual Reception (F(2,76) = 5.86, p = .004) and Fine Motor (F(2,68) = 4.81, p = .01) subscales at 12 months of age. Post hoc analyses revealed that for both the Visual Reception and Fine Motor subscales, the children identified as 'at-risk' for ASD in the IL group had lower age equivalences compared to children in the IL group without an ASD classification, as shown in Table 2.
Table 2 Mean and standard deviations for age equivalencies (in months) on the Mullen
Developmental age equivalents at 18 months

Group differences were explored between the children who were identified as 'at risk' for ASD based on the ADOS-2 (score ≥ 8; n = 10). One-way ANOVAs on age equivalencies for the Mullen subscales (except Gross Motor) resulted in significant effects for the Visual Reception (F(2,75) = 10.11, p < .001), Fine Motor (F(2,60) = 13.26, p < .001), Receptive Language (F(2,60) = 7.16, p = .002), and Expressive Language (F(2,74) = 13.36, p < .001) subscales at 18 months of age. Post hoc analyses revealed that for all subscales, children 'at risk' for ASD in the IL group had lower age equivalences than children in the IL group without an ASD classification and children in the LL group, who did not differ.
IBQ-R associations between 12 and 18 months
Correlations between subscales on the IBQ-R at 12 and 18 months were all statistically significant, with the lowest r value for high pleasure (r = .40, p = .002) and the highest r value for cuddliness (r = .71, p < .001). Associations between the other subscales are reported in Additional file 1.
EE task associations between 12 and 18 months
Baseline associations between 12 and 18 months
Correlations between baseline phases at 12 and 18 months were all significant for gaze [baseline phase 1 (r = .51, p < .001); phase 2 (r = .41, p < .001); phase 3 (r = .26, p = .021)]. For affect, only the correlation for baseline phase 3 was significant [baseline phase 1 (r = .17, p = .13); phase 2 (r = .19, p = .09); phase 3 (r = .26, p = .024)].
Four of the seven phases of the EE Task had significant correlations between 12 and 18 months for gaze [toy play (r = .33, p = .003); toy removal (r = .28, p = .012); mask 1 (r = .40, p < .001); and mask 2 (r = .28, p = .012); but not bubbles (r = .02, p = .88); hair brushing (r = .05, p = .64); or face washing (r = .01, p = .92)]. For affect, three of the seven phases had significant correlations [toy removal (r = .22, p = .05); mask 2 (r = .22, p = .048); and face washing (r = .31, p = .006); but not bubbles (r = .13, p = .23); toy play (r = .14, p = .21); mask 1 (r = .01, p = .88); or hair brushing (r = .08, p = .48)].
Mean affect
Baseline phases
A multi-level repeated measures ANOVA found significant effects for sex (F(1,70) = 4.50, p = .038), baseline phase (F(2,140) = 8.36, p < .001), age × group (F(1,70) = 4.99, p = .029), age × sex (F(1,70) = 11.64, p = .001), and age × group × sex (F(1,70) = 9.24, p = .003). No other effects or interactions yielded significant differences.
Post hoc exploration of the sex effect using Bonferroni correction showed that girls displayed higher mean negative affect (mean ± SD = − .12 ± .25) compared to boys (mean ± SD = − .005 ± .23; t(72) = 2.13, p = .038; d = .39). Post hoc exploration of the baseline phase effect using Bonferroni correction showed that participants displayed higher mean negative affect during baseline phase 2 (mean ± SD = − .06 ± .36; t(294) = 2.52, p = .016; d = .20) and baseline phase 3 (mean ± SD = − .13 ± .47; t(294) = 3.55, p = .001; d = .38) compared to baseline phase 1 (mean ± SD = .002 ± .39).
Follow-up analyses of the age × group interaction showed that the LL group displayed more negative mean affect at 18 months (mean ± SD = −.15 ± .29) than at 12 months (mean ± SD = .02 ± .22; t(36) = 3.22, p = .018; d = .43), whereas there were no differences in mean affect for the IL group at 12 (mean ± SD = −.06 ± .28) or 18 months (mean ± SD = −.07 ± .22; t(108) = .33, p = .81, d = .02). Post hoc exploration of the age × group × sex interaction did not yield any significant relations when p values were adjusted using Bonferroni correction.
Phases of EE task (using affect change scores)
A multi-level repeated measures ANOVA found significant effects for EE Task phase (F(6,420) = 16.72, p < .001), EE Task phase × group (F(6,420) = 2.73, p = .013), and EE Task phase × age (F(6,420) = 2.32, p = .033). No other effects or interactions were significant.
The phases of the EE Task produced the anticipated pattern of affect, with the bubbles (mean ± SD = .34 ± .69) and toy play (mean ± SD = .10 ± .58) phases producing more positive mean affect, and the toy removal (mean ± SD = −.15 ± .58), mask 1 (mean ± SD = .001 ± .58), mask 2 (mean ± SD = −.06 ± .78), hair brushing (mean ± SD = −.11 ± .72), and face washing (mean ± SD = −.28 ± .88) phases producing more negative mean affect, which generally peaked at the last successive negative phase. The affective differences were confirmed with planned comparisons, showing that the bubbles phase was responded to more positively than any other phase (all t(146)'s > 4.35, all p's < .001), and the response to the toy play phase was more positive than to the hair brushing (t(146) = 3.16, p = .002) or face washing phases (t(146) = 4.66, p < .001). For the negative phases, toy removal was more negative than mask 1 (t(146) = −2.61, p = .01); mask 1 was less negative than face washing (t(146) = 3.66, p = .001) and hair brushing (t(146) = 2.09, p = .04); and face washing was more negative than mask 2 (t(146) = −2.91, p = .005) and hair brushing (t(146) = −2.55, p = .01).
Planned comparisons on the EE Task phase × group interaction showed that IL infants displayed higher rates of negative affect compared to the LL group during the hair brushing (t(146) = 4.72, p < .05; d = .49) and face washing phases (t(146) = 6.01, p < .05; d = .62).
Planned comparisons on the EE Task phase × age interaction showed that bubbles elicited more positive affect at 18 months compared to 12 months (t(146) = 3.84, p < .05; d = .38). No other comparisons were significant.
Exploratory analyses
Statistical comparisons of the presence of evoked positive, negative, and neutral affect during each phase of the EE Task, as well as of incongruent responses (e.g., negative affect during a positive task), are included in Additional file 1. Briefly, for evoked emotion, the IL group displayed more negative affect than the LL group (t(138) = 3.10, p = .016; d = .61) throughout the EE Task, with no group difference in positive (t(138) = −.45, p = .24; d = .28) or neutral (t(138) = −2.14, p = .18; d = .33) expressions of affect. For incongruent responding, similar responses were seen for both groups, except during the hair brushing and face washing phases, in which the IL group had fewer displays of positive affect.
On-task gaze
Baseline phases

A multi-level repeated measures ANOVA found significant effects for age (F(1,75) = 8.89, p = .004) and baseline phase (F(2,150) = 6.39, p = .002). No other effects or interactions were significant.
Follow-up exploration of the age effect showed that participants spent more time looking at the computer screen at 18 months (mean ± SD = 72.13 ± 22.66%) compared to 12 months (mean ± SD = 63.38 ± 26.66%; t(156) = 2.99, p = .004; d = .31).
Follow-up exploration of the baseline phase effect showed that participants spent more time looking at the computer screen during baseline phase 1 (mean ± SD = 71.63 ± 30.67%) compared to baseline phase 2 (mean ± SD = 65.71 ± 32.68%; t(314) = 3.22, p = .002; d = .23) and baseline phase 3 (mean ± SD = 65.92 ± 34.55%; t(314) = 2.80, p = .006; d = .22), which did not differ (t(314) = .12, p = .91; d = .007).
Phases of EE task
A multi-level repeated measures ANOVA found significant effects for group (F(1,71) = 8.10, p = .006), EE Task phase (F(6,426) = 440.41, p < .001), and group × EE Task phase (F(6,420) = 2.30, p = .034). No other effects or interactions were significant. The main effect of group showed that the LL group spent more time looking at the task object (mean ± SD = 60.94 ± 6.91%) than did the IL group (mean ± SD = 55.51 ± 6.77%; t(72) = 2.85, p = .006; d = .48).
Examination of the main effect for EE Task phase revealed that the bubbles (mean ± SD = 88.27 ± 11.62%), toy play (mean ± SD = 84.28 ± 15.01%), mask 1 (mean ± SD = 87.61 ± 18.74%), and mask 2 (mean ± SD = 78.32 ± 19.03%) phases had the highest durations of on-task object gaze, and the phases of toy removal (mean ± SD = 53.16 ± 19.30%), hair brushing (mean ± SD = 12.95 ± 17.75%), and face washing (mean ± SD = 3.01 ± 7.03%) had lower amounts of on-task object gaze. The gaze differences were confirmed by planned comparisons, showing that all phases differed from each other (all t(146)'s > 2.10, all p's < .03), except for mask 1 compared to bubbles (t(146) = .27, p = .77) and toy play (t(146) = 1.22, p = .21).
Planned comparisons of the EE Task phase × group interaction found that children in the LL group spent more time looking at the task object during the toy removal (t(72) = 3.94, p = .05; d = .32), mask 1 (t(72) = 4.94, p = .02; d = .40), and mask 2 (t(72) = 5.51, p = .01; d = .45) phases compared to the IL group. The groups did not differ in on-task gaze during the bubbles, toy play, face washing, or hair brushing phases.
Concurrent association with parent-reported temperament
To test the validity of our EE Task, we ran correlations between affect and gaze scores during the EE Task and subscale scores on the IBQ-R at the 12-month and 18-month time-points. Because of the many statistical comparisons, we corrected the significance threshold for the number of Baseline and EE Task activities (n = 10), flagging only those correlations with p < .005 as statistically significant. Results are presented below for all participants combined, followed by the IL group alone and the LL group alone.
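A sketch of this correlation screen, with invented arrays and the corrected threshold α = .05 / 10 = .005:

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
ee_scores = {"hair_brushing_affect": rng.normal(size=40)}
ibqr = {"sadness": rng.normal(size=40), "fear": rng.normal(size=40)}

ALPHA = 0.05 / 10  # corrected for the 10 Baseline/EE Task activities

for ee_name, ee in ee_scores.items():
    for sub_name, sub in ibqr.items():
        r, p = pearsonr(ee, sub)
        flag = "*" if p < ALPHA else ""
        print(f"{ee_name} x {sub_name}: r = {r:.2f}, p = {p:.3f} {flag}")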
Overall, at 12 or 18 months, affect and on-task gaze scores for the EE Task were concurrently associated with 3 of 14 IBQ-R scales.
Correlations between IBQ-R subscales and EE Tasks for all participants at 12 months are shown in Table 3. There were no significant associations with a p value of < .005 for affect or gaze.
Table 3 Concurrent associations between subscales on the IBQ-R and phases on the EE task at 12 months
Correlations between IBQ-R subscales and EE Task for all participants at 18 months are shown in Table 4. Three associations for affect and one for gaze were significant with a p value of < .005. Higher negative affect during the hair brushing phase was associated with endorsement on the IBQ-R of higher rates of fussiness and distress when in a confined space, during caretaking activities, or inability to do a preferred action (distress to limitations), as well as displaying low mood and activity (sadness). Similarly, higher negative affect during the mask 2 phase was also associated with ratings of lower mood and activity (sadness) on the IBQ-R. Decreased on-task gaze during baseline phase 1 was associated with ratings indicating greater detection of slight, low intensity stimuli in the child's environment (perceptual sensitivity) on the IBQ-R.
IL group
Scores on the EE Task were associated with 7 of 14 scales on the IBQ-R.
At 12 months, no significant associations were seen between affect or gaze ratings during baseline phases 1, 2, or 3 and IBQ-R subscales. For phases of the EE Task, there were no associations with gaze, but there were significant associations with affect: mask 1 was positively associated with high pleasure (r = .42, p = .003), approach (r = .41, p = .004), and vocal reactivity (r = .51, p < .001). These relationships suggest that children who displayed higher levels of positive affect during mask 1 were also rated as showing increased pleasure in situations involving novel and complex, high-intensity stimuli (high pleasure), increased approach and anticipation of pleasurable activities (approach), and engagement in high rates of vocalization throughout the day (vocal reactivity).
At 18 months, during baseline phase 1, affect was negatively associated with low pleasure (r = −.49, p = .001) and gaze was negatively associated with perceptual sensitivity (r = −.43, p = .004). These relations suggest that children who displayed increased negative affect during baseline phase 1 were endorsed on the IBQ-R as showing higher interest in situations with reduced amounts of novel and complex stimuli (low pleasure). Similarly, children who spent less time looking at the monitor were rated as showing increased interest in low-intensity stimuli in their environment (perceptual sensitivity).
For phases of the EE Task, there were no associations with gaze, but there were significant associations for affect. Mask 2 was negatively associated with endorsement of sadness (r = − .50, p < .001) and hair brushing was negatively associated with distress to limitations (r = − .42, p = .005). These relations suggest that increased negative affect during masks 2 was associated with parental IBQ-R ratings of increased levels of low mood and activity (sadness). Similarly, higher negative affect during the hair brushing phase was associated with endorsement of higher rates of fussiness and distress when in a confined space, during caretaking activities, or inability to do a preferred action (distress to limitations).
LL group
Overall, scores on the EE Task were associated with 2 of 14 scales on the IBQ-R.
At 12 months, there were no significant associations between affect or gaze and IBQ-R subscales during the baseline phases. During the EE Task, affect during the mask 2 phase was associated with IBQ-R falling reactivity/recovery rate (r = .68, p = .002) and gaze during the face washing phase was associated with endorsement of cuddliness (r = −.76, p < .001). These relations suggest that increased negative affect during mask 2 was associated with parental ratings of prolonged recovery from peak distress or excitement (falling reactivity). Similarly, higher on-task gaze during face washing was related to ratings of increased expression of enjoyment while being held by a caregiver (cuddliness).
At 18 months, there were no associations between affect or gaze during the baseline phases or the phases of the EE Task and the IBQ-R subscales.
Predictive association with ASD symptoms
Hierarchical linear regressions were performed with Total ADOS-2 score at 24 months as the dependent variable and baseline phases, EE Task phases, and IBQ-R subscales at 12 and 18 months as separate predictor variables. All regression models included enrollment group (IL, LL) as an independent predictor in Model 2, and age equivalencies on the receptive and expressive language subscales of the Mullen as independent predictors in Model 3.
Predictors
We first ran linear regressions with our participant characteristics (enrollment group, sex, receptive language age equivalence, expressive language age equivalence) to determine if they predicted ADOS Total Severity Scores alone. Enrollment group (R2 = .05; F(1,63) = 3.62, p = .06) and sex (R2 = .04; F(1,63) = 2.88, p = .09) did not predict ADOS Total Severity Scores. Similarly, receptive and expressive age equivalencies were not predictive at 12 months (R2 = .04; F(2,52) = .93, p = .40), but were predictive at 18 months (R2 = .31; F(2,47) = 10.58, p < .001). Because we were interested in exploring differences between the IL and LL groups, and the regression trended towards significance, we included enrollment group as a predictor in the models, in addition to age equivalencies at 12 and 18 months.
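A sketch of the three-step hierarchy using nested OLS formulas (statsmodels); the column names and simulated data are stand-ins for the study variables:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 60
df = pd.DataFrame({
    "ados_total": rng.normal(5, 2, n),
    "gaze_baseline3": rng.uniform(0, 100, n),
    "group": rng.choice([0, 1], n),      # 0 = LL, 1 = IL
    "receptive_ae": rng.normal(14, 2, n),
    "expressive_ae": rng.normal(13, 2, n),
})

models = [
    "ados_total ~ gaze_baseline3",                                        # 1
    "ados_total ~ gaze_baseline3 + group",                                # 2
    "ados_total ~ gaze_baseline3 + group + receptive_ae + expressive_ae", # 3
]
for i, formula in enumerate(models, start=1):
    fit = smf.ols(formula, data=df).fit()
    print(f"Model {i}: R2 = {fit.rsquared:.2f}, p(F) = {fit.f_pvalue:.3f}")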
Gaze and affect at 12 months
For affect, all three models were not significant [Model 1: R2 = .03, F(3,57) = .51, p = .68; Model 2: R2 = .08, F(4,56) = 1.27, p = .29; Model 3: R2 = .18, F(6,46) = 1.69, p = .14]. For gaze, Models 1 (R2 = .13, F(2,59) = 2.87, p = .04) and 2 (R2 = .16, F(4,58) = 2.72, p = .038) were significant, whereas Model 3 was not (R2 = .13, F(6,46) = 1.11, p = .37). As shown in Table 5, examination of the coefficients identified that baseline phase 3 was a significant predictor for Model 1 (β = −.11, p = .005) and Model 2 (β = −.10, p = .009).
Table 5 Predictive relationships between 12-month affect and gaze and 24-month ADOS total severity score
EE task
For affect, all three models were not significant [Model 1: R2 = .13, F(7,53) = 1.11, p = .37; Model 2: R2 = .14, F(8,52) = 1.04, p = .42; Model 3: R2 = .29, F(10,41) = 1.69, p = .12]. For gaze, Models 1 (R2 = .25, F(7,52) = 2.51, p = .026) and 2 (R2 = .27, F(8,51) = 2.41, p = .027) were significant, whereas Model 3 was not (R2 = .28, F(10,39) = 1.48, p = .18). As shown in Table 5, examination of the coefficients did not identify any significant effects for gaze.
Gaze and affect at 18 months

For affect, Model 2 (R2 = .15, F(4,58) = 2.59, p = .046) and Model 3 (R2 = .49, F(6,42) = 6.71, p < .001) were significant, whereas Model 1 was not (R2 = .08, F(3,59) = 1.60, p = .20). As shown in Table 6, examination of the coefficients identified that enrollment group (β = −3.83, p = .026) was a significant predictor for Model 2, and that enrollment group (β = −3.86, p = .014) and receptive language age equivalence (β = −.52, p = .006) were significant predictors in Model 3.
For gaze, Models 1 (R2 = .04, F(3,60) = .75, p = .53) and 2 (R2 = .08, F(4,59) = 1.30, p = .28) were not significant, whereas Model 3 was significant (R2 = .40, F(6,42) = 4.74, p = .001). As shown in Table 6, examination of the coefficients identified that receptive language age equivalence was a significant predictor for Model 3 (β = −.42, p = .033).
EE task

For affect, all three models were significant [Model 1: R2 = .26, F(7,55) = 2.75, p = .016; Model 2: R2 = .31, F(8,54) = 3.09, p = .006; Model 3: R2 = .49, F(10,37) = 3.38, p = .002]. As shown in Table 6, examination of the coefficients identified the mask 2 (β = 3.99, p = .038) and face washing (β = −5.28, p = .001) phases as significant predictors for Model 1; the mask 2 (β = 3.66, p = .05) and face washing (β = −5.34, p = .001) phases, as well as enrollment group (β = −3.39, p = .043), as significant predictors for Model 2; and the face washing phase (β = −4.38, p = .036) as a significant predictor for Model 3.
For gaze, all three models were significant [Model 1: R2 = .26, F(7,55) = 2.75, p = .016; Model 2: R2 = .31, F(8,54) = 3.09, p = .006; Model 3: R2 = .40, F(10,36) = 4.08, p = .001]. As shown in Table 6, examination of the coefficients identified the toy removal (β = −.08, p = .026), mask 1 (β = −.12, p = .011), and hair brushing (β = .09, p = .048) phases as significant predictors for Model 1; the mask 1 (β = −.11, p = .018) and hair brushing (β = .09, p = .037) phases as significant predictors for Model 2; and the mask 1 phase (β = −.14, p = .009) and receptive language age equivalence (β = −.46, p = .018) as significant predictors for Model 3.
IBQ-R at 12 and 18 months
As shown in Table 7, none of the three models was significant [Model 1: R2 = .13; F(14,41) = .44, p = .95; Model 2: R2 = .14; F(15,40) = .45, p = .95; Model 3: R2 = .25; F(17,31) = .60, p = .87].
Table 7 Predictive relationships between 12- and 18-month IBQ-R and 24-month ADOS total severity score
As shown in Table 7, none of the three models was significant [Model 1: R2 = .27; F(14,34) = .92, p = .55; Model 2: R2 = .31; F(15,33) = .98, p = .50; Model 3: R2 = .54; F(17,21) = 1.44, p = .21].
We explored behavioral responses (affect and gaze) to emotionally salient stimuli at 12 and 18 months of age by children who were at a low or increased likelihood for a later diagnosis of ASD. Parents completed the IBQ-R temperament questionnaire at 12 and 18 months, and all children received an ADOS-2 assessment for ASD symptomatology at 24 months. There were three main results. First, the IL group showed higher rates of negative affect and spent less time looking at the task objects compared to the LL group during the Emotion-Evoking Task. Second, affect and gaze showed concurrent associations with several IBQ-R subscales for both the LL and IL groups. Third, gaze at 12 months and gaze and affect at 18 months, but not IBQ-R scores, predicted ADOS-2 scores at 24 months in the IL group. These results suggest that behavioral responses to emotionally salient stimuli may provide important information to support early detection of emerging ASD symptoms, complementing parent ratings of temperament in IL children as early as 12 months of age.
A critical consideration when assessing ER is to determine whether the tasks are producing the expected result (i.e., the putatively negative tasks elicit negative responses [55]). The tasks used in this study were adapted from the Lab-TAB [24] and were designed to probe specific emotions. Comparisons across our tasks showed increasingly negative responses from bubbles (most positive) to face washing and hair brushing (most negative). Participants also spent more time looking at the more positive tasks (bubbles and toy play) and less time looking at the negative tasks, particularly toy removal, hair brushing, and face washing. The reduced time spent looking at on-task objects during hair brushing and face washing may also be related to the difficulty of looking at a comb/brush and face cloth during these tasks, as well as attempts to avoid (move away from) the brush and face cloth. Some children in both groups responded in ways that were incongruent with the probed emotion, for example, smiling during toy removal. Despite this individual variability, the vast majority of responses aligned with the probed emotion, which may reflect the validity of the tasks (positive tasks were experienced as positive, and vice versa) and the placement of a neutral task between the positive and negative tasks to allow time to recover from the previous emotionally salient stimuli [55, 58]. That our tasks appear valid is important because we chose tasks that children could experience in their day-to-day lives that would be emotionally valenced (positive or negative) without being too emotionally arousing for the children (evidenced by the low means [< ± 1] for affect during negative and positive tasks).
As noted, we included three baseline periods within our testing protocol. The first allowed participants to acclimate to the testing environment and provided baseline values for affect and gaze, the second allowed an opportunity to recover to minimize carry-over from positive to negative tasks, and the third provided an opportunity to recover from stress produced by the negative tasks, per methodological recommendations [55, 58]. Although we did collect heart rate data in this study, these were not examined in the current report. We did, however, follow the protocol for testing autonomic nervous system reactivity (calculating the difference between affective responses during the emotionally salient stimuli and baseline [35]). Comparisons of affect and gaze during baseline showed no differences between the LL and IL groups. Participants (collectively) had slightly more negative affective responses and spent less time looking at the screen during baselines 2 and 3 compared to baseline 1. Evaluation of ER during baseline is important as it provides a measure of the child's ability to regulate their emotions [2]. That our participants showed more negative affect and spent less time looking at the computer screen during successive baseline periods may be the result of (1) the EE Task, highlighting the importance of baseline periods to minimize carry-over effects and reduce cumulative stress to the child caused by emotionally challenging tasks, (2) the child becoming restless or fatigued from the EE Task, and/or (3) the child becoming bored by the baseline video, which was the same across the three baseline periods.
The LL and IL groups showed differential responding during the emotionally salient tasks, as predicted. The IL group displayed higher rates of negative affect and spent less time looking at the task objects compared to the LL group, in accordance with previous research on parent ratings of temperament in children diagnosed with ASD [7, 21, 36, 57]. Although there is a paucity of research on observed ER in children under 2 who are at increased likelihood of/diagnosed with ASD, a few studies have explored ER in children between ages 2 and 5 years. Jahromi et al. [34] assessed facial affect in 4-year-old children with and without ASD during two frustration tasks (toy in a locked box and unsolvable puzzle) and found no differences between the two groups. Similarly, Zantinge et al. [67] presented 5-year-old children with and without ASD with an unpredictable toy robot and recorded facial affect; again, the researchers did not find group differences. Hirschler-Guttenberg et al. [30] measured affect and gaze during tasks designed to elicit fear (experimenter wears masks) and joy (child and parent play with hand puppets) in 5-year-old children with and without ASD. Although no differences were found for gaze, positive emotions were reduced and fear was increased during the fearful task in children with ASD, but only when fathers rather than mothers were present. The protocol that most closely resembled ours was carried out by Macari et al. [39]. Two-year-old children with ASD and neurotypical children participated in tasks designed to elicit anger, fear, and joy using tasks from the Lab-TAB. The researchers found that children with ASD displayed lower-intensity fear, but no differences for anger or joy when compared to neurotypical peers.
Our findings of differences between the LL and IL groups may be explained by differences in methodology relative to previous studies. First, our participants were tested at younger ages (12 and 18 months vs. approximately 20 [39] or 50 months [30, 34, 67]) and, as such, may be more reactive because ER systems are still developing. Second, the previous studies included smaller samples and selected children with higher cognitive and language functioning [34]. Our relatively large sample of IL children was tested at two time points, and we did not select participants based on level of cognitive or language ability. Third, we employed shorter intervals for coding affect (5 s) compared to the 10-s (or longer) intervals used by Jahromi et al. [34], Macari et al. [39], and Zantinge et al. [67], which may have allowed us to capture more nuanced changes in affect.
As predicted, the validity of our Emotion-Evoking Task for assessing emotion regulation was supported by concurrent relations with temperament on the parent-reported IBQ-R. Interestingly, when the IL and LL groups were combined, significant relationships were not found at 12 months of age, but were found for both affect (mask 1, mask 2, and hair brushing) and gaze (baseline 1) at 18 months. When the groups were separated, the IL group did show significant relationships between three subscales of the IBQ-R at 12 months and affective responses during the mask 1 phase. At 18 months, affect and gaze during baseline phase 1 (before any EE Task phase) were associated with low pleasure and perceptual sensitivity, and affective responses during mask 2 and hair brushing were associated with sadness and distress to limitation, respectively. For the LL group, affect during mask 2 was associated with recovery rate, and gaze during face washing was associated with cuddliness. There were no relationships for the LL group at 18 months. These findings are important because they suggest that our EE Task shows convergent validity with parent-reported temperament, specifically the affective responses during the negative tasks (mask 1, mask 2, hair brushing, and face washing) and the gaze durations during baseline 1. These results are in line with a recent review by Sacrey et al. [55] of physiological and affective responses during emotionally salient tasks, which found that the overwhelming majority of studies used negatively salient tasks to elicit responses. That there were relationships between the EE Task and the IBQ-R at 18 months for the combined and IL groups, but not the LL group, may be due to the age parameters of the IBQ-R, whose suggested use is for infants between 6 and 12 months of age. We included it at 18 months both for consistency between time points for the parent-reported measure and the EE Task, and because of variability in the developmental age of the IL group (which was confirmed by the Mullen subscales at 18 months). Nevertheless, all subscales of the IBQ-R were significantly correlated with each other at 12 and 18 months. Temperament is viewed as the biologically based disposition to express certain emotions when challenged; with development, we learn to regulate our expressed emotions with respect to our inherent disposition using a variety of ER strategies [18].
Associations between affect, gaze, and IBQ-R scales and ADOS-2 total scores supported our prediction that affect and gaze would predict ASD symptoms at age 2. The discriminatory ability of affect and gaze was important for the IL group: gaze at 12 months and both affect and gaze at 18 months were predictive of 24-month ADOS scores in the IL group. Differences in ER have been associated with later mental health disorders [32], as maladaptive ER strategies tax our cognitive capacity and increase autonomic arousal, resulting in long-term ER dysregulation [4, 25]. As such, our results are in accordance with studies that report a higher prevalence of emotional difficulties in children with ASD compared to neurotypical children [12] and children with intellectual disability [6]. Rates of emotional difficulties in children with ASD are reported to range from 71 to 86% [50, 61], with over 50% reporting four or more internalizing or externalizing problems [43]. Because emotional difficulties can have negative effects on a child's academic ability and quality of life, as well as on their families [19, 60, 66], the earlier ER difficulties can be identified, the earlier interventions can be implemented. For example, the Attachment and Biobehavioral Catch-Up intervention has been shown to improve emotional dysregulation through mother-oriented strategies in emotionally dysregulated infants as young as 12 months [28], although long-term effects will be important to demonstrate.
Strengths and limitations
Our study has several strengths: we measured behavioral responses to positively and negatively valenced tasks twice prior to age 2, we included three baseline periods to minimize carry-over effects between positive and negative tasks, our effect sizes were within the medium range, and our sample of IL infant siblings was relatively large.
Limitations include, first, that there may be a difference in ER between IL siblings and children with non-familial ASD; as such, our results may not be generalizable to non-IL samples. Second, due in part to the age of participants, we did not identify outcomes based on clinical best-estimate diagnosis (ASD versus no ASD), but rather compared LL and IL groupings and used ADOS-2 scores as an index of ASD symptoms. Third, the lower percentage of time the IL group spent looking at the on-task objects may have influenced the affect results. That the IL group spent less time looking at the on-task objects during the mask 1, mask 2, and toy removal phases, but did not differ in affective responding from the LL group, may suggest that the IL group was gazing away from the on-task object as a means of regulating their affective response [62]. Further examination of the different types of gaze used during the phases of the EE Task (e.g., looking at the parent) is warranted. However, there is value in examining early ASD symptoms on a continuum, especially in relation to emotion regulation in siblings of children with ASD, for whom a higher prevalence of mental health difficulties is an additional concern beyond the increased likelihood of ASD [31].
Future work will include comparison of affect and gaze between IL siblings stratified by ASD diagnosis at age 3. Nevertheless, the current study contributes to the growing evidence that ER difficulties are one of the earliest expressions of ASD vulnerability and manifest as early as 12 months of age. These results have the potential to inform ASD surveillance efforts as well as novel treatment strategies to interrupt pathways between emotional dysregulation and academic, behavioral, and social impairments [5, 14, 15, 49, 63, 65].
Our study is the first to show that children with an increased familial likelihood of an ASD diagnosis differ from children at community-level risk in directly observed behavioral responses to emotionally evocative stimuli by as young as 12 months. These findings add to the cumulative evidence that children at IL for ASD have very early ER difficulties. Observed behavioral responses in the IL sample, but not parent ratings on the IBQ-R, were associated with later ASD symptoms, highlighting the importance of directly observing behavioral responses in emotionally salient situations. The associations between increased negative affect during the mask 1, mask 2, hair brushing, and face washing phases and parent endorsement of more problematic scores on the scales that measure distress or sadness when placed in a confined position, barred from performing a desired activity, or engaged in caretaking activities may help focus future work on ER in children with ASD on those tasks and scales that show the highest concordance. Further, these more negatively salient tasks were the ones that predicted ASD symptomatology at 24 months. These observations may provide nuanced information that can complement standard parent-reported temperament questionnaires.
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
ADOS:
Autism Diagnostic Observation Schedule—2nd edition
APA:
American Psychiatric Association
EE Task:
Emotion-Evoking Task
IBQ-R:
Infant Behavior Questionnaire-Revised
IL:
Infant with an increased likelihood of an ASD diagnosis
Lab-TAB:
Laboratory Temperament Assessment Battery
LL:
Infant with a community-level risk of an ASD diagnosis
American Psychiatric Association. Diagnostic and statistical manual of mental disorder. 5th ed. American Psychiatric Association; 2013. https://doi.org/10.1176/appi.books.9780890425596.744053.
Appelhans BM, Luecken LJ. Heart rate variability as an index of regulated emotional responding. Rev Gen Psychol. 2006;10:229–40.
Ben Shalom D, Mostofsky SH, Hazlett RL, Goldberg MC, Landa RJ, Faran Y, et al. Normal physiological emotions but differences in expression of conscious feelings in children with high-functioning autism. J Autism Dev Dis. 2006;36:395–400.
Berking M, Whitley B. Affect regulation training: a practitioner's manual. New York: Springer Science+Business Media; 2014. https://doi.org/10.1007/978-1-4939-1022-9_1.
Blair C, Razza RP. Relating effortful control, executive function, and false belief understanding to emerging math and literacy ability in kindergarten. Child Dev. 2007;78(2):647–63.
Brereton AV, Tonge BJ, Einfeld SL. Psychopathology in children and adolescents with autism compared to young people with intellectual disability. J Autism Dev Disord. 2006;36(7):863–70. https://doi.org/10.1007/s10803-006-0125-y.
Capps L, Kasari C, Yirmiya N, Sigman M. Parental perception of emotional expressiveness in children with autism. J Consult Clin Psychol. 1993;61:475–84.
Cohen J. A coefficient of agreement for nominal scales. Educ Psychol Meas. 1960;20:37–46.
Cole PM, Michel MK, Teti LO. The development of emotion regulation and dysregulation: a clinical perspective. Monogr Soc Res Child Dev. 1994;59(2/3):73. https://doi.org/10.2307/1166139.
DeGangi GA, Breinbauer C. The symptomatology of infants and toddlers with regulatory disorders. J Dev Learn Dis. 1997;1(1):183–215.
DeGangi GA, Greenspan SI. The development of sensory functioning in infants. Phys Occup Ther Pediatr. 1988;8(3):21–33.
Dickerson Mayes S, Calhoun SL, Murray MJ, Ahuja M, Smith LA. Anxiety, depression, and irritability in children with autism relative to other neuropsychiatric disorders and typical development. Res Autism Spectrum Dis. 2011;5(1):474–85. https://doi.org/10.1016/j.rasd.2010.06.012.
Eisenberg N, Fabes RA, Guthrie IK, Murphy BC, Maszk P, Holmgren R, et al. The relations of regulation and emotionality to problem behavior in elementary school children. Dev Psychopathol. 1996;8(1):141–62.
Eisenberg N, Guthrie IK, Fabes RA, Shepard S, Losoya S, Murphy B, et al. Prediction of elementary school children's externalizing problem behaviors from attentional and behavioral regulation and negative emotionality. Child Dev. 2000;71(5):1367–82.
Eisenberg N, Spinrad TL, Eggum ND. Emotion-related self-regulation and its relation to children's maladjustment. Ann Rev Clin Psychol. 2010;6:495–525.
Ersoy M, Charman T, Pasco G, Carr E, Johnson MH, Jones EJH. Developmental paths to anxiety in an autism-enriched infant cohort: the role of temperamental reactivity and regulation. J Autism Dev Dis. 2020. https://doi.org/10.1007/s10803-020-04734-7.
Filliter JH, Longard J, Lawrence MA, Zwaigenbaum L, Brian J, Garon N, et al. Positive affect in infant siblings of children diagnosed with autism spectrum disorder. J Abnorm Child Psychol. 2015;43(3):567–75. https://doi.org/10.1007/s10802-014-9921-6.
Fox NA, Henderson HA, Perez-Edgar K, White LK. The biology of temperament: an integrative approach. In: Nelson C, Luciana M, editors. The handbook of developmental cognitive neuroscience. Cambridge: MIT Press; 2008. p. 839–54.
Gadow KD, Devincent C, Schneider J. Predictors of psychiatric symptoms in children with an autism spectrum disorder. J Autism Dev Disord. 2008;38(9):1710–20. https://doi.org/10.1007/s10803-008-0556-8.
Garon N, Zwaigenbaum L, Bryson S, Smith IM, Brian J, Roncadin C, et al. Temperament and its association with autism symptoms in a high-risk population. J Abnorm Child Psychol. 2015;44(4):757–69.
Garon N, Zwaigenbaum L, Bryson S, Smith IM, Brian J, Roncadin C, Vaillancourt T, Armstrong V, Sacrey LA, Roberts W. Temperament and its Association with Autism Symptoms in a Highrisk Population. J Abnorm Child Psychol. 2016;44(4):757–69. https://doi.org/10.1007/s10802-015-0064-1.
Goldsmith HH. Toddler behavior assessment questionnaire. Eugene: University of Oregon, Department of Psychiatry; 1996.
Goldsmith HH, Rothbart MK. Contemporary instruments for assessing early temperament by questionnaire and in the laboratory. In: Strelau J, Angleitner A, editors. Explorations in temperament. Perspectives on individual differences. Cham: Springer; 1991. p. 249–72. https://doi.org/10.1007/978-1-4899-0643-4_16.
Goldsmith HH, Rothbart MK. Prelocomotor and locomotor laboratory temperament assessment battery, Lab-TAB, Version 3.0. Technical Manual, Department of Psychology, University of Wisconsin; 1996.
Gratz K, Roemer L. Multidimensional assessment of emotion regulation and dysregulation: development, factor structure, and initial validation of the difficulties in emotion regulation scale. J Psychopath Behav Assess. 2004;26:41–54. https://doi.org/10.1023/B:JOBA.0000007455.08539.94.
Gross JJ, Jazaieri H. Emotion, emotion regulation, and psychopathology: an affective science perspective. Clin Psychol Sci. 2014;2(4):387–401.
Gross JJ, Thompson RA, editors. Emotion regulation: conceptual foundations. New York: Guilford Press; 2007.
Hepworth AD, Berlin LJ, Martoccio TL, Cannon EN, Berger RH, Jones Harden B. Supporting infant emotion regulation through attachment-based intervention: a randomized controlled trial. Prev Sci. 2020;21:702–13. https://doi.org/10.1007/s11121-020-01127-1.
Hinnebusch AJ, Miller LE, Fein DA. Autism spectrum disorders and low mental age: diagnostic stability and developmental outcomes in early childhood. J Autism Dev Dis. 2017;47:3967–82. https://doi.org/10.1007/s10803-017-3278-y.
Hirschler-Guttenberg Y, Golan O, Ostfeld-Etzion S, Feldman R. Mothering, fathering, and the regulation of negative and positive emotions in high-functioning preschoolers with autism spectrum disorder. J Child Psychol Psychiatry. 2015;56(5):530–9. https://doi.org/10.1111/jcpp.12311.
Howlin P, Moss P, Savage S, Bolton P, Rutter M. Outcomes in adult life among siblings of individuals with autism. J Autism Dev Dis. 2015;45(3):707–18. https://doi.org/10.1007/s10803-014-2224-5.
Inwood E, Ferrari M. Mechanisms of change in the relationship between self-compassion, emotion regulation, and mental health: a systematic review. Appl Psychol Health Wellbeing. 2018;10(2):215–35. https://doi.org/10.1111/aphw.12127.
Izard CE, Dougherty FE, Bloxom BM, Kotsch WE. The differential emotions scale: a method of measuring the subjective experience of discrete emotions. Unpublished manuscript. Newark: Department of Psychology, University of Delaware; 1974.
Jahromi LB, Meek SE, Ober Reynolds S. Emotion regulation in the context of frustration in children with high functioning autism and their typical peers. J Child Psychol Psychiatry. 2012;53(12):1250–8. https://doi.org/10.1111/j.1469-7610.2012.02560.x.
Jones-Mason K, Alkon A, Coccia M, Bush NR. Autonomic nervous system functioning assessed during the still-face paradigm: a meta-analysis and systematic review of methods, approach, and findings. Dev Rev. 2018;50:113–39.
Kasari C, Sigman M. Linking parental perceptions to interactions in young children with autism. J Autism Dev Dis. 1997;27:39–57.
Lord C, Rutter M, DiLavore PC, Risi S, Gotham K, Bishop SL. Autism diagnostic observation schedule. 2nd ed. Torrance: Western Psychological Services; 2012.
Loveland KA. Social-emotional impairment and self-regulation in autism spectrum disorders. In: Nadel J, Muir D, editors. Emotional development: recent research advances. Oxford: Oxford University Press; 2005. p. 365–82.
Macari S, DiNicola L, Kane-Grade F, Prince E, Vernetti A, Powell K, Fontenelle S 4th, Chawarska K. Emotional expressivity in toddlers with autism spectrum disorder. J Am Acad Child Adolesc Psychiatry. 2018;57(11):828–36.e2. https://doi.org/10.1016/j.jaac.2018.07.872.
Malik F, Marwaha R. Developmental stages of social emotional development in children. Treasure Island: StatPearls Publishing; 2020.
Marston L. Introductory statistics for health and nursing using SPSS. Thousand Oaks: Sage Publications; 2010.
Martel MM, Nigg JT, Von Eye A. How do trait dimensions map onto ADHD symptom domains? J Abnorm Child Psychol. 2009;37(3):337–48.
Maskey M, Warnell F, Parr JR, Le Couteur A, McConachie H. Emotional and behavioral problems in children with autism spectrum disorder. J Autism Dev Dis. 2013;43:851–9.
Mazefsky CA, Herrington J, Siegel M, Scarpa A, Maddox BB, Scahill L, White SW. The role of emotion regulation in autism spectrum disorder. J Am Acad Child Adolesc Psychiatry. 2013;52(7):679–88. https://doi.org/10.1016/j.jaac.2013.05.006.
Mazefsky CA, White SW. Emotion regulation: Concepts and practice in autism spectrum disorder. Child Adolesc Psychiatry Clin N Am. 2014;23(1):15–24. https://doi.org/10.1016/j.chc.2013.07.002.
Morris AS, Silk JS, Steinberg L, Myers SS, Robinson LR. The role of the family context in the development of emotion regulation. Soc Dev. 2007;16(2):361–88. https://doi.org/10.1111/j.1467-9507.2007.00389.x.
Mullen E. Mullen scales of early learning. Circle Pines, MN: American Guidance Services; 1995.
Nigg JT, Casey B. An integrative theory of attention-deficit/hyperactivity disorder based on the cognitive and affective neurosciences. Dev Psychopathol. 2005;17(03):785–806.
Nolan EE, Gadow KD, Sprafkin J. Teacher reports of DSM-IV ADHD, ODD, and CD symptoms in schoolchildren. J Am Acad Child Adolesc Psychiatry. 2001;40(2):241–9.
Ooi YP, Tan ZJ, Lim CX, Goh TJ, Sung M. Prevalence of behavioral and emotional problems in children with high-functioning autism spectrum disorders. Aust N Z J Psychiatry. 2011;45:370–5.
Putnam SP, Gartstein MA, Rothbart MK. Measurement of fine-grained aspects of toddler temperament: the early childhood behavior questionnaire. Infant Beh Dev. 2006;29(3):386–401. https://doi.org/10.1016/j.infbeh.2006.01.004.
Rothbart MK. Measurement of temperament in infancy. Child Dev. 1981;52(2):569–78. https://doi.org/10.2307/1129176.
Rothbart MK, Derryberry D. Development of individual differences in temperament. In: Lamb ME, Brown AL, editors. Advances in developmental psychology, vol. I. Mahwah: Lawrence Erlbaum Association Inc; 1981. p. 37–86.
Rothbart MK, Ellis LK, Rueda M, Posner MI. Developing mechanisms of temperamental effortful control. J Personal. 2003;71:1113–43. https://doi.org/10.1111/1467-6494.7106009.
Sacrey LR, Raza S, Armstrong V, Brian JA, Kushki A, Smith IM, et al. Physiological measurement of emotion from infancy to preschool: a systematic review and meta-analysis. Brain Behav. 2020. https://doi.org/10.1002/brb3.1989.
Samson AC. Humor(lessness) elucidated—sense of humor in individuals with autism spectrum disorders: review and introduction. Humor Int J Humor Res. 2013;26:393–409.
Samson AC, Huber O, Gross JJ. Emotion regulation in Asperger's syndrome and high functioning autism. Emotion. 2012;12:659–65.
Suurland J, Van der Heijden KB, Smaling HJA, Huijbregts SCJ, Van Goozen SHM, Swaab H. Infant autonomic nervous system response and recovery: associations with maternal risk status and infant emotion regulation. Dev Psychopathol. 2017;29(3):759–73.
Thompson RA. Emotion regulation: a theme in search of definition. Monogr Soc Res Child Dev. 1994;59(2–3):25–52. https://doi.org/10.1111/j.1540-5834.1994.tb01276.x.
Ting V, Weiss J. Emotion regulation and parent co-regulation in children with autism spectrum disorder. J Autism Dev Disord. 2017;47:680–9. https://doi.org/10.1007/s10803-016-3009-9.
Totsika V, Hastings RP, Emerson E, Lancaster GA, Berridge DM. A population-based investigation of behavioral and emotional problems and maternal mental health: associations with autism spectrum disorder and intellectual disability. J Child Psychol Psychiatry. 2011;52:91–9.
Tronick EZ. Emotions and emotional communication in infants. Am Psychol. 1989;44(2):112–9. https://doi.org/10.1037/0003-066X.44.2.112.
Upshur C, Wenz-Gross M, Reed G. A pilot study of early childhood mental health consultation for children with behavioral problems in preschool. Early Child Res Q. 2009;24(1):29–45.
Weiss JA, Thomson K, Chan L. A systematic literature review of emotion regulation measurement in individuals with autism spectrum disorder. Autism Res. 2014;7:629–48.
Welsh JA, Nix RL, Blair C, Bierman KL, Nelson KE. The development of cognitive skills and gains in academic school readiness for children from low-income families. J Educ Psychol. 2010;102(1):43–53.
Wood JJ, Gadow KD. Exploring the nature and function of anxiety in youth with autism spectrum disorders. Clin Psychol: Sci Practice. 2010;17(4):281–92. https://doi.org/10.1111/j.1468-2850.2010.01220.x.
Zantinge G, van Rijn S, Stockmann L, Swaab H. Concordance between physiological arousal and emotion expression during fear in young children with autism spectrum disorders. Autism. 2019;23(3):629–38. https://doi.org/10.1177/1362361318766439.
The authors thank the Canadian Institutes of Health Research (CIHR), Brain Canada, Azrieli Foundation, and Autism Science Foundation for funding our research.
Department of Pediatrics, Autism Research Centre – E209, Glenrose Rehabilitation Hospital, University of Alberta, 10230-111 Avenue, Edmonton, AB, T5G 0B7, Canada
Lori-Ann R. Sacrey, Lonnie Zwaigenbaum & Sarah Raza
Bloorview Research Institute, University of Toronto, Toronto, ON, Canada
Jessica A. Brian
IWK Health Centre, Dalhousie University, Halifax, NS, Canada
Isabel M. Smith & Vickie Armstrong
University of Ottawa, Ottawa, ON, Canada
McMaster University, Hamilton, ON, Canada
Louis A. Schmidt
Lori-Ann R. Sacrey
Lonnie Zwaigenbaum
Isabel M. Smith
Vickie Armstrong
Sarah Raza
LRS analyzed and interpreted the data. VA coded the behavioral data. All authors contributed to the design of the research study and provided constructive feedback on initial drafts. All authors read and approved the final manuscript.
Correspondence to Lori-Ann R. Sacrey.
All participants provided written informed consent prior to enrollment in the study. All procedures were approved by their respective institutions [names blinded].
We obtained consent from the adults and the caregiver of the child depicted in Fig. 1 to display their images.
The authors declare that they have no competing interests.
Brief Coding Scheme for Phases, Affect, and Gaze for the Emotion-Evoking Task.
Sacrey, LA.R., Zwaigenbaum, L., Brian, J.A. et al. Affect and gaze responses during an Emotion-Evoking Task in infants at an increased likelihood for autism spectrum disorder. Molecular Autism 12, 63 (2021). https://doi.org/10.1186/s13229-021-00468-0
Autism
Increased likelihood cohort | CommonCrawl |
How does navigation system behavior influence human behavior?
Annina Brügger ORCID: orcid.org/0000-0003-3517-664X1,
Kai-Florian Richter2 &
Sara Irina Fabrikant3
Cognitive Research: Principles and Implications volume 4, Article number: 5 (2019)
Navigation systems are ubiquitous tools to assist wayfinders of the mobile information society with various navigational tasks. Whenever such systems assist with self-localization and path planning, they reduce human effort for navigating. Automated navigation assistance benefits navigation performance, but research seems to show that it negatively affects attention to environment properties, spatial knowledge acquisition, and retention of spatial information. Very little is known about how to design navigation systems for pedestrian navigation that increase both navigation performance and spatial knowledge acquisition. To this end, we empirically tested participants (N = 64) using four different navigation system behaviors (between-subject design). Two cognitive processes with varying levels of automation, self-localization and allocation of attention, define navigation system behaviors: either the system automatically executes one of the processes (high level of automation), or the system leaves the decision of when and where to execute the process to the navigator (low level of automation). In two experimental phases, we applied a novel empirical framework for evaluating spatial knowledge acquisition in a real-world outdoor urban environment. First, participants followed a route assisted by a navigation system and, simultaneously, incidentally acquired spatial knowledge. Second, participants reversed the route using the spatial knowledge acquired during the assisted phase, this time without the aid of the navigation system. Results of the route-following phase did not reveal differences in navigation performance across groups using different navigation system behaviors. However, participants using systems with higher levels of automation seemed not to acquire enough spatial knowledge to reverse the route without navigation errors. Furthermore, employing novel methods to analyze mobile eye tracking data revealed distinct patterns of human gaze behavior over time and space. We thus can demonstrate how to increase spatial knowledge acquisition without harming navigation performance when using navigation systems, and how to influence human navigation behavior with varying navigation system behavior. Thus, we provide key findings for the design of intelligent automated navigation systems in real-world scenarios.
Envision that you exit a bus on your way to your friend's house, but you have no idea where your friend's house is. Luckily, you have your friend's address on your phone, which is equipped with a navigation system. You confidently follow the route suggested by your smart device. As you arrive at your friend's house, you discover that your friend is not there and that the battery of your phone is empty. On top of all this, you realize that you have lost your keys somewhere along the way. Would you be able to recall your path to the bus stop to search for your lost keys?
Navigation systems assist us during navigation, but they also affect our navigation behavior. During assisted navigation, we may completely rely on the system and tend to focus either on it or on matters other than navigation. We thus do not attend to the environment surrounding us, which degrades our spatial knowledge acquisition. Such behavioral changes are mostly unintentional and not properly empirically investigated, particularly in real-world environments.
In this study, we examine how navigation system behavior (in terms of automating cognitive processes) changes our behavior in and attention to the environment. We will only be able to design intelligent systems with a deeper understanding of their effects on human navigation behavior. Perhaps then the task of finding the same way back might not be as difficult as it was for some participants in our study.
The cognitive process of "navigation" consists of two major components: locomotion and wayfinding. Locomotion refers to the actual bodily motion of a human moving in his or her nearby surroundings. Wayfinding is the planning process of finding a destination, for example, by using landmarks for orientation and decision-making (Montello, 2005). Finding a destination is an essential human behavior (Montello, 2005) and requires knowledge about the sequence of environmental properties, turns, segments, and sights along the route (Downs & Stea, 1973; O'Keefe & Nadel, 1978). To find our way from one place to another in partly or fully unknown environments, we nowadays often use automated navigation systems. Navigation systems primarily aim to deliver easy-to-understand navigation instructions that support people in reaching a destination more quickly and help reduce cognitive load during wayfinding (Allen, 1999).

Despite the popularity of navigation systems, concerns have been raised in the literature about the negative effects of their extensive use on spatial knowledge acquisition (e.g., Gardony, Brunyé, Mahoney, & Taylor, 2013; Klippel, Hirtle, & Davies, 2010; Montello, 2005). These systems consume most of a pedestrian's attention, leading to decreased spatial knowledge (Parush, Ahuvia, & Erev, 2007) and even to fatal accidents (Lin, Kuehl, Schöning, & Hecht, 2017) due to divided attention between the survey perspective offered by the navigation system and the route perspective, i.e., the first-person view (Gardony et al., 2013). As the trend towards using navigation systems increases, a considerable amount of literature has recently examined how navigation systems negatively affect spatial knowledge acquisition and human navigation behavior. For example, previous research comparing paper maps with navigation systems (e.g., Münzer, Zimmer, Schwalm, Baus, & Aslan, 2006) has found that pedestrians using paper maps show better spatial knowledge and orientation, but at the cost of lower navigation performance (e.g., longer duration to destination) compared to when they are using navigation systems. Despite considerable research demonstrating detrimental effects of navigation systems on spatial knowledge acquisition (e.g., Bertel, Dressel, Kohlberg, & von Jan, 2017; Parush et al., 2007; Willis, Hölscher, Wilbertz, & Li, 2009), surprisingly few empirical investigations have been conducted into ways to balance navigation performance and spatial knowledge acquisition during assisted navigation. However, achieving such a balance seems feasible because navigation systems can feature varying levels of automation and, with this, vary the level of human involvement in decision-making. The influence of different navigation system behaviors (that is, different levels of automation) on human behavior is not yet understood.

One of the main limitations of many empirical navigation studies is the missing connection to a real-world environment and, thus, their ecological validity (Dai, Thomas, & Taylor, 2018; Kiefer, Giannopoulos, & Raubal, 2013). This study empirically investigates the effect of different navigation system behaviors on human navigation and spatial knowledge acquisition in real-world navigation tasks in an urban, outdoor environment. One of the most important goals in designing different navigation system behaviors is to keep navigation performance high while increasing the user's spatial knowledge acquisition.
We will first briefly review the research investigating the impact of assisted navigation on spatial knowledge acquisition, which also motivates our research questions. We then introduce the empirical framework and study design. This is followed by a summary of the results of the study, which we critically discuss in the subsequent section. The paper ends with a summary of the implications of different automated navigation system behaviors on human behavior that should be considered when designing navigation systems and conducting outdoor studies.
Spatial knowledge acquisition with navigation systems
Spatial knowledge acquisition has been discussed in several different research fields. Spatial knowledge was originally divided into three types: landmark, route, and survey knowledge (Siegel & White, 1975). However, critical arguments have emerged on whether the three types can really be (strictly) separated (e.g., Montello, 1998). Research has addressed how these types of spatial knowledge might change during navigation system use in different environments (e.g., Ishikawa, Fujiwara, Imai, & Okabe, 2008; Münzer et al., 2006; Parush et al., 2007; Willis et al., 2009).
For navigational tasks, acquiring spatial knowledge is crucial to orient and navigate in space without losing the way (Montello, 2005; Siegel & White, 1975). However, the formation of mental spatial representations is demanding and limited by humans' attentional capacities (Downs & Stea, 1973; Münzer et al., 2006; Siegel & White, 1975; Wahn & König, 2017; Weisberg & Newcombe, 2018). Therefore, we often use a navigational aid to support the cognitive processes required to navigate as optimally as possible in an unknown environment (Ludwig, Müller, & Ohm, 2014). Several researchers have compared different kinds of navigation aids (e.g., Bakdash, Linkenauger, & Proffitt, 2008; Hirtle & Raubal, 2013; Ishikawa et al., 2008; Ishikawa & Takahashi, 2013; Klippel et al., 2010; Parush et al., 2007; Richter, Dara-Abrams, & Raubal, 2010; Willis et al., 2009). All found that modern digital navigation systems have a negative impact on the formation of mental spatial representations, but people using navigation systems are more time-efficient and effective in finding the route than people using paper maps (Dickmann, 2012; Lee & Cheng, 2008). On the one hand, a paper map can support tasks such as route planning, self-localization, and orientation (Thorndyke & Hayes-Roth, 1982), all of which require attending to the environment and acquiring information about environmental properties, such as spatial configuration and landmarks. On the other hand, navigation systems seem to change how humans attend to the environment, leading to a loss of the crucial skill of acquiring environmental knowledge (Parush et al., 2007). The use of a navigation system reduces what properties from the surroundings a navigator selects and diminishes the navigator's allocation of attentional resources (Ishikawa et al., 2008). The navigation system designers pre-determine which properties get selected and how they are represented by the system, which, thus, also pre-determines the allocation of attention (Parasuraman, 2000).
During assisted navigation, the navigation system automatically selects and depicts environmental properties (e.g., landmarks) without any user intervention, which leads to decreased attentiveness to relevant properties (Taylor, Brunyé, & Taylor, 2008). Consequently, the navigator does not attend to the traversed surroundings, but reallocates attention toward the automated navigation system (Gardony et al., 2013; Willis et al., 2009). The resources are transferred toward the system itself and the execution of its instructions (Parasuraman, 2000). The navigator has to constantly switch between a survey perspective offered by the navigation system and a route perspective, i.e., the first-person view (Dai et al., 2018; Gardony et al., 2013). The distribution of human attentional resources changes with the use of navigation systems compared to no use of a navigation aid. Without a system, humans actively make decisions and interact predominately with their surroundings. Several studies have demonstrated that automated guidance divides a navigator's attention between the navigation system and the environment (e.g., Gardony et al., 2013; Ishikawa et al., 2008). For example, a constantly updating GPS position signal (blinking light or beeping sound) on a navigation system can induce such an attentional division. As the navigator's position is continuously updated, the visual tracking of the GPS signal distracts the navigator's attention from the surroundings toward the system (Ishikawa et al., 2008). When continuously relying on this kind of positional update, we do not attend to the information the traversed environment provides, and thus we lose the respective skill (Parush et al., 2007). But if the navigation system fails, navigators have to rely on their acquired knowledge, which is challenging because failing to mentally process properties along a travelled route ultimately results in decreased spatial knowledge (e.g., Hirtle & Raubal, 2013; Huang, Schmidt, & Gartner, 2012; Münzer et al., 2006; Parasuraman, 2000; Parush et al., 2007).
Eye tracking is a technology for better understanding attentional behavior in a spatial context: it records a navigator's gaze behavior during navigation (Duchowski, 2007; Holmqvist et al., 2011; Kiefer, Giannopoulos, Raubal, & Duchowski, 2017). Mobile eye tracking is particularly interesting in navigation scenarios because it can measure a human sense (gaze behavior) in real-world environments quite accurately and thus provides some indication of the information acquisition process (Kiefer et al., 2017). Fixation duration, as an eye tracking measure, can be interpreted as an index of cognitive function and of the visual complexity of the scene (Duchowski, 2007; Goldberg & Kotval, 1999). However, the annotation process of the recorded data is laborious due to individual walking speeds and viewing directions in a constantly changing spatio-temporal context (Kiefer et al., 2017). Efficient methodologies to analyze such data are yet to be developed.
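Fixation durations of the kind interpreted above are derived from raw gaze samples by a fixation-detection algorithm. The sketch below implements a basic dispersion-threshold (I-DT) detector, one standard approach; the sample format and the dispersion and duration thresholds are illustrative assumptions, not parameters reported by the works cited here.

```python
# Minimal dispersion-threshold (I-DT) fixation detector: a window of gaze
# samples counts as a fixation if its spatial dispersion stays below a
# threshold for at least a minimum duration. Thresholds are illustrative.
from typing import List, Tuple

def detect_fixations(samples: List[Tuple[float, float, float]],
                     max_dispersion: float = 25.0,   # pixels (assumed)
                     min_duration: float = 0.100     # seconds (assumed)
                     ) -> List[Tuple[float, float]]:
    """samples: (timestamp_s, x, y); returns (onset_s, duration_s) pairs."""
    fixations = []
    i = 0
    while i < len(samples):
        # Grow an initial window that covers the minimum duration.
        j = i
        while j < len(samples) and samples[j][0] - samples[i][0] < min_duration:
            j += 1
        if j >= len(samples):
            break
        window = samples[i:j + 1]
        xs = [s[1] for s in window]
        ys = [s[2] for s in window]
        dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
        if dispersion <= max_dispersion:
            # Extend the window while dispersion stays under the threshold.
            while j + 1 < len(samples):
                xs.append(samples[j + 1][1]); ys.append(samples[j + 1][2])
                if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                    break
                j += 1
            fixations.append((samples[i][0], samples[j][0] - samples[i][0]))
            i = j + 1
        else:
            i += 1  # slide past a noisy sample and try again
    return fixations
```

In mobile settings, the detected fixations must additionally be mapped onto a continuously changing scene video, which is the laborious annotation step noted above.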
Active role during spatial knowledge acquisition
An increasing number of empirical studies investigate how active and passive roles during navigation may influence the attention paid to the immediate surroundings of a navigator and, consequently, may support or hinder the formation of mental spatial representations. Münzer et al. (2006) introduced the active learning hypothesis. These authors contend that added active efforts during assisted navigation lead to spatial learning benefits. Attentiveness toward the environment (Klippel et al., 2010) and the level of control and the amount of decision-making (Bakdash et al., 2008) are suggested to yield differences in spatial knowledge acquisition. When actively making decisions and facing consequences, humans connect to their surroundings (Bakdash et al., 2008). Gardony et al. (2013) explored the relationship between navigators' attention to the surroundings and their spatial decision-making during navigation system use. They discovered that if both decision-making with and attention to the traversed environment decrease, the navigators' spatial knowledge acquisition also decreases.
Participants who knew that they had to learn a route (intentional learning) showed better route knowledge than participants who did not know that they would be asked to memorize a route (incidental learning; Chrastil & Warren, 2012). The two learning types also differ in the ability to recall objects: intentional learners are better at recalling the location of objects, and incidental learners are better at recalling the names of objects (Chrastil & Warren, 2012; Van Asselen, Fritschy, & Postma, 2006). Chrastil and Warren (2012, p. 14) state that "(…) full route knowledge and survey knowledge appears to require the intention to learn, implying the need for attention to the relevant spatial relation. (…) intentional encoding appears to be necessary for place-action association, reproducing a route, and spatial relations between landmarks." A navigation system directing attention to specific properties in the surroundings can lead to active encoding of spatial knowledge (Chrastil & Warren, 2012) and should be considered when designing such a system.
Designing navigation systems that actively involve the user to increase spatial knowledge acquisition
During navigation system use, the locomotion component of navigation is emphasized over wayfinding, the planning and decision-making component (Montello, 2005); planning and decision-making are essentially taken over by the system. A few scholars have begun to explore possible interventions during assisted wayfinding that prompt users to proactively make decisions and return their attention to the surroundings (Chung, Pagnini, & Langer, 2016; Kiefer, Giannopoulos, Sch, & Raubal, 2016; Parush et al., 2007). Such system interventions should be context-dependent, adaptable, and controllable by the navigator (Kiefer et al., 2016; Parasuraman, 2000; Richter, Tomko, & Cöltekin, 2015; Sheridan, 2002). However, Pielot and Rello (2017) found that system notifications distract a user and interrupt other activities. Moreover, Lee et al. (2014) found that smart devices featuring notifications increase the user's attention allocation to the system and away from the surroundings. People seem to worry that they may miss important information if they are not attending to these notifications (Pielot & Rello, 2017). In contrast, a navigation system should invite a navigator to proactively attend to the environment (Kraft & Hurtienne, 2017), thereby increasing the navigator's cognitive resource allocation for the task (Parasuraman, Sheridan, & Wickens, 2000), which in turn should lead to better spatial knowledge (Parush et al., 2007). To increase spatial knowledge acquisition, navigators need to interact with both their surroundings and the navigation system (Willis et al., 2009). Research has identified cognitive problems arising from navigation system use and proposed application solutions (Table 1).
Table 1 Two cognitive problems and their suggested solution based on Willis et al. (2009)
One way to do this is to engage navigators in a spatial location quiz, thus associating locations with a particular question (Parush et al., 2007), or to make them perform an otherwise automated action manually (Chrastil & Warren, 2012; Parasuraman et al., 2000) to improve mental representations. Parasuraman et al. (2000) identified different types (e.g., decisions or actions) and levels (from manual to full automation) of human interaction with automation. Figure 1 lists ten levels of automation for decision and action selection. These levels vary from low (no system assistance; the human makes all decisions and performs the actions) to high (the system makes all the decisions; no human intervention possible).
Levels of automation of decision and action selection (original figure in Parasuraman et al., 2000; adjusted text and extended with a linear scale): either the system decides and performs actions (high level of automation = System), or the human decides and performs actions (low level of automation = Human)
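To make the scale concrete, the sketch below encodes the four automation levels that reappear in this paper's system designs as a simple ordinal type. The one-line descriptions are abbreviated paraphrases of Parasuraman et al. (2000); the encoding itself is an illustrative addition, not part of their framework.

```python
# Ordinal encoding of the four Parasuraman et al. (2000) automation levels
# referenced by the system designs in this study (descriptions abbreviated).
from enum import IntEnum

class AutomationLevel(IntEnum):
    NO_ASSISTANCE = 1          # the system offers no assistance whatsoever
    INFORMS_ON_REQUEST = 8     # informs the human only if the human asks
    INFORMS_IF_IT_DECIDES = 9  # informs the human only if the system decides to
    FULLY_AUTONOMOUS = 10      # decides everything, acts autonomously

# Higher value = higher automation, i.e., less active human involvement.
assert AutomationLevel.NO_ASSISTANCE < AutomationLevel.FULLY_AUTONOMOUS
```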
All these studies clearly indicate that there is a relationship between system behavior and human behavior, and thus also between system behavior and knowledge acquisition. However, it is unclear which kind of information and in what format a navigator ideally might need to get from a navigation system (e.g., Montello, 2009; Willis et al., 2009). Although some research has been carried out on spatial knowledge acquisition during navigation system use, very little has been done on how these findings translate to designing systems that assist navigators in both the navigational task and spatial knowledge acquisition, and, more generally, how pedestrians should engage with a system during navigation in outdoor environments (e.g., Dai et al., 2018; Giudice, Walton, & Worboys, 2010). The overall goal, according to Sheridan (2002), should be to design systems that complement humans.
The study presented in this paper empirically investigates human behavior during navigation tasks when facing different system behaviors according to the levels of automation as introduced by Parasuraman et al. (2000), using a novel empirical framework for efficiently testing pedestrians' spatial knowledge in real-world environments (Brügger, Richter, & Fabrikant, 2016).
This study aims to answer the following research questions:
Behavioral: How do varying navigation system behaviors (levels of automation) influence (i) navigation performance, (ii) spatial knowledge acquisition, and (iii) gaze behavior during navigation tasks in a real-world outdoor environment? We hypothesize that the more automation is built into a navigation system, (i) the better the navigation performance, (ii) the lower the spatial knowledge acquisition, and (iii) the more gaze behavior changes during navigation.
Methodological: Is the experimental framework of an assisted and unassisted navigation phase a valid approach to gather useful data in terms of spatial knowledge acquisition, and to allow for a smooth execution of an outdoor experiment? We hypothesize that the experimental framework of an assisted and unassisted navigation phase offers an efficient, ecologically valid way of determining spatial knowledge acquisition without the need for standard questionnaires and tests.
This study aims to gain further insights into how navigation system behaviors influence human navigation and spatial knowledge acquisition. We conducted an empirical user study in an outdoor urban environment. We contend that the additional challenges of running studies in the real world are outweighed by the high ecological validity these settings offer (e.g., Kiefer et al., 2013). We applied a between-subject design by varying the behavior of the navigation system (independent variable) during an assisted route-following task. The dependent variables are (i) navigation performance, (ii) the acquired spatial knowledge during a route-reversal task, and (iii) gaze behavior.
In total, 64 participants (44 females and 20 males), mostly freshmen at the University of Zurich and the ETH Zurich with different disciplinary backgrounds, took part in the experiment. The mean age of participants was 25 years, ranging from 18 to 60 years (M = 25 years, SD = 8 years). All except two participants owned a smartphone, thus representing a sample with background knowledge in using mobile digital devices. Each participant received CHF 20.00 for participation in the experiment. Participants signed a consent form approved by the Department of Geography at the University of Zurich and were told that they could stop the experiment at any time.
The study was conducted outdoors in Zurich, Switzerland. The study area is located in an urban residential neighborhood close to the University of Zurich, but it was unknown to the participants (we asked about familiarity in a questionnaire). The location of the route is displayed in Fig. 2a. The route was chosen by two of the authors based on its variety of intersections, turns, and landmarks. Figure 2a shows a Google Maps excerpt with the route highlighted in black. The blue pin at the bottom of the map indicates the starting point, and the black flag at the top of the map depicts the destination. The route is approximately 800 m long with a decline of 11 m in total. The route consists of 14 intersections (marked in Fig. 2a with "I-" and a number indicating its position in the route-following task) and different kinds of landmarks, building types, parks, etc., representing a typical urban residential environment. The route includes three right (I-3, I-7, I-13) and three left (I-9, I-12, I-14) turns in walking direction from the start to the destination. The turns do not follow a regular pattern and divide the route into seven straight segments of different lengths.
(a) Route (highlighted in black) with starting point (blue pin) and destination (black flag) depicted on the background map of Google Maps (© 2016 Google). All intersections are annotated with "I-" and a number indicating their position along the route. The intersection annotations did not appear on the map display for the participants and are only added to this figure to allow references to intersections within this paper. (b) A participant holds the navigation system and wears eye-tracking glasses with an attached laptop in the backpack during the experiment. The experimenter follows the participant and takes notes (re-enacted scene). (Photo: Marc Latzel)
A base map (Google Maps API) with the highlighted route was displayed on a SAMSUNG Galaxy Tab S10.2 tablet. The test application was set to display a north-up street map and did not allow for switching layers (e.g., to a satellite image) to ensure that all participants used the same road map. However, participants could rotate, zoom, pan, and tilt the map according to their needs to provide a map use experience very similar to that on their personal devices. In contrast to the original Google Maps available on mobile devices, the test map would remain in north-up orientation and at the initial zoom level if participants chose not to interact manually with it.
We designed four navigation system variations that differ in their level of automation (i.e., system behavior). These variants combine two cognitive processes (CP), "allocation of attention" and "self-localization", each in two different modes (detailed description below). Using these two modes is motivated by the reviewed research and by the role of attention during learning (e.g., Chrastil & Warren, 2012), the active learning hypothesis (Münzer et al., 2006), and the system solutions for cognitive problems (Willis et al., 2009) listed in Table 1. Combining the two cognitive processes, each with one of the two implemented modes, results in four different navigation system behaviors that we tested (Fig. 3). Each navigation system behavior is associated with either a high level of active participation on the human side and a low level of system automation (Fig. 3 left; "Human", abbreviated "Hum") or a high level of system assistance and a low level of active participation on the human side (Fig. 3 right; "System", abbreviated "Sys"). We used a between-subject design in which participants were randomly assigned to one of the four different navigation system behaviors, such that each system behavior was used by 16 participants.
Four navigation system interface designs with varying navigation system behavior in terms of a combination of two cognitive processes (CP): allocation of attention (Alloc; green) and self-localization (Loc; blue). Each cognitive process involves either higher active participation from the human side (low level of automation; Hum) or higher system assistance (high level of automation; Sys). The figure illustrates for each navigation system design how the implemented processes map to the levels of automation of Parasuraman et al. (2000) (Fig. 1). The further up a horizontal bar is, the higher the level of automation. Background map is from Google Maps API (© 2016 Google)
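In code, the four behaviors amount to the Cartesian product of the two cognitive processes, each fixed to one of its two modes, with participants assigned to cells in balanced random order. The following sketch illustrates this design logic only; the condition labels, seed, and assignment routine are illustrative and are not the software used in the study.

```python
# Sketch of the 2 (attention mode) x 2 (localization mode) design and a
# balanced random assignment of N = 64 participants (16 per cell).
import itertools
import random

MODES = ("Hum", "Sys")  # low vs. high automation
CONDITIONS = [f"Alloc{a}Loc{l}" for a, l in itertools.product(MODES, MODES)]
# -> ['AllocHumLocHum', 'AllocHumLocSys', 'AllocSysLocHum', 'AllocSysLocSys']

def balanced_assignment(n_participants: int, seed: int = 42) -> list:
    per_cell = n_participants // len(CONDITIONS)
    schedule = CONDITIONS * per_cell          # 16 copies of each condition
    random.Random(seed).shuffle(schedule)     # randomize order of arrival
    return schedule

schedule = balanced_assignment(64)
assert all(schedule.count(c) == 16 for c in CONDITIONS)
```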
CP1: Allocation of attention (abbreviated "Alloc") directs one's attention to a certain property in the environment, such as a landmark (Chrastil & Warren, 2012; Richter & Winter, 2014), and is implemented in the following modes to address the two cognitive problems stated by Willis et al. (2009) (Table 1).
The system (abbreviated "Sys") performs the process on its own: as the user approaches one of the three landmarks used in this study (a residential home, two adjacent flag poles, and a bus stop), a marker symbol and a text description of the landmark appear automatically on the map without any user interaction. The system vibrates for 5 s to make the user aware of the newly available description and to direct his or her attention to the landmark. This mode corresponds to "Level 9. The system informs the human only if the system decides to do so" by Parasuraman et al. (2000), as shown in Fig. 1.
The system offers the opportunity for the human (abbreviated "Hum") to type in keywords that describe three self-chosen landmarks along the route. Participants thus not only decide which landmark they wish to pay attention to, but also what text they want to add at the chosen location. At three self-chosen locations along the route, participants press a "Marker" button in the app, which allows them to type in a description of their current surroundings or of some landmark in their vista space. The description is linked to the participant's current position, but it does not appear on the map, i.e., the map does not change after performing this action. This mode corresponds to "Level 1. The system offers no assistance whatsoever" by Parasuraman et al. (2000).
CP2: Self-localization (abbreviated "Loc") is the process of determining one's current location in relation to the environment by using visual clues (e.g., Meilinger, Hölscher, Büchner, & Brösamle, 2007). This was implemented in the following modes to address the two cognitive problems identified by Willis et al. (2009) (Table 1).
The system (abbreviated "Sys") performs the process on its own. This means that the location of the navigator is updated on the map as a blue dot and thus is permanently visible. This mode corresponds to "Level 10. The system decides everything, acts autonomously, ignores the human" of the levels of automation by Parasuraman et al. (2000) as shown in Fig. 1.
The system provides the human (abbreviated "Hum") with an opportunity to perform an action (pressing a "GPS on" button) to display the current location on the map for 10 s, after which the blue dot disappears. This mode corresponds to "Level 8. The system informs the human only if the human asks the system" of the levels of automation by Parasuraman et al. (2000).
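To make the four resulting system behaviors concrete, the following minimal Python sketch shows how the two modes of each cognitive process could be combined in an application's event handlers. All identifiers (NavConfig, app, position, etc.) are hypothetical illustrations; this is a sketch of the described behavior under stated assumptions, not the study's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class NavConfig:
    alloc_by_system: bool  # CP1: system (True) vs. human (False) allocates attention
    loc_by_system: bool    # CP2: position shown permanently (True) vs. on request (False)

# The four tested behaviors as combinations of the two modes
CONDITIONS = {
    "AllocHum_LocHum": NavConfig(False, False),
    "AllocHum_LocSys": NavConfig(False, True),
    "AllocSys_LocHum": NavConfig(True, False),
    "AllocSys_LocSys": NavConfig(True, True),
}

def on_position_update(cfg, app, position, landmarks, near_m=25.0):
    """Called whenever a new GPS fix arrives (app/position are hypothetical objects)."""
    if cfg.loc_by_system:
        app.show_position_dot(position)            # blue dot permanently visible
    if cfg.alloc_by_system:
        for lm in landmarks:                       # three predefined landmarks
            if position.distance_to(lm.location) < near_m and not lm.shown:
                app.vibrate(seconds=5)             # tactile alert for 5 s
                app.show_marker(lm.location, lm.description)
                lm.shown = True

def on_gps_button(cfg, app, position, visible_s=10):
    """'GPS on' button: only available in the Loc-by-human conditions."""
    if not cfg.loc_by_system:
        app.show_position_dot(position)
        app.hide_position_dot_after(seconds=visible_s)  # dot disappears after 10 s

def on_marker_button(cfg, app, position):
    """'Marker' button: only used in the Alloc-by-human conditions."""
    if not cfg.alloc_by_system:
        text = app.prompt_text("Describe your surroundings")
        app.log_marker(position, text)             # stored, but the map does not change
```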
Participants wore mobile SMI eye tracking glasses (SMI-ETG) during the experiment to record their gaze movement. Sunshades and wind protection attached to the glasses reduced infrared interference and prevented participants from squinting. This protection was added to ensure better data quality from the eye movement recordings. The glasses were connected to a laptop that participants carried in a backpack. This laptop recorded all eye movements during the navigation experiment. Figure 2b shows the experimental setup, with a participant holding a navigation system and wearing a backpack with a recording device attached to the eye tracking glasses (for privacy reasons, a friend of the experimenter re-enacted the experimental scene).
Experimental framework
We divided the experiment into two phases. During Phase 1 (assisted route-following or "incidental knowledge acquisition"), we asked each participant to follow a route (Fig. 2a) presented by the navigation system (Fig. 3). Participants were first given a scenario in which they had just gotten off a bus at the starting point (blue pin in Fig. 2a) and had received a suggested route to a friend's house (black flag in Fig. 2a). Participants were asked to follow this route as quickly as possible, without running. The two participant groups using the navigation system with a low level of automation in terms of the cognitive process "allocation of attention" were additionally given the following instruction (translated from German to English): "On the way to your friend's house, you should write down three locations that are relevant for you for this route. The position at which this entry will be made is saved together with the text information and will be later integrated into the application. The entry is made when you click the 'Marker' button." Importantly, all participants from all groups knew that there was going to be a second part of the experiment, but they did not know what this second task would involve. Therefore, the spatial knowledge acquired during the first phase can be considered the result of incidental learning.
For Phase 2 (unassisted route-reversal or "knowledge recall"), we asked all participants to reverse the exact same route back to the starting point, similar in procedure to a study by Karimpur, Röser, and Hamburger (2016) that was conducted in a virtual reality (VR) setting. The scenario for this second phase was use-inspired: we told the participants that they had lost their keys and, because of a (fictitious) dead battery, had to reverse the exact same route without using the navigation system. This also meant that the participants could not take any shortcuts in the route-reversal phase, even if they had been able to find them.
Experimental procedure
The experimenter individually contacted participants and arranged a date for the experiment. The experiment took place during daytime from September to November 2016, on days without any rain. In case of forecasted rain, the experimenter cancelled and rescheduled the experiment because it was conducted entirely outside. Participants were asked to complete an online demographic questionnaire and the self-assessment questionnaire "Räumliche Strategie" by Münzer and Hölscher (2011) in advance at home. The questionnaire asked participants to rate their spatial strategies on the global-egocentric orientation, survey, and cardinal directions scales. Münzer and Hölscher (2011) showed that these self-report measures are able to predict participants' spatial knowledge acquisition abilities. The experimenter sent a reminder to participants a day before the experiment and reminded them to fill out the questionnaire if they had not completed it by then.
After arriving at the meeting point, participants were asked to sign a consent form and were introduced to the procedure of the experiment. Following this introduction, the experimenter explained the randomly assigned navigation system behavior (one of the four applications depicted in Fig. 3) to the participants, who could then get used to the application during a training session. Next, the experimenter asked participants to don the eye tracking glasses. This was followed by a three-point calibration phase with the eye tracking software iView. For the calibration of the eye tracking glasses, participants were asked to look at objects near them, such as street signs, at a distance of approximately 7–10 m. Participants who wore glasses had been asked beforehand (via email) to wear contact lenses as a requirement for participation. The experimenter led the participant to the starting point of the test route, where the participant was asked to read the instructions for Phase 1. The participant received the tablet with the running application and was asked to perform the navigation task for Phase 1.
The experimenter shadowed participants at a distance of about 10 m, taking notes about potential changes in the environment for each participant. This was necessary because the experiment took place in a real, dynamic urban environment (this point is taken up again in the Discussion section). After arriving at the destination, participants were given the instructions based on the scenario for Phase 2. Each participant was then asked to subjectively rate the difficulty of Phase 2 on a Likert scale from 1 (very easy) to 5 (very difficult). This rating before execution of the task provides a personal assessment of its perceived difficulty, independent of a participant's actual performance. Next, the participant was asked to reverse the route and walk unassisted (that is, from memory) to the starting point of the route. Again, the experimenter shadowed participants at about 10 m distance. If participants took a wrong turn at an intersection (decision point), the experimenter called them back to the intersection, where they had to make a new decision. Participants received explicit feedback (e.g., "You took the wrong road. Come back and make a new decision") during navigation (similar to Karimpur et al., 2016). After completing Phase 2, participants were asked 1) to draw the route on a printed map (the same map as shown on the starting screen, but without the route and the starting and destination points), and 2) to rate the difficulty of Phase 2 again on a five-point Likert scale. This rating indicates perceived navigation difficulty.
After completing the second phase of the experiment, participants were asked to fill out a post-test questionnaire and to complete the Building Memory test (Ekstrom, French, Harman, & Dermen, 1976). This test elicits an individual's ability to memorize the position of buildings on a street map. Results of the test indicate a participant's ability to memorize landmarks on a map (survey perspective) used during Phase 1, which in turn may explain part of their performance in Phase 2. We administered the test after the main experiment so as not to give away the memory component of the experiment (Phase 2), which might have influenced participants' learning behavior during Phase 1. At the end of the experiment, participants received CHF 20.00 compensation, signed a confirmation of receipt, and were thanked for taking part in the experiment. The experimenter also reminded participants to keep the experimental procedure confidential. The experiment lasted about 70 min on average.
Mobile eye tracking analysis
The first step in the eye tracking analysis was to segment the data such that it allows for comparisons between participants' behaviors along the route. This was necessary because eye tracking data in real-world environments are not synchronized with any other sensor data, nor between participants; behavior is highly dynamic, and the recordings do not (automatically) provide any spatial references to the environment they are recorded in (i.e., at the same point in time, two different participants might be at two very different locations along the route). We segmented the route at intersections that correspond to decision points for both experimental phases, according to Fig. 4. Each segment features the spatial context of approaching a decision point and the intersection itself, resulting in 13 segments in each direction. The data analysis was performed with the iMotions© software using a dispersion-based fixation detection algorithm (fixation duration > 100 ms). We annotated the screen recordings of participants' eye movements with the start and end positions of each segment to assign each fixation (and its duration) to a specific route segment.
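The exact fixation algorithm of the iMotions software is not public; the following sketch implements one common dispersion-threshold variant (I-DT, in the spirit of Salvucci and Goldberg's taxonomy) with the 100-ms minimum duration used here. The dispersion threshold of 30 pixels is an assumed placeholder.

```python
import numpy as np

def detect_fixations(t, x, y, max_dispersion=30.0, min_duration=0.1):
    """Dispersion-threshold fixation detection (I-DT style).

    t: timestamps in seconds; x, y: gaze coordinates as numpy arrays.
    Returns a list of (start_time, end_time, duration_s) tuples.
    """
    fixations = []
    i, n = 0, len(t)
    while i < n:
        j = i
        # Grow the window while the candidate extension stays within the threshold
        while j + 1 < n:
            xs, ys = x[i:j + 2], y[i:j + 2]  # candidate window: samples i..j+1
            if (xs.max() - xs.min()) + (ys.max() - ys.min()) > max_dispersion:
                break
            j += 1
        duration = t[j] - t[i]
        if duration >= min_duration:          # keep only fixations of at least 100 ms
            fixations.append((t[i], t[j], duration))
            i = j + 1                         # continue after this fixation
        else:
            i += 1                            # slide the window forward by one sample
    return fixations
```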
Spatial segmentation of the route (schematic) into 13 segments. Each segment represents the spatial context of approaching a decision point in walking direction including the relevant intersection. Spatial segmentation of the route for Phase 1 (a) and Phase 2 (b)
Each fixation has a duration, which we used to compute the mean fixation duration of each participant in each segment with the following formula:

$$ \bar{x}=\frac{1}{N}\sum_{i=1}^{N}x_{i} $$

where \( \bar{x} \) is the mean fixation duration in the segment, N is the total number of fixations in the segment, and \( x_{i} \) is the duration of fixation i in this segment.
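In code, this per-segment aggregation is a simple grouped mean; a minimal sketch (fixation records with segment labels are assumed inputs):

```python
from collections import defaultdict

def mean_fixation_durations(fixations):
    """fixations: iterable of (segment_id, duration_s) pairs, one per detected fixation.
    Returns {segment_id: mean fixation duration} following the formula above."""
    durations = defaultdict(list)
    for segment, d in fixations:
        durations[segment].append(d)
    # Mean per segment: (1/N) * sum of the N fixation durations
    return {seg: sum(ds) / len(ds) for seg, ds in durations.items()}

# Example: three fixations in segment 1, two in segment 2
print(mean_fixation_durations([(1, 0.21), (1, 0.35), (1, 0.19), (2, 0.42), (2, 0.28)]))
# ≈ {1: 0.25, 2: 0.35}
```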
We first describe the participant sample, including demographics, self-assessed spatial strategies, and spatial memory abilities. Second, we present the navigation performance for the two experimental phases separately. To evaluate participants' navigation performance, we use four standard measures suggested by Dillemuth (2005) and Meilinger, Franz, and Bülthoff (2012): time to task completion, interactions with the map (e.g., zooming, etc.), navigation errors, and the number of stops along the route. This is followed by the results of the gaze analysis recorded with the mobile eye tracking glasses during Phase 1 and Phase 2. We report results according to the four groups of the between-subject design. All figures and tables follow the same order of system behaviors provided above (Fig. 3).
The experimenter randomly assigned each participant to one of the four experimental groups. Each group consisted of 16 participants (11 females, 5 males). As mentioned, in an online questionnaire, we asked participants to report the frequency of using any kind of map application on their mobile device on a five-point Likert scale. Most participants (87%) used their smart device several times a month for navigational purposes. Apart from the use of digital map applications, we asked them to specify their experience in mapping-related fields, such as map reading and cartography (map production). The majority (80%) had experience in using map applications on a mobile device and in reading maps in general. The majority (60 to 70%) of the participants rated their experience with Geographic Information Systems (GIS) and with orienteering as little or none. These results suggest a relatively homogenous sample of participants in terms of map use in general and experience in using digital maps, specifically.
Spatial abilities
We collected participants' self-rated spatial abilities with the "Räumliche Strategie" questionnaire by Münzer and Hölscher (2011). Table 2 reports the mean ratings of this test for each application type group. The egocentric orientation scale evaluates how well a person knows directions and routes. The survey scale summarizes how well a person can build a mental map, and the cardinal direction scale assesses awareness of cardinal directions. Question 12 ("I am good in remembering routes and finding my way back without problems") directly matches the second experimental phase ("unassisted route-reversal"). The scores range from "1: I don't agree" to "7: I strongly agree". The higher the scores, the better participants assess their ability. Overall, the results reveal that all three scales show a large range within all four groups. There are no significant differences in ratings across the groups for any of the scales (egocentric, F(3,60) = 5.12, p = 0.525; survey, F(3,60) = 0.604, p = 0.615; cardinal, F(3,60) = 2.13, p = 0.106), tested with a one-way ANOVA. A Kruskal-Wallis test reveals that ratings for question 12 were also not significantly different between the groups (H(3) = 2.3191, p = 0.508). Hence, in terms of spatial abilities, these results indicate a homogenous sample of participants across and within the four system behavior groups.
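For reference, such group comparisons can be reproduced with scipy; the data below are placeholders, not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder ratings (1-7) for the four groups (real study: n = 16 per group)
groups = [rng.integers(1, 8, size=16) for _ in range(4)]

# One-way ANOVA across the four groups (as for the three questionnaire scales)
f_stat, p_anova = stats.f_oneway(*groups)

# Kruskal-Wallis H-test (as for the ordinal question-12 ratings)
h_stat, p_kw = stats.kruskal(*groups)

print(f"ANOVA: F(3,60) = {f_stat:.3f}, p = {p_anova:.3f}")
print(f"Kruskal-Wallis: H(3) = {h_stat:.4f}, p = {p_kw:.3f}")
```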
Table 2 No difference in participants' self-assessed spatial strategies scores across groups (means and standard deviations)
Spatial memory
For the "Building Memory" test (Ekstrom et al., 1976), participants were asked to place buildings on an empty street map after studying the same layout with the buildings shown. Zero points were assigned if all buildings were placed at wrong locations. If all buildings were correctly positioned, the maximum achievable score was 24. Table 3 lists participants' average scores. The higher the test score, the more buildings were correctly located. All groups show high mean scores with rather small standard deviations. The group AllocSys_LocSys (allocation of attention and self-localization by system) is the only group with an average mean score of less than 20 and the one with the largest standard deviation. Overall, spatial memory ability of our participants is high. A Kruskal-Wallis test revealed no significant differences between the four groups (H(3) = 0.9761, p = 0.807).
Table 3 No differences in spatial memory scores for the "Building Memory" test (by Ekstrom et al., 1976) across groups (means and standard deviations)
The results of the spatial strategies and spatial memory tests indicate a homogenous distribution of spatial abilities across the four participant groups.
Phase 1: Assisted route-following (incidental knowledge acquisition)
The first set of analyses examined the impact of different navigation system behaviors on human navigation behavior during Phase 1 of the experiment. This included navigation efficiency, stops and hesitations (i.e., significantly slowing down) along the route, and the interactions with the map during the route-following task. The findings of Phase 1 might then explain potential differences in incidentally acquired spatial knowledge that was tested in Phase 2.
Navigation performance
One goal of this study was to test whether higher active participation of the human navigator with a navigation system (a lower level of automation) could be achieved without harming navigation performance. Figure 5 depicts the duration for walking the route from the starting point to the destination assisted by a navigation system. Overall, the time to walk the route ranged from 7 to 12 min (M = 9.26 min, SD = 1.08 min). A Kruskal-Wallis test revealed no significant differences in completion time between the four groups (H(3) = 3.356, p = 0.339). This result shows that the different system behaviors did not affect the time it took participants to complete Phase 1. Furthermore, none of the participants made any navigation errors during Phase 1.
Navigation assistance levels do not influence route completion times for Phase 1. Average duration for walking the route assisted with a navigation system. Black dots indicate outliers
We analyzed how many times participants stopped or hesitated (slowed down) along the route during Phase 1. A stop means that the participant has both feet on the ground and does not move in any direction. A hesitation is a clearly identifiable reduction of speed while continuing to move. Overall, participants hardly ever hesitated during the assisted route-following phase. The two groups AllocHum_LocHum (allocation of attention and self-localization by human) and AllocHum_LocSys (allocation of attention by human and self-localization by system) stopped on average two to three times (to type the required keywords), but without harming their efficiency, as Fig. 5 shows.
As mentioned, two of the four groups (AllocHum_LocSys, AllocSys_LocSys) were continuously shown their position on the digital map while navigating. The other two groups (AllocHum_LocHum and AllocSys_LocHum, i.e., the groups with self-localization by human) had the option to display their current location on the map by pressing the "GPS on" button. On the one hand, pressing this button distracts from attending to the surrounding environment if used unnecessarily. On the other hand, it can help to self-localize and reorient in the environment if used strategically. Figure 6a suggests that, on average, the AllocSys_LocHum group (Mdn = 14, SD = 7.4) used the "GPS on" button more often than the AllocHum_LocHum group (Mdn = 3, SD = 4.9). This difference is statistically significant (W = 33.5, p < 0.01, r = −0.51; Wilcoxon rank-sum (Mann-Whitney U) test). Accordingly, the time that the self-localization information was displayed was also considerably longer for the AllocSys_LocHum group (Mdn = 40, SD = 18.2) than for the AllocHum_LocHum group (Mdn = 7.5, SD = 12.8) (Fig. 6b). This difference is statistically significant (W = 31, p < 0.05, r = −0.53; Wilcoxon rank-sum (Mann-Whitney U) test).
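This two-group comparison corresponds to a Wilcoxon rank-sum/Mann-Whitney U test; one common way to compute the effect size r is the rank-biserial correlation. A sketch with placeholder counts:

```python
import numpy as np
from scipy import stats

# Placeholder 'GPS on' press counts (real study: n = 16 per group)
alloc_sys_loc_hum = np.array([14, 9, 22, 18, 7, 15, 25, 12, 16, 11, 20, 6, 13, 17, 10, 19])
alloc_hum_loc_hum = np.array([3, 0, 5, 1, 2, 7, 4, 0, 6, 2, 1, 3, 8, 0, 2, 4])

u, p = stats.mannwhitneyu(alloc_sys_loc_hum, alloc_hum_loc_hum, alternative="two-sided")
n1, n2 = len(alloc_sys_loc_hum), len(alloc_hum_loc_hum)

# Rank-biserial correlation as effect size: r = 2U/(n1*n2) - 1
r = 2 * u / (n1 * n2) - 1
print(f"U = {u}, p = {p:.4f}, r = {r:.2f}")
```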
Participants in the group AllocSys_LocHum pressed the "GPS on" button more often than those in the group AllocHum_LocHum (a). Distribution within and across groups of the number of times participants pressed the "GPS on" button (statistically significant difference, **p < 0.01). Consequently, participants of the AllocSys_LocHum group had the self-localization information displayed for a longer amount of time (statistically significant difference, *p < 0.05) (b). Black dots indicate outliers
Significant differences in counts of interactions (zoom, pan, rotate) with the map display during Phase 1 across groups (statistically significant difference between groups, **p < 0.01). Black dots indicate outliers
Each time a participant zoomed, panned, rotated, or tilted the map, the system recorded the type of interaction in a log file. Figure 7 shows all the interactions with the navigation system, aggregated across the four navigation system groups. Generally, some participants interacted a great deal with the map display, while others hardly ever interacted with the map. None of the participants used the tilt function.
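A minimal sketch of such interaction logging (file name and identifiers are hypothetical):

```python
import csv
import time

LOGFILE = "interactions.csv"  # hypothetical output path

def log_interaction(participant_id, kind):
    """Append one timestamped interaction event (zoom, pan, rotate, or tilt)."""
    with open(LOGFILE, "a", newline="") as f:
        csv.writer(f).writerow([participant_id, time.time(), kind])

# e.g., called from the map widget's gesture callbacks:
# log_interaction("P07", "zoom")
```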
The group AllocHum_LocHum has the largest range of interactions. The group AllocHum_LocSys shows the smallest range, with one outlier (Fig. 7). A Kruskal-Wallis test suggests that the number of interactions with the navigation system differs significantly between the four groups (H(3) = 18.166, p < 0.01). Pairwise comparisons of the mean ranks between groups reveal the following significant differences; the critical difference for all comparisons was 17.36 (corrected for multiple comparisons) at the 0.05 level (Table 4).
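The reported critical difference is consistent with the standard post-hoc procedure that compares mean ranks after a Kruskal-Wallis test using a Bonferroni-corrected z quantile (e.g., Siegel & Castellan's formula). We cannot be certain this exact procedure was used, but the following sketch reproduces the reported value, up to rounding, for four groups of 16:

```python
import math
from scipy.stats import norm

k, n_per_group = 4, 16           # four groups of 16 participants
N = k * n_per_group              # 64 participants in total
alpha = 0.05

# Bonferroni-corrected z quantile over k(k-1) comparisons
z = norm.ppf(1 - alpha / (k * (k - 1)))

# Critical difference between the mean ranks of two equally sized groups
cd = z * math.sqrt(N * (N + 1) / 12 * (1 / n_per_group + 1 / n_per_group))
print(f"critical difference = {cd:.2f}")   # ~17.37 (reported: 17.36)
```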
Table 4 Differences in interactions with the navigation system between groups
Phase 2: Unassisted route-reversal (knowledge recall)
First, we report the number of errors participants made in the route-reversal task, which reveals how well participants were able to recall the spatial knowledge they incidentally acquired during Phase 1. Second, we examine the efficiency of participants' unassisted navigation by looking at the duration of the route-reversal phase and the number of stops and hesitations along the route. Finally, we report participants' self-ratings of task difficulty, collected before and after completing Phase 2, to compare self-assessed task difficulty with actual task performance.
Testing spatial knowledge
We tested how well participants found their way back to the starting point unassisted. Because participants were asked to reverse the exact same route to the starting point, each wrong turn at an intersection was counted as one error. Table 5 summarizes the results for the different groups. In the two groups with more active navigator participation (AllocHum_LocHum and AllocHum_LocSys), three participants (18%) made a wrong route choice at one intersection. In the group AllocSys_LocSys, six participants (37.5%) made at least one mistake during Phase 2. What stands out is that 10 out of 16 participants (62.5%) in the group AllocSys_LocHum made a wrong navigation decision at one or more intersections. Table 5 also lists the number of errors per person and per group and the mean error per group. A Kruskal-Wallis test reveals that the mean error is significantly affected by the navigation system behavior (H(3) = 8.4962, p = 0.034). However, it is important to mention that the number of errors is often zero and generally low. Still, the number of participants with a navigation error varies greatly between the groups. The four participants with the highest number of errors (three each) belong to the two groups using a navigation system with lower active human participation (i.e., higher levels of automation). More errors suggest that these participants were less effective in recalling their spatial knowledge of the route compared to the other participants, and indeed acquired less (accurate) spatial knowledge during Phase 1. Twelve participants made only one error, and 42 participants made no errors at all. Hence, these participants were more effective in reversing the route.
Table 5 Different numbers of navigation errors across groups during the experimental Phase 2 indicating varying degrees of recalling the acquired spatial knowledge of the traversed route
Figure 8 depicts the duration for reversing the route unassisted from the destination back to the starting point. Overall, the time to walk the same route unassisted ranged from 6 to 13 min (M = 8.3 min, SD = 1.2 min), with most participants returning to the starting point in less than 10 min. A Kruskal-Wallis test revealed no significant completion time differences between the four groups in Phase 2 (H(3) = 0.051, p = 0.997). This means that exposure to differing navigation system behaviors during Phase 1 did not significantly influence unassisted navigation performance on the reversed route (Phase 2).
Navigation assistance levels do not influence route completion times of Phase 2. Average duration for walking the route unassisted. Black dots indicate outliers
Similar to Phase 1, we counted how many times participants stopped or hesitated along the route during Phase 2. On average, participants hesitated between zero and one time across groups (Table 6). The two groups AllocSys_LocHum and AllocSys_LocSys stopped slightly more often than the groups AllocHum_LocHum and AllocHum_LocSys, who hardly ever stopped or hesitated during the unassisted navigation phase.
Table 6 Count of hesitations and stops across groups during the unassisted route-reversal task (means and standard deviations)
After participants had read the instructions for Phase 2, they were asked to rate the perceived difficulty of the task "finding the exact same way back without assistance" on a five-point Likert scale ranging from 1 (very easy) to 5 (very difficult). They were asked to rate the difficulty of the task again after completing their walk back, using the same scale. Table 7 shows the average scores across the four groups. Overall, the average ratings are all below 3, indicating that participants perceived the task to be easy. The range of ratings is larger before than after participants performed the route-reversal. The variation in ratings is very small for group AllocSys_LocSys, meaning that participants in this group agreed more about the difficulty of this task before and after Phase 2 than the other groups did. All groups rated Phase 2 as easier after performing it than before, indicating an overestimation of task difficulty in the first rating. A Kruskal-Wallis test revealed no significant differences in ratings before (H(3) = 3.6814, p = 0.289) or after (H(3) = 0.75636, p = 0.8599) performing Phase 2 across the four groups.
Table 7 Average score of task difficulty across groups at different stages (before and after) in Phase 2
Mobile eye tracking
Overall, the differences in navigation performance and spatial knowledge acquisition during Phase 1 and Phase 2 indicate changes to human navigation behavior based on navigation system behavior. To see if navigation system behavior also influences gaze behavior, we now report the analysis and results of the eye tracking recordings during Phase 1 and Phase 2. Unfortunately, due to calibration and recording issues, we could analyze only 26 of 64 participant recordings (AllocHum_LocHum, 6; AllocHum_LocSys, 6; AllocSys_LocHum, 8; AllocSys_LocSys, 6) that had adequate data quality for both experimental phases. Given the small sample size in each group, we did not run any statistical analyses on the eye tracking data.
Figure 9 shows the mean fixation durations for each segment during both test phases and across the four groups. What stands out first is that, generally, for all groups, the mean fixation duration during Phase 1 (incidental knowledge acquisition) follows a wave pattern that starts with longer mean fixation durations in the first segments of the route, followed by segments with shorter mean fixation durations, and then again segments with longer fixation durations toward the end of the route. This wave pattern seems to be independent of the employed navigation system behavior. During Phase 2 (knowledge recall), we do not observe this wave pattern. Here, no clear pattern emerges, and fixation durations show large variations. Interestingly, the pattern of the AllocSys_LocHum group was inverted during Phase 2 compared to Phase 1.
Mean fixation duration in each segment of the route across the four conditions for the experimental Phase 1 and Phase 2. For each participant the walking direction was from Segment 1 to Segment 13 in Phase 1 (i.e., read graph from left to right) and from Segment 13 to Segment 1 in Phase 2 (i.e., read graph from right to left). Black dots indicate outliers
However, while there do not seem to be differences between the four navigation system behaviors, distinct differences in the fixation duration patterns emerge between the two experimental phases: incidental knowledge acquisition (Phase 1) and knowledge recall (Phase 2). We conclude that the difference in mean fixation durations depends on whether participants are using a navigation system or not, but not on the different behaviors of the navigation system.
Maps on mobile devices allow navigators to efficiently and effectively find their way across space. Researchers agree that the transformation of assisted navigation from static paper maps to interactive map displays (e.g., navigation systems) that provide information at any time and potentially at any location influences the way we perceive, remember, and interact with our surrounding environment (e.g., Ishikawa et al., 2008; Klippel et al., 2010; Parush et al., 2007). We designed an experiment to study the possible influence of different navigation system designs derived from different levels of automation (Parasuraman et al., 2000) on navigation system use, spatial knowledge acquisition, and gaze behavior during a route-following task. The implemented navigation system behaviors were selected based on research on spatial knowledge acquisition, active learning, and automated systems. Research that emphasizes the importance of engaging a user with the environment (e.g., Gardony et al., 2013) suggests that this active user participation with a navigation system benefits spatial learning during navigation. We developed a new two-phase empirical framework for testing incidental spatial knowledge acquisition in real-world outdoor environments. First, participants were asked to follow a pre-defined route assisted by a navigation system (incidental knowledge acquisition phase). Second, participants were asked to reverse the route without the navigation system (knowledge recall phase). We now discuss our empirical results with regard to the leading research questions and within the context of the research findings reported in the literature. We begin with the behavioral research question:
How do varying navigation system behaviors (levels of automation) influence (i) navigation performance, (ii) spatial knowledge acquisition, and (iii) gaze behavior during navigation tasks in a real-world outdoor environment?
Navigation performance and spatial knowledge acquisition
Research on assisted navigation has studied navigation efficiency (e.g., Lee & Cheng, 2008) or spatial knowledge acquisition (e.g., Gardony et al., 2013; Taylor et al., 2008). According to this research, successful navigators assisted by a navigation system should still make their own decisions, attend to their surroundings, and actively take part in the navigation process because these factors positively affect spatial knowledge acquisition (Chrastil & Warren, 2012; Chung et al., 2016; Kiefer, Giannopoulos, Athanasios Anagnostopoulos, Schöning, & Raubal, 2017; Parush et al., 2007). Based on these studies, we implemented two cognitive processes (i.e., the allocation of attention and self-localization) relevant for wayfinding (Glisky, 2007; Lobben, 2004) with different levels of automation in which either the navigation system or the navigator makes a decision and performs an action (Parasuraman et al., 2000).
The implemented system behaviors with higher levels of human participation aim to increase spatial knowledge acquisition during assisted navigation while still ensuring that users navigate efficiently. Indeed, our results did not reveal any difference in completion time for the assisted route-following phase across the four tested groups. This holds even though two groups had to first decide on and then enter short landmark descriptions into their system three times during Phase 1 and, consequently, needed to stop more often. Still, participants in these groups did not need more time to complete the route-following task compared to participants using systems that selected for them what they should allocate their attention to (e.g., automatic notifications). Thus, navigation system behavior did not influence time to task completion during the assisted part of the experiment. This finding has important implications for developing navigation systems that foster active user participation (i.e., a low level of automation) without harming navigation efficiency.
To determine the impact of system behavior on incidental spatial knowledge acquisition, participants had to reverse the same route without any navigation system assistance, thus using only their spatial knowledge that was incidentally acquired during the assisted route-following phase. We counted a wrong decision at an intersection as a navigation error. During the assisted navigation phase, all participants followed the route without any error. This may not be surprising because they were assisted by a navigation system. During the unassisted phase, the number of errors varied across the four groups.
The different navigation systems implemented the cognitive process "allocation of attention" in two modes at the extreme ends of the spectrum of levels of automation. The two tested modes of the cognitive process "self-localization" exhibit a less pronounced difference in these levels (Fig. 3). We observed a clear difference between the two extreme modes for acquiring spatial knowledge. Both groups in which users decided where to mark landmarks and, thus, where to allocate attention (AllocHum_LocHum, allocation of attention and self-localization by human; and AllocHum_LocSys, allocation of attention by human and self-localization by system) show 82% success rates in finding the exact same route back. The two groups in which the system allocates users' attention to landmarks show success rates of 63% (AllocSys_LocSys, allocation of attention and self-localization by system) and 38% (AllocSys_LocHum, allocation of attention by system and self-localization by human). The fact that so many participants did not find their way back correctly after just 10 min of walking along a simple route may seem surprising. However, these results support the hypothesis of Chrastil and Warren (2012), Parush et al. (2007), and Willis et al. (2009) that activating a user with a location-dependent task (in our case, typing three self-selected keywords into a navigation system) increases spatial knowledge acquisition. In contrast, the two study groups with notification texts (AllocSys_LocHum and AllocSys_LocSys), which used a navigation system with a high level of automation, show lower success rates. This result seems to confirm findings by Pielot and Rello (2017) and Lee et al. (2014), who demonstrated that system notifications can interrupt an activity. In our case, textual notifications indicated by tactile alarms forced users to focus on their navigation system, rather than the environment, at locations defined by the system. Navigation decisions were taken away from navigators by the system, which may have interrupted the process of acquiring spatial knowledge. We further explain this result by the fact that the AllocSys groups were forced to switch to the survey perspective at the system's discretion, while the AllocHum groups could maintain the first-person perspective until choosing themselves to switch to the survey perspective in order to make a place–action link (as described in Chrastil & Warren, 2012). This explanation aligns with the divided attention literature (Gardony et al., 2013) and the stated cognitive problem of the "passive nature of interaction" (Willis et al., 2009). The effects of the two modes of the cognitive process "self-localization" are less pronounced in our study, which may at least in part be explained by the fact that they are similar in their level of automation (levels 8 and 10 in Fig. 1, respectively).
We initially hypothesized that the group faced with the highest level of automation, i.e., in which both decisions on attention allocation and self-localization are made by the device (AllocSys_LocSys), would acquire the least spatial knowledge. Our results do not support this hypothesis, since the group AllocSys_LocHum made the most errors when reversing the route. One possible explanation is that participants in this group interacted with the map more often than the two other groups AllocHum_LocSys and AllocSys_LocSys. Consequently, the AllocSys_LocHum group frequently switched between the route and survey perspectives and seemed to have paid more attention to the navigation system than to the environment compared to the other groups (Ishikawa et al., 2008).
Another explanation may lie in the use of the "GPS on" button to facilitate self-localization. The group AllocSys_LocHum used this button significantly more often than the other group with the same option (AllocHum_LocHum). The AllocHum_LocHum group, which needed to choose and type keywords about landmarks, had this particular task to concentrate on. In contrast, the participants of group AllocSys_LocHum, who did not have any other tasks to fulfill, used this button much more than necessary and consequently had their position displayed on the map for a longer amount of time. Just having the option of pressing this button likely distracted participants in this group more than expected. There are several possible explanations for this result. First, users in the group AllocSys_LocHum may indeed have needed repeated confirmations of their current location on the map to successfully find the route during Phase 1. Second, they may have used the button simply because they could. Third, they may have used it to offload cognition to the system and reduce "stressful" cognitive activity, which would confirm the findings by Willis et al. (2009). Overall, this result seems consistent with research showing that using smart devices can lead to excessive reliance on the system (Klippel et al., 2010; Parush et al., 2007). Additionally, navigation system use may diminish our navigation skills more generally and, with that, we may not be able to appropriately judge when the use of a navigation interface element becomes optional (Montello, 2009). Finally, the frequent perspective changes it induces can interrupt the process of allocating enough attention to the surrounding environment.
In general, our results suggest that the use of interactive display elements (e.g., zoom, pan, rotation, GPS button, etc.) invites users to switch between perspectives (Dai et al., 2018) and thus facilitates the division of navigators' attention between the system and the environment (Gardony et al., 2013). Possibly, the groups with a lower level of system automation engaged with the interactive display tools more strategically and in a goal-directed manner. The groups with a higher level of automation did not seem to invest cognitive resources in the navigation task, but rather explored the system's capabilities and looked for ways to let the system do all the work. Regarding the cognitive processes involved in using navigation systems, our findings suggest that differences in the levels of automation of navigation system behavior, specifically allocating attention and self-localization, affect human navigation behavior and, with it, incidental spatial knowledge acquisition. We further highlight the importance of better understanding the effects of interactive interface components (e.g., display buttons) in navigation system design because they can support, but also hinder, spatial knowledge acquisition, even if they do not affect navigation performance. Our study starts building knowledge toward a deeper understanding of real-world navigation with navigation systems, as suggested by Dai et al. (2018).
Gaze behavior
Because research has found that navigation systems change how humans allocate their attention to the environment and change their landmark selection (Gardony et al., 2013; Ishikawa et al., 2008; Parush et al., 2007; Taylor et al., 2008), we analyzed participants' gaze behavior during the two experimental phases. Eye movement behavior is one measure of information acquisition (Kiefer et al., 2017) and strategies (Holmqvist et al., 2011). The goal of the eye tracking analysis was to determine the spatio-temporal distribution of participants' fixation durations along the route.
The results of the fixation duration analysis did not reveal any differences in navigators' gaze behaviors across navigation system behaviors but, interestingly, did so between the two experimental phases. We found a clear gaze behavior pattern during the assisted route-following task, but no clear patterns emerged during the unassisted route-reversal task. We are unable to systematically identify what participants allocated their attention to during the experiment due to the vast amount of dynamic eye fixation data and the laborious annotation process it would require (Kiefer et al., 2017). However, the applied method can tell us something about the potential similarity of gaze behaviors across groups in a spatio-temporal context. With our method of segmenting the route according to decision points (i.e., intersections), we were able to detect spatial segments with higher mean fixation durations, potentially indicating higher cognitive functions, and segments with lower mean fixation durations, suggesting potentially increased visual complexity (Duchowski, 2007). Longer fixation durations in the early segments of the route might indicate that participants were actively becoming familiar with the task, the navigation system, and their surroundings, and, connected to this, higher information processing; conversely, they could also indicate that participants had more difficulty extracting information (Goldberg & Kotval, 1999). Longer fixations in this context could also be interpreted as making a clear place–action link, as described in Chrastil and Warren (2012). Interestingly, in segments 6 and 7 of the route, which can be characterized as an unremarkable and quiet street, participants showed the lowest mean fixation durations, with only small variations. According to the literature, spatial scenes with lower fixation durations reflect decreased cognitive processing and information processing and increased visual complexity of the environment. This could mean that unremarkable streets might have led to a switch into a passive navigation mode, i.e., navigators did not pay much attention to the task.
These results suggest that we might be able to relate human behavior to the spatial context during navigation system use, which is a clearly identifiable knowledge gap in the literature (Dai et al., 2018).
With our descriptive summary approach to analyzing dynamic mobile eye tracking data, we are able to clearly distinguish different behaviors during different cognitive tasks along a route, even without knowing exactly which features participants attended to. These findings provide further insights into how the allocation of attention might shift between navigation system use and environmental context during navigation.
Limitations and future work
We present aggregated results across four navigation system groups. Participants did not show any differences in spatial ability across groups. Consequently, we cannot attribute errors made during Phase 2 to spatial ability. Due to small sample sizes, gender and spatial ability were not further analyzed in this study. Certainly, they could (or should) be assessed and/or controlled for in future navigation studies.
The navigation behaviors studied here rely on visual information only (i.e., map and text). In future work, it would be interesting to include auditory system modalities, as these were found to be beneficial for navigation performance (Klatzky, Marston, Giudice, Golledge, & Loomis, 2006). For example, the system could provide spoken route instructions. Navigation system modes used to reallocate attention could employ auditory modalities to better understand the impact of the modality of the presented information. For example, participants could be asked to voice-record landmark descriptions instead of typing them into the system (AllocHum), or the system could announce a landmark description when navigators approach its location instead of displaying a label on the map (AllocSys).
To develop a fuller picture of the gaze behavior, laborious annotations of the eye tracking data are required, which in turn could help us to further verify interpretations of results.
Finally, a similar experiment and data analysis could also be performed in an indoor environment (Riehle, Lichter, & Giudice, 2008) to gain further insights into the influence of varying environmental contexts on navigation behavior.
Overall, our findings have important implications for designing and developing navigation systems that allow for efficient navigation while at the same time supporting acquisition of spatial knowledge. Navigation system design needs to be more thoroughly empirically investigated with respect to levels of automation, modality of information delivery, and where attention is allocated during navigation because these have direct consequences for human navigation behavior and for the ease of acquiring new spatial knowledge.
Is the experimental framework of an assisted and unassisted navigation phase a valid approach to gather useful data in terms of spatial knowledge acquisition and to allow for a smooth execution of an outdoor experiment?
A second goal of this study was to test a new empirical framework in an outdoor environment, and to use navigation errors at intersections as an indicator of spatial knowledge quality, as Dillemuth (2005), Hund and Gill (2014), and Lovelace and Hegarty (1999) suggested. So far, most research on spatial knowledge acquisition has been carried out in VR setups, under highly controlled conditions (e.g., Brunyé et al., 2014; Gardony et al., 2013). Studies testing spatial knowledge acquisition in the real world, and especially in outdoor environments, are still rare and usually have only small numbers of participants (e.g., Bertel et al., 2017; Frei, Richter, & Fabrikant, 2016). Other wayfinding studies in this domain either tested aspects of usability (Cheverst, Mitchell, & Davies, 2001; Gulliksen et al., 2003; Li & Longley, 2006; Looije, te Brake, & Neerincx, 2007) or of attention allocation (Gardony et al., 2013; Kiefer et al., 2013; Michon & Denis, 2001; Roger, Bonnardel, & Le Bigot, 2009; Ross, May, & Thompson, 2004). Our approach involved a real-world outdoor scenario in which participants' spatial knowledge was assessed with a route-reversal task that asked participants to find the identical route back. The framework, introduced in Brügger et al. (2016), is similar to the VR study by Karimpur et al. (2016), but was modified for execution in a dynamically changing outdoor urban environment. To be able to apply a real-life scenario (e.g., finding lost keys) in a real-world environment, we let the participants reverse the route and did not use any of the usually applied direction or distance estimation tasks (as, e.g., in Burte & Montello, 2017). What is more, the newly proposed framework tests users' in situ recognition of the environment and allows for efficient experimental execution.
Challenges of applying the use-inspired framework in the real world
We conducted the experiments on days with similar weather conditions, and only during daytime. Weather conditions led to cancellations and rescheduling of trials, which makes outdoor studies time-consuming and more difficult to plan. Because the experimental Phases 1 and 2 were executed within half an hour, the environmental testing conditions can be considered stable, except for moving objects (e.g., cars, pedestrians, etc.). However, changes in the environment might have occurred between trials. Another challenge of testing this use-inspired framework is the in situ change of participants' walking direction between Phase 1 and Phase 2. Because participants experience actual locomotion, are embedded in the real-world environment, and encounter landmarks from novel perspectives (Bakdash et al., 2008; Klippel et al., 2010; Montello, 2005; Richter & Winter, 2014), we argue that the change of walking direction is easier to deal with in the real world than in virtual environments. Our results confirm that route reversal is a valid use-inspired task for our purposes as, indeed, two-thirds of the participants were able to reverse the route without any navigation errors.
Self-assessed task difficulty of reversing a route
Participants' perceived task difficulty ratings (collected before the navigation task) revealed that they expected the task to be manageable and without many problems. The same task was rated again after they completed the navigation task, and participants found it even easier than expected. This finding is surprising given that more than one-third of the participants made a navigation error. This mismatch between perceived task difficulty and actual real-world navigation performance is interesting and requires further analysis of subjective perceptions of navigation performance across system designs.
Further development of the experimental framework
Being able to reverse a route one just walked may not represent the only goal of navigation (and neither would pointing back to the origin from a destination), but we contend that the approach we implemented in our study indeed represents an everyday problem. Furthermore, we argue that the framework successfully captures differences in spatial knowledge acquisition without using any of the standard measures, such as pointing or sketch map drawing. Still, to develop a fuller picture of spatial knowledge acquisition during assisted navigation, additional studies in outdoor environments need to further refine our proposed use-inspired framework. For example, it would be useful to develop a classification and quantification scheme for navigation errors (e.g., navigation error and behavior categorization, according to varying spatial contexts), which would allow for more detailed and meaningful analyses of spatial knowledge acquisition and human navigation behavior, beyond this study. Overall, our scenario of finding one's lost keys on a previously walked route without any navigation assistance does represent a real-world scenario that can be easily applied to different environments and locations, navigation modalities, and other empirical study contexts. Studying spatial knowledge acquisition in real-world outdoor environments makes an important contribution to the challenges of developing "realistic" outdoor studies, beyond the lab-standard of controllability, typically using impoverished environments and lacking realistic contexts. We see this as a benefit, not a limitation.
Current navigation systems primarily provide information that is useful for navigation performance (efficiency). Due to the way this is implemented in state-of-the-art systems, it typically consumes a navigator's attention, while in fact navigation systems could be leveraged to better manage attention allocation and self-localization, i.e., could benefit both navigation efficiency and spatial knowledge acquisition. The purpose of this study is to determine how navigation system behavior influences navigation performance, gaze behavior, and incidental spatial knowledge acquisition of pedestrians traversing outdoor environments. We applied a new empirical use-inspired framework that includes a real-world scenario of walking a route assisted by a navigation system and then reversing the same route without the assistance of a navigation system. We have further demonstrated that it is possible to study spatial knowledge acquisition in outdoor environments by recording navigation errors at intersections, using them as one of the indicators of lacking mental spatial representations. Further experimental studies are needed to gain a deeper understanding of the kinds of navigation errors participants make during an unassisted recall phase.
Our approach of deriving navigation system behaviors from levels of automation in cognitive processes relevant for wayfinding is unique, and it extends our knowledge of how navigation system behavior influences human behavior in real-world environments. A greater focus on the combination of cognitive processes during assisted navigation in outdoor environments would enhance our understanding of a navigator's active role during navigation and possible divided attention effects. The uncovered gaze pattern differences illustrate the opportunities eye tracking data offer for studying navigation behavior in real-world outdoor studies in order to relate human behavior and cognitive activity to spatial context and spatial tasks. We contend that once we find behavior patterns that depend on task, navigation system, and spatial context, we will be better able to design systems that allocate attention based on these patterns in real time to better support spatial knowledge acquisition and navigation performance. For example, the system might sense a specific behavior pattern (e.g., intensive, repeated use of a button, or constantly lowering fixation positions) and consequently force the navigator to keep using his or her own skills by disabling the button or by reminding the user to look up at the environment. With our study, we contributed to the design of future intelligent navigation systems that know where, when, and in which modality cognitive processes should be supported by automation to increase spatial knowledge acquisition during assisted navigation tasks. The task of reversing the same route without a navigation system should then be possible for everybody, without any navigation errors.
Abbreviations
Alloc: Allocation of attention
AOI: Area of interest
CP: Cognitive processes
ETH: Eidgenössische Technische Hochschule
GIS: Geographic Information System
Hum: Human
Loc: Self-localization
Mdn: Median
Sys: System
Allen, G. L. (1999). Cognitive abilities in the service of wayfinding: A functional approach. The Professional Geographer, 51(4), 554–561.
Bakdash, J. Z., Linkenauger, S., & Proffitt, D. (2008). Comparing decision-making and control for learning a virtual environment: backseat drivers learn where they are going. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 52, 2117–2121. https://doi.org/10.1177/154193120805202707.
Bertel, S., Dressel, T., Kohlberg, T., & von Jan, V. (2017). Spatial knowledge acquired from pedestrian urban navigation systems. In Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services - MobileHCI '17, (pp. 1–6). New York: ACM Press. https://doi.org/10.1145/3098279.3098543.
Brügger, A., Richter, K.-F., & Fabrikant, S. I. (2016). Walk and learn: an empirical framework for assessing spatial knowledge acquisition during mobile map use. Miller, J., O'Sullivan, D., Wiegand, N. (eds.). Proceedings (short papers), 9th International Conference on Geographic Information Science, Sep. 27–30, 2016, Montreal, Canada, 37–40.
Brunyé, T. T., Gagnon, S. A., Gardony, A. L., Gopal, N., Holmes, A., Taylor, H. A., & Tenbrink, T. (2014). Where did it come from, where do you go? Direction sources influence navigation decisions during spatial uncertainty. Quarterly Journal of Experimental Psychology, 68(3), 1–23. https://doi.org/10.1080/17470218.2014.963131.
Burte, H., & Montello, D. R. (2017). How sense-of-direction and learning intentionality relate to spatial knowledge acquisition in the environment. Cognitive Research: Principles and Implications, 2(1), 18. https://doi.org/10.1186/s41235-017-0057-4.
Cheverst, K., Mitchell, K., & Davies, N. (2001). Investigating context-aware information push vs. information pull to tourists. In Proceedings of Mobile HCI 01, 1, 2001.
Chrastil, E. R., & Warren, W. H. (2012). Active and passive contributions to spatial learning. Psychonomic Bulletin & Review, 19(1), 1–23. https://doi.org/10.3758/s13423-011-0182-x.
Chung, J., Pagnini, F., & Langer, E. (2016). Mindful navigation for pedestrians: Improving engagement with augmented reality. Technology in Society, 45, 29–33. https://doi.org/10.1016/j.techsoc.2016.02.006.
Dai, R., Thomas, A. K., & Taylor, H. A. (2018). When to look at maps in navigation: metacognitive control in environment learning. Cognitive Research: Principles and Implications, 3:36.
Dickmann, F. (2012). City maps versus map-based navigation systems – an empirical approach to building mental representations. The Cartographic Journal, 49(1), 62–69. https://doi.org/10.1179/1743277411Y.0000000018.
Dillemuth, J. (2005). Map design evaluation for mobile display. Cartography and Geographic Information Science, 32(4), 285–301. https://doi.org/10.1559/152304005775194773.
Downs, R. M., & Stea, D. (Eds.) (1973). Image and environment: Cognitive mapping and spatial behavior. Chicago: Aldine.
Duchowski, A. T. (2007). Eye tracking methodology - theory and practice. Cham: Springer-Verlag London. https://doi.org/10.1007/978-3-319-57883-5.
Ekstrom, R. B., French, J. W., Harman, H. H., & Dermen, D. (1976). Manual for kit of reference tests for cognitive factors. Princeton: Educational Testing Service.
Frei, P., Richter, K.-F., & Fabrikant, S. I. (2016). Stress supports spatial knowledge acquisition during wayfinding with mobile maps. In Miller, J., O'Sullivan, D., Wiegand, N. (eds.). Proceedings (short papers), 9th International Conference on Geographic Information Science, Sep. 27–30, 2016, Montreal: 100–103.
Gardony, A. L., Brunyé, T. T., Mahoney, C. R., & Taylor, H. A. (2013). How navigational aids impair spatial memory: Evidence for divided attention. Spatial Cognition and Computation, 13(4), 319–350. https://doi.org/10.1080/13875868.2013.792821.
Giudice, N. A., Walton, L. A., & Worboys, M. (2010). The informatics of indoor and outdoor space. A research agenda. In Proceedings of the 2nd ACM SIGSPATIAL International Workshop on Indoor Spatial Awareness - (ISA '10). New York: ACM, 47–53. http://dx.doi.org/10.1145/1865885.1865897.
Glisky, E. (2007). Changes in cognitive function in human aging. In D. R. Riddle (Ed.), Brain aging: models, methods and mechanisms, (vol. 20072731, pp. 3–20). New York: CRC.
Goldberg, J. H., & Kotval, X. P. (1999). Computer interface evaluation using eye movements: methods and constructs. International Journal of Industrial Ergonomics, 24(6), 631–645. https://doi.org/10.1016/S0169-8141(98)00068-7.
Gulliksen, J., Göransson, B., Boivie, I., Blomkvist, S., Persson, J., & Cajander, Å. (2003). Key principles for user-centred systems design. Behaviour & Information Technology, 22(6), 397–409. https://doi.org/10.1080/01449290310001624329.
Hirtle, S. C., & Raubal, M. (2013) Many to Many Mobile Maps. In M. Raubal, D. M. Mark, & A. U. Frank (Eds.), Cognitive and linguistic aspects of geographic space. Lecture Notes in Geoinformation and Cartography. Berlin: Springer. (pp. 141–157). https://doi.org/10.1007/978-3-642-34359-9.
Holmqvist, K., Nyström, M., Andersson, R., Dewhurst, R., Jarodzka, H., & van de Weijer, J. (2011). Eye Tracking: A comprehensive guide to methods and measures. Oxford: Oxford University Press.
Huang, H., Schmidt, M., & Gartner, G. (2012). Spatial knowledge acquisition with mobile maps, augmented reality and voice in the context of GPS-based pedestrian navigation: results from a field test. Cartography and Geographic Information Science, 39(January), 107–116. https://doi.org/10.1559/15230406392107.
Hund, A. M., & Gill, D. M. (2014). What constitutes effective wayfinding directions: The interactive role of descriptive cues and memory demands. Journal of Environmental Psychology, 38, 217–224. https://doi.org/10.1016/j.jenvp.2014.02.006.
Ishikawa, T., Fujiwara, H., Imai, O., & Okabe, A. (2008). Wayfinding with a GPS-based mobile navigation system: A comparison with maps and direct experience. Journal of Environmental Psychology, 28(1), 74–82. https://doi.org/10.1016/j.jenvp.2007.09.002.
Ishikawa, T., & Takahashi, K. (2013). Relationships between methods for presenting information on navigation tools and users ' wayfinding behavior. Cartographic Perspectives, 75(75), 17–28.
Karimpur, H., Röser, F., & Hamburger, K. (2016). Finding the return path: landmark position effects and the influence of perspective. Frontiers in Psychology, 7(December), 1–16. https://doi.org/10.3389/fpsyg.2016.01956.
Kiefer, P., Giannopoulos, I., Athanasios Anagnostopoulos, V., Schöning, J., & Raubal, M. (2017). Controllability matters: The user experience of adaptive maps. GeoInformatica, 21(3), 619–641. https://doi.org/10.1007/s10707-016-0282-x.
Kiefer, P., Giannopoulos, I., & Raubal, M. (2013). Where am I? Investigating map matching during self-localization with mobile eye tracking in an urban environment. Transactions in GIS, 18(5), 660–686. https://doi.org/10.1111/tgis.12067.
Kiefer, P., Giannopoulos, I., Raubal, M., & Duchowski, A. (2017). Eye tracking for spatial research: Cognition, computation, challenges. Spatial Cognition & Computation, 17(1–2), 1–19. https://doi.org/10.1080/13875868.2016.1254634.
Kiefer, P., Giannopoulos, I., Sch, J., & Raubal, M. (2016). Controllability matters: The user experience of adaptive maps. This is the authors ' version of this article ( preprint ). The final version is available at SpringerLink: GeoInformatica.
Klatzky, R. L., Marston, J. R., Giudice, N. A., Golledge, R. G., & Loomis, J. M. (2006). Cognitive load of navigating without vision when guided by virtual sound versus spatial language. Journal of Experimental Psychology: Applied, 12(4), 223–232. https://doi.org/10.1037/1076-898X.12.4.223.
Klippel, A., Hirtle, S., & Davies, C. (2010). You-are-here maps: Creating spatial awareness through map-like representations. Spatial Cognition & Computation, 10(2–3), 83–93. https://doi.org/10.1080/13875861003770625.
Kraft, J. F., & Hurtienne, J. (2017). Transition animations support orientation in mobile interfaces without increased user effort. In Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services - MobileHCI '17, New York: ACM, (pp. 1–6). https://doi.org/10.1145/3098279.3098566.
Lee, U., Lee, J., Ko, M., Lee, C., Kim, Y., Yang, S., … Song, J. (2014). Hooked on smartphones: an exploratory study on smartphone overuse among college students. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, (pp. 2327–2336). New York: ACM. https://doi.org/10.1145/2556288.2557366.
Lee, W.-C., & Cheng, B.-W. (2008). Effects of using a portable navigation system and paper map in real driving. Accident Analysis & Prevention, 40(1), 303–308. https://doi.org/10.1016/j.aap.2007.06.010.
Li, C., & Longley, P. (2006). A test environment for location-based services applications. Transactions in GIS, 10(1), 43–61.
Lin, A. Y., Kuehl, K., Schöning, J., & Hecht, B. (2017). Understanding "death by GPS": a systematic analysis of catastrophic incidents associated with personal navigation technologies. In CHI 2017. Denver: ACM. https://doi.org/10.1145/3025453.3025737.
Lobben, A. K. (2004). Tasks, strategies, and cognitive processes associated with navigational map reading: a review perspective. The Professional Geographer, 56(2), 270–281. https://doi.org/10.1111/j.0033-0124.2004.05602010.x.
Looije, R., te Brake, G. M., & Neerincx, M. A. (2007). Usability engineering for mobile maps. In Proceedings of the 4th international conference on mobile technology, applications, and systems and the 1st international symposium on Computer human interaction in mobile technology - Mobility '07, (pp. 532–539). New York: ACM Press. https://doi.org/10.1145/1378063.1378150.
Lovelace, K. L., & Hegarty, M. (1999). Elements of good route directions in familiar and unfamiliar environments. In: Freksa C., Mark D.M. (eds) Spatial Information Theory. Cognitive and Computational Foundations of Geographic Information Science. COSIT 1999. Lecture Notes in Computer Science, 1661. Berlin: Springer.
Ludwig, B., Müller, M., & Ohm, C. (2014). Empirical evidence for context-aware interfaces to pedestrian navigation systems. KI - Künstliche Intelligenz, 28(4), 271–281. https://doi.org/10.1007/s13218-014-0333-0.
Meilinger, T., Franz, G., & Bülthoff, H. H. (2012). From isovists via mental representations to behaviour: First steps toward closing the causal chain. Environment and Planning B: Planning and Design, 39(1), 48–62. https://doi.org/10.1068/b34048t.
Meilinger, T., Hölscher, C., Büchner, S. J., & Brösamle, M. (2007). How much information do you need? Schematic maps in wayfinding and self localisation. Spatial Cognition V Reasoning, Action, Interaction, 4387, 381–400. https://doi.org/10.1007/978-3-540-75666-8_22.
Michon, P.-E., & Denis, M. (2001). When and why are visual landmarks used in giving directions? In Montello D.R. (eds) Spatial Information Theory. COSIT 2001. Lecture Notes in Computer Science, 2205. Berlin: Springer, (pp. 292–305). https://doi.org/10.1007/3-540-45424-1_20.
Montello, D. R. (1998). A new framework for understanding thr acquisition of spatial knowledge in large-scale environments. In M. J. Egenhofer & R. G. Golledge (eds), Spatial and Temporal Reasoning in Geographic Information Systems, 143–154. New York: Oxford University Press.
Montello, D. R. (2005). Navigation. In P. Shah & A. Miyake (Eds.), The Cambridge handbook of visuospatial thinking. (pp. 257–294). Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511610448.
Montello, D. R. (2009). A conceptual model of the cognitive processing of environmental distance information. In K. S. Hornsby, C. Claramunt, M. Denis, & G. Ligozat (Eds.), Spatial Information Theory, (vol. 5756, pp. 1–17). Berlin: Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-03832-7.
Münzer, S., & Hölscher, C. (2011). Entwicklung und validierung eines fragebogens zu räumlichen strategien. Diagnostica, 57(3), 111–125. https://doi.org/10.1026/0012-1924/a000040.
Münzer, S., Zimmer, H. D., Schwalm, M., Baus, J., & Aslan, I. (2006). Computer-assisted navigation and the acquisition of route and survey knowledge. Journal of Environmental Psychology, 26(4), 300–308. https://doi.org/10.1016/j.jenvp.2006.08.001.
O'Keefe, J., & Nadel, L. (1978). The hippocampus as a cognitive map. Oxford: Oxford University Press. https://doi.org/10.1097/00005053-198003000-00018.
Parasuraman, R. (2000). Designing automation for human use: empirical studies and quantitative models. Ergonomics, 43(7), 931–951. https://doi.org/10.1080/001401300409125.
Parasuraman, R., Sheridan, T. B., & Wickens, C. D. (2000). A model for types and levels of human interaction with automation. IEEE Transactions on Systems, Man, and Cybernetics. Part A, Systems and Humans, 30(3), 286–297. https://doi.org/10.1109/3468.844354.
Parush, A., Ahuvia, S., & Erev, I. (2007). Degradation in spatial knowledge acquisition when using automatic navigation systems. In: Winter S., Duckham M., Kulik L., Kuipers B. (eds) Spatial Information Theory, COSIT 2007. Lecture Notes in Computer Science, vol 4736. Springer, Berlin, Heidelberg.
Pielot, M., & Rello, L. (2017). Productive, anxious, lonely. In Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services - MobileHCI '17, (pp. 1–11). New York: ACM Press. https://doi.org/10.1145/3098279.3098526.
Richter, K., Dara-Abrams, D., & Raubal, M. (2010). Navigating and learning with location based services: A user-centric design. In In G. Gartner & Y. Li (Eds.), Proceedings of the 7th International Symposium on LBS and Telecartography, (pp. 261–276).
Richter, K.-F., Tomko, M., & Cöltekin, A. (2015). Are we there yet? Spatial cognitive engineering for situated human-computer interaction. Cognitive Engineering for Spatial Information Processes: From User Interfaces to Model-Driven Design. Workshop at COSIT 2015.. Santa Fe, NM, USA.
Richter, K.-F., & Winter, S. (2014). Landmarks. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-05732-3.
Riehle, T. H., Lichter, P., & Giudice, N. a. (2008). An indoor navigation system to support the visually impaired. In Proceedings of the 30th annual IEEE Engineering in Medicine and Biology conference, 2008, 4435–4438. August 20-24, Vancouver, Canada. https://doi.org/10.1109/IEMBS.2008.4650195.
Roger, M., Bonnardel, N., & Le Bigot, L. (2009). Improving navigation messages for mobile urban guides: Effects of the guide's interlocutor model, spatial abilities and use of landmarks on route description. International Journal of Industrial Ergonomics, 39(3), 509–515. https://doi.org/10.1016/j.ergon.2008.10.009.
Ross, T., May, A., & Thompson, S. (2004). The use of landmarks in pedestrian navigation instructions and the effects of context. In: Brewster S., Dunlop M. (eds) Mobile Human-Computer Interaction - MobileHCI 2004. Mobile HCI 2004. Lecture Notes in Computer Science, vol 3160. Berlin: Springer.https://doi.org/10.1007/978-3-540-28637-0_26.
Sheridan, T. B. (2002). Humans and automation: System design and research issues. New York: Wiley.
Siegel, A. W., & White, S. H. (1975). The development of spatial representations of lage-scale environments. In H. W. Reese (eds.) Advances in child development and behavior, 10, 9–55. New York: Academic Press.
Taylor, H. A., Brunyé, T. T., & Taylor, S. (2008). Wayfinding and navigation: Mental representation and implications for navigational system design. Reviews of Human Factors and Ergonomics, 4, 1–40. https://doi.org/10.1518/155723408X342835.
Thorndyke, P. W., & Hayes-Roth, B. (1982). Differences in spatial knowledge acquired from maps and navigation. Cognitive Psychology, 14(4), 560–589. https://doi.org/10.1016/0010-0285(82)90019-6.
Van Asselen, M., Fritschy, E., & Postma, A. (2006). The influence of intentional and incidental learning on acquiring spatial knowledge during navigation. Psychological Research, 70(2), 151–156. https://doi.org/10.1007/s00426-004-0199-0.
Wahn, B., & König, P. (2017). Can limitations of visuospatial attention be circumvented? A review. Frontiers in Psychology, 8(October), 1–9. https://doi.org/10.3389/fpsyg.2017.01896.
Weisberg, S. M., & Newcombe, N. S. (2018). Cognitive maps: some people make them, some people struggle. Current Directions in Psychological Science, 27(4), 220–226. https://doi.org/10.1177/0963721417744521.
Willis, K. S., Hölscher, C., Wilbertz, G., & Li, C. (2009). A comparison of spatial knowledge acquisition with maps and mobile maps. Computers, Environment and Urban Systems, 33(2), 100–110. https://doi.org/10.1016/j.compenvurbsys.2009.01.004.
The authors thank Mary Hegarty and Jeremy M. Wolfe for their helpful comments on data analysis.
This research was funded by the Canton of Zurich, UZH, and the Emotive SNSF project (no. 156072).
The data are available on request from the main author.
Department of Geography, University of Zurich, Winterthurerstr.190, 8057, Zurich, Switzerland
Annina Brügger
Department of Computing Science, Umeå University, 90 187, Umeå, Sweden
Kai-Florian Richter
Sara Irina Fabrikant
All authors developed the hypotheses and the experimental design. AB was responsible for data collection and data analysis. All authors were responsible for drafting, reading, editing, and approving the final manuscript.
Correspondence to Annina Brügger.
The ethics approval for this project was provided by the University of Zurich, as per the school's guidelines (swissethics number: Req-2017-00468). In line with ethical guidelines, all participants volunteered to participate, signed written consent for study participation and were allowed to end the experiment at any time. All participants were older than 18 years and were compensated with CHF 20.
The photographer (Fig. 2b) and individuals portrayed provided consent for the image to be used in this publication.
Brügger, A., Richter, KF. & Fabrikant, S.I. How does navigation system behavior influence human behavior?. Cogn. Research 4, 5 (2019). https://doi.org/10.1186/s41235-019-0156-5
Keywords: Empirical user study; Human–computer interaction (HCI); Location-based services (LBS); Spatial cognition
Biological nitrous oxide consumption in oxygenated waters of the high latitude Atlantic Ocean
Andrew P. Rees (ORCID: orcid.org/0000-0003-3070-3447)1,
Ian J. Brown (ORCID: orcid.org/0000-0001-7423-2064)1,
Amal Jayakumar (ORCID: orcid.org/0000-0002-3568-1403)2,
Gennadi Lessin (ORCID: orcid.org/0000-0001-9172-460X)1,
Paul J. Somerfield1 &
Bess B. Ward (ORCID: orcid.org/0000-0001-7870-2684)2
Communications Earth & Environment volume 2, Article number: 36 (2021)
Subjects: Element cycles; Marine chemistry
Nitrous oxide (N2O) is important to the global radiative budget of the atmosphere and contributes to the depletion of stratospheric ozone. Globally the ocean represents a large net flux of N2O to the atmosphere, but the direction of this flux varies regionally. Our understanding of N2O production and consumption processes in the ocean remains incomplete. Traditional understanding tells us that anaerobic denitrification, the reduction of NO3− to N2 with N2O as an intermediate step, is the sole biological means of reducing N2O, a process known to occur in anoxic environments only. Here we present experimental evidence of N2O removal under fully oxygenated conditions, coupled with observations of bacterial communities with novel, atypical gene sequences for N2O reduction. The focus of this work was on the high latitude Atlantic Ocean, where we show bacterial consumption sufficient to account for oceanic N2O depletion and the occurrence of regional sinks for atmospheric N2O.
N2O is now the third most important greenhouse gas1 and the leading stratospheric ozone-depleting compound2. Constraining the magnitude of production and consumption processes is therefore essential to understanding how the biotic environment contributes to atmospheric concentrations3. Sixty percent of the increasing atmospheric concentration has a natural source, approximately 30% of which is produced in coastal and oceanic waters4. Canonical understanding of the nitrogen cycle tells us that the only biological process that consumes N2O is the final stage of denitrification5, where N2O is reduced to N2 under anoxic conditions by the enzyme nitrous oxide reductase (N2OR), encoded by the nosZ gene6.
Whilst the global ocean is considered on balance to represent a strong source of N2O to the atmosphere4, this source is not uniformly distributed. Regions of the northern and southern polar waters are often undersaturated and provide a sink for atmospheric N2O7,8,9,10. Indeed from the limited number of observations made, it appears that undersaturation is the prevailing condition for the ice-free surface waters of the Arctic11,12,13. Zhan et al.10,13 argued that the extant undersaturation must be driven solely by physical processes, whilst Verdugo et al.14 proposed potential biological mechanisms.
Experimental evidence indicates the potential for dissolved N2O to be biologically consumed even when oxygen is replete. The bacteria Paracoccus pantotrophus (previously Thiosphaera pantotropha) and Pseudomonas stutzeri have been shown under laboratory conditions to utilise both nitrate and oxygen as terminal electron acceptors in respiration, during a process termed aerobic denitrification15,16. Aerobic denitrifiers have now been observed in diverse environments, including freshwater17 and saline18 wastewater treatments, soils19 and coastal marine sediments20. Whilst there are no reported observations of N2O consumption in oxygenated ocean waters, several authors have indicated the presence of microbial communities with the potential to reduce N2O21,22,23,24,25,26. In the Arabian Sea, Wyman et al.24 found a close affiliation between a group of alphaproteobacteria expressing nosZ and Trichodesmium, a colonial, nitrogen-fixing cyanobacterium. Coates and Wyman21 extended the geographical coverage of these observations to include the tropical and sub-tropical Red Sea, Atlantic and Indian Oceans, and hypothesised denitrifying activity associated with anoxic microsites within the cyanobacterial colony. Raes et al.23 reported nosZ genes associated with Rhodobacteraceae in oxic waters of the south-eastern Indian Ocean, whilst Farias et al.22 reported the potential for cultured and natural diazotrophs to consume N2O, though at concentrations far in excess of ambient levels.
The nosZ gene occurs as two equally abundant but distinct clades, one of which had previously been unaccounted for with respect to the reduction of N2O27. The well characterised conventional denitrifiers of the alpha-, beta- and gamma-proteobacteria are grouped as Clade I, whereas Clade II comprises the novel, atypical nosZ gene sequences found in organisms previously not associated with denitrification, including Firmicutes, the CFB (Cytophaga-Flavobacteria-Bacteroides) supergroup and Verrucomicrobia. Sun et al.26 showed that Clade II genes were particularly abundant in oxygen minimum zones (OMZs), even in surface waters, and suggested that they might be associated with aerobic N2O reduction. Both Clades I and II of the nosZ gene were detected in several geographical regions associated with the OMZs of the Eastern Tropical Pacific and the Arabian Sea, and were also observed in oxygenated surface waters of the Arctic and Southern Oceans28. Neither N2O production rates nor the distribution of N2O consuming microbes had previously been investigated in the high latitude waters represented by this study.
There are regions of the polar oceans which are undersaturated in N2O relative to the atmosphere and therefore offer a sink for atmospheric N2O; this is recognised10,13 but not wholly explained. During two high latitude research expeditions in northern and southern waters of the Atlantic Ocean, including those presented in ref. 28, we performed incubation experiments and collected samples which indicated that N2O was consumed and that both Clade I and II denitrifiers were present in near-surface waters. In the North Atlantic, measurements of dissolved N2O confirmed the existence of an N2O sink throughout the upper 100 m of the water column, with saturations in the upper 50 m of 90 ± 1% for areas north of 58.44° N. The consumption of N2O was observed at the base of the upper mixed layer (40 to 85 m), where rates were comparable to rates of net production. In near-surface waters, production of N2O was not detected and the mean rates of N2O consumption were greater than the sum of the N2O supply from diapycnal transport of deep waters and surface exchange with the atmosphere. The determination of significant rates of N2O consumption, coincident with the detection of organisms possessing atypical gene sequences for N2O reduction, provides evidence of a biological mechanism that contributes to N2O sink conditions in these high latitude waters.
Research cruise JR271 onboard the RRS James Clark Ross during June 2012 made observations at 14 stations on a track between the northern North Sea, Greenland Sea, Fram Straits and the Barents Sea (Supplementary Fig. 1; Table 1). At five of these stations (EO1–EO5), dissolved N2O was assayed at the time of collection and then at 48 and 96 h in seawater from the near surface (10–20 m), which had been incubated under in-situ conditions of light and temperature for four days. At four out of the five stations, N2O concentration decreased over the 96 h period at rates between 0.17 and 0.31 nM d−1 (linear regression of concentration over time: r2 = 0.97, 0.99, 0.72, 0.78 for E01 to E04 respectively, Table 1). Logistical constraints meant that there was no sample replication within these experimental treatments, and so data from the five stations were pooled and treated as replicates. To account for the heterogeneity of near-surface conditions, N2O concentrations were normalised to the initial ambient concentration and presented as a percentage of the time-zero value (Fig. 1a). The linear regression of N2O through time, averaged across the whole set of observations, reveals a significant loss of N2O (adjusted r2 = 0.439, p < 0.005). At stations E01 to E04 the decrease equates to removal of 1.4–2.4% of the ambient N2O concentration per day. There was no indication of N2O removal at station E05 in the Barents Sea; here the mean N2O concentration remained at 10.37 ± 0.04 nM over the four-day period. Removing this station from the linear regression of decreasing N2O over time improved the fit to an adjusted r2 of 0.721 (p < 0.0005).
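To make the pooling and normalisation procedure concrete, the following minimal R sketch reproduces it with invented concentrations (the cruise data are available from the sources listed under Data availability): each station's series is expressed as a percentage of its time-zero value, the five stations are pooled, and a single linear loss rate is fitted.

```r
# Hypothetical N2O time series for five stations at t = 0, 48 and 96 h;
# the values below are illustrative only, not the JR271 measurements.
t_hours <- rep(c(0, 48, 96), times = 5)
station <- rep(c("E01", "E02", "E03", "E04", "E05"), each = 3)
n2o_nM  <- c(12.1, 11.8, 11.5,  13.0, 12.7, 12.4,  11.6, 11.4, 11.2,
             12.4, 12.1, 11.9,  10.4, 10.4, 10.3)

# Normalise each observation to that station's initial (time-zero) value
pct_t0 <- 100 * n2o_nM / ave(n2o_nM, station, FUN = function(x) x[1])

# Pooled linear regression of normalised N2O against time
fit <- lm(pct_t0 ~ t_hours)
summary(fit)$adj.r.squared    # adjusted r-squared, as quoted in the text
coef(fit)[["t_hours"]] * 24   # slope converted from % per hour to % per day
```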
Table 1 Location of experiments and environmental variables encountered during JR271 (June 2012) and JR274 (January–February 2013).
Fig. 1: N2O in incubated samples.
a, b From the North Atlantic during research cruise JR271, from the near surface and the base of the upper mixed layer respectively; c from cruise JR274 to the South Atlantic, samples collected from the base of the upper mixed layer at stations B03 and B04. In a, due to the large environmental variability, individual station data are presented as a percentage of the original N2O concentration. Stations were located at the North Sea (E01 – filled black circle), south of Iceland (E02 – open black circle), Greenland Sea ice edge (E03 – filled black square), Greenland Sea in ice (E04 – open black square), and Barents Sea (E05 – filled red diamond). In b, mean N2O concentrations at each time point, for each station, are joined by the grey lines as indicated. In b and c, data from all stations were combined; data points (filled black circle) represent mean concentrations ± 1 standard deviation. In a and c the linear regression line representing loss of N2O over time is shown as the solid black line; in c, production of N2O over the first 48 h is represented by the dotted line, and the red dashed line represents the mean expected value of N2O concentration assuming complete equilibration with the atmosphere.
At the same five stations, seawater was also collected and incubated, this time in triplicate, from the base of the seasonal upper mixed layer (40–60 m), depths that were generally coincident with, or close to, maxima in the N2O concentration profile. All incubated samples showed an increase in N2O concentration over the first 48 h, with net rates of N2O production (change in concentration over time) of 0.2–3.3 nM d−1, followed by a decrease in concentration (net consumption) of 1.0–3.7 nM d−1 over the subsequent 48 h (Fig. 1b, Table 1). Again treating the pooled measurements from the five experiments as replicates, a one-way ANOVA showed that the mean N2O concentrations at each time point were significantly different (F2,12 = 10.2, p < 0.003). After 96 h of incubation, N2O had decreased in four (E01–E04) out of five incubations to a level below that of the initial concentration.
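A minimal R sketch of the pooled one-way ANOVA is given below; the per-station values are hypothetical, but the layout shows where the F2,12 degrees of freedom come from (three time points give 3 − 1 = 2, and 15 pooled observations give 15 − 3 = 12).

```r
# Five station means (treated as replicates) at each of three time points;
# values are illustrative only.
n2o  <- c(11.2, 11.5, 10.9, 11.8, 10.4,   # t = 0 h
          12.4, 13.1, 11.6, 12.9, 10.6,   # t = 48 h
          10.8, 11.0, 10.2, 11.1, 10.3)   # t = 96 h
time <- factor(rep(c("0h", "48h", "96h"), each = 5))

# One-way ANOVA; the summary reports Df = 2 (time) and 12 (residuals)
summary(aov(n2o ~ time))
```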
In order to interrogate the pattern of production and apparent consumption of N2O, further incubations were performed at stations BO3 and BO4 in the South Atlantic during research expedition JR274 onboard the RRS James Clark Ross (Supplementary Fig. 1). Similar triplicate incubations of water from the base of the mixed layer were performed with an increased sampling resolution over a five-day period (Fig. 1c, Table 1). The same pattern of production (linear regression analysis of the combined dataset over the first 48 h; slope = +0.035 ± 0.007, adjusted r2 = 0.83, p = 0.007) followed by consumption (slope = −0.041 ± 0.01, adjusted r2 = 0.741, p < 0.02) as in northern waters was observed. Production and consumption rates were comparable to those in the north (Table 1), and consumption continued up to 120 h after the start of the incubation. At 120 h the observed concentration of N2O at both stations was lower than both the start point and the atmospheric equilibration value. In total, 2.0 and 0.9 nM of N2O were removed from the sites in the vicinity of South Georgia and the South Sandwich Islands, equivalent to 8.9% and 4.4% per day of ambient concentrations respectively.
The first step of nitrification, the aerobic oxidation of ammonium to nitrite, is the dominant source of N2O in the ocean except in surface water overlying oxygen minimum zones29, but measurements of nitrification are limited in the polar regions. The observations of N2O production presented in Table 1 for samples collected from the base of the upper mixed layer are higher than those reported previously elsewhere30, which are typically less than 1.7% of nitrite production. These rates of N2O production are not entirely unexpected if associated with high rates of nitrification, such as those reported for the Southern Ocean31 and the North Sea32, and considering that N2O yields of up to 10% have been recorded in culture33. An N2O yield of 1.7% would imply nitrification rates of 10–195 nmol L−1 d−1 at these stations. The pattern of reversal observed between production and consumption (Fig. 1b, c) during incubations argues that production is limited by a finite substrate not identified here, but likely NH4+/NH3, and that the observed consumption maintains a tight balance between the two processes.
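The implied nitrification rates follow from dividing the net N2O production rates by the assumed yield; the short sketch below reproduces the arithmetic for the span of rates observed in this study.

```r
# Net N2O production rates spanning the observed range (nmol L-1 d-1)
n2o_production <- c(0.17, 3.3)

# Assumed fixed N2O yield of nitrite production (1.7%)
yield <- 0.017

# Implied nitrification rates: approximately 10 and 194 nmol L-1 d-1,
# i.e. the 10-195 nmol L-1 d-1 range quoted in the text
n2o_production / yield
```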
nosZ gene abundance and diversity
Diverse nosZ genes, both in DNA and cDNA and including both conventional Clade I and atypical Clade II sequences, were detected in every sample (Fig. 2). Thus a surprising range of microbes, many of which were actively transcribing the functional gene for N2O reduction, was detected in oxygenated surface waters of these high latitude regions. The species richness of cDNA Archetypes (an Archetype represents organisms possessing a nosZ gene sequence with ≥87% identity to the probe sequence) exceeded that of DNA Archetypes at all stations except EO1, and both cDNA and DNA species richness were significantly higher at the northern than at the southern stations (p < 0.023) (diversity measures are reported in Supplementary Table 1). cDNA and DNA community compositions were very similar within stations, with the exception of station BO4. Multi-response permutation procedure analysis indicated that community composition was significantly different between the North Atlantic and Southern Ocean regions (p < 0.002). The community compositions represented by cDNA and DNA were not different (p < 0.875). Among the Clade I nosZ genes, Archetype NosZ20 (representing the cultivated denitrifying strains Pseudomonas stutzeri and P. fluorescens) was among the most abundant archetypes in the northern samples, but was a minor component of those from the Southern Ocean. Clade II Archetypes, representing atypical nosZ genes with greatest identity to Anaeromyxobacter genes, were present in all samples but dominated the nosZ assemblage in the southern samples. Archetypes WnosZ16 and WnosZ21 had the highest relative abundance of all WnosZ archetypes in all samples.
Fig. 2: Stacked bar plot of nosZ genes.
Showing the community composition identified by microarray analysis during research cruises JR271 and JR274.
The suite of environmental and experimental variables collected for all samples from all stations is presented in Table 1 and Supplementary Table 2, and the Pearson correlation coefficients for these in Supplementary Table 3. Highly correlated variables were omitted from further ordination analysis. The ambient N2O concentration was negatively correlated with temperature (p < 0.0001), although both N2O production and consumption rates were positively correlated with temperature (p < 0.0001). Further study is required, not only to investigate the extent and distribution of these processes on geographic and seasonal scales, but also within the context of our warming environment. N2O concentration is largely governed by temperature-dependent solubility, and as water temperatures increase there is potential for greater consumption (and production), which in concert with decreased solubility indicates an increase in the polar ocean N2O sink. Despite the geographical differences in species richness and community composition noted above, N2O production and consumption were also correlated with nosZ copy number in both DNA and cDNA (p < 0.0003). Normalising the consumption rate to cDNA count (Fig. 3a) reveals a positive exponential relationship with temperature for Clade II archetypes which is not evident for the Clade I community.
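The exponential relationship can be fitted as a log-linear model; the sketch below does so with invented temperature and per-transcript consumption values, purely to illustrate the form of the fit.

```r
# Hypothetical per-transcript consumption rates across a temperature range
temp_C        <- c(-1.5, 0.2, 2.8, 6.1, 9.4)
rate_per_copy <- c(0.8, 1.1, 1.9, 4.2, 8.5)

# Exponential model y = a * exp(b * T), fitted on the log scale
fit <- lm(log(rate_per_copy) ~ temp_C)
exp(coef(fit)[[1]])   # a: fitted rate at 0 degrees C
coef(fit)[[2]]        # b: per-degree exponential growth rate
```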
Fig. 3: Relationship between Clade I and Clade II nosZ archetypes and environmental variables for samples collected at the base of surface mixed layer in the North and South Atlantic.
a N2O consumption rate normalised to cDNA count (Red = Clade II, Blue = Clade I, Black = total), b RDA plot showing relationships between environmental variables (T = temperature, BottomZ = water column depth, SampleZ = depth sample collected) and NosZ/WnosZ community compositions based on Archetypes (red cross) from the microarray analysis. Circles = DNA, triangles = cDNA. Green = EO1, orange = EO2, cyan = EO3, blue = EO4, purple = EO5, pink = BO3, yellow = BO4.
In the redundancy analysis (Fig. 3b), combining community composition and the six most important environmental variables, the first two axes explained 49% and 37% of the variability, respectively. The two southern stations clustered closely and were distinct from those from the north. Signals from NosZ65 (Marinobacter-like) and from NosZ20, NosZ22 and NosZ42 (all derived from saltmarsh sediments) were positively correlated with qPCR_cDNA and were prominent in samples from North Sea station EO1, which can explain why station EO1 was the only one that exhibited a strong correlation with gene copy number in cDNA (qPCR_cDNA). The other northern stations, at which NosZ20 was a major signal in both the DNA and cDNA, clustered in the upper left quadrant and were not highly correlated with any of the environmental variables.
At both of the polar stations (E04 and B04) presented in ref. 28 there was a notable lack of the nirS gene, which is responsible for the reduction of NO2− during the denitrification process, even though the two dominant Clade II nosZ archetypes were abundant. This suggests a decoupling of the denitrification pathway and provides the potential for an efficient N2O scavenging mechanism in the absence of complete denitrification. The dominance of the WnosZ Archetypes at both of the southern stations, as well as their high relative abundance in all northern samples for both DNA and cDNA, suggests an important role for the Clade II microbes. This form of nosZ is preferentially associated with microbes that do not possess a full denitrification pathway, i.e., they do not possess the upstream enzymes to respire the soluble N oxides, and it has been implicated in N2O consumption in soils27,34. It seems unlikely that the uncultivated microbes from agricultural soil and the cultivated Anaeromyxobacter strains, from which Archetypes WnosZ16 and WnosZ21, respectively, are derived, are present in the ocean. Rather, we suggest that N2O consuming marine microbes with highly similar enzymes are performing a similar function in the ocean.
The prevalence of the Clade II nosZ genes and the observed consumption of N2O at all geographical positions sampled argue very strongly that this is an important, previously unrecognised process in these high latitude waters. At the base of the upper mixed layer, the removal of N2O by the WnosZ archetypes moderates the flux of N2O from the deeper ocean to the near surface. In the near-surface waters, the rate of consumption is equivalent to the sum of the fluxes from the atmosphere and deeper water (Fig. 4a) and thus provides a mechanism that contributes to the undersaturation of surface waters, which is prevalent in the polar regions7,11,12,13 and which was observed at all stations north of 58°N during this study (Fig. 4b).
Fig. 4: Summary budget of N2O fluxes and indication of N2O undersaturation in northern Atlantic waters during research cruise JR271.
a Mean (± 1 s.d.) rates of production, consumption & fluxes of N2O observed for stations E01 to E05 inclusive. Gross production assumes that consumption processes are constant throughout, which the experimental design did not resolve. b Mean N2O saturation (red line) ± 1 s.d. (dashed lines) for 13 stations north of 58.44°N during the same cruise. Grey line represents N2O saturation at the North Sea station E01 (56.16°N).
The melting of sea-ice has been shown to contribute to undersaturation of N2O in surface waters7,11, whilst the convection of low-N2O water from the deep ocean coupled with air-sea exchange is used by Zhan et al.13 to account for what they describe as a permanent sink in the Nordic Seas. Neither of these mechanisms is necessarily exclusive of our observations; indeed all three are likely to contribute to N2O undersaturation. The current study presents coincident observations, at both poles of the Atlantic Ocean, of Clade II archetype WnosZ abundance and robust measures of apparent N2O consumption. This is a novel observation of the diverse presence and activity of an atypical gene sequence which enables an N2O removal mechanism not previously observed in oxygenated seawater. Although our measurements were restricted to summertime observations in both hemispheres, it is clear that N2O consumption is occurring in oxygenated surface waters and is contributing to the presence of N2O undersaturation in high latitude waters. However, more rigorous investigations are required over greater scales of time and space before incorporating this process into budgetary exercises at regional or global scales.
Seawater samples were collected from Niskin bottles deployed on a titanium frame at the near surface (10–20 m) and at the base of the surface mixed-layer (40 to 85 m) directly into ~4.5 L polycarbonate incubation bottles (NalgeneTM) (Table 1, Supplementary Fig. 1). Bottles from the near surface were sealed and transferred into incubators at ambient temperature and simulated light conditions to match those of collection. N2O concentration was determined on single bottles immediately and then at several time points up to 96 h later. Sample bottles collected from the base of the surface mixed-layer were sealed and incubated in the dark at ambient temperature within a purpose-built experimental laboratory container allowing precise temperature control. N2O concentration was determined on triplicate bottles immediately and then at several time points up to 120 h later. Temperature within a dummy incubation bottle was monitored using a traceable thermometer, while two recording thermometers were used to monitor air temperature in the incubator. Oxygen was determined by Winkler titration at the beginning of the experiment and for selected stations E05, B03 and B04 after 48 and 96 h. Samples were also collected for N2O analysis from CTD casts performed at 14 stations including the five experimental stations during JR271 only (Supplementary Fig. 1).
At six positions on a transect through the Atlantic Ocean between the UK and the Falkland Islands (Supplementary Fig. 1), tests were performed to confirm the validity of this approach and to test the integrity of the polycarbonate bottles to N2O diffusion. At each station up to 24 bottles were filled with seawater, sealed and incubated in the dark at collection temperature. A second set of bottles was collected and poisoned with the addition of 1 mL of saturated HgCl2. N2O concentration was determined on collection, and thereafter triplicate analyses of N2O were made at several time points over six days. N2O concentration remained stable during each of these storage tests (Supplementary Fig. 2); the coefficient of variation varied between 1.3 and 5.2% (mean 3.9%, n = 16 time points × 3 analyses). F tests between N2O in killed samples and initial concentrations, and between killed samples and the expected (atmospheric equilibrium) concentration, confirmed 86% and 87% similarity in variance respectively.
N2O analysis
Samples were collected using acid-cleaned Tygon tubing directly from CTD Niskin bottles or by siphoning from 4.5 L incubation bottles into 1 L borosilicate flasks. Single samples were taken from CTD bottles and triplicates from the incubated samples. Samples were overfilled in order to expel air bubbles, poisoned with 200 µL of saturated HgCl2 solution and temperature equilibrated at 25.0 ± 0.5 °C. In all cases samples were analysed within 8 h of collection. N2O was determined by single-phase equilibration gas chromatography with electron capture detection, similar to that described by Upstill-Goddard et al.35. Each individual sample was calibrated against three certified (±5%) reference standards of 287, 402 and 511 ppb (Air Products Ltd), which are traceable to the NOAA WMO-N2O-X2006A scale for N2O mole fractions. Mean instrument precision from daily, triplicate analyses of the three calibration standards (n = 81) was 0.95%. Concentrations of N2O in seawater were calculated from solubility tables36 at the equilibration temperature (~25 °C) and salinity. To capture variability within experimental treatments at the appropriate scale, mean values from each location-time combination were used as replicates in standard statistical analyses (linear regression and analysis of variance), which were performed using the Data Analysis ToolPak add-in module in Excel for Microsoft 365.
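As an illustration of the per-sample calibration, the R sketch below fits the three certified standards against hypothetical detector responses; converting the resulting mole fraction to a dissolved concentration would then use the Weiss and Price solubility36 at the equilibration temperature and salinity, which is not reproduced here.

```r
# Three certified reference standards (ppb) and invented ECD peak areas
std_ppb   <- c(287, 402, 511)
peak_area <- c(1520, 2135, 2710)

# Per-sample linear calibration of mole fraction against detector response
cal <- lm(std_ppb ~ peak_area)

# Mole fraction of an unknown sample from its (hypothetical) peak area
predict(cal, data.frame(peak_area = 1750))
```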
Air-sea flux
The exchange of N2O between the ocean and the atmosphere, the sea–air flux density (FN2O), was estimated from:
$$F_{\mathrm{N_2O}} = \left( K_{\mathrm{w}} \left( Sc/600 \right)^{0.5} \right) \cdot \left( C_{\mathrm{w}} - C_{\mathrm{a}} \right)$$
where Kw is the gas exchange coefficient adjusted for Sc, the Schmidt number for N2O, as described in ref. 31; Cw is the measured seawater concentration and Ca is the equilibrium concentration of N2O in seawater based on the measured atmospheric value.
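A direct transcription of the flux equation into R is given below; kw, Sc and the concentrations are placeholder values, and the Schmidt-number exponent is taken as printed above (note that some gas-transfer parameterisations scale with an exponent of −0.5).

```r
# Sea-air flux of N2O; kw in m d-1, concentrations in mol m-3
flux_n2o <- function(kw, Sc, Cw, Ca) {
  (kw * (Sc / 600)^0.5) * (Cw - Ca)
}

# Undersaturated surface water (Cw < Ca) gives a negative flux,
# i.e. a flux of N2O into the ocean; all inputs are illustrative.
flux_n2o(kw = 2.5, Sc = 700, Cw = 10.5e-6, Ca = 11.6e-6)
```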
Diapycnal flux
The flux of N2O from below the pycnocline to the surface layer (QN) was estimated from32:
$$Q_{\mathrm{N}} = K_{\mathrm{z}} \cdot \delta N / \delta z$$
where: δN/δz is the vertical gradient of N2O across the pycnocline, and Kz is the vertical turbulent diffusion coefficient, estimated from
$$K_{\mathrm{z}} = 0.24 \cdot \varepsilon / N^{2}$$
where ε is the turbulent kinetic energy dissipation, and N is the buoyancy (Brunt–Väisälä) frequency:
$$N^{2} = -(g/\rho) \cdot (\Delta \rho / \Delta z)$$
where g is gravitational acceleration, ρ is the density and Δρ/Δz is the vertical density gradient across the pycnocline.
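The three equations chain together as in the worked R sketch below; all input values are illustrative rather than cruise measurements.

```r
g       <- 9.81      # gravitational acceleration, m s-2
rho     <- 1027      # reference density, kg m-3
drho_dz <- -0.02     # vertical density gradient, kg m-3 per m (z positive upward)
eps     <- 1e-8      # turbulent kinetic energy dissipation, m2 s-3
dN_dz   <- 5e-8      # vertical N2O gradient across the pycnocline, mol m-4

N2 <- -(g / rho) * drho_dz   # buoyancy frequency squared, s-2
Kz <- 0.24 * eps / N2        # vertical turbulent diffusion coefficient, m2 s-1
QN <- Kz * dN_dz             # diapycnal N2O flux, mol m-2 s-1
QN
```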
Nucleic acid manipulations and quantitative PCR analysis
Seawater samples (up to 8 L) were filtered onto 0.2 µm pore size Sterivex filters (Millipore, Billerica, MA) using a peristaltic pump, and filters were flash frozen in liquid nitrogen and stored at −80 °C. Total DNA and RNA were both extracted from the same Sterivex capsule filters using the AllPrep DNA/RNA Mini Kit (Qiagen Sciences, Maryland, USA). Only one filter was extracted for each depth, i.e., there were no biological replicates. Reverse transcription from RNA to cDNA was performed using the SuperScript III First-Strand Synthesis System for RT-PCR (Invitrogen, Life Technologies). Any remaining RNA was removed by RNase at the end of the synthesis. PCR and qPCR using SYBR Green and the Qiagen master mix (Qiagen Sciences, Maryland, USA) (as described for seawater samples37) were used to amplify a 259 bp fragment of the nosZ gene using primers nosZ1F (WCSYTGTTCMTCGACAGCCAG) and nosZ1R (ATGTCGATCARCTGVKCRTTYTC)38. Standardisation and verification of specificity for the qPCR assays was performed as described in ref. 28. Primers nosZ1F and nosZ1R targeted the Clade I genes; Clade II genes were not amplified separately. Rhodopseudomonas palustris (Clade I) was used as a positive control to optimise the reaction and to construct a plasmid for use as a standard in the qPCR assays. The amplified products were visualised after electrophoresis in 1.0% agarose gels stained with ethidium bromide. Assays were carried out within a single assay plate39. Each assay included triplicates of the no-template controls (NTC), a no-primer control (NPR), four or more serial dilutions for the standard curve, and triplicates of a known quantity of the environmental DNA samples (20–25 ng). DNA was quantified using PicoGreen fluorescence (Molecular Probes, Eugene, OR) calibrated with several dilutions of phage lambda standards. Quantitative PCR was performed using a Stratagene MX3000P (Agilent Technologies, La Jolla, CA, USA). Automatic analysis settings were used to determine the threshold cycle (Ct) values. The copy numbers were calculated according to:
$$\mathrm{Copy\ number} = \frac{\mathrm{ng} \times \mathrm{number\ per\ mole}}{\mathrm{bp} \times \mathrm{ng\ per\ g} \times \mathrm{g\ per\ mole\ of\ bp}}$$
and then converted to copy number ml−1 seawater filtered assuming 100% extraction efficiency. To maintain continuity and consistency among qPCR assays a subset of samples from the first assay was included in subsequent assays, as well as fresh dilution series for standard curves on every assay. Template DNA and plasmid DNA were quantified prior to every assay as above using Pico Green fluorescence to account for DNA loss that occurs upon repeated freeze-thaw cycles.
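A worked example of the conversion for the 259 bp nosZ amplicon follows; the template mass and filtration volume are hypothetical, and the standard constants (Avogadro's number and a mean molar mass of 650 g per mole of base pairs) are assumed.

```r
ng           <- 22         # template DNA in the assay, ng (illustrative)
amplicon_bp  <- 259        # length of the nosZ qPCR product
per_mole     <- 6.022e23   # Avogadro's number ("number per mole")
ng_per_g     <- 1e9
g_per_mol_bp <- 650        # assumed mean molar mass of one base pair

copies <- (ng * per_mole) / (amplicon_bp * ng_per_g * g_per_mol_bp)
copies / 8000   # copies per mL, assuming 8 L of seawater filtered
```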
The microarray (BC016) was developed following the archetype array approach described and employed previously (e.g. refs. 40,41). An archetype is defined as all sequences that hybridise with an individual probe, here with ≥87% identity to the probe sequence. Each 90-mer oligonucleotide probe included a nosZ-specific 70-mer region and a 20-mer control region (5′-GTACTACTAGCCTAGGCTAG-3′) bound to a glass slide. The design and spotting of the probes has been described previously40. Microarray BC016 contained two probe sets (NosZ and WnosZ) for the nosZ gene29. Clade I nosZ, most commonly found in marine and terrestrial heterotrophic denitrifying bacteria and associated with cultured strains representing alpha-, beta- and gamma-Proteobacteria, was represented by 71 NosZ probes derived from whole genome sequences in public databases plus sequences obtained from clone libraries made using DNA extracts from the Great Sippewissett Marsh in Falmouth, MA, USA42. An additional 43 WnosZ probes were included to capture the atypical Clade II nosZ sequences27,34. Cultivated members of the atypical WnosZ probe set include alpha-, beta- and delta-Proteobacteria, the CFB supergroup, Firmicutes and Verrucomicrobia. The probe accession numbers and sequences are listed in Table S2 and the phylogenetic trees of the probe sequences are found in ref. 28. Arrays were printed on glass slides43 by Microarrays, Inc. (Huntsville, AL, USA). Targets were prepared from the amplified DNA produced in the qPCR assays above according to ref. 41. DNA concentration of the targets was measured on a Nanodrop spectrophotometer, and the volume required for 200 ng of DNA was hybridised to duplicate arrays overnight in sealed tubes and then washed according to ref. 41. Arrays were scanned with an Agilent laser scanner 4300 (Agilent, Palo Alto, CA) and analysed using GenePix 6.0 software. Replicate features on the same array were averaged to calculate the Cy3/Cy5 ratio for each probe. The relative fluorescence ratio (RFR, the fraction that each probe's fluorescence (Cy3/Cy5 ratio) contributes to the total fluorescence of the probe group) and the normalised fluorescence ratio (FRn, the Cy3/Cy5 ratio of each probe normalised to the maximum Cy3/Cy5 detected on that array for the probe group) were used for plotting and statistical analysis. The vegan package in R (CRAN website; http://www.R-project.org)44 was used for ordination and diversity analysis of the array data. FRn values were transformed (arcsine square root) in order to normalise the proportional data. Archetypes with FRn < 0.01 were considered absent. Environmental data were standardised around zero (decostand in vegan). The transformed data were used in redundancy analysis (RDA) using the vegan package in R45.
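The array post-processing described above can be sketched in R with the vegan package; the frn and env data frames below are hypothetical (samples by archetypes, and samples by environmental variables, respectively), and only the transform, thresholding, standardisation and RDA steps from the text are shown.

```r
library(vegan)

# Hypothetical normalised fluorescence ratios (FRn) and environmental data
frn <- data.frame(NosZ20  = c(0.42, 0.05, 0.31, 0.22),
                  WnosZ16 = c(0.58, 0.95, 0.69, 0.78))
env <- data.frame(Temp    = c(8.2, -0.5, 12.1, 3.3),
                  SampleZ = c(45, 60, 40, 85))

frn[frn < 0.01] <- 0            # archetypes below FRn = 0.01 treated as absent
frn_t <- asin(sqrt(frn))        # arcsine square-root transform of proportions
env_s <- decostand(env, method = "standardize")  # standardise around zero

ord <- rda(frn_t ~ ., data = env_s)   # redundancy analysis
summary(ord)
```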
All N2O data from this study are available from the British Oceanographic Data Centre (https://doi.org/10.5285/268dfd3b-dcc6-3f4a-e053-6c86abc0c2f9)46. The original nosZ array data are available at Gene Expression Omnibus (http://www.ncbi.nlm.nih.gov/projects/geo/) at the National Center for Biotechnology Information under GEO Accession No. GSE121473. Further supporting datasets from both cruises (JR271 and JR274) can be found at: https://www.bodc.ac.uk/projects/data_management/uk/ukoa/data_inventories/cruise. All data used in Figs. 1–4 are available either from the locations above or within Table 1 and the supplementary information.
Butler, J. H. & Montzka, S. A. The NOAA Annual Greenhouse Gas Index (AGGI) (2018). Available from: http://www.esrl.noaa.gov/gmd/aggi/aggi.html.
Ravishankara, A. R., Daniel, J. S. & Portmann, R. W. Nitrous oxide (N(2)O): the dominant ozone-depleting substance emitted in the 21st century. Science 326, 123–125 (2009).
Zamora, L. M. & Oschlies, A. Surface nitrification: a major uncertainty in marine N2O emissions. Geophys. Res. Lett. 41, 4247–4253 (2014).
Bange, H. W. New directions: the importance of oceanic nitrous oxide emissions. Atmos. Environ. 40, 198–199 (2006).
Devol, A. H. Denitrification Including Anammox, in Nitrogen in the Marine Environment (Second Edition), (eds Douglas, D.A.B., Capone, G., Mulholland, M.R. & Carpenter, E.J.) 263–301 (Academic Press, 2008).
Zumft, W. G. Cell biology and molecular basis of denitrification. Microbiol. Mol. Biol. Rev. 61, 533 (1997).
Rees, A. P., Owens, N. J. P. & UpstillGoddard, R. C. Nitrous oxide in the Bellingshausen Sea and Drake Passage. J. Geophys. Res. 102, 3383–3391 (1997).
Zhan, L. Y. & Chen, L. Q. Distributions of N2O and its air-sea fluxes in seawater along cruise tracks between 30°S–67°S and in Prydz Bay, Antarctica. J. Geophys. Res. 114 (2009).
Zhan, L. Y. et al. A vertical gradient of nitrous oxide below the subsurface of the Canada Basin and its formation mechanisms. J. Geophys. Res. 120, 2401–2411 (2015).
Zhan, L. Y. et al. Austral summer N2O sink and source characteristics and their impact factors in Prydz Bay, Antarctica. J. Geophys. Res. 120, 5836–5849 (2015).
Fenwick, L. et al. Methane and nitrous oxide distributions across the North American Arctic Ocean during summer, 2015. J. Geophys. Res. 122, 390–412 (2017).
Randall, K. et al. First measurements of nitrous oxide in Arctic sea ice. J. Geophys. Res. 117 (2012).
Zhan, L. et al. A permanent N2O sink in the Nordic Seas and its strength and possible variability over the past four decades. J. Geophys. Res. 121, 5608–5621 (2016).
Verdugo, J. et al. Climate relevant trace gases (N2O and CH4) in the Eurasian Basin (Arctic Ocean). Deep-Sea Res. 117, 84–94 (2016).
Robertson, L. A. et al. Confirmation of 'aerobic denitrification' in batch cultures, using gas chromatography and 15N mass spectrometry. FEMS Microbiol. Ecol. 18, 113–119 (1995).
Su, J.-J., Liu, B.-Y. & Liu, C.-Y. Comparison of aerobic denitrification under high oxygen atmosphere by Thiosphaera pantotropha ATCC 35512 and Pseudomonas stutzeri SU2 newly isolated from the activated sludge of a piggery wastewater treatment system. J. Appl. Microbiol. 90, 457–462 (2001).
Frette, L., Gejlsbjerg, B. & Westermann, P. Aerobic denitrifiers isolated from an alternating activated sludge system. FEMS Microbiol. Ecol. 24, 363–370 (1997).
Fu, G. P. et al. Isolation and identification of a salt-tolerant aerobic denitrifying bacterial strain and its application to saline wastewater treatment in constructed wetlands. Bioresource Technol. 290 (2019).
Cavigelli, M. A. & Robertson, G. P. Role of denitrifier diversity in rates of nitrous oxide consumption in a terrestrial ecosystem. Soil Biol. Biochem. 33, 297–310 (2001).
Marchant, H. K. et al. Denitrifying community in coastal sediments performs aerobic and anaerobic respiration simultaneously. ISME J. 11, 1799 (2017).
Coates, C. J. & Wyman, M. A denitrifying community associated with a major, marine nitrogen fixer. Environ. Microbiol. 19, 4978–4992 (2017).
Farias, L. et al. Biological N2O fixation in the Eastern South Pacific Ocean and marine cyanobacterial cultures. PLoS ONE 8 (2013).
Raes, E. J. et al. Reduction of the powerful greenhouse gas N2O in the south-eastern Indian Ocean. PLoS ONE 11 (2016).
Wyman, M., Hodgson, S. & Bird, C. Denitrifying alphaproteobacteria from the Arabian Sea that express nosZ, the gene encoding nitrous oxide reductase, in oxic and suboxic waters. Appl. Environ. Microbiol. 79, 2670–2681 (2013).
Cornejo, M., Murillo, A. A. & Farias, L. An unaccounted for N2O sink in the surface water of the eastern subtropical South Pacific: Physical versus biological mechanisms. Prog. Oceanogr. 137, 12–23 (2015).
Sun, X., Jayakumar, A. & Ward, B. B. Community composition of nitrous oxide consuming bacteria in the oxygen minimum zone of the Eastern Tropical South Pacific. Front. Microbiol. 8 (2017).
Jones, C. M. et al. The unaccounted yet abundant nitrous oxide-reducing microbial community: a potential nitrous oxide sink. ISME J 7, 417–426 (2013).
Jayakumar, A. et al. Community composition of nitrous oxide reducing bacteria investigated using a functional gene microarray. Deep Sea Res. 156, 44–50 (2018).
Ji, Q. X. et al. Global nitrous oxide production determined by oxygen sensitivity of nitrification and denitrification. Global Biogeochem. Cycles 32, 1790–1802 (2018).
Ji, Q. X. & Ward, B. B. Nitrous oxide production in surface waters of the mid-latitude North Atlantic Ocean. J. Geophys. Res. 122, 2612–2621 (2017).
Nightingale, P. D. et al. In situ evaluation of air-sea gas exchange parameterizations using novel conservative and volatile tracers. Global Biogeochem. Cycles 14, 373–387 (2000).
Slawyk, G. et al. Isotopic and enzymatic analyses of planktonic nitrogen utilisation in the vicinity of Cape Sines (Portugal) during weak upwelling activity. Deep-Sea Res. 44, 1–25 (1997).
Goreau, T. J. et al. Production of NO2- and N2O by nitrifying bacteria at reduced concentrations of oxygen. Appl Environ Microbiol. 40, 526–532 (1980).
Sanford, R. A. et al. Unexpected nondenitrifier nitrous oxide reductase gene diversity and abundance in soils. Proc. Natl Acad. Sci. USA 109, 19709–19714 (2012).
Upstill-Goddard, R. C., Rees, A. P. & Owens, N. J. P. Simultaneous high-precision measurements of methane and nitrous oxide in water and seawater by single phase equilibration gas chromatography. Deep-Sea Res. 43, 1669–1682 (1996).
Weiss, R. F. & Price, B. A. Nitrous-oxide solubility in water and seawater. Marine Chem. 8, 347–359 (1980).
Jayakumar, A., Peng, X. F. & Ward, B. B. Community composition of bacteria involved in fixed nitrogen loss in the water column of two major oxygen minimum zones in the ocean. Aquatic Microb. Ecol. 70, 245 (2013).
Henry, S. et al. Quantitative detection of the nosZ gene, encoding nitrous oxide reductase, and comparison of the abundances of 16S rRNA, narG, nirK, and nosZ genes in soils. Appl. Environ. Microbiol. 72, 5181–5189 (2006).
Smith, C. J. et al. Evaluation of quantitative polymerase chain reaction-based approaches for determining gene copy and gene transcript numbers in environmental samples. Environ. Microbiol. 8, 804–815 (2006).
Bulow, S. E. et al. Sediment denitrifier community composition and nirS gene expression investigated with functional gene microarrays. Environ. Microbiol. 10, 3057–3069 (2008).
Ward, B. B. & Bouskill, N. J. The utility of functional gene arrays for assessing community composition, relative abundance, and distribution of ammonia-oxidising bacteria and archaea. In Methods in Enzymology, Vol. 496: Research on Nitrification and Related Processes, Pt B (eds Klotz, M. G. & Stein, L. Y.) 373–396 (2011).
Kearns, P. J. et al. Long-term nutrient addition differentially alters community composition and diversity of genes that control nitrous oxide flux from salt marsh sediments. Estuarine Coast. Shelf Sci. 154, 39–47 (2015).
DeRisi, J. L., Iyer, V. R. & Brown, P. O. Exploring the metabolic and genetic control of gene expression on a genomic scale. Science 278, 680–686 (1997).
Borcard, D., R. Gillet, and P. Legendre, Numerical Ecology with R. Use R! 2011: Springer. 306.
Veuger, B. et al. Nitrification and growth of autotrophic nitrifying bacteria and Thaumarchaeota in the coastal North Sea. Biogeosciences 10, 1775–1785 (2013).
Brown, I. and A. P. Rees, Nitrous Oxide (N2O) concentration data from CTD casts and Bioassay experiments during cruises JR271 (North Sea to Arctic Ocean) in June 2012 and JR274 (Scotia Sea, Southern Ocean) in January-February 2013, U.K. British Oceanographic Data Centre - Natural Environment Research Council, Editor. 2015.
The antityrosinase and antioxidant activities of flavonoids dominated by the number and location of phenolic hydroxyl groups
Ai-Ren Zuo1,2,
Huan-Huan Dong2,
Yan-Ying Yu3,
Qing-Long Shu2,
Li-Xiang Zheng2,
Xiong-Ying Yu2 &
Shu-Wen Cao1,3
Compounds that scavenge reactive oxygen species (ROS) and inhibit tyrosinase may be useful for the treatment and prevention of ROS-related diseases. The number and location of phenolic hydroxyl groups on a flavonoid significantly influence its inhibition of tyrosinase activity, and phenolic hydroxyl groups are indispensable to the antioxidant activity of flavonoids. Isoeugenol, shikonin, baicalein, rosmarinic acid, and dihydromyricetin carry one, two, three, four, and five phenolic hydroxyl groups, respectively. Because these molecules are structurally similar to l-3,4-dihydroxyphenylalanine (l-DOPA), their different structures were expected to give different antityrosinase and antioxidant activities.
This investigation determined the antityrosinase activity, inhibition constants, and inhibition type of isoeugenol, shikonin, baicalein, rosmarinic acid, and dihydromyricetin. Molecular docking was performed with Discovery Studio 2.5 (CDOCKER Dock, Dassault Systemes BIOVIA, USA). The antioxidant effects of the five compounds were also examined on supercoiled pBR322 plasmid DNA, on lipid peroxidation in rat liver mitochondria in vitro, and on DPPH, ABTS, hydroxyl, and superoxide free radical scavenging in vitro.
The compounds exhibited good antityrosinase activities. Molecular docking implied that the compounds can interact with the amino acid residues in the active center of tyrosinase. The compounds also exhibited antioxidant effects on DPPH, ABTS, hydroxyl, and superoxide free radical scavenging in vitro, on lipid peroxidation in rat liver mitochondria induced by the Fe2+/vitamin C system in vitro, and on supercoiled pBR322 plasmid DNA. The activity order was isoeugenol < shikonin < baicalein < rosmarinic acid < dihydromyricetin, showing that compounds with more phenolic hydroxyl groups have greater antioxidant and antityrosinase activities.
This was the first study to use molecular docking to model the antityrosinase activity of these compounds, and the first to examine their protective effects on supercoiled pBR322 plasmid DNA and their inhibition of lipid peroxidation in liver mitochondria. The results suggest that these compounds, with their antityrosinase and antioxidant activities, may be useful against skin hyperpigmentation and as food additives.
Flavonoids play a key role in the treatment of various diseases. Compounds that protect against ROS-induced DNA damage and inhibit tyrosinase may be useful for the treatment and prevention of ROS-related diseases. Flavonoids form a large class of natural products and have been widely used as lead compounds or drugs.
Several studies have shown that the number and location of phenolic hydroxyl groups on flavonoids significantly influence the inhibition of tyrosinase activity [1,2,3]. Increasing the number of phenolic hydroxyl groups on the B ring of flavonoids, or on catechin or resorcinol scaffolds, can greatly enhance tyrosinase inhibition. At present, 4-hexylresorcinol is used commercially for shrimp preservation [4]. The number and position of phenolic hydroxyl groups on 1,2-diphenylethene derivatives also greatly affect tyrosinase inhibition: two phenolic hydroxyls rather than one, or a phenolic hydroxyl in place of a methoxyl group, significantly enhance the inhibition [5,6,7].
The tyrosinase inhibition mechanism of phenolic hydroxyl compounds has been analysed. Because the active center of tyrosinase is hydrophobic, the H+ that combines with the bound dioxygen of the Eoxy form can only come from the hydroxyl groups of tyrosine and dopamine. Phenolic hydroxyl compounds, being similar to tyrosine and dopamine, can therefore inhibit the activity of tyrosinase [8].
Phenolic hydroxyl groups are indispensable to the antioxidant activity of flavonoids. Many studies have shown that antioxidant activity increases with the number of phenolic hydroxyls on the B ring. Seyoum [9] studied the free radical scavenging activity of 52 flavonoids and found that two or three phenolic hydroxyls on the A or B ring, compared with one, greatly enhance the antioxidant activity.
The relationship between the number of phenolic hydroxyls and the antioxidant activity of flavonoids is highly significant. Possible reasons are: (1) the more phenolic hydroxyls, the more H+ available to combine with free radicals; (2) the phenolic hydroxyl has a strong electron-donating effect, which facilitates reaction with free radicals; (3) the more phenolic hydroxyls, the more hydrogen bonding, which also markedly enhances antioxidant activity [10].
As noted above, the number and location of phenolic hydroxyl groups on a flavonoid significantly influence its inhibition of tyrosinase activity, and phenolic hydroxyls are indispensable to its antioxidant activity. Isoeugenol, shikonin, baicalein, rosmarinic acid, and dihydromyricetin carry one, two, three, four, and five phenolic hydroxyl groups, respectively. Because of their structural similarity to l-3,4-dihydroxyphenylalanine (l-DOPA), these different structures were expected to give different antityrosinase and antioxidant activities.
Tyrosinase (EC 1.14.18.1) plays a key role in the biosynthesis of melanin pigment [11]. Under normal physiological conditions, melanin plays a key role in protection against UV injury, animal mimicry, and camouflage [12]. This has attracted researchers to the search for efficient tyrosinase inhibitors. Recently, molecular docking for modeling the antityrosinase activity of compounds has been widely used in drug design [13].
Isoeugenol is a major constituent of Eugenia caryophyllata Thunb. and has extensive pharmacological activities, such as antimicrobial and stomachic effects. Jin [14] reported that isoeugenol analogues exhibit cytotoxic activity against A549, KB, and KB-VCR cell lines.
Shikonin is a major constituent of Arnebia euchroma (Royle) Johnst. and has extensive pharmacological activities. Its good antioxidant activity supports its use as an anti-aging drug candidate, cosmetic material, and food additive. Chen [15] showed that shikonin-induced apoptosis of SK-Hep-1 cells proceeds through the involvement of reactive oxygen species and an oxidative stress-mediated pathway.
Baicalein, used in oriental medicine, exhibits antioxidant and anti-inflammatory activities. Li-Weber [16] reported that baicalein can inhibit several cell-cycle genes, attenuate NF-κB activity, and scavenge many kinds of oxidative radicals.
Rosmarinic acid, isolated from Perilla frutescens (L.) or Rosmarinus officinalis, exhibits many potent biological activities. Zhu [17] reported that rosmarinic acid extract shows high α-glucosidase inhibitory activity, relevant to allergy treatment and diabetes mellitus.
Dihydromyricetin scavenges free radicals and also has antioxidative and antitumour effects. Xin [18] found dihydromyricetin to be a low-toxicity, highly effective natural antioxidant for polypropylene.
This investigation determined the antityrosinase activity, inhibition constants, and inhibition type of the compounds. Molecular docking was used to simulate the binding mode and binding affinity between tyrosinase and the compounds. The antioxidant effects of isoeugenol, shikonin, baicalein, rosmarinic acid, and dihydromyricetin were also tested on supercoiled pBR322 plasmid DNA, on lipid peroxidation, and on DPPH, ABTS, hydroxyl, and superoxide free radical scavenging in vitro.
Isoeugenol, shikonin, baicalein, rosmarinic acid, dihydromyricetin, l-3,4-dihydroxyphenylalanine (l-DOPA), tyrosinase (EC 1.14.18.1), phenanthroline, pyrogallol, 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS), diphenyl-2-picrylhydrazyl (DPPH), thiobarbituric acid (TBA), and 2,2′-azobis(2-methylpropionamidine)dihydrochloride (AAPH) were purchased from the Sigma Chemical Company (St. Louis, MO, USA). The C3606 reagent kit for tissue mitochondria isolation was purchased from Shanghai Biyuntian. Disodium phosphate, sodium dihydrogen phosphate, K2S2O8, potassium sulfate, and ferrous sulfate were purchased from Sinopharm Chemical Reagent Co., Ltd (Shanghai, China). All other solvents and chemicals were of analytical grade and commercially available. The Minimum Standards of Reporting Checklist contains details of the experimental design, statistics, and resources used in this study (Additional file 1).
Tyrosinase activity assay
According to Chen et al. [19], tyrosinase activity was measured using l-DOPA as the substrate. Dimethyl sulfoxide (DMSO) was used to dissolve the inhibitor samples. l-DOPA in PBS buffer (pH 6.8) was pre-incubated at 30 °C. Then 0.1 mL of sample was mixed with 2.8 mL of l-DOPA (0.5 mM); after 1 min, 0.1 mL of tyrosinase solution (5.33 μg/mL) was added, and the absorbance at 475 nm was immediately monitored for 400 s. The relative enzyme activity was taken as the slope of the linear part of the curve. The half-maximal inhibitory concentration (IC50) was used to evaluate the antityrosinase activity. Each sample was measured five times and the results averaged. The inhibitory rate was calculated according to the formula:
$$\text{Inhibitory rate}\,(\%) = [(S_0 - S_1)/S_0] \times 100\%$$
where S1 is the slope value with samples and S0 is the slope value without samples.
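For illustration, this calculation can be sketched in Python; the slopes and concentrations below are hypothetical placeholders, not measured values, and numpy is assumed to be available.

import numpy as np

def inhibitory_rate(s0, s1):
    # Inhibitory rate (%) from the slopes of the linear parts of the
    # progress curves: s0 without inhibitor, s1 with inhibitor.
    return (s0 - s1) / s0 * 100.0

def ic50_by_interpolation(conc, rates):
    # Estimate IC50 by linear interpolation of the dose-response points;
    # assumes conc is increasing and the inhibition crosses 50% in range.
    return float(np.interp(50.0, rates, conc))

s0 = 1.00                                    # hypothetical slope, no inhibitor
slopes = np.array([0.92, 0.71, 0.48, 0.25])  # hypothetical slopes with inhibitor
conc = np.array([5.0, 15.0, 30.0, 60.0])     # final concentrations, μmol/L
rates = inhibitory_rate(s0, slopes)
print(rates)                                 # per-concentration inhibition (%)
print(ic50_by_interpolation(conc, rates))    # about 28.7 μmol/L for these numbers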
Determination of the inhibition type and inhibition constant
The inhibition type was determined from Lineweaver–Burk plots. The inhibition constants were obtained from secondary plots of the apparent Km/Vm (slope) or 1/Vm (intercept) versus the inhibitor concentration.
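A minimal numpy sketch of this procedure (illustrative only; the arrays below are hypothetical values chosen to roughly reproduce the KI reported for shikonin later in this paper):

import numpy as np

def lineweaver_burk(substrate, v):
    # Fit 1/v = slope*(1/[S]) + intercept at one inhibitor concentration;
    # returns (slope, intercept) = (apparent Km/Vm, apparent 1/Vm).
    return np.polyfit(1.0 / np.asarray(substrate), 1.0 / np.asarray(v), 1)

def inhibition_constant(inhibitor_conc, primary_values):
    # Secondary plot: the LB slope (or intercept) versus [I] is linear,
    # and the inhibition constant is intercept/slope of that line.
    a, b = np.polyfit(inhibitor_conc, primary_values, 1)
    return b / a

inh = [0.0, 3.3, 6.67, 13.33]                  # hypothetical [shikonin], μmol/L
lb_slopes = [0.50, 0.59, 0.68, 0.85]           # hypothetical apparent Km/Vm values
lb_intercepts = [0.20, 0.21, 0.23, 0.25]       # hypothetical apparent 1/Vm values
print(inhibition_constant(inh, lb_slopes))     # KI, about 19 μM for these numbers
print(inhibition_constant(inh, lb_intercepts)) # KIS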
Molecular docking study
Molecular docking can predict the binding mode and binding affinity between tyrosinase and the compounds. The crystal structure of tyrosinase (PDB code: 2Y9X) was obtained from the Protein Data Bank (UCSD/SDSC and Rutgers, http://www.rcsb.org/) [20]. Polar hydrogens were added, and all co-crystallized ligands and bound water were removed. The 3D structure of each compound was used as the ligand. Molecular docking was carried out and the interactions analyzed using Discovery Studio Version 4.5 (CDOCKER Dock, Dassault Systemes BIOVIA, USA) [21].
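For readers without Discovery Studio, the very first preparation step can be sketched with the Python standard library: fetching the 2Y9X structure from the RCSB file server (standard download URL pattern) and dropping water records. This is only a partial, illustrative stand-in; ligand removal, hydrogen addition, and the docking itself were done in Discovery Studio.

from urllib.request import urlopen

# Fetch tyrosinase (PDB 2Y9X) from the RCSB file server.
pdb_text = urlopen("https://files.rcsb.org/download/2Y9X.pdb").read().decode()

# Drop water records (residue name HOH, columns 18-20 of the PDB format).
kept = [line for line in pdb_text.splitlines()
        if not (line.startswith(("ATOM", "HETATM")) and line[17:20] == "HOH")]
with open("2Y9X_dry.pdb", "w") as fh:
    fh.write("\n".join(kept) + "\n")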
DPPH free radical scavenging activity
According to Lee et al. [22], DPPH free radical scavenging capacity was measured as follows. Into each tube, 1 mL of test sample at different concentrations was added, followed by 3.5 mL of ethanol and 0.5 mL of 0.6 mmol/L DPPH in methanol. The reaction proceeded for 30 min at room temperature in the dark, and absorbance was read at 517 nm. Each sample was measured three times and the results averaged. The DPPH scavenging activity was calculated according to the formula:
$$\text{DPPH scavenging activity}\,(\%) = [(A_\text{C} - A_\text{S})/A_\text{C}] \times 100\%$$
where AS is the absorbance value with samples and AC is the absorbance value without samples.
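The same arithmetic applies to the ABTS, hydroxyl, and lipid peroxidation assays below. A short illustrative sketch with hypothetical triplicate absorbances:

import numpy as np

a_control = np.array([0.812, 0.805, 0.820])  # hypothetical A517 without sample (AC)
a_sample = np.array([0.401, 0.395, 0.410])   # hypothetical A517 with sample (AS)
scavenging = (a_control.mean() - a_sample.mean()) / a_control.mean() * 100
print(f"DPPH scavenging activity: {scavenging:.1f}%")  # about 50.5% here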
ABTS free radical scavenging activity
According to Wan et al. [23], ABTS free radical scavenging capacity was measured as follows. ABTS was dissolved in water to make a 7 mmol/L stock solution. ABTS+ was produced by reacting the stock solution with 2.45 mmol/L potassium persulfate (K2S2O8) for 12–16 h at room temperature in the dark. The ABTS+ solution was then diluted with methanol to an absorbance of 0.70 ± 0.02 at 734 nm.
Samples (0.5 mL) were added to ABTS+ (5 mL) for 6 min. The control group contained 0.5 mL of ethanol and 5 mL of ABTS+ solution. Each sample was measured three times and the results averaged. The ABTS+ scavenging activity was calculated according to the formula:
$$\text{ABTS}^{+}\ \text{scavenging activity}\,(\%) = [(A_\text{C} - A_\text{S})/A_\text{C}] \times 100\%$$
Hydroxyl free radical scavenging activity
According to De Avellar et al. [24], hydroxyl free radical scavenging capacity was measured as follows. Into each tube, 0.2 mL of sample, 1 mL of PBS buffer (pH 7.4), 0.2 mL of 5 mmol/L phenanthroline, 0.2 mL of 7.5 mmol/L FeSO4, 0.2 mL of 0.05% H2O2, and 3.2 mL of ethanol were added in turn, and the mixture was incubated for 20 min at 37 °C. Absorbance was read at 536 nm. Each sample was measured three times and the results averaged. The hydroxyl free radical scavenging activity was calculated according to the formula:
$$\text{Hydroxyl free radical scavenging activity}\,(\%) = [(A_\text{C} - A_\text{S})/A_\text{C}] \times 100\%$$
Superoxide free radical scavenging activity
According to Shen et al. [25], superoxide free radical scavenging capacity was measured with a Varioskan Flash multifunction microplate reader (Thermo Scientific, USA) and 96-well plates. Each well received 264 μL of PBS buffer (pH 8.2) and 12 μL of sample at different concentrations and was held at 25 °C for 10 min. Then 24 μL of 1.25 mmol/L pyrogallol solution was added and the plate shaken quickly for 3 s. Ethanol served as the blank. Absorbance at 320 nm was measured every 30 s for 5 min at 37 °C. Each sample was measured in triplicate and averaged. The slope of the absorbance-time curve is the autoxidation rate of pyrogallol; a lower slope indicates better superoxide radical scavenging capacity.
The superoxide free radical scavenging activity was calculated according to the formula:
$$\text{Superoxide free radical scavenging activity}\,(\%) = [(S_\text{C} - S_\text{S})/S_\text{C}] \times 100\%$$
where SC is the slope value without samples and SS is the slope value with samples.
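Since this assay is slope-based rather than endpoint-based, the autoxidation rate can be extracted by a linear fit of the kinetic trace; a small sketch with fabricated, perfectly linear readings for illustration:

import numpy as np

def autoxidation_slope(times_s, a320):
    # Slope of the A320-versus-time trace (pyrogallol autoxidation rate).
    return np.polyfit(times_s, a320, 1)[0]

t = np.arange(0, 301, 30)                                # readings every 30 s
slope_blank = autoxidation_slope(t, 0.10 + 0.0009 * t)   # SC, hypothetical blank
slope_sample = autoxidation_slope(t, 0.10 + 0.0004 * t)  # SS, hypothetical sample
print((slope_blank - slope_sample) / slope_blank * 100)  # about 55.6% scavenging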
Lipid peroxidation assay in liver mitochondria in vitro
Liver mitochondria were obtained from Sprague–Dawley (SD) rats using diagnostic kits from Biyuntian (Shanghai, China), according to Zuo et al. [26].
Into each tube, 1 mL of mitochondrial suspension, 0.5 mL of antioxidant solution, 0.25 mL of 1 mM vitamin C, and 0.25 mL of 0.1 mM Fe2+ were added in turn. The positive control contained 0.5 mL of 0.05 M PBS buffer instead of the antioxidant solution; the blank contained 1 mL of mitochondrial suspension and 1 mL of 0.05 M PBS buffer. The reaction lasted 1 h at 37 °C. Then 2 mL of 20% CCl3COOH in 2.5% hydrochloric acid was added for 10 min, followed by 2 mL of 0.67% TBA in 0.3% NaOH. The tubes were placed in a water bath for 30 min at 95 °C and then centrifuged for 10 min at 1372g. Absorbance was read at 532 nm. Each sample was measured three times and the results averaged. The lipid peroxidation inhibition activity was calculated according to the formula:
$$\text{Lipid peroxidation inhibition activity}\,(\%) = [(A_\text{C} - A_\text{S})/A_\text{C}] \times 100\%$$
Supercoiled pBR322 plasmid DNA assay
According to Lin et al. and Zuo et al. [27, 28], the supercoiled pBR322 plasmid DNA assay was performed as follows. Briefly, 100 ng of pBR322 DNA was incubated with 10 mM AAPH in PBS (pH 7.4) in a final volume of 25 μL in microcentrifuge tubes at 37 °C for 1 h. The 25 μL mixture contained 15 μL of AAPH, 5 μL of DNA, and 5 μL of antioxidant; 5 μL of distilled water replaced the antioxidant in its absence. After incubation, the samples were mixed with 2 μL of 10× loading buffer and loaded onto a 0.8% agarose gel, which was electrophoresed in 1× TAE buffer for 75 min (20 mA, 50 V). The gels were photographed under UV transillumination with a Bio-Rad Gel Doc XR system (New York, USA), DNA strand breaks were evaluated, and the amount of supercoiled DNA was quantified with Bio-Rad Quantity One software.
One-way ANOVA was used to analyze differences among means, and P < 0.05 was considered statistically significant (SPSS version 13.0, SPSS).
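A minimal open-source equivalent of this test (SPSS was used in the study) can be sketched with scipy; the triplicate values below are hypothetical:

from scipy import stats

isoeugenol = [41.2, 39.8, 42.5]        # hypothetical triplicate values (%)
baicalein = [63.1, 61.7, 64.0]
dihydromyricetin = [88.4, 90.1, 89.3]

f_stat, p_value = stats.f_oneway(isoeugenol, baicalein, dihydromyricetin)
print(f"F = {f_stat:.2f}, P = {p_value:.4g}")  # significant if P < 0.05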
The substrate for the diphenolase activity assay was l-DOPA. The progress curves of the enzyme reaction formed a family of straight lines with different slopes passing through the origin, the slope indicating the diphenolase activity. No lag period was observed during the oxidation of l-DOPA. Isoeugenol, shikonin, baicalein, rosmarinic acid, and dihydromyricetin exhibited dose-dependent inhibitory effects on tyrosinase diphenolase activity. The IC50 values of the five compounds were respectively 33.33 μmol/L, 26.67 μmol/L, 13.33 μmol/L, 6.67 μmol/L, and 3.33 μmol/L (n = 5, P < 0.05, Fig. 1; Table 1). The order of activity was isoeugenol < shikonin < baicalein < rosmarinic acid < dihydromyricetin. Thus the five compounds had clear inhibitory effects on tyrosinase diphenolase activity, and the order of activity was consistent with the docking scores between tyrosinase and the compounds.
The inhibition effects of isoeugenol, shikonin, baicalein, rosmarinic acid, and dihydromyricetin on the diphenolase activity of mushroom tyrosinase. The IC50 values of the five compounds on the tyrosinase diphenolase activity were respectively 33.33 μmol/L, 26.67 μmol/L, 13.33 μmol/L, 6.67 μmol/L, and 3.33 μmol/L (n = 5, P < 0.05)
Table 1 The IC50 values of flavonoids
Inhibition mechanism on the diphenolase activity of tyrosinase
The inhibitory mechanism of isoeugenol, shikonin, baicalein, rosmarinic acid, and dihydromyricetin on tyrosinase-catalyzed oxidation of l-DOPA was examined through the relationship between compound concentration and enzyme activity, with shikonin as the representative example. As shown in Fig. 2, at different inhibitor concentrations the plots of enzyme activity versus enzyme concentration gave a family of straight lines that all passed through the origin. The final concentration of shikonin for curves 1–5 was 0 μmol/L, 3.3 μmol/L, 6.67 μmol/L, 13.33 μmol/L, and 26.67 μmol/L, respectively. The presence of the inhibitor reduced the enzyme activity but not the amount of active enzyme, and all five inhibitors showed the same behavior. These results show that isoeugenol, shikonin, baicalein, rosmarinic acid, and dihydromyricetin are reversible inhibitors of tyrosinase diphenolase.
Determination of the inhibitory mechanism of shikonin on mushroom tyrosinase. The results showed that shikonin is a reversible inhibitor of tyrosinase for the oxidation of l-DOPA. l-DOPA = l-3,4-dihydroxyphenylalanine
The inhibition type of the five compounds was determined from Lineweaver–Burk double-reciprocal plots of tyrosinase diphenolase inhibition. The enzyme kinetics in the presence of shikonin are shown in Fig. 3. The final concentration of shikonin for curves 1–6 was 0 μmol/L, 3.3 μmol/L, 6.67 μmol/L, 13.33 μmol/L, 26.67 μmol/L, and 33.33 μmol/L, respectively. The Lineweaver–Burk double-reciprocal plots of 1/v versus 1/[S] gave a family of straight lines intersecting in the second quadrant, indicating that shikonin is a competitive–uncompetitive mixed-type inhibitor (Fig. 3a): it can bind not only the enzyme–substrate complex but also the free enzyme. KI was obtained from a plot of the slope (Km/Vmapp) versus inhibitor concentration (Fig. 3b), and KIS from a plot of the vertical intercept (1/Vmapp) versus inhibitor concentration (Fig. 3c); their values were 19.0 μM and 48.6 μM, respectively. Isoeugenol showed the same inhibition type, with KI and KIS of 25.6 μM and 64.7 μM; baicalein, 16.5 μM and 38.4 μM; rosmarinic acid, 14.3 μM and 29.8 μM; and dihydromyricetin, 10.26 μM and 23.6 μM.
a Lineweaver–Burk plots for the inhibition of shikonin on mushroom tyrosinase for the oxidation of l-DOPA. b The plot of slope versus the concentration of shikonin for determining the inhibition constant KI; KI = 19 μmol/L. c The plot of intercept versus the concentration of shikonin for determining the inhibition constant KIS; KIS = 48.6 μmol/L. KI = equilibrium constant for inhibitor binding with the free enzyme; KIS = equilibrium constant for inhibitor binding with the enzyme–substrate complex; l-DOPA = l-3,4-dihydroxyphenylalanine
Molecular docking
Figure 4 shows colored 2D representations of the docked binding mode and binding position between tyrosinase and isoeugenol (a), shikonin (b), baicalein (c), rosmarinic acid (d), and dihydromyricetin (e), respectively. The binding interactions mainly include pi–pi stacking, conventional hydrogen bonds, pi–alkyl, and alkyl interactions. The docking results imply that the compounds can interact with the amino acid residues in the active center of tyrosinase.
Docking simulations 2D diagram of binding position and binding mode between tyrosinase and compound isoeugenol (a), shikonin (b), baicalein (c), rosmarinic acid (d), and dihydromyricetin (e), respectively
The docking scores between tyrosinase and isoeugenol, shikonin, baicalein, rosmarinic acid, and dihydromyricetin were 33.14, 36.13, 37.93, 44.56, and 50.98, respectively, giving the activity order isoeugenol < shikonin < baicalein < rosmarinic acid < dihydromyricetin, in good agreement with the experimental results (Fig. 1). The docking score reflects the interaction affinity between enzyme and ligand as estimated by the scoring algorithm and helps to gauge the likely range of inhibitory activity; its main use is as an evaluation index for quick preliminary screening of compounds. In this paper, the antityrosinase activities of the five compounds suggested by the docking scores were verified by in vitro experiments.
Figure 5 shows docking simulations of the conformational changes and binding positions between tyrosinase and the inhibitors. Colored 3D representations of the protein–ligand complexes show the surface and conformational changes of the compounds before (a) and after (b) docking into tyrosinase, and the binding positions of isoeugenol (A), shikonin (B), baicalein (C), rosmarinic acid (D), and dihydromyricetin (E) in the hydrophobic pocket of tyrosinase (c), which illuminate the mechanism of inhibition of the diphenolase activity.
Colored 3D representations of the protein–ligand complexes showing the surface and conformational changes of the compounds before (a) and after (b) docking into tyrosinase, and docking simulations of isoeugenol (A), shikonin (B), baicalein (C), rosmarinic acid (D), and dihydromyricetin (E), respectively, in the hydrophobic pocket of tyrosinase (c)
The binding mode and binding sites of tyrosinase with the five compounds were studied by molecular simulation. The results showed that these compounds enter the hydrophobic active cavity of tyrosinase and change the enzyme conformation, which in turn affects the catalytic activity. The hydrogen bonds between the Met 280, Val 283, and His 85 residues and the compounds, the pi–pi bonds between Phe 264, His 244, His 259, or His 263 and the compounds, and the pi–alkyl bonds between Val 283 or Val 248 and the compounds may be involved in recognizing and anchoring the ligand within tyrosinase. Besides the phenolic hydroxyls, the scaffolds of the different compounds may also affect their antityrosinase activities; in particular, different hydrophobic groups may contribute significantly to binding with the hydrophobic cavity of the target protein. The molecular docking results provided detailed information and visual evidence of the binding positions between tyrosinase and the inhibitors, and similar binding positions and binding modes may imply similar inhibition mechanisms. However, without experimental evidence it would be premature to apply the developed models to the antityrosinase activity of other compounds. Seo [29] reported that the CDOCKER and CDOCKER interaction energies of quercetin and its analogues were decreased by the C151W mutation, whereas benzoic acid and its analogues did not lower the energies; in particular, the pi–pi stacking or pi–alkyl interactions of quercetin and quercetin-4′-methyl ether with His 154 or Val 132 were blocked. These results indicate the influence of the Cys 151 residue of Keap1 on the interaction between compounds and the Keap1 protein.
Figure 6 shows that isoeugenol, shikonin, baicalein, rosmarinic acid, and dihydromyricetin had obvious DPPH free radical scavenging activity. The IC50 values of DPPH free radical scavenging capacity of isoeugenol, shikonin, baicalein, rosmarinic acid, and dihydromyricetin were respectively 101.6 μmol/L, 83.2 μmol/L, 58.6 μmol/L, 28.5 μmol/L, and 12.4 μmol/L (n = 3, P < 0.05, Table 1). The order of activity was: isoeugenol < shikonin < baicalein < rosmarinic acid < dihydromyricetin.
The relationship between final concentration and the ratio of scavenging DPPH radicals. The IC50 values of DPPH free radical scavenging capacity of isoeugenol, shikonin, baicalein, rosmarinic acid, and dihydromyricetin were respectively 101.6 μmol/L, 83.2 μmol/L, 58.6 μmol/L, 28.5 μmol/L, and 12.4 μmol/L (n = 3, P < 0.05). DPPH 1,1-diphenyl-2-picrylhydrazyl
Zhu [17] reported that the IC50 of the DPPH radical scavenging activity of rosmarinic acid extract was 5.5 ± 0.2 μg/mL, and the IC50 of its α-glucosidase inhibitory activity was 0.23 ± 0.01 mg/mL. Liu [30] found that the IC50 of the DPPH radical scavenging activity of the dihydromyricetin–lecithin complex was 22.60 μg/mL. Xu [31] reported that dihydromyricetin scavenged hydroxyl (·OH), superoxide (O2·), and peroxyl (ROO·) radicals by 83.9%, 90.0%, and 63.9%, respectively.
Figure 7 shows that isoeugenol, shikonin, baicalein, rosmarinic acid, and dihydromyricetin had obvious ABTS free radical scavenging activity. The IC50 values of ABTS free radical scavenging capacity of isoeugenol, shikonin, baicalein, rosmarinic acid, and dihydromyricetin were respectively 36.36 μmol/L, 27.27 μmol/L, 9.09 μmol/L, 6.82 μmol/L, and 3.41 μmol/L (n = 3, P < 0.05, Table 1). The order of activity was: isoeugenol < shikonin < baicalein < rosmarinic acid < dihydromyricetin.
The relationship between final concentration and the ratio of scavenging ABTS radicals. The IC50 values of ABTS free radical scavenging capacity of isoeugenol, shikonin, baicalein, rosmarinic acid, and dihydromyricetin were respectively 36.36 μmol/L, 27.27 μmol/L, 9.09 μmol/L, 6.82 μmol/L, and 3.41 μmol/L (n = 3, P < 0.05). ABTS = 2,2′-azino-bis-(3-ethylbenzothiazoline-6-sulphonic acid)
Figure 8 shows that isoeugenol, shikonin, baicalein, rosmarinic acid, and dihydromyricetin had obvious hydroxyl free radical scavenging activity. The IC50 values of hydroxyl free radical scavenging capacity of isoeugenol, shikonin, baicalein, rosmarinic acid, and dihydromyricetin were respectively 32.5 μmol/L, 18.3 μmol/L, 11.6 μmol/L, 8.3 μmol/L, and 4.2 μmol/L (n = 3, P < 0.05, Table 1). The order of activity was: isoeugenol < shikonin < baicalein < rosmarinic acid < dihydromyricetin.
The relationship between final concentration and the ratio of scavenging hydroxyl radicals. The IC50 values of hydroxyl free radical scavenging capacity of isoeugenol, shikonin, baicalein, rosmarinic acid, and dihydromyricetin were respectively 32.5 μmol/L, 18.3 μmol/L, 11.6 μmol/L, 8.3 μmol/L, and 4.2 μmol/L (n = 3, P < 0.05)
Figure 9 shows that isoeugenol, shikonin, baicalein, rosmarinic acid, and dihydromyricetin had obvious superoxide free radical scavenging activity. The IC50 values of superoxide free radical scavenging capacity of isoeugenol, shikonin, baicalein, rosmarinic acid, and dihydromyricetin were respectively 38.2 μmol/L, 31.5 μmol/L, 16.1 μmol/L, 12.3 μmol/L, and 7.6 μmol/L (n = 3, P < 0.05, Table 1). The order of activity was: isoeugenol < shikonin < baicalein < rosmarinic acid < dihydromyricetin.
The relationship between final concentration and the ratio of scavenging superoxide radicals. The IC50 values of superoxide free radical scavenging capacity of isoeugenol, shikonin, baicalein, rosmarinic acid, and dihydromyricetin were respectively 38.2 μmol/L, 31.5 μmol/L, 16.1 μmol/L, 12.3 μmol/L, and 7.6 μmol/L (n = 3, P < 0.05)
Figure 10 shows that isoeugenol, shikonin, baicalein, rosmarinic acid, and dihydromyricetin had obvious activity of inhibiting lipid peroxidation. The IC50 values of inhibiting lipid peroxidation of isoeugenol, shikonin, baicalein, rosmarinic acid, and dihydromyricetin were respectively 25.1 μmol/L, 16.67 μmol/L, 12.5 μmol/L, 8.33 μmol/L, and 6.25 μmol/L (n = 3, P < 0.05, Table 1). The order of activity was: isoeugenol < shikonin < baicalein < rosmarinic acid < dihydromyricetin.
The relationship between final concentration and the ratio of inhibiting lipid peroxidation. The IC50 values of inhibiting lipid peroxidation of isoeugenol, shikonin, baicalein, rosmarinic acid, and dihydromyricetin were respectively 25.1 μmol/L, 16.67 μmol/L, 12.5 μmol/L, 8.33 μmol/L, and 6.25 μmol/L (n = 3, P < 0.05)
Figure 11a shows that in the absence of AAPH, the plasmid DNA was mainly supercoiled. With the addition of 10 mM AAPH, the supercoiled form of plasmid DNA was converted into the open circular and linear forms. In the presence of 10 μM of the compounds, the amount of the supercoiled form increased, while the amounts of the linear and open circular forms decreased. The amount of supercoiled plasmid DNA was quantified by the Bio-Rad Quantity One software, and Figure 11b shows the observed values. Thus, these compounds protected against free radical injury induced by AAPH in a dose-dependent manner. The order of inhibition activity was: isoeugenol < shikonin < baicalein < rosmarinic acid < dihydromyricetin.
a Agarose gel electrophoretic patterns of supercoiled pBR322 plasmid DNA converted into the open circular by AAPH in the presence or absence of samples (10 μM). b The effects of samples on supercoiled pBR322 plasmid DNA converted into the open circular by AAPH in the presence or absence of samples (10 μM). Lane 1: control (native pBR322 DNA, without AAPH); Lane 2: AAPH; Lane 3: AAPH + isoeugenol; Lane 4: AAPH + shikonin; Lane 5: AAPH + baicalein; Lane 6: AAPH + rosmarinic acid; Lane 7: AAPH + dihydromyricetin. The density of the supercoiled DNA form was quantified by Quantity One (Bio-Rad). Data are the average of three determinations; C open circular, S supercoil, AAPH 2,2′-azobis(2-methylpropionamidine)dihydrochloride
The conversion of bacteriophage or plasmid DNA from the supercoiled form to the open circular and linear forms is used as an index of DNA damage. Strand breaks in pBR322 DNA can be caused by the presence of AAPH [32].
Isoeugenol is a major constituent of E. caryophyllata Thunb. One study [33] reported antioxidant activities in the following order: butylated hydroxytoluene (BHT) > Mannich product of isoeugenol > isoeugenol > Mannich product of eugenol > eugenol. Ko et al. [34] reported that demethyldi-isoeugenol inhibited Fe2+-induced lipid peroxidation and also scavenged the superoxide anion generated by peroxyl radicals (ROO·) derived from AAPH.
Shikonin is a major constituent of Arnebia euchroma (Royle) Johnst. Reported results revealed that shikonin demonstrated the higher reducing ability (0.431%), while deoxyshikonin showed the maximum inhibition (0.440%) in the DPPH radical scavenging assay.
Baicalein is a major constituent of Scutellaria baicalensis. Nishioka [35] reported that baicalein can inhibit the expression of human intestinal sucrase in Caco-2 cells. Tsai [36] reported that baicalein protects against lipopolysaccharide-induced acute lung injury in rats. Jeli [37] reported that baicalein exhibits good inhibitory activities against both production of the cytokine IL-6 and tyrosine kinase.
Rosmarinic acid can inhibit the enzymatic browning of fruits and vegetables. Ha [38] showed that rosmarinic acid possesses mushroom tyrosinase inhibitory activity (IC50 of 4.0 μM). Ding [39] showed that rosmarinic acid methyl ester can inhibit tyrosinase and reduce the melanin content of B16 cells. Fujimoto [40] showed that oxidation of rosmarinic acid afforded a product with high tyrosinase-inhibitory activity. Rosmarinic acid has both antioxidant and prooxidant activities, and Sánchez-Campillo [41] indicated that it can serve as a good photo-protective agent against UV and other ionizing radiation.
Zhao et al. [42] evaluated the antioxidant properties of Citri Exocarpium Rubrum based on its DPPH free radical scavenging activity, ferric ion reducing antioxidant power (FRAP), and trolox equivalent antioxidant capacity (TEAC) assays; bivariate correlation analysis revealed correlations between the characteristic peaks and the antioxidant activities of the samples. Sambucus williamsii Hance (Jiegumu) is traditionally used in Chinese medicine to treat bone and joint diseases; its major phytochemicals are phenolic acids, lignans, and terpenoids. These compounds may have antioxidant, anti-inflammatory, bone-fracture-healing, and anti-osteoporotic effects [43].
Tyrosinase (EC 1.14.18.1) plays a key role in melanin biosynthesis [44]. Overexpression of tyrosinase produces excessive melanin, leading to melasma and age spots [45]. Tyrosinase is also responsible for the browning of vegetables and fruits in the food industry, which reduces market value and shortens product shelf life [46]. Increasing attention has therefore been drawn to antioxidants and tyrosinase inhibitors as preservatives, as skin-protective ingredients in cosmetics, and in the food industry. On the other hand, ROS can induce oxidative damage to proteins and DNA and peroxidation of membrane lipids; lipid peroxidation generates malondialdehyde (MDA), which harms cells [47]. Obtaining adequate antioxidants from the diet may therefore be useful.
In conclusion, isoeugenol, shikonin, baicalein, rosmarinic acid, and dihydromyricetin exhibited good antityrosinase activities. These compounds also exhibited good antioxidant effects on lipid peroxidation, on supercoiled pBR322 plasmid DNA, and on DPPH, ABTS, hydroxyl, and superoxide free radical scavenging. The different molecular structures lead to different antityrosinase and antioxidant activities, in the order isoeugenol < shikonin < baicalein < rosmarinic acid < dihydromyricetin; compounds with more phenolic hydroxyl groups have greater antioxidant and antityrosinase activities. This was the first study to use molecular docking to model the antityrosinase activity of these compounds, and the first to examine their inhibition of lipid peroxidation in liver mitochondria induced by the Fe2+/vitamin C (Vc) system in vitro and their protective effects on supercoiled pBR322 plasmid DNA. In summary, the results support the use of these compounds as anti-aging drug candidates, cosmetic materials, and food additives.
ROS:
reactive oxygen species
l-DOPA:
l-3,4-dihydroxyphenylalanine
DPPH:
diphenyl-2-picrylhydrazyl
TBA:
thiobarbituric acid
ABTS:
2, 2′-azino-bis (3-ethylbenzothiazoline-6-sulfonic acid)
AAPH:
2,2′-azobis(2-methylpropionamidine)dihydrochloride
IC50:
inhibitory concentration 50
Kim D, Park J, Kim J. Flavonoids as mushroom tyrosinase inhibitors: a fluorescence quenching study. J Agric Food Chem. 2006;54:935–41.
Khatib S, Nerya O, Musa R. Chalcones as potent tyrosinase inhibitors: the importance of a 2,4-substituted resorcinol moiety. Bioorg Med Chem. 2005;13:433–41.
Shimizu K, Kondo R, Sakai K. Inhibition of tyrosinase by flavonoids, stilbenes and related 4–substituted resorcinols: structure-activity investigations. Planta Med. 2000;66:11–5.
Nerya O, Musa R, Khatib S. Chalcones as potent tyrosinase inhibitors: the effect of hydroxyl positions and numbers. Phytochem. 2004;65:1389–95.
Shimizu K, Yasutake S, Kondo R. A new stilbene with tyrosinase inhibitory activity from Chlorophora excelsa. Chem Pharm Bull. 2003;51:318–9.
Kenji O, Toshiyuki T, Tetsuro I. Inhibitory effects of resveratrol derivatives from dipterocarpaceae plants on tyrosinase activity. Biosci Biotechnol Biochem. 2003;67:1587–9.
Ohguchi K, Tanaka T, Iliya I. Gnetol as a potent tyrosinase inhibitor from Genus Gnetum. Biosci Bio-technol Biochem. 2003;67:663–5.
Thörneby-Andersson K, Sterner O, Hansson C. Tyrosinase-mediated formation of a reactive quinone from the depigmenting agents, 4-tert-butylphenol and 4-tert-butylcatechol. Pigment Cell Res. 2000;13:33–8.
Seyoum A, Asres K, El-Fiky FK. Structure-radical scavenging activity relationships of flavonoids. Phytochem. 2006;67:2058–70.
Cai YZ, Sun M, Xing J. Structure-radical scavenging activity relationships of phenolic compounds from traditional Chinese medicinal plants. Life Sci. 2006;78:2872–88.
Ferro S, De Luca L, Germano MP, Buemi MR, Lelo L. Chemical exploration of 4-(4-fluorobenzyl) piperidine fragment for the development of new tyrosinase inhibitors. Eur J Med Chem. 2017;125:591–8.
Pillaiyar T, Manickam M, Namasivayam V. Skin whitening agents: medicinal chemistry perspective of tyrosinase inhibitors. J Enzyme Inhib Med Chem. 2017;32:403–25.
Wang JL, Cheng LP, Wang TC, Deng W, Wu FH. Molecular modeling study of CP-690550 derivatives as JAK3 kinase inhibitors through combined 3D-QSAR, molecular docking, and dynamics simulation techniques. J Mol Graph Model. 2017;72:178–86.
Jin T. Dehydrozingerone, chalcone, and isoeugenol analogues as in vitro anticancer agents. J Nat Prod. 2006;69:1445–9.
Chen CH. Involvement of reactive oxygen species, but not mitochondrial permeability transition in the apoptotic induction of human SK-Hep-1 hepatoma cells by shikonin. Planta Med. 2003;69:1119–24.
Li-Weber M. New therapeutic aspects of flavones: the anticancer properties of Scutellaria and its main active constituents Wogonin, Baicalein and Baicalin. Cancer Treat Rev. 2009;35:57–68.
Zhu F, Asada T, Sato A. Rosmarinic acid extract for antioxidant, antiallergic, and α-glucosidase inhibitory activities, isolated by supramolecular technique and solvent extraction from Perilla leaves. J Agric Food Chem. 2014;62:885–92.
Xin ML, Ma Y, Lin W, Xu K, Chen M. Use of dihydromyricetin as antioxidant for polypropylene stabilization. J Therm Anal Calorim. 2015;120:1741–7.
Chen LH, Hu YH, Song W, Song KK, Liu X, Chen QX. Synthesis and antityrosinase mechanism of benzaldehyde thiosemicarbazones: novel tyrosinase inhibitors. J Agric Food Chem. 2012;60:1542–9.
Dong HH, Liu J, Liu XR, Yu YY, Cao SW. Molecular docking and QSAR analyses of aromatic heterocycle thiosemicarbazone analogues for finding novel tyrosinase inhibitors. Bioorg Chem. 2017;75:106–17.
Dong HH, Liu J, Liu XR, Yu YY, Cao SW. Combining molecular docking and QSAR studies for modeling the antityrosinase activity of aromatic heterocycle thiosemicarbazone analogues. J Mol Struct. 2018;1151:353–65.
Lee YL, Yang JH, Mau JL. Antioxidant properties of water extracts from Monascus fermented soybeans. Food Chem. 2008;106:1127–37.
Wan CP, Yu YY, Zhou SR, Liu W, Cao SW. Antioxidant activity and free radical-scavenging capacity of Gynura divaricata leaf extracts at different temperatures. Phcog Mag. 2011;7:40–5.
De Avellar IGJ, Magalhaas MMM, Silvan AB. Reevaluating the role of 1,10-phenanthroline in oxidative reactions involving ferrous ions and DNA damage. BBA-Gen Subjects. 2004;1675:46–53.
Shen J, Huang C, Jiang L, Gao F. Enhancement of cisplatin induced apoptosis by suberoylanilide hydroxamic acid in human oral squamous cell carcinoma cell lines. Biochem Pharmacol. 2007;73:1901–9.
Zuo AR, Yu YY, Li J, Xu BB, Yu XY, Qiu Y, Cao SW. Study on the relation of structure and antioxidant activity of isorhamnetin, quercetin, phloretin, silybin and phloretin isonicotinyl hydrazone. Free Rad Antiox. 2011;1:39–47.
Lin X, Yang DJ, Cai WQ, Zhao QY, Gao YF, Chen Q, Wang R. Endomorphins, endogenous opioid peptides, provide antioxidant defense in the brain against free radical-induced damage. BBA-Mol Basis Dis. 2003;195:1639–45.
Zuo AR, Yu YY, Shu QL, Zheng LX, Wang XM, Peng SH, Xie YF, Cao SW. Hepatoprotective effects and antioxidant, antityrosinase activities of phloretin and phloretin isonicotinyl hydrazone. J Chin Med Assoc. 2014;77:290–301.
Seo JY, Kim SK, Nguyen PH, Lee JY, Tung PHT, Sung SH, Oh WK. Chemical constituents from a Gynostemma laxum and their antioxidant and neuroprotective activities. Chin Med. 2017;12:15–27.
Liu BG, Du JQ, Zeng J, Chen CG, Niu SY. Characterization and antioxidant activity of dihydromyricetin–lecithin complex. Eur Food Res Technol. 2009;230:325–31.
Xu JJ, Yao MY, Xu G. Study on antioxidant activities of dihydromyricetin. Food Sci. 2007;28:43–5.
Zhang P, Omaye S. DNA strand breakage and oxygen tension: effects of β-carotene, α-tocopherol and ascorbic acid. Food Chem Toxicol. 2001;39:239–45.
Hubungan A, Antioksidan A, Isoeugenol D, Vanilin E, Dan T. Structure-antioxidant activities relationship. Indonesian J Chem. 2010;7:10–4.
Ko FN, Liao CH, Kuo YH, Lin YL. Antioxidant properties of demethyldi-isoeugenol. Biochim Biophys Acta. 1995;1258:145–52.
Nishioka T, Kawabata J, Aoyama Y. Baicalein, an alpha-glucosidase inhibitor from Scutellaria baicalensis. J Nat Prod. 1998;61:1413–5.
Tsai CL, Lin YC, Wang HM. Baicalein, an active component of Scutellaria baicalensis, protects against lipopolysaccharide-induced acute lung injury in rats. J Ethnopharmacol. 2014;153:197–201.
Jeli D, Lower-Nedza AD, Brantner AH. Baicalin and Baicalein Inhibit Src Tyrosine Kinase and Production of IL-6. J Chem. 2016;8:1–6.
Ha TJ, Lee MH, Kwon HS. Oxidation of rosmarinic acid catalyzed by mushroom tyrosinase. J Korean Soc Appl Bio Chem. 2011;54:619–22.
Ding HY, Tzunghan C, Liang CH. Antioxidant and antimelanogenic properties of rosmarinic acid methyl ester from Origanum vulgare. Food Chem. 2010;123:254–62.
Fujimoto A, Shingai Y, Nakamura M, Maekawa T, Sone Y. A novel ring-expanded product with enhanced tyrosinase inhibitory activity from classical Fe-catalyzed oxidation of rosmarinic acid, a potent antioxidant. Bioorg Med Chem Lett. 2011;42:7393–6.
Sánchez-Campillo M, Gabaldon JA, Castillo J, Benavente-García O. Rosmarinic acid, a photo-protective agent against UV and other ionizing radiations. Food Chem Toxicol. 2009;47:386–92.
Zhao Y, Kao CP, Liao CR, Wu KC, Zhou X, Ho YL, Chang YS. Chemical compositions, chromatographic fingerprints and antioxidant activities of Citri Exocarpium Rubrum (Juhong). Chin Med. 2017;12:6–20.
Xiao HH, Zhang Y, Cooper R, Yao XS, Wong MS. Phytochemicals and potential health effects of Sambucus williamsii Hance (Jiegumu). Chin Med. 2016;11:36–51.
Han YK, Park YJ, Ha YM, Park D, Lee JY, Lee N. Characterization of a novel tyrosinase inhibitor, (2RS,4R)-2-(2,4-dihydroxyphenyl) thiazolidine-4-carboxylic acid (MHY384). BBA-Gen Subjects. 2012;542:1820–6.
Fu B, Li H, Wang X, Lee FSC, Cui S. Isolation and identification of flavonoids in licorice and a study of their inhibitory effects on tyrosinase. J Agric Food Chem. 2005;53:7408–13.
Zhang JP, Chen QX, Song KK, Xie JJ. Inhibitory effects of salicylic acid family compounds on the diphenolase activity of mushroom tyrosinase. Food Chem. 2006;95:579–84.
Stangl V, Lorenz M, Ludwig A. The flavonoid phloretin suppresses stimulated expression of endothelial adhesion molecules and reduces activation of human platelets. J Nutr. 2005;135:172–8.
ARZ, SWC, and QLS proposed and designed the experiment. HHD, YYY, LXZ, and XYY were involved in the experiment. SWC and QLS revised the manuscript and were responsible for the overall research. All authors read and approved the final manuscript.
The data sets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
This study was approved by the Ethical Committee of Jiangxi University of Traditional Chinese Medicine.
This project was supported by the National Natural Sciences Foundation of China (Nos. 20962014, 81473455, 31560038), the Traditional Chinese Medicine Sciences Foundation of Jiangxi (Nos. 2015A042, 2017A288), and the Sciences Foundation of Jiangxi University of Traditional Chinese Medicine (No. 201610412020).
State Key Laboratory of Food Science and Technology, Nanchang University, Nanchang, 330047, Jiangxi, China
Ai-Ren Zuo & Shu-Wen Cao
Jiangxi University of Traditional Chinese Medicine, Nanchang, Jiangxi, China
Ai-Ren Zuo, Huan-Huan Dong, Qing-Long Shu, Li-Xiang Zheng & Xiong-Ying Yu
Department of Chemistry, Nanchang University, Nanchang, Jiangxi, China
Yan-Ying Yu & Shu-Wen Cao
Ai-Ren Zuo
Huan-Huan Dong
Yan-Ying Yu
Qing-Long Shu
Li-Xiang Zheng
Xiong-Ying Yu
Shu-Wen Cao
Correspondence to Qing-Long Shu or Shu-Wen Cao.
Additional file
Minimum Standards of Reporting Checklist.
Zuo, AR., Dong, HH., Yu, YY. et al. The antityrosinase and antioxidant activities of flavonoids dominated by the number and location of phenolic hydroxyl groups. Chin Med 13, 51 (2018). https://doi.org/10.1186/s13020-018-0206-9
Accepted: 06 September 2018
Antityrosinase activity
Phenolic hydroxyl
Shikonin
Pharmacology and Applications of Chinese Medicine
Homomorphisms of Sparse Signed Graphs
Clément Charpentier
Reza Naserasr
Éric Sopena
The notion of homomorphism of signed graphs, introduced quite recently, provides better interplay with the notion of minor and is thus of high importance in graph coloring. A newer, but equivalent, definition of homomorphisms of signed graphs, proposed jointly by the second and third authors of this paper and Thomas Zaslavsky, leads to a basic no-homomorphism lemma. According to this definition, a signed graph $(G, \sigma)$ admits a homomorphism to a signed graph $(H, \pi)$ if there is a mapping $\phi$ from the vertices and edges of $G$ to the vertices and edges of $H$ (respectively) which preserves adjacencies, incidences, and signs of closed walks (i.e., the product of the signs of their edges). For $ij=00, 01, 10, 11$, let $g_{ij}(G,\sigma)$ be the length of a shortest nontrivial closed walk of $(G, \sigma)$ which is: positive and of even length for $ij=00$; positive and of odd length for $ij=01$; negative and of even length for $ij=10$; negative and of odd length for $ij=11$. For each $ij$, if there is no nontrivial closed walk of the corresponding type, we let $g_{ij}(G, \sigma)=\infty$. If $G$ is bipartite, then $g_{01}(G,\sigma)=g_{11}(G,\sigma)=\infty$. In this case, $g_{10}(G,\sigma)$ is certainly realized by a cycle of $G$, and it will be referred to as the \emph{unbalanced-girth} of $(G,\sigma)$.
It then follows that if $(G,\sigma)$ admits a homomorphism to $(H, \pi)$, then $g_{ij}(G, \sigma)\geq g_{ij}(H, \pi)$ for $ij \in \{00, 01,10,11\}$.
Studying the restriction of homomorphisms of signed graphs on sparse families, in this paper we first prove that for any given signed graph $(H, \pi)$, there exists a positive value of $\epsilon$ such that, if $G$ is a connected graph of maximum average degree less than $2+\epsilon$, and if $\sigma$ is a signature of $G$ such that $g_{ij}(G, \sigma)\geq g_{ij}(H, \pi)$ for all $ij \in \{00, 01,10,11\}$, then $(G, \sigma)$ admits a homomorphism to $(H, \pi)$.
For $(H, \pi)$ being the signed graph on $K_4$ with exactly one negative edge, we show that $\epsilon=\frac{4}{7}$ works and that this is the best possible value of $\epsilon$. For $(H, \pi)$ being the negative cycle of length $2g$, denoted $UC_{2g}$, we show that $\epsilon=\frac{1}{2g-1}$ works.
As a bipartite analogue of the Jaeger-Zhang conjecture, Naserasr, Sopena and Rollová conjectured in [Homomorphisms of signed graphs, {\em J. Graph Theory} 79 (2015)] that every signed bipartite planar graph $(G,\sigma)$ satisfying $g_{ij}(G,\sigma)\geq 4g-2$ admits a homomorphism to $UC_{2g}$. We show that $4g-2$ cannot be strengthened, and, supporting the conjecture, we prove it for planar signed bipartite graphs $(G,\sigma)$ satisfying the weaker condition $g_{ij}(G,\sigma)\geq 8g-2$.
In the course of our work, we also provide a duality theorem to decide whether a 2-edge-colored graph admits a homomorphism to a certain class of 2-edge-colored signed graphs or not.
A summary of "Probability and Statistics in Data Science using Python", offered by UCSD as DSE210x
Sep 2, 2020 • Chanseok Kang • 15 min read
Python edX Data_Science Statistics Probability
Elements, sets, and membership
Common Sets
Set Definition in python
Membership in python
Testing if Empty, Size
Some simple sets
Sets within Sets
Integer intervals
Real intervals
Divisibility
Set of Multiples
Intervals, Multiples in python
Visualizing Sets
Venn Diagram
Ven Diagram in Python
Set Relations
Relation Types
Strict Subsets
Belongs to vs. Subset of
Set relations in python
Disjoint
Subsets and Supersets
Strict subset
Strict superset
Tuples and products
Tuples and Ordered Pairs
Cartesian Products
Discrete Sets
Cartesian products in Python
Russell's Paradox
Sets in Sets
Exercises 2.1
Foundation, building blocks of sets
Can be anything
Collection of elements
Define : {specify elements}
Coin = {heads, tails}
Bits = {0, 1}
Die = {1, 2, 3, 4, 5, 6}
Implicit
Digits = {0, 1, $\dots$ 9}
Letters = {a, b, $\dots$, z}
Days = {Monday, $\dots$, Sunday}
{4-letter words} = {love, like, dear, $\dots$}
Integers {$\dots$, -2, -1, 0, 1, 2, $\dots$} $\mathbb{Z}$
Naturals {0, 1, 2, $\dots$ } $\mathbb{N}$
Positives {1, 2, 3, $\dots$} $\mathbb{P}$
Rationals {interger ratio m / n, n $\neq 0$} $\mathbb{Q}$
Reals {rational and irrational numbers} $\mathbb{R}$
Note: $\mathbb{Z}$ comes from the German word Zahl, meaning number
Usually, Sets are expressed with Upper case (A, B, etc), and Elements are expressed with Lower case (a, b, etc), as a convention.
If element $x$ is in a set $A$, it is a member of, or belongs to $A$, denoted $x$ $\in$ $A$.
$$ 0 \in \{0, 1\} \qquad 1 \in \{0, 1\} \qquad \pi \in \mathbb{R} $$
Equivalently, $A$ contains $x$, written $A$ $\ni$ $x$.
$$ \{0, 1\} \ni 0 \qquad \{0, 1\} \ni 1 \qquad \mathbb{R} \ni \pi $$
If $x$ is not in $A$, then $x$ is not a member, or does not belong to $A$, denoted $x$ $\notin$ $A$.
Equivalently, $A$ does not contain $x$, $A$ $\not\owns$ $x$.
Order: {0, 1} = {1, 0}
Repetition: {0, 1} = {0, 1, 1, 1}
If you want to consider:
Order matters: use ordered tuples ((0, 1) $\neq$ (1, 0))
Repetition matters: use multisets or bags
Empty set: contains no elements ($\emptyset$ or {}, $\forall x, x \notin \emptyset$)
Note: $\forall$ means 'all', or 'every'
Universal set: all possible elements ($\Omega$, $\forall x, x \in \Omega$)
$\Omega$ lets us consider only relevant elements. $\Omega$ can be $\mathbb{Z}$ (the integers) or the prime numbers, for example
$\Omega$ depends on application (temperature, text, etc...)
There is only one $\emptyset$ in every case: the set with no elements.
Define a set
{...} or set({...})
For empty set
set() or set({})
Note: In Python, {} is not an empty set; it is an empty dictionary.
$\in \quad \rightarrow$ in
$\notin \quad \rightarrow$ not in
S = set()
not S
T = {1, 2}
not T
len(S)
len(T)
Specify a set within a universe, or any other set: $$ \{x \in A \vert \dots \} $$ means the elements $x$ in $A$ such that the stated condition holds. It is sometimes written $$ \{x \in A : \dots \} $$
$$ \mathbb{N} = \{x \in \mathbb{Z} \vert x \geq 0 \} $$ $$ \mathbb{P} = \{x \in \mathbb{N} \vert x \gt 0 \} $$
This notation often expresses the solutions to equations,
$$ \{ x \in \mathbb{R} \vert x^2 \geq 0\} = \mathbb{R} $$ $$ \{ x \in \mathbb{R} : x^2 = 1 \} = \{-1, 1\} $$ $$ \{ x \in \mathbb{R} \vert x^2 = 0 \} = \{0\} $$
Note: a single-element set is called singleton
$$ \{ x \in \mathbb{R} \vert x^2 = -1 \} = \emptyset $$ $$ \{ x \in \mathbb{C} \vert x^2 = -1 \} = \{i, -i\} $$
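In Python, set-builder notation corresponds to a set comprehension; since $\mathbb{R}$ cannot be enumerated, a finite stand-in for the universe is used here.

A = set(range(-10, 11))               # finite stand-in for the universe
print({x for x in A if x**2 == 1})    # {-1, 1}
print({x for x in A if x**2 == 0})    # {0}, a singleton
print({x for x in A if x**2 == -1})   # set(), the empty set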
$$ \{m, \dots n\} = \{i \in \mathbb{Z} \vert m \leq i \leq n \} $$
It is the set of integers from $m$ to $n$, inclusive.
$$ \{3, \dots, 5\} = \{i \in \mathbb{Z} \vert 3 \leq i \leq 5 \} = \{3, 4, 5\} $$
$$ \{3, \dots, 4\} = \{i \in \mathbb{Z} \vert 3 \leq i \leq 4 \} = \{3, 4\} $$ $$ \{3, \dots, 3\} = \{i \in \mathbb{Z} \vert 3 \leq i \leq 3 \} = \{3\} $$ $$ \{3, \dots, 2\} = \{i \in \mathbb{Z} \vert 3 \leq i \leq 2 \} = \emptyset $$
For convention, $[n] = \{1, \dots, n\}$
$$[a, b] \qquad \rightarrow \{x \in \mathbb{R} \vert a \leq x \leq b \} $$ $$(a, b) \qquad \rightarrow \{x \in \mathbb{R} \vert a \lt x \lt b \} $$ $$[a, b) \qquad \rightarrow \{x \in \mathbb{R} \vert a \leq x \lt b \} $$ $$(a, b] \qquad \rightarrow \{x \in \mathbb{R} \vert a \lt x \leq b \} $$
For $m, n \in \mathbb{Z}$, if $n = c \cdot m$ for some $c \in \mathbb{Z}$, we say that $n$ is a multiple of $m$, or $m$ divides $n$, and write $m \vert n$
If no such $c$ exists, $m$ does not divide $n$, or $n$ is not a multiple of $m$ denoted $m \not\vert n$.
$$\text{There is no } c \in \mathbb{Z} \quad \text{such that} \quad 4 = c \cdot 3 \quad \rightarrow 3 \not\vert 4 $$ $$ 0 \not\vert n \quad \text{for any } n \neq 0 $$
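A small Python predicate for $m \vert n$ (note the guard: % by zero is an error, and $0 \vert n$ only for $n = 0$):

def divides(m, n):
    # True iff n == c*m for some integer c
    if m == 0:
        return n == 0
    return n % m == 0

print(divides(3, 4))    # False: 3 does not divide 4
print(divides(3, 12))   # True
print(divides(0, 5))    # False: 0 divides no n != 0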
Integer multiples of $m$ $$ m \in \mathbb{Z} \qquad {}_m\mathbb{Z} \overset{\underset{\mathrm{def}}{}}{=} \{i \in \mathbb{Z} : m \vert i \}$$
Example $$ \begin{aligned} {}_2\mathbb{Z} &= \{\dots, -4, -2, 0, 2, 4, \dots \} \overset{\underset{\mathrm{def}}{}}{=} \mathbb{E} \quad \rightarrow \text{even number} \\ {}_1\mathbb{Z} &= \{\dots , -2, -1, 0, 1, 2, \dots \} = \mathbb{Z} \\ {}_0\mathbb{Z} &= \{0\} \end{aligned} $$
Multiples of $m$ in $\{1..n\}$ $$ m \in \mathbb{Z}, n \in \mathbb{P} \qquad {}_m[n] \overset{\underset{\mathrm{def}}{}}{=} \{i \in [n] : m \vert i\}$$
$$ \begin{aligned} {}_3[13] &= \{i \in \{1, \dots, 13\} : 3 \vert i \} = \{3, 6, 9, 12\} \\ {}_7[13] &= \{7\} \\ {}_1[13] &= [13] \\ {}_{14}[13] &= {}_0[13] = \emptyset \end{aligned} $$
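A small Python helper (the name is ours) reproduces these examples:

```python
def multiples_in(m, n):
    # {}_m[n] = {i in {1, ..., n} : m | i}; m == 0 yields the empty set
    return {i for i in range(1, n + 1) if m != 0 and i % m == 0}

print(multiples_in(3, 13))    # {3, 6, 9, 12}
print(multiples_in(7, 13))    # {7}
print(multiples_in(14, 13))   # set()
```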
$\{0,\dots, n-1\} \quad \rightarrow$ range(n)
$\{m, \dots, n-1\} \quad \rightarrow$ range(m, n)
$\{m, m+d, m+2d, \dots \} \cap \{m, \dots, n-1\} \quad \rightarrow$ range(m, n, d)
set(range(3))
{0, 1, 2}
set(range(2, 5))
{2, 3, 4}
set(range(2, 12, 3))
{2, 5, 8, 11}
Venn diagrams: developed by John Venn
Used to visualize Sets, Regions, Elements, Points
!pip install matplotlib_venn
import matplotlib_venn as venn
S = {1, 2, 3}
T = {0, 2, -1, 5}
venn.venn2([S, T], set_labels=('S', 'T'));
U = { 10, 8, 0, 2, -1}
venn.venn3([S, T, U], set_labels=('S', 'T', 'U'));
Sets $A$ and $B$ are equal, denoted $A = B$, if they have exactly the same elements
$$\{0, 1\} = \{1, 0\}$$
If $A$ and $B$ are not equal, they are different, denoted $A \neq B$
$$\{0, 1\} \neq \{1, 2\}$$
All elements must be identical: $\{1, 2, 4\} = \{4, 1, 2\}$
One different element enough: $\{1, 2, 4\} \neq \{1, 2, 4, 8\}$
Two sets intersect if they share at least one common element. Mathematically, this can be expressed as:
$$ \exists x, \quad x \in A \wedge x \in B $$
Two sets are disjoint if they share no elements.
$$ \neg \exists x, \quad x \in A \wedge x \in B $$
$\emptyset$ disjoint from any set
Non-empty $\Omega$ intersects every set
A set intersects itself if and only if it is non-empty.
Several sets
intersect if all share a common element
mutually disjoint if every two are disjoint (see the sketch below)
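A minimal sketch of both notions (function names are ours):

```python
from itertools import combinations

def all_intersect(*sets):
    # True if all of the sets share at least one common element
    return len(set.intersection(*sets)) > 0

def mutually_disjoint(*sets):
    # True if every pair of the sets shares no elements
    return all(a.isdisjoint(b) for a, b in combinations(sets, 2))

print(all_intersect({1, 2}, {2, 3}, {2, 4}))      # True
print(mutually_disjoint({1}, {2}, {3}))           # True
print(mutually_disjoint({1, 2}, {2, 3}, {4}))     # False
```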
It generalizes $\leq$
If every element in $A$ is also in $B$, then $A$ is a subset of $B$, denoted $A \subseteq B$
$$ \{0\} \subseteq \{0, 1\} \\ \{0\} \subseteq \{0\}$$
Equivalently, $B$ is a superset of, or contains, $A$, denoted $B \supseteq A$
$$ \{0, 1\} \supseteq \{0\} $$
If $A$ has an element that's not in $B$, then $A$ is not a subset of $B$, denoted $A \not \subseteq B$, or $B \not \supseteq A$
$$ \{0, 1\} \not \subseteq \{1, 2\} \\ \{1, 2\} \not \supseteq \{0, 1\} $$
$\mathbb{P} \subseteq \mathbb{N} \subseteq \mathbb{Z} \subseteq \mathbb{Q} \subseteq \mathbb{R}$
$\emptyset \subseteq A \subseteq \Omega$, and $A \subseteq A$
(transitivity) $A \subseteq B$ and $B \subseteq C \rightarrow A \subseteq C$
Note: $\subseteq$ is called transitive
$A \subseteq B$ and $B \subseteq A \rightarrow A = B$
It generalizes $\lt$
If $A \subseteq B$ and $A \neq B$, $A$ is a strict subset of $B$, denoted $A \subset B$ and $B$ is a strict superset of $A$, denoted $B \supset A$.
$$ \{0\} \subset \{0, 1\} \\ \{0, 1\} \supset \{0\} $$
If $A$ is not a strict subset of $B$, we write $A \not \subset B$ or $B \not \supset A$
Reason: either $A \not \subseteq B$ or $A = B$
$\in$ (Belongs to)
Relation between an element and a set
$x \in A$: element $x$ belongs to, or is contained in, set $A$
ex) $\{0, 1\}$ has two elements: 0 and 1
$$ \rightarrow 0 \in \{0, 1\} , \{0\} \not \in \{0, 1\} $$
$\subseteq$
Relation between two sets
$A \subseteq B$ : $A$ is a subset of set $B$
$$ \{0\} \subseteq \{0, 1\} $$
0 is an element of $\{0, 1\}$, but 0 is not a set. ($0 \not \subseteq \{0, 1\}$)
S1 = {0, 1}
S2 = set({0, 1})
S3 = {1, 0, 1}
print(S1 == T)
print(S1 == S2)
print(S1 != S2)
print(S1 != T)
S1.isdisjoint(T)
S1.isdisjoint({2})
zero = {0}
zplus = {0, 1}
zminus = {0, -1}
zminus <= zplus
zero.issubset(zminus)
zplus < zminus
zero < zminus
zplus >= zminus
zplus.issuperset(zero)
zminus > zminus
zplus > zero
Order and repetition do not matter $$ \{a, b, c\} = \{b, c, a\} $$
Both order and repetition matter $$ (a, b, c) \neq (b, c, a) \\ (a, a, a) \neq (a) $$
$n$-tuple
Tuple with $n$ elements $$ (a_1, a_2, \dots, a_n) $$
2-tuple
Ordered pair $$ (3, 7) $$
The cartesian product of $A$ and $B$ is the set $A \times B$ of ordered pairs $(a, b)$ where $a \in A$ and $b \in B$. Mathematically,
$$ A \times B = \{(a, b): a \in A, b \in B\} $$
$A \times A = A^2$
$\mathbb{R}^2 = \{(x, y): x, y \in \mathbb{R}\} \quad \rightarrow$ Cartesian Plane
$A, B \subseteq \mathbb{R} \quad \rightarrow A \times B \subseteq \mathbb{R}^2$
$$ A = [0, 2], B=[1, 4] \\ A \times B = \{(x, y): x \in [0, 2], y \in [1, 4]\} $$
A similar but simpler example:
$$ \begin{aligned} \{a, b\} \times \{1, 2, 3\} &= \{(x, y): x \in \{a, b\}, y \in \{1, 2, 3\}\} \\ &= \{ (a, 1), (a, 2), (a, 3), (b, 1), (b, 2), (b, 3)\} \end{aligned}$$
1st coordinate: vertical, 2nd coordinate: horizontal
$A \times \emptyset = \emptyset \times A = \emptyset$
$A \times (B \cup C) = A \times B \cup A \times C$
$A \times (B \cap C) = A \times B \cap A \times C$
$A \times (B - C) = A \times B - A \times C$ (see the check below)
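A quick numerical check of these identities for small sets (a sketch, not a proof):

```python
from itertools import product

def cart(X, Y):
    # the cartesian product X x Y as a set of ordered pairs
    return set(product(X, Y))

A, B, C = {1, 2}, {2, 3}, {3, 4}

print(cart(A, B | C) == cart(A, B) | cart(A, C))   # True
print(cart(A, B & C) == cart(A, B) & cart(A, C))   # True
print(cart(A, B - C) == cart(A, B) - cart(A, C))   # True
```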
Use the product function from the itertools library:
from itertools import product
Faces = {'J', 'Q', 'K'}
Suits = {'\u2666\uFE0F', '\u2660\uFE0F'}
for i in product(Faces, Suits):
print(i)
('K', '♠️')
('K', '♦️')
('Q', '♠️')
('Q', '♦️')
('J', '♠️')
('J', '♦️')
Sets can be elements
Every set is a subset of itself
$$ \{0\} \subseteq \{0\} $$
Can a set belong to (be an element of) itself? $ \rightarrow S \in S$
Typically, sets do not belong to themselves $\quad \{0\} \not \in \{0\} , \emptyset \not \in \emptyset $
But some sets can be defined to belong to themselves! (infinite recursion)
Some sets $\in$ themselves, others don't ($\{0\}$)
Define a set that cannot exist
$$ R = \{\text{sets that don't belong to themselves}\} = \{S: S \not \in S\} $$
$R \in R \quad \rightarrow R \not \in R$ (contradiction)
$R \not \in R \quad \rightarrow R \in R$ (contradiction)
If R existed, then both $R \in R$ and $R \not \in R$ would hold
R can be defined but cannot exist!
ex) The set that contains only the empty set $\emptyset$ is not empty
De Morgan's first law states the following for any two sets $A$ and $B$ $$(A\cup B)^c = A^c\cap B^c$$
In the following two exercises we calculate $(A\cup B)^c$ in two different ways. Both functions must take $A$, $B$ and the universal set $U$ as their inputs.
Write the function complement_of_union that first determines $A\cup B$ and then evaluates the complement of this set. Output the tuple: $\begin{pmatrix}A\cup B,\, (A\cup B)^c\end{pmatrix}$.
**Code**
A = {1, 2, 3}
B = {3, -6, 2, 0}
U = {-10, -9, -8, -7, -6, 0, 1, 2, 3, 4}
complement_of_union(A, B, U)
**Output**
({-6, 0, 1, 2, 3}, {-10, -9, -8, -7, 4})
def complement_of_union(A, B, U):
    # inputs: A, B and U are of type 'set'
    # output: a tuple of the form (set, set)
    union = A.union(B)
    complement_union = U.difference(union)
    return (union, complement_union)
A = {1, 2, 3, 4, 5}
B = {0, 2, -6, 5, 8, 9}
U = A|B|{-3, 7, 10, -4}
assert( complement_of_union(A, B, U) == ({-6, 0, 1, 2, 3, 4, 5, 8, 9}, {-4, -3, 7, 10}) )
Write the function intersection_of_complements that first determines $A^c$ and $B^c$ and then evaluates the intersection of their complements. Output the tuple: $\begin{pmatrix}A^c, \, A^c\cap B^c\end{pmatrix}$
intersection_of_complements(A, B, U)
({-10, -9, -8, -7, -6, 0, 4}, {-10, -9, -8, -7, 4})
def intersection_of_complements(A, B, U):
    # inputs: A, B and U are of type 'set'
    # output: a tuple of the form (set, set)
    complement_a = U.difference(A)
    complement_b = U.difference(B)
    complement_intersect = complement_a.intersection(complement_b)
    return (complement_a, complement_intersect)
assert( intersection_of_complements(A, B, U) == ({-6, -4, -3, 0, 7, 8, 9, 10}, {-4, -3, 7, 10}) )
This problem illustrates a property of cartesian products of unions of two or more sets. For four sets $A$, $B$, $S$ and $T$, the following holds:
$$(A\cup B)\times(S\cup T) = (A\times S)\cup(A\times T)\cup(B\times S)\cup(B\times T)$$
Write the following functions to determine $(A\cup B)\times(S\cup T)$ in two different ways.
Write function product_of_unions that first determines $(A\cup B)$ and $(S\cup T)$ and then evaluates the cartesian products of these unions. Output the tuple $\begin{pmatrix}(A\cup B),\, (A\cup B)\times(S\cup T)\end{pmatrix}$.
B = {1, 3}
S = {-1, 0}
T = {0, 10}
product_of_unions(A, B, S, T)
({1, 2, 3},
{(1, -1), (1, 0), (1, 10), (2, -1), (2, 0), (2, 10), (3, -1), (3, 0), (3, 10)})
def product_of_unions(A, B, S, T):
    # inputs: A, B, S and T are sets
    # output: a tuple of the form (set, set)
    union_a_b = A.union(B)
    union_s_t = S.union(T)
    product_a_b_s_t = set(product(union_a_b, union_s_t))
    return (union_a_b, product_a_b_s_t)
A = {5}
S = {-1, 0, 1}
assert( product_of_unions(A, B, S, T) == \
({5, 6}, {(5, -1), (5, 0), (5, 1), (5, 2), (6, -1), (6, 0), (6, 1), (6, 2)}) )
Write a function union_of_products that first determines $(A\times S)$ and the other three cartesian products that appear on the right hand side of the identity above, then evaluates the union of these cartesian products. Output the tuple $\begin{pmatrix}(A\times S),\, (A\times S)\cup(A\times T)\cup(B\times S)\cup(B\times T)\end{pmatrix}$.
union_of_products(A, B, S, T)
({(1, -1), (1, 0), (2, -1), (2, 0)},
def union_of_products(A, B, S, T):
    # inputs: A, B, S and T are sets
    # output: a tuple of the form (set, set)
    product_a_s = set(product(A, S))
    product_a_t = set(product(A, T))
    product_b_s = set(product(B, S))
    product_b_t = set(product(B, T))
    union_all = product_a_s.union(product_a_t, product_b_s, product_b_t)
    return (product_a_s, union_all)
assert( union_of_products(A, B, S, T) == \
({(5, -1), (5, 0), (5, 1)}, \
{(5, -1), (5, 0), (5, 1), (5, 2), (6, -1), (6, 0), (6, 1), (6, 2)}) )
Sebastian Bahamonde
Postdoctoral Researcher in Theoretical Physics at Tokyo Institute of Technology, Japan.
Modified theories of gravity and their applications to cosmology and astrophysics
One usually speaks of modified gravity when the Einstein-Hilbert action is modified or extended. One can modify it by changing the matter/energy content (the right-hand side of the Einstein field equations) or the parts related to the geometry (the left-hand side of the Einstein field equations). Usually, changing the matter/energy content alone is not considered part of modified gravity, since the geometrical gravitational theory stays the same and only new kinds of sources are considered. This is an alternative way to address some issues in cosmology and astrophysics.
There are different ways of modifying General Relativity. In the figure below one can see a bird's-eye view of the spectrum of theories deviating from GR. Together with the different ways of modifying GR, some of the most important example theories are depicted as well. The figure is organized around breaking some of the conditions of Lovelock's theorem. It turns out that some parts of the figure are connected. For example, some theories which add invariants can be rewritten as scalar-tensor theories. Furthermore, theories can belong to multiple branches of the figure.
For example, quintessence models introduce a scalar field to explain the late-time accelerating behaviour of the Universe. In this perspective, the accelerated expansion at late times is due to some field acting as a source on the right-hand side of the Einstein field equations. This, however, is not the only possible approach to achieve a theoretical description of cosmic acceleration. Another option is to consider cosmic acceleration as a breakdown of general relativity at cosmological scales. In other words, instead of introducing some new matter fluid, one changes the left-hand side of the Einstein field equations, i.e. the pure gravitational sector, in order to obtain the needed late-time accelerated expansion. Modifications of general relativity have a history almost as long as that of general relativity itself. With the development of semiclassical approaches to quantum gravity and, subsequently, of unification schemes like supergravity and M-theory, it was realised that in many cases low-energy versions of these theories correspond to modifications of general relativity. This led to an increased interest in such models, and nowadays the exploration of modifications of GR occupies a significant part of the research in relativistic gravitation.
I have been interested in studying different modified theories of gravity with the aim of understanding the dark energy and dark matter components of the Universe. The cosmological equations describing the dynamics of a homogeneous and isotropic Universe are systems of ordinary differential equations, and one of the most elegant ways they can be investigated is by casting them into the form of dynamical systems. This allows the use of powerful analytical and numerical methods to gain a quantitative understanding of the cosmological dynamics of the models under study. We published a long review on this topic in Physics Reports.
Representation of some possible ways of modifying General Relativity
Teleparallel theories of gravity
Soon after the original formulation of this geometrical theory of gravity, it was noted that there exists an alternative geometrical formulation that is based on a globally flat geometry with torsion. The key mathematical result to this approach goes back to Weitzenböck who noted that it is indeed possible to choose a connection such that the curvature vanishes everywhere. This formulation gives equivalent field equations to those of general relativity and we refer to this as the teleparallel formulation. This naming convention stems from the fact that the notion of parallelism is global instead of local on flat manifolds.
Clearly, the Einstein-Hilbert action can now be represented in different ways, either using the Ricci scalar or the torsion scalar, consequently giving identical equations of motion, since the Teleparallel equivalent of GR action is $$\displaystyle S_{\rm TEGR}={1 \over 16\pi G}\int T e\,\mathrm {d} ^{4}x\;,$$ where $e=\sqrt{-g}=\textrm{det}(e^a{}_\mu)$ and $T$ is the so-called torsion scalar, which is related to the Ricci scalar $\bar{R}$ via $$\bar{R}= -T +\frac{2}{e}\partial_{\mu}(e T^\mu) =-T+B\,.$$ It is easy to notice that the TEGR action gives the Einstein field equations, since $T$ and $\bar{R}$ differ only by a boundary term $B$. I have been interested in studying such theories and their extensions in order to see how they differ from the standard modifications of General Relativity. We wrote a review on Teleparallel gravity and its applications (see here)
Metric-Affine theories of gravity
One possible route to try to tackle these issues is to extend the role of the geometric structure present in the theoretical framework towards a full post-Riemannian description of gravity with dynamical metric, torsion and nonmetricity tensors. Such an extension can be related to the existence of a new fundamental symmetry in nature by applying the gauge principles not only to the external rotations and translations but also to the scale and shear transformations, which leads to the formulation of the Metric-Affine Gauge theory of gravity (MAG). From a geometrical point of view, the standard framework of GR can be consistently formulated as a particular case of a more general class of metric-affine theories, where the geometry of the space-time is described by a metric, a coframe and an independent linear connection [4]. Accordingly, the affine connection encodes additional post-Riemannian degrees of freedom, which indeed represent the torsion and nonmetricity deformations of an affinely connected metric space-time $$T^{\lambda}\,_{\mu \nu}=2\tilde{\Gamma}^{\lambda}\,_{[\mu \nu]}\,,\quad Q_{\lambda \mu \nu}=\tilde{\nabla}_{\lambda}g_{\mu \nu}\,.$$ The components of the affine connection can then be split into independent pieces as follows: $$ \tilde{\Gamma}^{\lambda}\,_{\mu \nu}=\Gamma^{\lambda}\,_{\mu \nu}+K^{\lambda}\,_{\mu \nu}+L^{\lambda}\,_{\mu \nu}\,, $$ where $K^{\lambda}\,_{\mu \nu}$ is a metric-compatible contortion tensor containing torsion and $L^{\lambda}\,_{\mu \nu}$ a disformation tensor depending on nonmetricity: $$ K^{\lambda}\,_{\mu \nu}=\frac{1}{2}\left(T^{\lambda}\,_{\mu \nu}-T_{\mu}\,^{\lambda}\,_{\nu}-T_{\nu}\,^{\lambda}\,_{\mu}\right)\,,\quad L^{\lambda}\,_{\mu \nu}=\frac{1}{2}\left(Q^{\lambda}\,_{\mu \nu}-Q_{\mu}\,^{\lambda}\,_{\nu}-Q_{\nu}\,^{\lambda}\,_{\mu}\right)\,. $$ The resulting geometric structure is then provided by a metric tensor and an asymmetric affine connection which in general does not preserve the lengths and angles of vectors under parallel transport. I have been interested in studying different MAG theories, and we have obtained black hole solutions within these theories with dynamical torsion and nonmetricity tensors.
The following is a list of colleagues with whom I am collaborating or worked together in the past:
Mustapha Azreg-Aïnou
Christian G. Böhmer
Kazuharu Bamba
David Benisty
Salvatore Capozziello
Sante Carloni
Ugur Camci
Mauricio Cataldo
Edmund J. Copeland
Konstantinos F. Dialektopoulos
Celia Escamilla-Rivera
Mir Faizal
Wei Fang
Viktor Gakis
Jorge Gigante Valcarcel
Eduardo I. Guendelman
Manuel Hohmann
Mubasher Jamil
Laur Järv
Kimet Jusufi
Tomi S. Koivisto
Martin Krŝŝák
Jackson Levi Said
Francisco S.N. Lobo
Mihai Marciu
Patrizio Neff
Rafael da C. Nunes
Sergei D. Odintsov
Vasilis K. Oikonomou
Christian Pfeifer
Petar Pavlovic
Prabir Rudra
Diego Saez-Chillon Gomez
Emmanuel N. Saridakis
Marko Sossich
Nicola Tamanini
Muhammad Zubair
See in inSPIRE the complete list of Collaborators
Non-stationary analysis of the frequency and intensity of heavy precipitation over Canada and their relations to large-scale climate patterns
Xuezhi Tan1,2 &
Thian Yew Gan1
Climate Dynamics volume 48, pages 2983–3001 (2017)
In recent years, because the frequency and severity of floods have increased across Canada, it is important to understand the characteristics of Canadian heavy precipitation. Long-term precipitation data from 463 gauging stations across Canada were analyzed using the non-stationary generalized extreme value (GEV), Poisson and generalized Pareto (GP) distributions. Time-varying covariates that represent large-scale climate patterns such as El Niño Southern Oscillation (ENSO), North Atlantic Oscillation (NAO), Pacific decadal oscillation (PDO) and North Pacific Oscillation (NP) were incorporated into the parameters of the GEV, Poisson and GP distributions. Results show that GEV distributions tend to under-estimate annual maximum daily precipitation (AMP) of western and eastern coastal regions of Canada, compared to GP distributions. Poisson regressions show that temporal clusters of heavy precipitation events in Canada are related to large-scale climate patterns. By modeling AMP time series with non-stationary GEV and heavy precipitation with non-stationary GP distributions, it is evident that AMP and heavy precipitation of Canada show strong non-stationarities (abrupt and slowly varying changes), likely because of the influence of large-scale climate patterns. AMP in southwestern coastal regions, the southern Canadian Prairies and the Great Lakes tends to be higher in El Niño than in La Niña years, while AMP of other regions of Canada tends to be lower in El Niño than in La Niña years. The influence of ENSO on heavy precipitation was spatially consistent but stronger than on AMP. The effect of PDO, NAO and NP on extreme precipitation is also statistically significant at some stations across Canada.
Ahmari H, Blais E-L, Greshuk J (2015) The 2014 flood event in the Assiniboine River Basin: causes, assessment and damage. Can Water Resour J. doi:10.1080/07011784.2015.1070695
Alexander LV et al (2006) Global observed changes in daily climate extremes of temperature and precipitation. J Geophys Res 111:D05109. doi:10.1029/2005JD006290
Allan RP, Soden BJ (2008) Atmospheric warming and the amplification of precipitation extremes. Science 321:1481–1484
Blais E-L, Greshuk J, Stadnyk T (2015) The 2011 flood event in the Assiniboine River Basin: causes, assessment and damages. Can Water Resour J. doi:10.1080/07011784.2015.1046139
Bond NA, Harrison DE (2000) The Pacific decadal oscillation, air–sea interaction and central north Pacific winter atmospheric regimes. Geophys Res Lett 27:731–734
Bonsal B, Shabbar A (2008) Impacts of large-scale circulation variability on low streamflows over Canada: a review. Can Water Resour J 33:137–154
Burn DH, Taleghani A (2013) Estimates of changes in design rainfall values for Canada. Hydrol Process 27:1590–1599
Buttle JM, Allen DM, Caissie D, Davison B, Hayashi M, Peters DL, Pomeroy JW, Simonovic S, St-Hilaire A, Whitfield PH (2016) Flood processes in Canada: regional and special aspects. Can Water Resour J. doi:10.1080/07011784.2015.1131629
Cameron AC, Trivedi PK (1990) Regression-based tests for overdispersion in the poisson model. J Econom 46:347–364
Coles S (2001) An introduction to statistical modeling of extreme values. Springer, London
Coulibaly P (2006) Spatial and temporal variability of Canadian seasonal precipitation (1900–2000). Adv Water Resour 29:1846–1865
Coulibaly P, Burn DH (2005) Spatial and temporal variability of Canadian seasonal streamflows. J Clim 18:191–210
Environment Canada (2014) Canada's top ten weather stories for 2013. http://www.ec.gc.ca/meteo-weather/default.asp?lang=En&n=5BA5EAFC-1
Franzke CL (2013) Persistent regimes and extreme events of the North Atlantic atmospheric circulation. Philos Trans A Math Phys Eng Sci 371:20110471
Gan TY, Gobena AK, Wang Q (2007) Precipitation of southwestern Canada: wavelet, scaling, multifractal analysis, and teleconnection to climate anomalies. J Geophys Res 112:D10110. doi:10.1029/2006JD007157
Gilleland E, Katz RW (2011) New software to analyze how extremes change over time. Eos Trans Am Geophys Union 92(2):13–14
Government of Alberta (2014) Alberta 2013–2014 flood recovery update. http://alberta.ca/Flood-recovery-update.cfm
Groisman PY et al (1999) Changes in the probability of heavy precipitation: important indicators of climatic change. Clim Change 42:243–283
Higgins RW, Leetmaa A, Kousky VE (2002) Relationships between climate variability and winter temperature extremes in the United States. J Clim 15:1555–1572
Hurrell JW, Loon HV (1997) Decadal variations in climate associated with the North Atlantic oscillation. Clim Change 36:301–326
Jiang R, Gan TY, Xie J, Wang N (2014) Spatiotemporal variability of Alberta's seasonal precipitation, their teleconnection with large-scale climate anomalies and sea surface temperature. Int J Climatol 34:2899–2917
Kalnay E et al (1996) The NCEP/NCAR 40-year reanalysis project. Bull Am Meteorol Soc 77:437–471
Kenyon J, Hegerl GC (2008) Influence of modes of climate variability on global temperature extremes. J Clim 21:3872–3889
Khaliq MN, Ouarda TBMJ, Ondo JC, Gachon P, Bobée B (2006) Frequency analysis of a sequence of dependent and/or non-stationary hydro-meteorological observations: a review. J Hydrol 329:534–552
Kunkel KE (2003) North American trends in extreme precipitation. Nat Hazards 29:291–305
Kunkel KE, Andsager K (1999) Long-term trends in extreme precipitation events over the conterminous United States and Canada. J Clim 12:2515–2572
Kuo C-C, Gan TY, Gizaw M (2015) Potential impact of climate change on intensity duration frequency curves of central Alberta. Clim Change 130:115–129
Kyselý J, Picek J, Beranová R (2010) Estimating extremes in climate change simulations using the peaks-over-threshold method with a non-stationary threshold. Global Planet Change 72:55–68
Mailhot A, Kingumbi A, Talbot G, Poulin A (2010) Future changes in intensity and seasonal pattern of occurrence of daily and multi-day annual maximum precipitation over Canada. J Hydrol 388:173–185
Mailier PJ, Stephenson DB, Ferro CAT (2006) Serial clustering of extratropical cyclones. Mon Weather Rev 134:2224–2240
Mallakpour I, Villarini G (2015) The changing nature of flooding across the central United States. Nat Clim Change 5:250–254
Mantua NJ, Hare SR (2002) The Pacific decadal oscillation. J Oceanogr 58:35–44
Mantua NJ, Hare SR, Zhang Y, Wallace JM, Francis RC (1997) A pacific interdecadal climate oscillation with impacts on salmon production. Bull Am Meteorol Soc 78:1069–1079
Maraun D, Rust HW, Osborn TJ (2010) Synoptic airflow and UK daily precipitation extremes. Extremes 13:133–153
Mekis E, Hogg WD (1999) Rehabilitation and analysis of Canadian daily precipitation time series. Atmos Ocean 37:53–85
Mekis É, Vincent LA (2011) An overview of the second generation adjusted daily precipitation dataset for trend analysis in Canada. Atmos Ocean 49:163–177
Milrad SM, Gyakum JR, Atallah EH (2015) A meteorological analysis of the 2013 Alberta Flood: antecedent large-scale flow pattern and synoptic-dynamic characteristics. Mon Weather Rev 143(7):2817–2841. doi:10.1175/mwr-d-14-00236.1
Min S-K, Cai W, Whetton P (2013) Influence of climate variability on seasonal extremes over Australia. J Geophys Res Atmos 118:643–654
Mladjic B, Sushama L, Khaliq MN, Laprise R, Caya D, Roy R (2011) Canadian RCM projected changes to extreme precipitation characteristics over Canada. J Clim 24:2565–2584
Muggeo VM (2003) Estimating regression models with unknown break-points. Stat Med 22:3055–3071
Newton B, Burrell BC (2015) The April–May 2008 flood event in the Saint John River Basin: causes, assessment and damages. Can Water Resour J. doi:10.1080/07011784.2015.1009950
Peterson TC, Zhang X, Brunet-India M, Vázquez-Aguirre JL (2008) Changes in North American extremes derived from daily weather data. J Geophys Res 113:D07113. doi:10.1029/2007JD009453
Pinto JG, Bellenbaum N, Karremann MK, Della-Marta PM (2013) Serial clustering of extratropical cyclones over the North Atlantic and Europe under recent and future climate conditions. J Geophys Res Atmos 118:12476–412485
Pomeroy JW, Stewart RE, Whitfield PH (2015) The 2013 flood event in the South Saskatchewan and Elk River basins: causes, assessment and damages. Can Water Resour J. doi:10.1080/07011784.2015.1089190
Rayner NA et al (2003) Global analyses of sea surface temperature, sea ice, and night marine air temperature since the late nineteenth century. J Geophys Res Atmos 108:4407. doi:10.1029/2002JD002670
Ropelewski CF, Halpert MS (1986) North American precipitation and temperature patterns associated with the El Niño/Southern Oscillation (ENSO). Mon Weather Rev 114:2352–2362
Ropelewski CF, Jones PD (1987) An extension of the Tahiti-Darwin southern oscillation index. Mon Weather Rev 115:2161–2165
Saad C, St-Hilaire A, Gachon P, El Adlouni S (2015) The 2011 flood event in the Richelieu River basin: causes, assessment and damages. Can Water Resour J. doi:10.1080/07011784.2014.999825
Seager R, Naik N, Vogel L (2012) Does global warming cause intensified interannual hydroclimate variability? J Clim 25:3355–3372
Shabbar A, Skinner W (2004) Summer drought patterns in canada and the relationship to global sea surface temperatures. J Clim 17:2866–2880
Shabbar A, Bonsal B, Khandekar M (1997) Canadian precipitation patterns associated with the Southern Oscillation. J Clim 10:3016–3207
Shang H, Yan J, Zhang X (2011) El Niño–Southern Oscillation influence on winter maximum daily precipitation in California in a spatial model. Water Resour Res 47:W11507
Shook K, Pomeroy J (2012) Changes in the hydrological character of rainfall on the Canadian prairies. Hydrol Process 26(12):1752–1766. doi:10.1002/hyp.9383
Sillmann J, Croci-Maspoli M, Kallache M, Katz RW (2011) Extreme cold winter temperatures in Europe under the influence of north Atlantic atmospheric blocking. J Clim 24:5899–5913
St. Jacques J-M, Sauchyn DJ, Zhao Y (2010) Northern Rocky Mountain streamflow records: global warming trends, human impacts or natural variability? Geophys Res Lett 37:6. doi:10.1029/2009GL042045
St. Jacques J-M, Huang YA, Zhao Y, Lapp SL, Sauchyn DJ (2014) Detection and attribution of variability and trends in streamflow records from the Canadian Prairie Provinces. Can Water Resour J 39(3):270–284. doi:10.1080/07011784.2014.942575
Stadnyk T, Dow K, Wazney L, Blais E-L (2015) The 2011 flood event in the Red River Basin: causes, assessment and damages. Can Water Resour J 1–9. doi:10.1080/07011784.2015.1008048
Sugahara S, da Rocha RP, Silveira R (2009) Non-stationary frequency analysis of extreme daily rainfall in Sao Paulo, Brazil. Int J Climatol 29:1339–1349
Szeto K, Brimelow JC, Gysbers P, Stewart RE (2015) The 2014 extreme flood on the southeastern Canadian prairies. Bull Am Meteorol Soc 96(12):520–552
Tramblay Y, Neppel L, Carreau J, Najib K (2013) Non-stationary frequency analysis of heavy rainfall events in southern France. Hydrolog Sci J 58:280–294
Trenberth KE (1997) The definition of El Niño. Bull Am Meteorol Soc 78:2771–2777
Trenberth KE, Hurrell JW (1994) Decadal atmosphere-ocean variations in the Pacific. Clim Dyn 9:303–319
Villarini G, Smith JA, Baeck ML, Vitolo R, Stephenson DB, Krajewski WF (2011) On the frequency of heavy rainfall for the Midwest of the United States. J Hydrol 400:103–120
Villarini G, Smith JA, Serinaldi F, Ntelekos AA, Schwarz U (2012) Analyses of extreme flooding in Austria over the period 1951–2006. Int J Climatol 32:1178–1192
Villarini G, Smith JA, Vecchi GA (2013) Changing frequency of heavy rainfall over the central United States. J Clim 26:351–357
Vincent LA, Mekis É (2006) Changes in daily and extreme temperature and precipitation indices for Canada over the twentieth century. Atmos Ocean 44:177–193
Wazney L, Clark SP (2015) The 2009 flood event in the Red River Basin: causes, assessment and damages. Can Water Resour J. doi:10.1080/07011784.2015.1009949
Wilks DS (2006) On "field significance" and the false discovery rate. J Appl Meteorol Clim 45:1181–1189
Zhang X, Vincent LA, Hogg WD, Niitsoo A (2000) Temperature and precipitation trends in Canada during the 20th century. Atmos Ocean 38:395–429
Zhang X, Hogg WD, Mekis É (2001) Spatial and temporal characteristics of heavy precipitation events over Canada. J Clim 14:1923–1936
Zhang X, Wang J, Zwiers FW, Groisman PY (2010) The influence of large-scale climate variability on winter maximum daily precipitation over North America. J Clim 23:2902–2915
The authors thank the two anonymous reviewers for their constructive suggestions, which significantly improved the paper. The first author was partly funded by the Chinese Scholarship Council (CSC) of China and by the University of Alberta. We are grateful to Éva Mekis from the Climate Research Division of Environment Canada for providing the precipitation data used in this study.
Department of Civil and Environmental Engineering, University of Alberta, Edmonton, AB, T6G 2W2, Canada
Xuezhi Tan & Thian Yew Gan
State Key Laboratory of Water Resources and Hydropower Engineering Science, Wuhan University, Wuhan, China
Xuezhi Tan
Thian Yew Gan
Correspondence to Thian Yew Gan.
Supplementary material 1 (DOCX 10493 kb)
Appendix 1: GEV distribution
Let $M = \max\{Z_1, \ldots, Z_n\}$ for large $n$, where $Z_1, Z_2, \ldots$ is a sequence of independent (or weakly dependent) identically distributed observations. In this study, $Z_t$ represents the daily observed precipitation recorded at a particular station on day $t$, and $M$ is the AMP. Asymptotic results state that under some regularity conditions, normalizing sequences $\{a_n\}$ and $\{b_n > 0\}$ can be found such that (Coles 2001):
$$\Pr \left( \frac{M - a_{n}}{b_{n}} \le y \right) \to {\text{GEV}}\left( y \right)$$
as n → ∞, for a non-degenerate distribution function, which is the GEV distribution with the cumulative distribution function:
$${\text{GEV}}\left( y;\mu ,\sigma ,\xi \right) = \begin{cases} \exp \left\{ - \left[ 1 + \xi \dfrac{y - \mu }{\sigma } \right]^{ - 1/\xi } \right\} & \xi \ne 0 \\ \exp \left[ - \exp \left( - \dfrac{y - \mu }{\sigma } \right) \right] & \xi = 0 \end{cases}$$
where 1 + ξ(y − μ)/σ > 0, μ, σ and ξ are the location, scale, and shape parameters, respectively. The shape parameter ξ determines the type of tail behavior. ξ < 0, ξ = 0 and ξ > 0 correspond to the Weibull (Type III), Gumbel (Type I) and Fréchet (Type II) distributions, respectively.
For a non-stationary process, the time-varying GEV parameters can be estimated by time-varying covariates. For instance, the GEV location parameter is defined through a linear function of covariates:
$$\mu = \beta X = \beta_{0} + \beta_{1} x_{1} + \cdots + \beta_{m} x_{m}$$
where $X = (1, x_1, \ldots, x_m)$ is a matrix of the time-varying covariate vectors $x_1, \ldots, x_m$, $\beta = (\beta_0, \beta_1, \ldots, \beta_m)$ is the parameter vector to be estimated, in which $\beta_0$ is the intercept and $\beta_1, \ldots, \beta_m$ are the regression coefficients for the corresponding covariates; $m$ is the number of covariates considered. The scale and shape parameters of the GEV distribution can be expressed similarly to Eq. (3).
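As a rough illustration of how such a covariate-dependent GEV can be fitted in practice, the following Python sketch maximizes the likelihood of the location model in Eq. (3) on synthetic data; the data, starting values and optimizer are our illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

# Hypothetical sketch: GEV whose location depends linearly on one covariate
# (e.g., a climate index), mu_i = b0 + b1 * x_i, fitted by maximum likelihood.
rng = np.random.default_rng(1)
x = rng.normal(size=60)                       # covariate value for each year
# note: scipy's shape parameter c corresponds to -xi in this convention
amp = genextreme.rvs(c=-0.1, loc=30.0 + 2.0 * x, scale=5.0, random_state=1)

def nll(theta):
    b0, b1, log_sigma, xi = theta
    mu, sigma = b0 + b1 * x, np.exp(log_sigma)
    return -np.sum(genextreme.logpdf(amp, c=-xi, loc=mu, scale=sigma))

fit = minimize(nll, x0=[amp.mean(), 0.0, np.log(amp.std()), 0.1],
               method="Nelder-Mead")
b0, b1, log_sigma, xi = fit.x
print(b0, b1, np.exp(log_sigma), xi)          # fitted parameters
```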
Appendix 2: Poisson regression
The numbers of days (counts) of extreme values exceeding a threshold over a specified time interval (a year in this study) can be modeled by a Poisson distribution with an equal-dispersion (the mean equals the variance). However, the variance of observed data tends to be larger than the mean, known as over-dispersion, which can partly be attributed to the effect of temporal clustering (Mallakpour and Villarini 2015; Pinto et al. 2013; Villarini et al. 2011, 2013). The statistical significance of dispersion coefficients different from unity at 5 % significance level can be tested using the regression-based tests (Cameron and Trivedi 1990) for testing over-dispersion in a Poisson model.
A Poisson regression models discrete data in which the predictand follows a Poisson distribution. The counts $N_i$ in year $i$ have a conditional Poisson distribution with rate-of-occurrence parameter $\lambda_i$, given by:
$$P\left( N_{i} = k \mid \lambda_{i} \right) = \frac{e^{ - \lambda_{i}} \lambda_{i}^{k}}{k!} \quad \left( k = 0, 1, 2, \ldots \right)$$
where $\lambda_i$ is a non-negative random variable. In a Poisson regression model, $\lambda_i$ can be modeled as a function of predictors $x_{1i}, x_{2i}, \ldots, x_{mi}$ in a manner similar to the parameters of a non-stationary GEV (see Eq. 3):
$$\lambda_{i} = \exp \left( {\beta_{0} + \beta_{1} x_{1i} + \beta_{2} x_{2i} + \cdots + \beta_{m} x_{mi} } \right)$$
where $\beta_j$ is the coefficient for the $j$-th predictor ($x_{ji}$) estimated by the maximum likelihood method. If the estimated $\beta_j$ is non-zero at a 5 % significance level, then there is a statistically significant relationship between the occurrence of extreme events and the predictor $x_j$. By relating $\lambda_i$ to time using an exponential function $\lambda_i = \exp(\beta_0 + \beta_1 i)$, changes in the mean number of occurrences of heavy precipitation with time can be examined. If $\beta_1$ is non-zero at the 5 % significance level, temporal changes in the mean number of extreme events are statistically significant (Villarini et al. 2011, 2012, 2013). The abrupt change points of the occurrences of extreme events can be further identified by a segmented regression in which the relation between the predictand and the predictor is piecewise linear. We used the function segmented in the R package 'segmented' (Muggeo 2003) to detect change points and to estimate $\beta_0$ and $\beta_1$ for the Poisson regression model.
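The study performs this regression with R packages; the sketch below is a rough Python analogue of the trend model $\lambda_i = \exp(\beta_0 + \beta_1 i)$, using synthetic counts purely for illustration.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic annual exceedance counts with a weak upward trend in the mean.
rng = np.random.default_rng(2)
years = np.arange(60)
counts = rng.poisson(np.exp(0.5 + 0.01 * years))

X = sm.add_constant(years)                          # columns: [1, i]
result = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(result.params)                                # beta_0, beta_1 estimates
print(result.pvalues[1] < 0.05)                     # significant trend in the mean?
```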
Appendix 3: GP distribution
The exceedance, $Q = Z - u$ (where $Z$ is the observed precipitation and $u$ the threshold), can be modeled by a GP distribution (Coles 2001):
$$\Pr \left( Q \le q \mid Z > u \right) = GP_{\sigma ,\xi } \left( q \right) = \begin{cases} 1 - \exp \left( - q/\sigma \right) & \xi = 0 \\ 1 - \left[ 1 + \xi q/\sigma \right]^{ - 1/\xi } & \xi \ne 0 \end{cases}$$
For q ≥ 0 and 1 + ξq/σ > 0, where σ and ξ are the scale and shape parameters of a GP distribution. For ξ = 0, GP reduces to an exponential distribution. The GP distribution can be set up to model non-stationary processes, usually by making the scale parameter σ depend on particular covariate(s) (Coles 2001; Khaliq et al. 2006). The log of σ is regressed against covariates X, log (σ) = βX, as shown in Eqs. 3 and 5.
The return level $y_l$ is exceeded on average $l$ times over a fixed period. Since there are on average $\lambda$ peaks in the whole time series, the probability that an arbitrary peak exceeds $y_l$ equals $l/\lambda$. Thus, $y_l$ is obtained by adding the threshold to the $(1 - l/\lambda)$ quantile of the excess distribution (Coles 2001):
$$y_{l} = u + GP_{\sigma ,\xi }^{ - 1} \left( 1 - l/\lambda \right) = \begin{cases} u + \sigma \ln \left( \lambda /l \right) & \xi = 0 \\ u + \dfrac{\sigma }{\xi }\left[ \left( \dfrac{l}{\lambda } \right)^{ - \xi } - 1 \right] & \xi \ne 0 \end{cases}$$
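A direct transcription of Eq. (7) into code; the argument values below are illustrative, not from the study.

```python
import numpy as np

def gp_return_level(u, sigma, xi, lam, l):
    # u: threshold, (sigma, xi): GP parameters,
    # lam: mean number of peaks in the series, l: mean exceedances of the level
    if np.isclose(xi, 0.0):
        return u + sigma * np.log(lam / l)
    return u + (sigma / xi) * ((l / lam) ** (-xi) - 1.0)

print(gp_return_level(u=25.0, sigma=6.0, xi=0.1, lam=3.0, l=0.02))
```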
For presentation, it is often more convenient to give return levels on an annual scale, so that the N-year return level is the level expected to be exceeded once every N years.
Appendix 4: The likelihood-ratio test
The likelihood-ratio test can compare results obtained from GEV and GP distributions whose parameters are expressed with covariates of various complexities, such that the base covariate set (e.g., of $M_0$) is a subset of a more complex covariate set (e.g., of $M_1$). The likelihood-ratio test can determine which sets of model parameters lead to the overall best model performance for GEV and GP. Suppose a base model $M_0$ is nested within a model $M_1$, and $L_0$ ($L_1$) is the negative log-likelihood value for $M_0$ ($M_1$); then a deviance statistic is given by (Coles 2001):
$$D = - 2\left( {L_{1} - L_{0} } \right)$$
Large values of $D$ indicate that $M_1$ is more adequate for representing the data than its base counterpart $M_0$. The $D$ statistic follows a Chi-square distribution with degrees of freedom $\nu$ (the difference between the numbers of parameters of the models $M_0$ and $M_1$). $D_\alpha$ is the $(1 - \alpha)$ quantile of the Chi-square distribution at the $\alpha$ significance level. The null hypothesis $D = 0$ is rejected if $D > D_\alpha$. We used functions in the R package 'extRemes' (Gilleland and Katz 2011) for inferring the parameters of GEV and GP distributions and testing the significance of the relations between parameters and covariates.
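A minimal sketch of this deviance test; the likelihood values in the example call are made up.

```python
from scipy.stats import chi2

def likelihood_ratio_test(L0, L1, df, alpha=0.05):
    # L0, L1: negative log-likelihoods of the nested model M0 and richer model M1;
    # df: difference in their numbers of parameters (Eq. (8))
    D = -2.0 * (L1 - L0)
    D_alpha = chi2.ppf(1.0 - alpha, df)
    return D, D > D_alpha       # True: prefer M1 over M0

print(likelihood_ratio_test(L0=152.3, L1=148.9, df=1))
```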
Tan, X., Gan, T.Y. Non-stationary analysis of the frequency and intensity of heavy precipitation over Canada and their relations to large-scale climate patterns. Clim Dyn 48, 2983–3001 (2017). https://doi.org/10.1007/s00382-016-3246-9
Issue Date: May 2017
Canadian extreme precipitation
Non-stationary probability distributions
Poisson regression
Climate indices
El Niño Southern Oscillation
Large-scale climate patterns | CommonCrawl |
What Is 40% Of 2000
Peak Asset Management LLC Sells 11,073 Shares of Canon Inc. – numerixs investment technologies Inc lifted its holdings in Canon by 2,000.0% in the fourth quarter.. The company has a quick ratio of 1.40, a current ratio of 1.99 and a debt-to-equity ratio of.
Defense lawyer and all-around sleaze Callum Crane (Bill Pullman) has a nice wife (Joanne Whalley), two daughters by his wife's first marriage, and an obscenely perfect house.
Quicken Loans Qualifications Preapproval vs. Prequalification | Quicken Loans Blog | ZING. – If your eligibility in the program does not change and your mortgage loan does not close, you will receive $1,000. This offer does not apply to new purchase loans submitted to Quicken Loans through a mortgage broker. Additional conditions or exclusions may apply. Verified Approval within 24 hours of receipt of all requested documentation.
If $\$2000$ is $40\%$, it is $\frac{4}{10}$ of the total. So the remaining $60\%$ would be $(\$2000/40)\cdot60=\$3000$.
Catholic Institute of Consecrated Life of Order of Cistercians of the Strict Observance
Nyack Sketch Log: Dr. Lauren Dulberg of Two Rivers. – There are over 2000 points on the body, which are on a specific meridian and activate an action for a specific symptoms. There is usually a formula of points used.. This led me to having a big.
2000 Presidential General Election Results – detailed national-level presidential Election Results for 2000.
Race and ethnicity in the United States – Wikipedia – Race and ethnicity in the United States is a complex topic both because the United States has a racially and ethnically diverse population. in the 2000 census long form and the American Community Survey;. 40% of Hispanics age 25 and older have had a college experience. In 2000 the.
Indonesia Overview – worldbank.org – In 2011, The World Bank supported the development of two geothermal fields: Ulubelu in Lampung at the southern part of the Sumatra Island (capacity 110 MW), and Lahendong in North Sulawesi (capacity 40.
What we learnt from young South Africans about the minimum wage and employment – This stems from evidence following the introduction of minimum wages in some sectors, starting in about 2000. Earlier research. with between 40% and 50% of young people earning less than.
BRIC Countries – Background, Facts, News and Original Articles – BRIC countries is an investing concept for the four large emerging markets and developing countries of Brazil, Russia, India and China.
No, palm oil is not responsible for 40% of global deforestation – In the palm oil sector, non-agro-industrial farms account for about 40% of the area and these also contribute to. in Indonesia (especially on Sumatra and Borneo islands) between 2000 and 2012 as.
What is 40% of $2000? | Yahoo Answers – What is 40% of $2000? Update: Umm I got the answer to my question so thanks. U people got 2 much time on ur hands if you got nothin to do but try and put others down for askin a question find a hobby OOPS this is ur hobby!
First Time Homebuyer Credit 2017 potential homebuyers' frustrations on the rise as house prices keep rising, report says – You may be coming down with the homebuyer's. the 24% reported in the 2017 Q4 survey. Similarly, only 54% of prospective buyers in Q4 2018 are actively trying to find a home, instead of just making.
40 percent off 2000 – percentage-off-calculator.com – How to calculate 40% off 2000 dollars or pounds. In calculating 40% of a number (sales tax, credit card cash-back bonuses, interest, discounts, interest per annum, dollars, pounds, coupons, 40% off, or 40% of a price), we use the same basic formula to find the answer.
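In code, both readings of the question work out as follows (purely illustrative):

```python
principal = 2000
rate = 0.40

part = rate * principal           # 40% of 2000 -> 800.0
remainder = principal - part      # 40% off 2000 -> 1200.0
print(part, remainder)
```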
Physical deep learning with biologically inspired training method: gradient-free approach for physical hardware
Mitsumasa Nakajima ORCID: orcid.org/0000-0001-9451-46751,
Katsuma Inoue ORCID: orcid.org/0000-0002-5394-01302,
Kenji Tanaka ORCID: orcid.org/0000-0003-4438-58301,
Yasuo Kuniyoshi2,3,
Toshikazu Hashimoto1 &
Kohei Nakajima ORCID: orcid.org/0000-0001-5589-40542,3
Ever-growing demand for artificial intelligence has motivated research on unconventional computation based on physical devices. While such computation devices mimic brain-inspired analog information processing, the learning procedures still rely on methods optimized for digital processing such as backpropagation, which is not suitable for physical implementation. Here, we present physical deep learning by extending a biologically inspired training algorithm called direct feedback alignment. Unlike the original algorithm, the proposed method is based on random projection with alternative nonlinear activation. Thus, we can train a physical neural network without knowledge about the physical system and its gradient. In addition, we can emulate the computation for this training on scalable physical hardware. We demonstrate the proof-of-concept using an optoelectronic recurrent neural network called deep reservoir computer. We confirmed the potential for accelerated computation with competitive performance on benchmarks. Our results provide practical solutions for the training and acceleration of neuromorphic computation.
Machine learning based on artificial neural networks (ANNs) has successfully demonstrated its excellent ability through record-breaking performance in image processing, speech recognition, game playing, and so on1,2,3. Although these algorithms resemble the workings of the human brain, they are basically implemented on a software level using conventional von Neumann computing hardware. However, such digital-computing-based ANNs are facing issues regarding energy consumption and processing speed4. These issues have motivated the implementation of ANNs using alternative physical platforms5, such as spintronic6,7,8, ferroelectric9,10, soft-body11,12, photonic hardware13,14,15,16,17,18, and so on19,20,21,22. Interestingly, even passive physical dynamics can be used as a computational resource in randomly connected ANNs. This framework is called a physical reservoir computer (RC)21,22,23 or an extreme learning machine (ELM)24,25,26, whose ease of implementation has greatly expanded the choice of implementable materials and its application range. Such physically implemented neural networks (PNNs) enable the outsourcing of the computational load for specific tasks to a physical system such as a memory27, optical link28,29, sensor component30,31, or robotic body32. The experimental demonstrations of these unconventional computations have revealed performance competitive with that of conventional electronic computing33,34,35.
Constructing deeper physical networks is one promising direction for further performance improvement because they can extend network expression ability exponentially36,37, as opposed to the polynomial relationship in wide (large-node-count) networks. This has motivated proposals of deep PNNs using various physical platforms14,16,30,38,39,40,41,42,43. Their training has basically relied on a method called backpropagation (BP), which has seen great success in software-based ANNs. However, BP is not suitable for PNNs in the following respects. First, the physical implementations of the BP operation are still complex and unscalable40,41,42,43. Thus, the calculation for BP for a PNN is typically executed on an external regular computer with a simulation model of a physical system14,16,30,39,44. This strategy results in a loss of any advantage in speed or energy associated with using the physical circuit in the training process. Thus, this method is not suitable for in-situ (online) training; it is only usable for "train once and infer many times" applications. Second, BP requires accurate knowledge about the whole physical system. Thus, the performance of the PNNs entirely relies on the model representation or measurement accuracy of the physical system45. In addition, when we apply BP to RC, these requirements spoil the unique features of physical RC, i.e. we need to know and accurately simulate a black-boxed physical random network.
As with BP in PNNs, the operational difficulty of BP in biological neural networks has also been pointed out in the brain science field; the plausibility of BP in the brain—the most successful analog physical computer—has been doubted46,47,48. These considerations have motivated the development of biologically plausible training algorithms49,50,51,52. One promising recent direction is direct feedback alignment (DFA)53,54,55. In this algorithm, fixed random linear transformations of the error signal at the final output layer are employed instead of the backward error signals. Thus, this approach does not require layer-by-layer propagation of error signals or knowledge of the weight. In addition, it has been reported that DFA scales to modern large-scale network models54. The success of such biologically motivated training suggests that there is a more suitable way than BP to train PNNs. However, DFA still requires the derivative f'(a) of the nonlinear function f(a) for the training, which hinders the application of DFA methods to physical systems. Although previous studies on DFA training for spiking neural networks (SNNs) have reported that an approximated function can be used as an alternative56, this approach still requires modeling and simulation of the physical system. Thus, a more drastic extension of DFA is important for PNN applications.
In this paper, we demonstrate physical deep learning by augmenting the DFA algorithm. In the augmented DFA, we replace the differential of physical nonlinear activation f'(a) in the standard DFA with arbitrary nonlinearity g(a) and show that the performance is robust to the choice of g(a). Owing to this augmentation, we no longer need to simulate f'(a) accurately. As the proposed method is based on parallel random projection with arbitrary nonlinear activation, we can execute the computation for the training on a physical system in the same manner as with the physical ELM or RC concept21,22,23. This enables the physical acceleration of both inference and training. To demonstrate the proof-of-concept, we constructed an FPGA-assisted optoelectronic deep physical RC as a benchtop. Although our benchtop is simple and easy to apply to various physical platforms with only software-level updates, we achieved performance comparable to that of large-scale complex state-of-the-art systems. Moreover, we compared the whole processing time, including that for digital processing, and found the possibility of physical acceleration of the training procedure. We also numerically found that the proposed augmented DFA is applicable to various network models, including more practical architecture and SNNs, suggesting the scalability of our approach. Our approach provides a practical alternative solution for the training and acceleration of neuromorphic physical computation.
Direct feedback alignment and its augmentation for physical deep learning
Fig. 1a shows the basic concept of PNNs. The forward propagation of a standard multilayer network is described as \({x}^{\left(l+1\right)}=f\left({a}^{\left(l\right)}\right)\), where \({a}^{\left(l\right)}={W}^{\left(l\right)}{x}^{\left(l\right)}\) with the weight \({W}^{\left(l\right)}\epsilon {\mathbb{R}}^{{N}^{(l+1)}\times {N}^{(l)}}\) and input \({x}^{\left(l\right)}\epsilon {\mathbb{R}}^{{N}^{(l)}}\) for the lth layer, and f denotes an element-wise nonlinear activation. In the PNN framework, this operation is executed on a physical system; i.e. x(l), W(l), and f correspond to the physical inputs (e.g., optical intensity, electric voltage, vibration), physical interconnections (e.g., optical, electrical, or mechanical coupling) in the physical system, and physical nonlinearity (e.g., nonlinear optical/magnetic/mechanical effects), respectively. To train such networks, we need to update W(l) to reduce given cost function E. A general solution is the BP algorithm shown in Fig. 1b. The gradients for BP are obtained through the chain-rule as follows:
$${e}^{(l)}=\left[{W}^{\left(l+1\right),T}{e}^{\left(l+1\right)}\right]\odot {f^{\prime}}({a}^{(l)}),$$
where \({e}^{\left(l\right)}\epsilon {\mathbb{R}}^{{N}^{(l+1)}}\) is the error signal at the lth layer, defined as \({e}^{(l)}=\partial E/\partial {a}^{(l)}\) with \({a}^{(l)}={W}^{(l)}{x}^{(l)}\), the superscript T denotes transposition, and ⊙ denotes the Hadamard product. From Eq. (1), we can compute the gradient for each W(l) as \(\delta {W}^{(l)}=-{e}^{(l)}{x}^{\left(l\right),T}\). The training using Eq. (1) is typically executed on a regular external computer by constructing a physical simulation model14,16,30,39,44, which incurs large computational cost. Thus, this strategy is not suitable for in-situ training. In addition, the error in the simulation model significantly affects PNN performance. Therefore, the training method for PNNs is still under consideration despite the success of BP in software-based ANNs.
Fig. 1: Concept of PNN and its training by BP and augmented DFA.
a Schematics of physical neural networks (PNNs). Training sequence of PNN with b BP, and c augmented biologically plausible training called direct feedback alignment (DFA). Augmented DFA enables parallel, scalable, and physically accelerable training of deep physical networks based on random projection with alternative nonlinearity g(a).
Let us consider DFA as an alternative solution [see Fig. 1c]. In the standard DFA framework, the weighted gradient signals in Eq. (1) are replaced with a linear random projection of the error signal at the final layer L53,54. Then, we can obtain the following update rule:
$${e}^{(l)}=\left[{B}^{\left(l\right)}{e}^{\left(L\right)}\right]\odot {f^{\prime}}\big({a}^{(l)}\big),$$
where \({B}^{\left(l\right)}\epsilon {\mathbb{R}}^{{N}^{(l+1)}\times {N}^{(y)}}\) is a random projection matrix for the lth layer update, and f' denotes the gradient of f. As shown in Eq. (2), we can estimate the gradient without information about W(l). In addition, the random projection process \({B}^{\left(l\right)}{e}^{\left(L\right)}\) can be implemented physically using various devices because this process is the same as in the physical ELM and RC approach. By using commercially available photonic components, we can emulate 5 × 10^5 by 5 × 10^5 matrix operations on a single integrated optical chip57, which is enough for a single hidden layer even in state-of-the-art models. Note that in this hypothesis, we assumed a liquid-crystal-on-silicon (LCOS) spatial-light modulator (SLM) for the input encoding and a passive optical diffuser for the random matrix. Thus, the input vector (e(L) for the DFA processing) is reconfigurable, but the random matrix (B(l) for the DFA processing) is not. By using an additional SLM instead of the random scattering medium, we can implement programmable random matrices34,58. The limitations of such a photonic implementation are discussed in Supplementary Information S.9. Also, the photonic acceleration of this process has already been demonstrated59. In addition, the DFA process can be parallelized because, unlike BP, it is not a sequential equation. Despite its simplicity, DFA can scale to modern neural network models (see Supplementary Information S1 and ref. 54). However, f'(a) remains in Eq. (2), requiring accurate modeling and simulation, which is the bottleneck in the learning process for PNNs.
Here, we replace the f'(a) function with the function g(a) to investigate the robustness against the choice of g(a). Then, we derive the update rule as
$${e}^{(l)}=\left[{B}^{\left(l\right)}{e}^{\left(L\right)}\right]\odot g\big({a}^{(l)}\big),$$
As f'(a) is replaced with g(a), the equation no longer requires knowledge of the parameters in the forward propagation. The gradient δW(l) can be estimated from the final error e(L) and an alternative nonlinear projection of the given a(l). Thus, we no longer require the gradient of the original network, which is highly advantageous for PNN training. As shown in the following section, we can select a broad range of g(a). The only requirement is to avoid functions uncorrelated with f'(a). Notably, the computation of g(a) can also be implemented in a physical system. A concrete example is shown in the Physical Implementation section below. We named this algorithm augmented DFA.
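As a concreteness check, here is a minimal NumPy sketch of one training step implementing Eq. (3); the layer sizes, squared-error loss, linear output layer, and the choice g(a) = cos(a) are our illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [784, 800, 800, 10]
W = [rng.normal(0.0, 0.05, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
# fixed random feedback matrices B(l), one per hidden layer
B = [rng.uniform(-1.0, 1.0, (m, sizes[-1])) for m in sizes[1:-1]]

f = np.tanh   # forward nonlinearity f(a)
g = np.cos    # alternative nonlinearity g(a), replacing f'(a)

def train_step(x, y, lr=1e-3):
    # forward pass, keeping the pre-activations a(l) and layer inputs x(l)
    a, h = [], [x]
    for l, Wl in enumerate(W):
        a.append(Wl @ h[-1])
        h.append(f(a[-1]) if l < len(W) - 1 else a[-1])  # linear output layer
    e_L = h[-1] - y                     # output error for squared-error loss
    for l in range(len(W) - 1):         # hidden layers: e(l) = [B(l) e(L)] * g(a(l))
        e_l = (B[l] @ e_L) * g(a[l])
        W[l] -= lr * np.outer(e_l, h[l])
    W[-1] -= lr * np.outer(e_L, h[-2])  # output layer uses e(L) directly

train_step(rng.random(784), np.eye(10)[3])
```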
Interestingly, the augmented DFA is also useful for black-box fixed physical networks such as a physical RC, where black-box means that we do not know (or only have rough information about) W(l) and f. When we apply the BP algorithm to physical RC, we need to simulate the gradient of the physical system using a regular computer. Thus, we need to open the black-box (need to measure and approximate W(l) and f) to estimate the gradients, which spoils the advantage of such a randomly fixed physical network. On the other hand, the augmented DFA can train a physical RC without the BP and knowledge about the physical system. Although the update rule of the augmented DFA for the RC requires additional computation compared with Eq. (3) [see Eqs. (6-11) in Methods], this can be executed on physical hardware in the same manner as forward propagation in an RC. Thus, we can improve the performance of RC while maintaining its unique features. The detailed update rule for the RC is described in Methods, and the concrete experimental demonstration is shown in the following section.
For an actual physical implementation using the augmented DFA, we should mention how a(l) is obtained from the PNN for the computation of Eq. (3). In most feedforward-type PNNs, the physical (or electrical) nonlinear layer and the linear multiply-accumulate layer (e.g., the fully connected layer) are separated. Thus, we can measure a(l) as the output of the physical network and use it as an input parameter for Eq. (3). On the other hand, some physical networks cannot separate the nonlinear and linear layers. For example, an RC includes nonlinearity in the physical dynamics itself. In this case, we can directly obtain g(a(l)) by changing the nonlinearity in the physical system. Concrete examples of the former (separated physical nonlinearity) and the latter (nonlinearity embedded in the physics) cases are shown in Supplementary Information S2 and the Physical Implementation section below.
Basic characterization of augmented DFA
First, we investigated the effect of the augmentation of DFA, that is, the effect of g(a). For this purpose, we used the standard image classification benchmark, the Modified National Institute of Standards and Technology (MNIST) task, with a simple multilayer perceptron (MLP) model. The MLP was composed of four fully connected layers with 800 nodes per layer and three types of nonlinear activation f(a): hyperbolic tangent (tanh), sine (sin), and cosine (cos) functions. These nonlinearities correspond to a simple model of common photonic implementations. In this experiment, we generated g(a) from several well-used functions (sin, cos, tanh, triangle) and from the random Fourier series \(g(a)=\sum_{k=1}^{K}p_{k}\sin(ka\pi)+q_{k}\cos(ka\pi)\), where pk and qk are random coefficients sampled uniformly from [−1, 1]. K was set to 4, and the coefficients were normalized such that \(\sum_{k=1}^{K}|p_{k}|+|q_{k}|=1\). One hundred random Fourier series were examined in this experiment. The random matrix B(l) was generated from a uniform distribution.
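As an illustration, the normalized random Fourier series for g(a) can be generated as follows (a minimal sketch; the sampling details are assumptions consistent with the text above).

import numpy as np

def random_fourier_g(K=4, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    p = rng.uniform(-1, 1, K)
    q = rng.uniform(-1, 1, K)
    norm = np.abs(p).sum() + np.abs(q).sum()   # enforce sum_k |p_k| + |q_k| = 1
    p, q = p / norm, q / norm
    k = np.arange(1, K + 1)
    def g(a):
        a = np.asarray(a)[..., None]
        return (p * np.sin(k * a * np.pi) + q * np.cos(k * a * np.pi)).sum(axis=-1)
    return g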
The training curves for the augmented DFA and BP are shown in Fig. 2a. For comparison, we also show the results for the BP algorithm in Eq. (1) with f'(a) replaced by g(a). Here, g(a) was generated from the various nonlinear activations and from the random Fourier series with 100 random seeds. Thus, the correlation coefficient η between f'(a) and g(a) varied with the random seed and the choice of nonlinearity [see Eq. (12) in Methods for the definition]. The solid line and the shaded area indicate the test error averaged over all experiments and the region between the maximum and minimum values, respectively. The color difference indicates the nonlinearity examined in the forward propagation f. As can be seen, the average accuracy of the augmented DFA is far superior to that of BP when the nonlinearity is replaced. The training curve of the augmented DFA appears to converge in all examined cases, while that for BP often diverges. Figure 2c shows the test error as a function of η. The whiskers, plots, and filled areas show the minimum and maximum values, average values, and density of the data distribution, respectively. The case η = 1 means that g(a) equals f'(a), which corresponds to standard BP and DFA in Eqs. (1) and (2). The cases η = 0 and −1 correspond to no correlation and negative correlation, respectively. The test error of BP increases sharply when η deviates from 1. In particular, almost no meaningful training was possible when the correlation coefficient was negative, because the update direction became opposite to the correct one. In contrast, the test accuracy of the augmented DFA showed only a gentle dependency on η, indicating that the training is highly robust to the choice of g(a). The test error of the augmented DFA was maximized at η = 0 [i.e., when g(a) is uncorrelated with f'(a)]. Even in this worst case, we obtained an accuracy of about 89%, far superior to that of BP. In addition, learning remained possible when the correlation coefficient was negative. We attribute this to the random weight matrix B; that is, the randomly distributed linear projection term in Eq. (3) erases the positive/negative sign of g(a). Note that convergence of the augmented DFA is not always guaranteed: e.g., we observed divergence of the training curve when f'(a) was a random Fourier series (see Supplementary Information S5). However, robustness against the choice of g(a) was much better than for BP in all examined cases.
Fig. 2: BP vs augmented DFA.
Training curves of a 4-layer MLP with 800 hidden nodes for a the augmented DFA and b BP with various g(a): sin(a), cos(a), triangle(a), tanh(a), and 40 random Fourier series. Color differences indicate the activation function [f(a) = sin(a), cos(a), tanh(a)]. The bold line and shaded area indicate the average and the maximum-minimum region, respectively. c Test error distribution of the four-layer fully connected neural network as a function of the correlation coefficient η between f'(a) and g(a). Blue and red boxplots in c are the results for models trained by BP and augmented DFA, respectively. η was scanned by using various g(a): sin(a), cos(a), triangle(a), tanh(a), and a random Fourier series. The whiskers, plots, and filled areas show the minimum and maximum values, average values, and density of the data distribution, respectively. d Test error of a four-layer RC with 400 nodes as a function of η. The spectral radius of the reservoir weight was set to 0.9 because the performance of the deep RC was maximized in this region (see Supplementary Information S13). Red and blue plots in d are the results for models trained by BP and augmented DFA. η was scanned by using g(a) = sin(a+θ) with θ = 0, 15, 30, 45, 60, 75, 80, 85, 90, 95, 100, 105, 120, 135, 150, 165, 180°. As a reference, cos(θ) is displayed as a second x axis. Data in this figure were obtained using standard CPU/GPU computation. Each experiment was repeated five times.
One index for evaluating whether a feedback alignment algorithm operates well is the alignment angle ∠δBP/DFA between the BP and DFA updates, where δBP = WTe(l), δDFA = B(l)e(L), and ∠δBP/DFA = cos−1[(δBP·δDFA)/(|δBP||δDFA|)]52. When the alignment angle lies below 90°, the update of the network trained by the augmented DFA points roughly in the same direction as the BP update would. Here, we analyzed the alignment angle of the network with four hidden layers (see Supplementary Information S11 for details). We found that the alignment angles for BP increase significantly when η deviates from one, which reflects the increase in test error shown in Fig. 2c. In contrast, the alignment angle for the augmented DFA is highly robust to the η value and remains smaller than 90°. These results suggest that we can train a deep physical network using an inaccurate f'(a) (or even an alternative nonlinear function), which eases physical implementation.
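The alignment angle itself is straightforward to compute; a small sketch (with the normalization written above) is:

import numpy as np

def alignment_angle_deg(delta_bp, delta_dfa):
    cos = (delta_bp @ delta_dfa) / (np.linalg.norm(delta_bp) * np.linalg.norm(delta_dfa))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))  # below 90 deg: same half-space as the BP update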
Let us discuss the effectiveness of the augmented DFA for randomly fixed deep networks60,61,62, with a view toward application to a physical RC. Before the experiment, we investigated the applicability of DFA itself to such networks using the deep RC and ELM models (see Supplementary Information S6 and S10). As in the analysis for the MLP case shown in Fig. 2a–c, we investigated robustness against the choice of g(a) in the augmented DFA algorithm using a deep RC with four hidden layers. For this purpose, f(a) and g(a) were set to cos(a) and sin(a+θ). The spectral radius of the reservoir weight was set to 0.9 because the performance of the deep RC was maximized in this region (see Supplementary Information S13). By varying θ from 0 to π, we could easily scan η from −1 to 1. Figure 2d shows the test accuracy for the MNIST task as a function of η. For comparison, we also plotted the results for the same network trained by BP with f'(a) replaced by g(a). As can be seen, unlike BP training, the accuracy of the RC trained by the augmented DFA is robust against the choice of g(a), the same trend as in the MLP results. The accuracy became worse as η approached zero, and this region should be avoided to achieve better performance. These results support the effectiveness of the augmented DFA approach even in an RC. As shown in Supplementary Information S12, the robustness against η depends strongly on the number of nodes, suggesting that the node count is important not only for accuracy but also for robustness. The analysis of the alignment angle for this network is shown in Supplementary Information S11. In contrast to the MLP case, we found that the alignment angle for η = 0 was larger than 90°. However, even in the region beyond an alignment angle of 90°, we obtained an error of around 5%. We think that the network trains only the final-layer weights in this case. In our experiment, the multilayer network had no nonlinear activation in the final layer, in the same manner as a standard RC. Since the gradient in the final layer is the same as in a standard network, the final-layer weight is simply varied to minimize the final error even in the region around η = 0. In fact, we obtained an error of approximately 6% with readout-only training, which supports this inference. Applications of the augmented DFA to other network models, including an MLP-Mixer, vision Transformer, ResNet, and deep ELM, are described in Supplementary Information S1 and S10.
Here, we show a concrete example of a physical implementation of a PNN trained by the augmented DFA, namely a prototype hardware/software implementation of an optoelectronic RC using an FPGA-assisted fiber-optic system. Numerical simulations of other physical implementations, including a diffractive optical neural network and a nanophotonic neural network, are described in Supplementary Information S2.
Up to now, various physical implementations of single-layer RC have been achieved by using a delayed dynamical system with a single nonlinear device8,17,18,21,22,23. By extending this concept, we implemented a deep RC by cascading the delayed dynamical system. Figure 3a shows a schematic of our optoelectronic deep RC benchtop; the equivalent network topology is shown in Fig. 3b. In this system, the temporal input signals of the lth layer, xi(l)(n), are masked and converted to Mi(l)xi(l)(n) using the mask function Mi(l), where i is the virtual node index (i = 1, 2,…, N, with N the number of virtual nodes in each reservoir layer). This operation generates quasi-random connections between inputs and virtual nodes. The masked inputs are converted to optical signals using an optical intensity modulator with time interval θrc; the input signals are thus stretched to a duration T = Nθrc. The signals are introduced into a delay ring with a single nonlinear element, which acts as the reservoir layer. If we set the length of the delay ring τ to τ = T, each node couples only with its own previous state [meaning that Ω in Eq. (6) becomes a diagonal matrix]. By instead choosing τ = (N + k)θrc, we obtain coupling between xi and xi−k, which provides richer dynamics18. We therefore set the delay time to the desynchronized condition τ = (N + 1)θrc. The signals are directly detected by a single photodiode, and their discretized dynamic responses are treated as virtual nodes. These signals are converted to digital signals and stored in memory. They are then treated as the next-layer input signals xi(l+1)(n), masked by Mi(l+1), and re-input to the RC system. The rest of the processing is the same as in the previous layer. Since this scheme shares all the hardware components, the device architecture is simple, cost-effective, and easy to implement. Other possible photonic implementations of the deep RC are summarized in Supplementary Information S3.
Fig. 3: Optoelectronic deep RC system with augmented DFA training.
a Schematic of the constructed optoelectronic deep RC. The input signals are masked by a digital processor and sent to the optoelectronic RC system to solve Eqs. (4) and (5). The change in the nonlinearity from f(a) to g(a) is realized by applying a bias to the Mach–Zehnder modulator. Based on the physically solved x and s values, the mask for each layer is updated. b Equivalent network topology of the constructed optoelectronic RC. Each reservoir layer has a ring topology since the RC system is composed of a delay-based nonlinear fiber ring.
As the nonlinear device, we employed a Mach–Zehnder modulator (MZM), which provides the activation f(x) = cos(x+Φbias). The obtained virtual node response xi(l)(n) can then be described as
$${x}_{i}^{\left(l\right)}(n)=\cos \left\{\alpha {x}_{i-1}^{\left(l\right)}\left(n-1\right)+{M}_{i}^{\left(l\right)}{x}_{i}^{\left(l-1\right)}\left(n\right)+{\varPhi }_{{bias}}\right\},$$
where α is the feedback gain in the nonlinear delay ring. The operation in the next layer is the same as in the first layer. The outputs y(n) are obtained from a weighted summation of the final-layer outputs xi(L)(n), as in Eq. (7) in Methods. Therefore, this stacked architecture of nonlinear delay-line oscillators can simply emulate a special type of deep RC. For the training, we need to compute Eqs. (8)–(11) in Methods. In particular, Eq. (9) incurs heavy computational cost because it includes recurrent computation with O(N^2) operations at each time step, the same as Eq. (6) in the forward propagation. However, in this system, Eq. (9) can be solved by the same optical system characterized by Eq. (4). By setting the bias to φbias = Φbias + π/2, we can generate an alternative nonlinear activation as follows:
$${s}_{i}^{\left(l\right)}\left(n\right)=\sin \left\{{\alpha s}_{i-1}^{\left(l\right)}\left(n-1\right)+{M}_{i}^{\left(l\right)}{x}_{i}^{\left(l-1\right)}\left(n\right)+{\varphi }_{{bias}}\right\}.$$
By scanning the φbias value, we can sweep the η value from −1 to 1 to investigate the robustness against η. From the digitally solved Eqs. (8), (10), and (11) and the physically solved Eq. (9) described in Methods, we can train the internal parameters \({M}_{i}^{(l)}\) and the final readout ω. Note that Eq. (8) could also be solved using additional optical hardware for the optical random matrix operation (see Supplementary Information S9), which might further accelerate the computation.
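The following toy NumPy sketch mimics the delay-ring dynamics of Eq. (4) and shows how the π/2 bias shift yields the sin response of Eq. (5) on the same hardware model; the parameters (N, α, bias values) and the wrap-around at i = 0 are illustrative assumptions.

import numpy as np

def delay_ring_layer(u, M, alpha=0.9, bias=0.0, act=np.cos):
    # u: (T, N) inputs x^(l-1)(n); M: (N,) mask. Desynchronized ring:
    # node i at step n couples to node i-1 at step n-1, cf. Eq. (4).
    T, N = u.shape
    x = np.zeros((T, N))
    for n in range(T):
        for i in range(N):
            fb = x[n - 1, i - 1] if n > 0 else 0.0
            x[n, i] = act(alpha * fb + M[i] * u[n, i] + bias)
    return x

# Shifting the modulator bias by pi/2 turns the cos activation into a sin
# response, since cos(a + pi/2) = -sin(a); this realizes Eq. (5) physically.
a = np.linspace(-np.pi, np.pi, 7)
assert np.allclose(np.cos(a + np.pi / 2), -np.sin(a))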
Based on the above proposals, we constructed a deep optoelectronic RC by combining a high-speed optoelectronic system developed for optical telecommunications with a highly programmable field-programmable gate array (FPGA) with fast digital-to-analog/analog-to-digital converters (DAC/ADC). We also developed PyTorch/Python-compatible middleware for ease of use of our device (see Supplementary Information S7). Using the benchtop, we examined standard benchmark tasks. Figure 4a shows the test accuracy of the MNIST task as a function of the number of layers (the details of the experimental setup are described in Methods; the image processing scheme using the deep RC is described in Supplementary Information S4). In this experiment, we set the virtual node number to N = 404, α = 0.9, and φbias to 0 [i.e., g(a) ideally equals f'(a)]. L = 0 means readout-only training; L = 1 means both read-in and readout training. As can be seen, the performance improved with the number of layers, as expected from the simulation. This suggests the effectiveness of the augmented DFA algorithm for physical RCs. Figure 4b shows the experimentally obtained test error as a function of the correlation coefficient η. Note that the achieved accuracy was slightly lower than in Fig. 4a even for θ = 0 because we stopped the training at the tenth epoch in this experiment. For comparison, the simulation results are also plotted. The robustness of the test error to the change in g(a) exceeded that expected from the simulation. We attribute this to inherent non-idealities of the device. In our device, we changed the shape of the function g by applying a bias voltage to the optoelectronic modulator [a LiNbO3-based MZM (LN-MZM)]. In general, this bias voltage drifts during RC operation because the phase shift in the MZM arm drifts with changes in temperature and humidity or other external perturbations. We compensated for this drift by measuring the LN-MZM outputs every 20 minutes. However, we were likely unable to eliminate the effect completely, which would explain a fluctuation of the η value in the training phase. This fluctuation smoothed out the steep dependency near η = 0. We believe this result is a useful example of physical non-ideality benefiting physical computing.
Fig. 4: Performance of optoelectronic deep RC system.
a Training accuracy as a function of the layer number in an RC with 606 nodes. Blue and orange plots show the results for the constructed benchtop and for simulation on a CPU. The inset in a shows training and test accuracy during training. b Test error of the 4-layer RC with 404 nodes as a function of η. The accuracy is robust against the shape of g(a). c Processing time as a function of node count. The same results are displayed on a log-log scale in d. Data in this figure were obtained using the optoelectronic RC benchtop. Reference simulation data were obtained using standard CPU and GPU computation. The test was repeated three times.
The obtained test accuracies for MNIST, Fashion-MNIST, and CIFAR-10 are summarized in Table 1, together with reported values for previous photonic DNN implementations and RCs based on other physical dynamics (refs. 34, 35, 63–69). As references, the state-of-the-art results obtained with standard computers for these benchmarks are also shown (refs. 70–72). Despite the simplicity of our delay-based hardware implementation, we achieved performance competitive with state-of-the-art large-scale benchtops for all the examined tasks, which supports the effectiveness of our approach.
Table 1 Performance comparison of PNNs for benchmark tasks
To evaluate the efficiency of our scheme, we measured the computational time for training our system. Although previous studies have compared the processing times of PNNs, they basically evaluated only the matrix operation time [e.g., \({x}^{(l+1)}=f({W}^{(l)}{x}^{(l)})\)]. However, PNNs require many additional operations, such as data transfer to the physical system, training based on simulation models, DAC/ADC operations, and pre- and post-processing. Thus, the whole processing time had not been evaluated. Owing to our physically accelerable algorithm and its FPGA-assisted hardware implementation with fully PyTorch/Python-compatible software, we can evaluate the whole processing time of our device, including the training time. As a first step, we investigated the processing time of the PNN as a function of node count, since the advantage of the physical RC approach lies in accelerating the solution of Eqs. (6) and (9) in Methods, which have O(N^2) computational cost.
Figure 4c shows the measured training time per image for our constructed optoelectronic RC benchtop. For comparison, we also show the results for augmented DFA and BP training on a CPU (Intel Core i7-9700, 8 cores, 3.0-GHz clock) and a GPU (Nvidia Quadro P5000, 2560 cores, 16-GB memory). The same results on a log-log scale are shown in Fig. 4d. The processing-time budget for the RC benchtop breaks down as follows: FPGA processing (data transfer, memory allocation, and DAC/ADC), ~92%; digital processing for pre/post-processing, including the time to solve Eqs. (7), (8), (10), and (11), ~8%; and optoelectronic processing time to solve Eqs. (6) and (9), ~0.02%. Thus, the processing time is currently dominated by the digital computation on the FPGA and CPU. This is because our optoelectronic benchtop implements only the reservoir layer, using a single nonlinear delay line; that is, we need to transfer and measure the large-scale hidden state over a serial transmission line. These limitations can be relaxed by utilizing fully parallel and all-optical computation hardware in the future73. As can be seen, the computation on the CPU and GPU shows O(N^2) scaling with node count, whereas the benchtop shows O(N), owing to the data-transfer bottleneck. (We need O(N) memory on the FPGA board, but the memory size on the FPGA is limited. Thus, we must increase the number of data transfers by reducing the minibatch size, which results in a linear time increase with N.) Physical acceleration beyond the CPU was observed at N ~5,000 and ~12,000 for the BP and augmented DFA algorithms, respectively. In terms of computation speed, however, an advantage over the GPU has not been observed directly yet, owing to the memory limitation of the GPU. By extrapolating the GPU trend, we estimate that physical acceleration beyond the GPU would be observed at N ~80,000. These estimates are on the same order as a previous estimate for the forward propagation of a photonic RC57. To the best of our knowledge, this is the first comparison of the whole training process and the first demonstration of physical training acceleration using PNNs.
Augmentability to other physical systems
In this study, we verified the effectiveness of our approach through physical experiments using an optoelectronic delay-based implementation. The remaining question is its applicability to other systems. To answer it, we performed numerical simulations using widely investigated photonic neural networks and confirmed the effectiveness of our approach for complex-valued diffractive networks and nanophotonic unitary networks (see Supplementary Information S2). In addition, our experimentally demonstrated delay-based RC is highly suitable for various physical systems. The major difference from other physical systems is the nonlinearity in Eq. (4), which is sometimes difficult to identify accurately. However, as described above, our method is highly robust to the choice of g(a), which suggests that the algorithm is effective for such cases. Regarding the scalability of the physical system, the major issue in constructing a deep network is its intrinsic noise. We investigated the effect of noise by numerical simulation (see Supplementary Information S8 and previous works74,75) and found the system to be robust to noise. Regarding the scalability of the RC approach to larger datasets, it has been reported that an RC-based transformer model (a transformer with a fixed layer trained by BP) and a vision-transformer-like RC work well76,77. As the transformer can be applied to many practical models, our deep RC scheme might scale to more advanced models. Further investigation will be performed in future work.
Scalability and limitation of proposed method
Current physical implementations of neural networks mainly focus on simple models such as RCs and MLPs. We demonstrated the applicability of the augmented DFA to these models through the simulations and physical experiments described above. Here, we consider the scalability of the DFA-based approach to more modern models. One of the most commonly used models for practical deep learning is the deeply connected convolutional neural network (CNN). However, it has been reported that the DFA algorithm is difficult to apply to standard CNNs78. Thus, the proposed method may be difficult to apply to convolutional PNNs33,67,70 in a straightforward manner.
On the other hand, a recent study revealed that a fully connected network named MLP-Mixer can achieve state-of-the-art performance79. Although DFA-based training may be effective for such convolution-free models, the applicability of DFA to the MLP-Mixer has not been investigated. In addition, it has been reported that DFA can train modern network architectures without convolution layers, including graph neural networks and transformers54. These findings suggest that our algorithm might work on such practical network structures. Considering analog hardware implementations, applicability to SNNs is also an important topic. The suitability of DFA-based training for SNNs has been reported56, which implies that our proposed augmented DFA could ease their training. Regarding DFA for CNN-based models, the investigation in a previous study was limited to models without skip connections. It has been reported that the DFA alignment angle increases with depth, which leads to failure of the training78. At the same time, the alignment angle in convolution layers near the final layer has been reported to be small enough even in CNNs, suggesting that a shallow path to the final layer is one key to the success of DFA-based training in a CNN. Notably, it has been reported that forming skip connections is equivalent to forming an ensemble of deep and shallow networks, and that most of the effective gradients in ResNet come from shallow paths. Thus, ensembled shallow paths are expected to have a positive impact on DFA training, and in such networks there remains the possibility of successful DFA training even for deep CNNs.
While the DFA-based algorithm has the potential to scale to the practical models above, beyond a simple MLP or RC, the effectiveness of applying DFA-based training to such networks was still unknown. Here, as additional work in this research, we investigated the scalability of DFA-based training (DFA itself and the augmented DFA) to the above-mentioned models [MLP-Mixer, vision transformer (ViT), ResNet, and SNNs]. The details are described in Supplementary Information S1, and the main results for the MNIST and CIFAR-10 benchmarks are summarized in Table 2. We found that DFA-based training is effective even for the explored practical models. While the achievable accuracy of DFA-based training is generally lower than that of BP training, some tuning of the model and/or algorithm could improve the performance. Notably, the accuracies of DFA and the augmented DFA are comparable for all the explored experimental setups, suggesting that further improvement of DFA itself will directly contribute to improving the augmented DFA. These results suggest that our approach is scalable to future implementations of practical models in PNNs beyond simple MLP or RC models.
Table 2 Applicability of augmented DFA to practical network models
BP vs DFA in physical hardware
In general, BP is extremely difficult to implement in physical hardware because it requires all the information in the computational graph. Thus, the training of physical hardware has typically been done by computational simulation, which incurs large computational costs. Also, the difference between the model and the actual system leads to degradation of accuracy. In contrast, the augmented DFA does not require accurate prior knowledge about the physical system. Thus, in deep PNNs, our DFA-based approach can be more effective than the BP-based one even in terms of accuracy. In addition, the computation can be accelerated by physical hardware, as demonstrated in the Results section. While our first demonstration was slower than GPU implementations, it showed the potential to accelerate the computation of both inference and training on physical hardware. In addition, DFA training does not require sequential error propagation with layer-by-layer computation, which means that the training of each layer can be executed in parallel. Therefore, a more optimized and parallel implementation of DFA could lead to a more significant speed-up. These unique features suggest the effectiveness of our DFA-based approach, especially for physical-hardware-based neural networks. On the other hand, the accuracy of models trained by the augmented DFA was still inferior to those trained by BP. Further improvement of the accuracy of DFA-based training remains future work. One approach for improvement (a combination of DFA and BP) is described in Supplementary Information S1.2.
How to select alternative nonlinearity
In this work, we introduced an alternative activation for the training. Although g(a) is basically an arbitrary function, we should avoid the region near η = 0. One simple way to do this is to use g(a) = sin(a + θ): by scanning θ, we can sweep the η value for various functions and find a good setting. In addition, this nonlinearity is suitable for some physical implementations and, as shown in this article, we can accelerate the operation even in the training phase. Another approach is to use an optimization method such as a genetic algorithm (GA). Although a GA is hard to implement in a physical system, it can find a good solution for complex physical nonlinearities. An example of such optimization is shown in Supplementary Information S5.
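A hedged sketch of such a θ scan is shown below; train_and_eval is a hypothetical helper that trains the network with the given g(a) and returns a validation accuracy, and is not part of the paper's code.

import numpy as np

def scan_theta(train_and_eval, thetas=np.linspace(0, np.pi, 13)):
    # Try g(a) = sin(a + theta) for each theta and keep the best setting.
    scores = {theta: train_and_eval(g=lambda a, t=theta: np.sin(a + t)) for theta in thetas}
    return max(scores, key=scores.get)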
Further physical acceleration
Our physical implementation confirmed the acceleration of recurrent processing for an RC with a large node count. However, its advantage is still limited, and further improvement is required. As mentioned in the Results section, the processing time of our current prototype is dominated by the data transfer and memory allocation to the FPGA. Thus, integrating all the processes into the FPGA would improve the performance much more, at the cost of experimental flexibility. In addition, an on-board optics approach will drastically reduce the transfer cost in the future. Large-scale optical integration and on-chip integration will further improve the optical computing performance itself.
Augmented DFA in RC
The forward propagation of RC is given by
$${x}^{\left(l\right)}(n)=f\left\{{\varOmega }^{\left(l\right)}{x}^{\left(l\right)}\left(n-1\right)+{M}^{\left(l\right)}{x}^{\left(l-1\right)}\left(n\right)\right\},$$
where \({x}^{(l)}\in {\mathbb{R}}^{{N}^{(l)}}\) is the internal state of the lth reservoir layer (\({x}^{(l)}(0)=0\)), \({M}^{(l)}\in {\mathbb{R}}^{{N}^{(l-1)}\times {N}^{(l)}}\) is the connection between the (l−1)th and lth reservoir layers (called a mask function), \({\varOmega }^{(l)}\in {\mathbb{R}}^{{N}^{(l)}\times {N}^{(l)}}\) is the fixed random internal connection of the lth reservoir layer, and n is the discrete time step. The final output y is obtained by
$$y(n)=\omega {x}^{(L)}(n)$$
where \(\omega \in {\mathbb{R}}^{{N}^{(y)}\times {N}^{(l)}}\) is the output weight. For the image classification task, we weighted the multiple-time-step signals (see Supplementary Information S4). Based on the update rule for DFA in a recurrent neural network, the gradients δM(l) and δω can be calculated using the following equations80.
$${e}^{(l)}(n)=\left[{B}^{(l),T}{e}^{(L)}(n)\right]\odot {s}^{(l)}(n),$$
$${s}^{\left(l\right)}\left(n\right)=g({\varOmega }^{\left(l\right)}{s}^{\left(l\right)}\left(n-1\right)+{M}^{\left(l\right)}{x}^{\left(l-1\right)}\left(n\right)),$$
$$\delta {M}^{\left(l\right)}(n)=\frac{\partial E}{\partial {M}^{\left(l\right)}}=-{e}^{\left(l\right)}\left(n\right){x}^{\left(l\right),T}\left(n\right),$$
$$\delta \omega (n)=\frac{\partial E}{\partial \omega }=-{e}^{\left(L\right)}(n){x}^{\left(L\right),T}\left(n\right),$$
where g is an arbitrary function, \({s}^{(l)}\in {{\mathbb{R}}}^{{N}^{(l)}}\) is the auxiliary state of the lth reservoir layer (\({s}^{(l)}(0)=0\)), and e(L) is the error at the final layer [see Supplementary Information S6 for the derivation and another possible candidate for Eq. (9)]. In the standard RC framework, only ω is trained, by linear regression. In contrast, our algorithm enables the training of both ω and M(l) for each layer. In a typical physical RC system, the operation M(l)x(n) is executed by digital preprocessing; therefore, training M is compatible with the physical implementation. Although the training of M(l) could be executed by BP, that requires prior knowledge of Ω(l), M(l), and f. In contrast, the augmented DFA does not require any knowledge of the physical system. Compared with the augmented DFA for standard fully connected layers, we additionally need to calculate Eq. (9). However, this output can be computed by the physical system itself. In addition, DFA training does not require sequential error propagation with layer-by-layer computation, which means that the training of each layer can be executed in parallel.
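A minimal NumPy sketch of this update rule [Eqs. (8)–(11)] is given below; the shapes, the transpose on B, and the pairing of e with x in Eq. (10) follow the Methods equations as printed here, with all layers assumed to share the node count N.

import numpy as np

def rc_augmented_dfa_grad(x_prev, x, Omega, M, B, e_final, g=np.sin):
    # x_prev: (T, N) inputs x^(l-1)(n); x: (T, N) states x^(l)(n)
    # B: (N_y, N) feedback matrix; e_final: (T, N_y) final-layer errors
    T, N = x.shape
    s = np.zeros((T, N))
    for n in range(T):                    # Eq. (9): auxiliary state, same dynamics with g
        s_prev = s[n - 1] if n > 0 else np.zeros(N)
        s[n] = g(Omega @ s_prev + M @ x_prev[n])
    e = (e_final @ B) * s                 # Eq. (8): e^(l)(n) = [B^(l),T e^(L)(n)] ⊙ s^(l)(n)
    dM = -sum(np.outer(e[n], x[n]) for n in range(T))  # Eq. (10), accumulated over n
    return dM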
Correlation coefficient
To discuss the distance between f'(a) and g(a) quantitatively, we measure the correlation coefficient η between f'(a) and g(a), defined as
$$\eta=\frac{\int_{-e}^{e}\left\{{f}^{\prime}(a)-\overline{{f}^{\prime}(a)}\right\}\left\{g(a)-\overline{g(a)}\right\}da}{\sqrt{\int_{-e}^{e}\left|{f}^{\prime}(a)-\overline{{f}^{\prime}(a)}\right|^{2}da}\sqrt{\int_{-e}^{e}\left|g(a)-\overline{g(a)}\right|^{2}da}},$$
where e is Euler's number and the overlines denote averages over the integration range. To evaluate Eq. (12) over a bounded range where the data are distributed, the integration range is set to [−e, e]. A fixed range was chosen because, when comparing periodic functions such as sin and cos with non-periodic functions such as tanh, an arbitrary integration range would yield correlations that differ from the actual situation. Although the distribution of the internal state a depends on the dataset and the weight values, we assumed that the data distribution falls within this range and integrated from −e to e.
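Numerically, Eq. (12) can be evaluated on the fixed range [−e, e] by simple quadrature, as in the sketch below. For example, with f(a) = cos(a), so f'(a) = −sin(a), and g(a) = sin(a + θ), η sweeps from −1 at θ = 0 to 1 at θ = π.

import numpy as np

def corr_eta(fprime, g, num=10001):
    a = np.linspace(-np.e, np.e, num)                      # integration range [-e, e]
    u = fprime(a) - np.trapz(fprime(a), a) / (2 * np.e)    # subtract the mean
    v = g(a) - np.trapz(g(a), a) / (2 * np.e)
    return np.trapz(u * v, a) / np.sqrt(np.trapz(u**2, a) * np.trapz(v**2, a))

# e.g. corr_eta(lambda a: -np.sin(a), np.sin) is close to -1, and
# corr_eta(lambda a: -np.sin(a), lambda a: np.sin(a + np.pi)) is close to 1.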
Optoelectronic benchtop
In our device shown in Fig. 3a, datasets on a standard computer were transferred to the FPGA (Xilinx Zynq UltraScale) via an Ethernet cable. The matrix operation M(l)x(l) was executed on the FPGA. Then, the signals were sent to the DAC (3-GHz bandwidth, 4 GSa/s, 8-bit resolution) on the FPGA. The analog electrical signals were converted to optical intensity by a LiNbO3-based Mach–Zehnder modulator [Thorlabs LN05FC, 32-GHz bandwidth (BW)]. After the signals had been transmitted through the optical-fiber-based delay line, they were detected by a photodetector (PD) [Finisar XPRV2022, 33-GHz BW] and amplified by a radio-frequency (RF) amplifier [SHF-S807C (SHF), 50-GHz BW]. The internal dynamics were tapped via a 1:1 splitter, converted to an electrical signal by the PD, and sampled by the ADC on the FPGA. The received signals were reintroduced into the optoelectronic reservoir for the next-layer calculation. After the forward propagation [Eq. (4)] of each minibatch, we changed the bias condition from Φbias to φbias to change the nonlinearity from f(a) to g(a). Then, the same operation as the above-described forward propagation was re-executed to solve Eq. (5). After that operation, augmented-DFA-based training was done on the CPU using the outputs from the optoelectronic processing. The optical system was configured to have a ring topology with the number of nodes N = 3636 at the sampling rate S = 4 GSa/s. The sampling rate can be changed under the constraint S = Smax/k, where Smax = 4 GSa/s and k is a natural number. The number of nodes can be changed by controlling S under the condition NS = constant. The feedback gain α (spectral radius) can be controlled by changing the variable optical attenuator value. All the above-described processes were implemented on the PyTorch-compatible software interface described in Supplementary Information S7. Thus, we can use this optoelectronic RC like a standard CPU or GPU (in Python code, the optoelectronic device is simply selected as device="oe_rc"). The bottleneck of the computational speed is determined by the sampling rate of the DAC/ADC. The node increments up to 29,088 displayed in Fig. 4c were realized using the node-reuse scheme proposed by Takano et al.81, which enables virtual node increments beyond the distance limitation of the delay ring.
Numerical simulation
The numerical experiments were executed on a standard desktop computer with a CPU (Intel Core i7-9700, 8 cores, 3.0-GHz clock) and GPU (Nvidia Quadro P5000, 2560 cores, 16-GB memory). While most of the experiments were done using our original PyTorch/Python code, we also utilized the TinyDFA module on GitHub54 for the experiments on the commonly used ANN models (Supplementary Information S1 and Table 2). All the detailed experimental parameters are summarized in Supplementary Information S14.
The benchmark datasets for this work are publicly available in Pytorch or TensorFlow. All the data and methods needed to evaluate the conclusions of this work are presented in the main text and Supplementary Information. Additional data can be requested from the corresponding author.
The codes used in this paper are not publicly available due to industrial secrets. They are available from the corresponding author on reasonable request.
LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436–444 (2015).
Silver, D. et al. Mastering the game of Go with deep neural networks and tree search. Nature 529, 484–489 (2016).
Graves, A., Mohamed, A. R. & Hinton, G. Speech recognition with deep recurrent neural networks. ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings 6645–6649 (2013).
Thompson, N. C., Greenewald, K., Lee, K. & Manso, G. F. The computational limits of deep learning. https://arxiv.org/abs/2007.05558 (2020).
Marković, D., Mizrahi, A., Querlioz, D. & Grollier, J. Physics for neuromorphic computing. Nat. Rev. Phys. 2, 499–510 (2020).
Romera, M. et al. Vowel recognition with four coupled spin-torque nano-oscillators. Nature 563, 230–234 (2018).
Grollier, J. et al. Neuromorphic spintronics. Nat. Electron. 3, 360–370 (2020).
Torrejon, J. et al. Neuromorphic computing with nanoscale spintronic oscillators. Nature 547, 428–431 (2017).
Oh, S., Hwang, H. & Yoo, I. K. Ferroelectric materials for neuromorphic computing. APL Mater. 7, 091109 (2019).
Boyn, S. et al. Learning through ferroelectric domain dynamics in solid-state synapses. Nat. Commun. 8, 878 (2017).
Nakajima, K., Hauser, H., Li, T. & Pfeifer, R. Information processing via physical soft body. Sci. Rep. 5, 1–11 (2015).
Garrad, M., Soter, G., Conn, A. T., Hauser, H. & Rossiter, J. A soft matter computer for soft robots. Sci. Robot. 4, eaaw6060 (2019).
Shastri, B. J. et al. Photonics for artificial intelligence and neuromorphic computing. Nat. Photon. 15, 102–114 (2021).
Lin, X. et al. All-optical machine learning using diffractive deep neural networks. Science 361, 1004–1008 (2018).
Hamerly, R., Bernstein, L., Sludds, A., Soljačić, M. & Englund, D. Large-scale optical neural networks based on photoelectric multiplication. Phys. Rev. X 9, 021032 (2019).
Shen, Y. et al. Deep learning with coherent nanophotonic circuits. Nat. Photon. 11, 441–446 (2017).
Larger, L. et al. Photonic information processing beyond Turing: an optoelectronic implementation of reservoir computing. Opt. Express 20, 3241 (2012).
Paquot, Y. et al. Optoelectronic reservoir computing. Sci. Rep. 2, 1–6 (2012).
Zhang, W., Mazzarello, R., Wuttig, M. & Ma, E. Designing crystallization in phase-change materials for universal memory and neuro-inspired computing. Nat. Rev. Mater. 4, 150–168 (2019).
Chen, T. et al. Classification with a disordered dopant-atom network in silicon. Nature 577, 341–345 (2020).
Nakajima, K. Physical reservoir computing—an introductory perspective. Jpn J. Appl. Phys. 59, 060501 (2020).
Tanaka, G. et al. Recent advances in physical reservoir computing: a review. Neural Netw. 115, 100–123 (2019).
Nakajima, K. & Fischer, I. (eds) Reservoir Computing: Theory, Physical Implementations, and Applications (Springer Nature, 2021).
Huang, G.-B., Zhu, Q.-Y. & Siew, C.-K. Extreme learning machine: theory and applications. Neurocomputing 70, 489–501 (2006).
Ortín, S. et al. A unified framework for reservoir computing and extreme learning machines based on a single time-delayed neuron. Sci. Rep. 5, 1–11 (2015).
Rahimi, A. & Recht, B. Random features for large-scale kernel machines. Adv. Neural Inf. Process Syst. 20 (2007).
Ielmini, D. & Wong, H.-S. P. In-memory computing with resistive switching devices. Nat. Electron. 1, 333–343 (2018).
Hamerly, R. et al. Netcast: low-power edge computing with WDM-defined optical neural networks. https://arxiv.org/abs/2207.01777 (2022).
Huang, C. et al. Demonstration of photonic neural network for fiber nonlinearity compensation in long-haul transmission systems. Optical Fiber Communications Conference and Exhibition (OFC) Th4C–6 (2020).
Nakajima, M., Tanaka, K. & Hashimoto, T. Neural Schrödinger equation: physical law as neural network. IEEE Trans. Neural Netw. Learn. Syst. 33, 2686–2700 (2022).
Mennel, L. et al. Ultrafast machine vision with 2D material neural network image sensors. Nature 579, 62–66 (2020).
Horii, Y. et al. Physical reservoir computing in a soft swimming robot. ALIFE 2022: The 2022 Conference on Artificial Life 92 (2022). https://doi.org/10.1162/ISAL_A_00426
Xu, X. et al. 11 TOPS photonic convolutional accelerator for optical neural networks. Nature 589, 44–51 (2021).
Zhou, T. et al. Large-scale neuromorphic optoelectronic computing with a reconfigurable diffractive processing unit. Nat. Photon. 15, 367–373 (2021).
Nakajima, M., Tanaka, K. & Hashimoto, T. Scalable reservoir computing on coherent linear photonic processor. Commun. Phys. 4, 1–12 (2021).
Montúfar, G., Pascanu, R., Cho, K. & Bengio, Y. On the number of linear regions of deep neural networks. Adv. Neural Inf. Process Syst. 4, 2924–2932 (2014).
Cohen, N., Sharir, O. & Shashua, A. On the expressive power of deep learning: a tensor analysis. J. Mach. Learn. Res. 49, 698–728 (2015).
Penkovsky, B., Porte, X., Jacquot, M., Larger, L. & Brunner, D. Coupled nonlinear delay systems as deep convolutional neural networks. Phys. Rev. Lett. 123, 054101 (2019).
Wright, L. G. et al. Deep physical neural networks trained with backpropagation. Nature 601, 549–555 (2022).
Boon, M. N. et al. Gradient Descent in Materio. https://arxiv.org/abs/2105.11233 (2021).
Lvovsky, A. I. et al. Backpropagation through nonlinear units for the all-optical training of neural networks. Photonics Res. 9, B71–B80 (2021).
Cruz-Cabrera, A. A. et al. Reinforcement and backpropagation training for an optical neural network using self-lensing effects. IEEE Trans. Neural Netw. 11, 1450–1457 (2000).
Minkov, M., Fan, S., Hughes, T. W. & Shi, Y. Training of photonic neural networks through in situ backpropagation and gradient measurement. Optica 5, 864–871 (2018).
Hughes, T. W., Minkov, M., Shi, Y. & Fan, S. Training of photonic neural networks through in situ backpropagation and gradient measurement. Optica 5, 864 (2018).
Bandyopadhyay, S., Hamerly, R. & Englund, D. Hardware error correction for programmable photonics. Optica 8, 1247–1255 (2021).
Crick, F. The recent excitement about neural networks. Nature 337, 129–132 (1989).
Whittington, J. C. R. & Bogacz, R. Theories of error back-propagation in the brain. Trends Cogn. Sci. 23, 235–250 (2019).
Grossberg, S. Competitive learning: from interactive activation to adaptive resonance. Cogn. Sci 11, 23–63 (1987).
O'Reilly, R. Biologically plausible error-driven learning using local activation differences: the generalized recirculation algorithm. Neural Comput. 8, 895–938 (1996).
Ororbia, A. G. & Mali, A. Biologically motivated algorithms for propagating local target representations. 33rd AAAI Conference on Artificial Intelligence, AAAI 2019, 31st Innovative Applications of Artificial Intelligence Conference, IAAI 2019 and the 9th AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019 4651–4658 (2019).
Mazzoni, P., Andersen, R. A. & Jordan, M. I. A more biologically plausible learning rule for neural networks. Proc. Natl Acad. Sci. USA 88, 4433–4437 (1991).
Lillicrap, T. P., Cownden, D., Tweed, D. B. & Akerman, C. J. Random synaptic feedback weights support error backpropagation for deep learning. Nat. Commun. 7, 13276 (2016).
Nøkland, A. Direct feedback alignment provides learning in deep neural networks. Adv. Neural Inf. Process Syst. https://arxiv.org/abs/1609.01596 (2016).
Launay, J., Poli, I., Boniface, F. & Krzakala, F. Direct feedback alignment scales to modern deep learning tasks and architectures. Adv. Neural Inf. Process Syst. 33, 9346–9360 (2020).
Refinetti, M., Ohana, R. & Goldt, S. Align, then memorise: the dynamics of learning with feedback alignment. J. Phys. A: Math. Theor. 55, 044002 (2022).
Samadi, A., Lillicrap, T. P. & Tweed, D. B. Deep learning with dynamic spiking neurons and fixed feedback weights. Neural Comput. 29, 578–602 (2017).
Rafayelyan, M., Dong, J., Tan, Y., Krzakala, F. & Gigan, S. Large-scale optical reservoir computing for spatiotemporal chaotic systems prediction. Phys. Rev. X 10, 041037 (2020).
Wang, T. et al. An optical neural network using less than 1 photon per multiplication. Nat. Commun. 13, 1–8 (2022).
Launay, J. et al. Hardware beyond backpropagation: a photonic co-processor for direct feedback alignment. https://arxiv.org/abs/2012.06373 (2020).
Gallicchio, C. & Scardapane, S. Deep randomized neural networks. https://arxiv.org/abs/2002.12287 (2021).
Cappelli, A., Launay, J., Meunier, L., Ohana, R. & Poli, I. ROPUST: improving robustness through fine-tuning with photonic processors and synthetic gradients. https://arxiv.org/abs/2108.04217 (2021).
Cappelli, A. et al. Adversarial robustness by design through analog computing and synthetic gradients. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 3493–3497 (2022).
Milano, G. et al. In materia reservoir computing with a fully memristive architecture based on self-organizing nanowire networks. Nat. Mater. 21, 195–202 (2021).
Du, C. et al. Reservoir computing using dynamic memristors for temporal information processing. Nat. Commun. 8, 1–10 (2017).
Jiang, W. et al. Physical reservoir computing using magnetic skyrmion memristor and spin torque nano-oscillator. Appl. Phys. Lett. 115, 192403 (2019).
Feldmann, J. et al. Parallel convolutional processing using an integrated photonic tensor core. Nature 589, 52–58 (2021).
Midya, R. et al. Reservoir computing using diffusive memristors. Adv. Intell. Syst 1, 1900084 (2019).
Antonik, P., Marsal, N. & Rontani, D. Large-scale spatiotemporal photonic reservoir computer for image classification. IEEE J. Sel. Top. Quantum Electron. 26, 1–12 (2020).
Chang, J., Sitzmann, V., Dun, X., Heidrich, W. & Wetzstein, G. Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification. Sci. Rep. 8, 1–10 (2018).
An, S., Lee, M., Park, S., Yang, H. & So, J. An ensemble of simple convolutional neural network models for MNIST digit Recognition. https://arxiv.org/abs/2008.10400 (2020).
Zhong, Z., Zheng, L., Kang, G., Li, S. & Yang, Y. Random erasing data augmentation. Proc. Conf. AAAI Artif. Intell. 34, 13001–13008 (2020).
Tan, M. & Le, Q. V. EfficientNet: rethinking model scaling for convolutional neural networks. 36th International Conference on Machine Learning, ICML 2019, 10691–10700 (2019).
Porte, X. et al. A complete, parallel and autonomous photonic neural network in a semiconductor multimode laser. J. Phys. Photonics 3, 024017 (2021).
Ohana, R. et al. Photonic differential privacy with direct feedback alignment. Adv. Neural Inf. Process Syst. 34, 22010–22020 (2021).
Lee, J. & Kifer, D. Differentially private deep learning with direct feedback alignment. https://arxiv.org/abs/2106.03645 (2020).
Shen, S. et al. Reservoir Transformers. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing 1, 4294–4309 (2021).
Wei, X. et al. ViR: the vision reservoir. https://arxiv.org/abs/2112.13545 (2021).
Launay, J., Poli, I. & Krzakala, F. Principled training of neural networks with direct feedback alignment. https://arxiv.org/abs/1906.04554 (2019).
Tolstikhin, I. et al. MLP-mixer: an all-MLP architecture for vision. Adv. Neural. Inf. Process Syst. 34, 24261–24272 (2021).
Murray, J. M. Local online learning in recurrent networks with random feedback. eLife 8, e43299 (2019).
Takano, K. et al. Compact reservoir computing with a photonic integrated circuit. Opt. Express 26, 29424–29439 (2018).
The authors are grateful to Mr. F. Sugimoto for his support in the hardware experiment. They also thank Mr. T. Tamori for his support with the software implementation. They are also grateful to Drs. A. Samadi, T. P. Lillicrap, and D. B. Tweed for sharing the experimental codes of the SNN model. This work was partially supported by the project, JPNP16007, commissioned by the New Energy and Industrial Technology Development Organization (NEDO). K. N. was supported by JSPS KAKENHI grants numbers JP18H05472 and by JST CREST grant number JPMJCR2014.
NTT Device Technology Labs., 3-1 Morinosato-Wakamiya, Atsugi, Kanagawa, 243-0198, Japan
Mitsumasa Nakajima, Kenji Tanaka & Toshikazu Hashimoto
Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan
Katsuma Inoue, Yasuo Kuniyoshi & Kohei Nakajima
Next Generation Artificial Intelligence Research Center, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan
Yasuo Kuniyoshi & Kohei Nakajima
M.N., K.I., and K.N. conceived the basic concept of the presented physical deep learning method. M.N. and K.I. performed the numerical simulations. M.N. constructed the optoelectronic benchtop and executed the optical experiment. M.N. and K.T. developed the FPGA-based electric interface and Pytorch-based software implementation for the experiment. T.H., Y.K., and K.N. supervised the project. M.N. wrote the initial draft of the manuscript. All the authors discussed the results and contributed to writing the manuscript.
Correspondence to Mitsumasa Nakajima, Katsuma Inoue or Kohei Nakajima.
Peer review information: Nature Communications thanks Ruben Ohana and the other, anonymous, reviewers for their contribution to the peer review of this work. Peer reviewer reports are available.
Nakajima, M., Inoue, K., Tanaka, K. et al. Physical deep learning with biologically inspired training method: gradient-free approach for physical hardware. Nat. Commun. 13, 7847 (2022). https://doi.org/10.1038/s41467-022-35216-2
Is the spin-rotation symmetry of Kitaev model $D_2$ or $Q_8$?
It is known that the Kitaev Hamiltonian and its spin-liquid ground state both break the $SU(2)$ spin-rotation symmetry. So what's the spin-rotation-symmetry group for the Kitaev model?
It's obvious that the Kitaev Hamiltonian is invariant under $\pi$ rotations about the three spin axes, and in some recent papers the authors give the "group" (see the Comments at the end) $G=\left \{1,e^{i\pi S_x}, e^{i\pi S_y},e^{i\pi S_z} \right \}$, where $(e^{i\pi S_x}, e^{i\pi S_y},e^{i\pi S_z})=(i\sigma_x,i\sigma_y,i\sigma_z )$, with $\mathbf{S}=\frac{1}{2}\mathbf{\sigma}$ and $\mathbf{\sigma}$ being the Pauli matrices.
But how about the quaternion group $Q_8=\left \{1,-1,e^{i\pi S_x}, e^{-i\pi S_x},e^{i\pi S_y},e^{-i\pi S_y},e^{i\pi S_z}, e^{-i\pi S_z}\right \}$, with $-1$ representing the $2\pi$ spin-rotation operator. On the other hand, consider the dihedral group $D_2=\left \{ \begin{pmatrix}1 & 0 &0 \\ 0& 1 & 0\\ 0&0 &1 \end{pmatrix},\begin{pmatrix}1 & 0 &0 \\ 0& -1 & 0\\ 0&0 &-1 \end{pmatrix},\begin{pmatrix}-1 & 0 &0 \\ 0& 1 & 0\\ 0&0 &-1 \end{pmatrix},\begin{pmatrix}-1 & 0 &0 \\ 0& -1 & 0\\ 0&0 &1 \end{pmatrix} \right \}$, and these $SO(3)$ matrices can also implement the $\pi$ spin rotation.
So, which one you choose, $G,Q_8$, or $D_2$ ? Notice that $Q_8$ is a subgroup of $SU(2)$, while $D_2$ is a subgroup of $SO(3)$. Furthermore, $D_2\cong Q_8/Z_2$, just like $SO(3)\cong SU(2)/Z_2$, where $Z_2=\left \{ \begin{pmatrix}1 & 0 \\ 0 &1\end{pmatrix} ,\begin{pmatrix}-1 & 0 \\ 0 &-1 \end{pmatrix} \right \}$.
Comments: The $G$ defined above is even not a group, since, e.g., $(e^{i\pi S_z})^2=-1\notin G$.
Remarks: Notice here that $D_2$ can not be viewed as a subgroup of $Q_8$, just like $SO(3)$ can not be viewed as a subgroup of $SU(2)$.
Supplementary: As an example, consider a two-spin-1/2 system. We want to gain some insight into what kinds of wavefunctions preserve the $Q_8$ spin-rotation symmetry from this simplest model. For convenience, let $R_\alpha =e^{\pm i\pi S_\alpha}=-4S_1^\alpha S_2^\alpha$ denote the $\pi$ spin-rotation operators around the spin axes $\alpha=x,y,z$, where $S_\alpha=S_1^\alpha+ S_2^\alpha$. Therefore, by saying a wavefunction $\psi$ has $Q_8$ spin-rotation symmetry, we mean $R_\alpha\psi=\lambda_\alpha \psi$, with $\left |\lambda_\alpha \right |^2=1$.
After a simple calculation, we find that a $Q_8$ spin-rotation symmetric wavefunction $\psi$ could only take one of the following 4 possible forms:
$(1) \left | \uparrow \downarrow \right \rangle-\left | \downarrow \uparrow \right \rangle$, with $(\lambda_x,\lambda_y,\lambda_z)=(1,1,1)$ (Singlet state with full $SU(2)$ spin-rotation symmetry), which is annihilated by $S_x,S_y,$ and $S_z$,
$(2) \left | \uparrow \downarrow \right \rangle+\left | \downarrow \uparrow \right \rangle$, with $(\lambda_x,\lambda_y,\lambda_z)=(-1,-1,1)$, which is annihilated by $S_z$,
$(3) \left | \uparrow \uparrow \right \rangle-\left | \downarrow \downarrow \right \rangle$, with $(\lambda_x,\lambda_y,\lambda_z)=(1,-1,-1)$, which is annihilated by $S_x$,
$(4) \left | \uparrow \uparrow \right \rangle+\left | \downarrow \downarrow \right \rangle$, with $(\lambda_x,\lambda_y,\lambda_z)=(-1,1,-1)$, which is annihilated by $S_y$.
Note that any kind of superposition of the above states would no longer be an eigenfunction of $R_\alpha$ and hence would break the $Q_8$ spin-rotation symmetry.
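These eigenvalues can be checked numerically; below is a quick NumPy/SciPy sketch, adopting the convention $R_\alpha = e^{i\pi S_\alpha}$ on the basis $|\uparrow\uparrow\rangle,|\uparrow\downarrow\rangle,|\downarrow\uparrow\rangle,|\downarrow\downarrow\rangle$.

import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)
total = lambda S: np.kron(S, I2) + np.kron(I2, S)   # S_alpha = S1^alpha + S2^alpha
R = {k: expm(1j * np.pi * total(S)) for k, S in zip("xyz", (sx, sy, sz))}

states = {  # basis order: uu, ud, du, dd
    "(1) ud - du": np.array([0, 1, -1, 0]) / np.sqrt(2),
    "(2) ud + du": np.array([0, 1, 1, 0]) / np.sqrt(2),
    "(3) uu - dd": np.array([1, 0, 0, -1]) / np.sqrt(2),
    "(4) uu + dd": np.array([1, 0, 0, 1]) / np.sqrt(2),
}
for name, psi in states.items():
    lams = [round((psi.conj() @ R[k] @ psi).real, 6) for k in "xyz"]
    print(name, lams)  # expect (1,1,1), (-1,-1,1), (1,-1,-1), (-1,1,-1)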
quantum-mechanics condensed-matter symmetry quantum-spin group-theory
Kai Li
When talking about rotational symmetry, people tend to refer to the SO(3) group and its subgroups. So the symmetry group here is $\tilde{D}_2$. $D_4$ is its projective representation (or projective symmetry group). – Everett You Dec 30 '13 at 15:49
To which papers are you referring? There's a notational irritation that always occurs when dealing with the dihedral groups, where some people write $D_n$ and some write $D_{m}$ for the same group, $n$ being the number of edges and vertices of the n-gon and $m=2n$ the number of group elements. However, I don't think either of your $D_2$ or $D_4$ are groups. In the case of your $D_4$, $e^{i\pi S_x}e^{i\pi S_y}$ is not an element of your set. However, if both were taken to generate all elements of the group, it should be apparent that they would generate the same group here. – Matthew Titsworth Dec 30 '13 at 15:56
@Matthew Titsworth The notation I used here is obvious and it is not the key point of my question. – Kai Li Dec 30 '13 at 18:26
@K-boy: As you noticed, your "$D_2$" is not a group, so your notation is wrong, and your $\tilde D_2$ is the true $D_2$, the dihedral group of rank $4$. And your "$D_4$" is not the dihedral group of rank $8$, but the quaternion group $Q= Q_4$ of rank $8$ (the first in the family of the dicyclic groups $Q_{2n}$, of rank $4n$). And the true dihedral $D_4$ group is not isomorphic to $D_2 \times Z_2$ (while $D_2$ is isomorphic to $Z_2 \times Z_2$). Ref: Ramond, Group Theory, Cambridge, pages 13–17. Finally, (abstract) groups and representations of groups are two different things. – Trimok Dec 30 '13 at 20:18
@K-boy. Look at the order of the elements in your $D_4$. They are $\{1,2,4,4,4,4\}$. The dihedral group of order $8$ has two elements of order $4$ and five elements of order $2$. See also here, here, and here. Trimok is right. – Matthew Titsworth Dec 30 '13 at 20:43
The set $G$ gives the representation of the identity and generators of the abstract group of quaternions as elements in $SL(2,\mathbb C)$ which are also in $SU(2)$. Taking the completion of this yields the representation $Q_8$ of the quaternions presented in the question.
From the description of the symmetry group as coming from here, consider the composition of two $\pi$ rotations along the $\hat x$, $\hat y$, or $\hat z$ axis. This operation is not the identity operation on spins (that requires a $4\pi$ rotation). However, all elements of $D_2$ given above are of order 2.
This indicates that the symmetry group of the system should be isomorphic to the quaternions and $Q_8$ is the appropriate representation acting on spin states. The notation arising there for $D_2$ is probably from the dicyclic group of order $4\times 2=8$ which is isomorphic to the quaternions.
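To make the order-structure argument from the comments concrete, here is a short sketch (my addition, assuming the generators are the spin-1/2 $\pi$-rotation matrices $e^{i\pi\sigma_x/2}=i\sigma_x$ and $i\sigma_y$): it closes the generated set under matrix multiplication and lists the element orders, giving 8 elements with order multiset {1, 2, 4, 4, 4, 4, 4, 4} — the signature of $Q_8$, not of the dihedral group of order 8.

```python
import numpy as np
from itertools import product

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def key(m):  # hashable fingerprint of a 2x2 complex matrix
    return tuple(np.round(m, 8).flatten().tolist())

# Close {I, i*sigma_x, i*sigma_y} under matrix multiplication.
elems = {key(m): m for m in (np.eye(2, dtype=complex), 1j * sx, 1j * sy)}
grew = True
while grew:
    grew = False
    for a, b in product(list(elems.values()), repeat=2):
        m = a @ b
        if key(m) not in elems:
            elems[key(m)] = m
            grew = True

def order(m):  # smallest n with m^n = I
    p, n = m, 1
    while not np.allclose(p, np.eye(2)):
        p, n = p @ m, n + 1
    return n

print(len(elems), sorted(order(m) for m in elems.values()))
# -> 8 [1, 2, 4, 4, 4, 4, 4, 4]
```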
Matthew Titsworth
$\begingroup$ It is probably worth mentioning that the quaternion group $Q$ is one of the two Schur covers of the Klein four-group $K$. The other one is $D_4$, the dihedral group of degree 4. $\endgroup$ – Isidore Seville Dec 30 '13 at 22:26
$\begingroup$ @Matthew Titsworth Thanks for your clear summary. $\endgroup$ – Kai Li Dec 30 '13 at 22:30
Testing the capability of low-energy light ions identification of the TRACE silicon detectors (1803.09575)
N. Cieplicka-Oryńczak, D. Mengoni, M. Ciemała, S. Leoni, B. Fornal, J. A. Dueñas, S. Brambilla, C. Boiano, P. R. John, D. Bazzacco, G. Benzoni, G. Bocchi, S. Capra, F. C. L. Crespi, A. Goasduff, K. Hadyńska-Klęk, Ł. W. Iskra, G. Jaworski, F. Recchia, M. Siciliano, D. Testov, J. J. Valiente-Dobón
March 26, 2018 physics.ins-det
The in-beam tests of two Si pixel-type TRACE detectors have been performed at Laboratori Nazionali di Legnaro (Italy). The aim was to investigate the possibility of identifying heavy-ion reaction products with mass A~10 at low kinetic energy, i.e., around 10 MeV. Two separate read-out chains, digital and analog, were used. The Pulse Shape Analysis technique was employed to obtain the identification matrices for the digitally processed part of the data. Separation in both charge and mass was obtained; however, the $\alpha$ particles significantly contaminated the recorded data in the lower-energy part. Due to this effect, the identification of the light products ($^{7,6}$Li isotopes) was possible only down to ~20 MeV.
Charged particle decay of hot and rotating $^{88}$Mo nuclei in fusion-evaporation reactions (1509.03184)
S. Valdré, S. Piantelli, G. Casini, S. Barlini, S. Carboni, M. Ciemała, M. Kmiecik, A. Maj, K. Mazurek, M. Cinausero, F. Gramegna, V.L. Kravchuk, L. Morelli, T. Marchi, G. Baiocco, L. Bardelli, P. Bednarczyk, G. Benzoni, M. Bini, N. Blasi, A. Bracco, S. Brambilla, M. Bruno, F. Camera, A. Chbihi, A. Corsi, F.C.L. Crespi, M. D'Agostino, M. Degerlier, D. Fabris, B. Fornal, A. Giaz, M. Krzysiek, S. Leoni, M. Matejska-Minda, I. Mazumdar, W. Mȩczyński, B. Million, D. Montanari, S. Myalski, R. Nicolini, A. Olmi, G. Pasquali, G. Prete, O.J. Roberts, J. Styczeń, B. Szpak, B. Wasilewska, O. Wieland, J.P. Wieleczko, M. Ziȩbliński
Sept. 10, 2015 nucl-ex
A study of fusion-evaporation and (partly) fusion-fission channels for the $^{88}$Mo compound nucleus, produced at different excitation energies in the reaction $^{48}$Ti + $^{40}$Ca at 300, 450 and 600 MeV beam energies, is presented. Fusion-evaporation and fusion-fission cross sections have been extracted and compared with the existing systematics. Experimental data concerning light charged particles have been compared with the prediction of the statistical model in its implementation in the Gemini++ code, well suited even for high spin systems, in order to tune the main model parameters in a mass region not abundantly covered by exclusive experimental data. Multiplicities for light charged particles emitted in fusion evaporation events are also presented. Some discrepancies with respect to the prediction of the statistical model have been found for forward emitted $\alpha$-particles; they may be due both to pre-equilibrium emission and to reaction channels (such as Deep Inelastic Collisions, QuasiFission/QuasiFusion) different from the compound nucleus formation.
Nuclear astrophysics with radioactive ions at FAIR (1310.1632)
R. Reifarth, S. Altstadt, K. Göbel, T. Heftrich, M. Heil, A. Koloczek, C. Langer, R. Plag, M. Pohl, K. Sonnabend, M. Weigand, T. Adachi, F. Aksouh, J. Al-Khalili, M. AlGarawi, S. AlGhamdi, G. Alkhazov, N. Alkhomashi, H. Alvarez-Pol, R. Alvarez-Rodriguez, V. Andreev, B. Andrei, L. Atar, T. Aumann, V. Avdeichikov, C. Bacri, S. Bagchi, C. Barbieri, S. Beceiro, C. Beck, C. Beinrucker, G. Belier, D. Bemmerer, M. Bendel, J. Benlliure, G. Benzoni, R. Berjillos, D. Bertini, C. Bertulani, S. Bishop, N. Blasi, T. Bloch, Y. Blumenfeld, A. Bonaccorso, K. Boretzky, A. Botvina, A. Boudard, P. Boutachkov, I. Boztosun, A. Bracco, S. Brambilla, J. Briz Monago, M. Caamano, C. Caesar, F. Camera, E. Casarejos, W. Catford, J. Cederkall, B. Cederwall, M. Chartier, A. Chatillon, M. Cherciu, L. Chulkov, P. Coleman-Smith, D. Cortina-Gil, F. Crespi, R. Crespo, J. Cresswell, M. Csatlós, F. Déchery, B. Davids, T. Davinson, V. Derya, P. Detistov, P. Diaz Fernandez, D. DiJulio, S. Dmitry, D. Doré, J. Dueṅas, E. Dupont, P. Egelhof, I. Egorova, Z. Elekes, J. Enders, J. Endres, S. Ershov, O. Ershova, B. Fernandez-Dominguez, A. Fetisov, E. Fiori, A. Fomichev, M. Fonseca, L. Fraile, M. Freer, J. Friese, M. G. Borge, D. Galaviz Redondo, S. Gannon, U. Garg, I. Gasparic, L. Gasques, B. Gastineau, H. Geissel, R. Gernhäuser, T. Ghosh, M. Gilbert, J. Glorius, P. Golubev, A. Gorshkov, A. Gourishetty, L. Grigorenko, J. Gulyas, M. Haiduc, F. Hammache, M. Harakeh, M. Hass, M. Heine, A. Hennig, A. Henriques, R. Herzberg, M. Holl, A. Ignatov, A. Ignatyuk, S. Ilieva, M. Ivanov, N. Iwasa, B. Jakobsson, H. Johansson, B. Jonson, P. Joshi, A. Junghans, B. Jurado, G. Körner, N. Kalantar, R. Kanungo, A. Kelic-Heil, K. Kezzar, E. Khan, A. Khanzadeev, O. Kiselev, M. Kogimtzis, D. Körper, S. Kräckmann, T. Kröll, R. Krücken, A. Krasznahorkay, J. Kratz, D. Kresan, T. Krings, A. Krumbholz, S. Krupko, R. Kulessa, S. Kumar, N. Kurz, E. Kuzmin, M. Labiche, K. Langanke, I. Lazarus, T. Le Bleis, C. Lederer, A. Lemasson, R. Lemmon, V. Liberati, Y. Litvinov, B. Löher, J. Lopez Herraiz, G. Münzenberg, J. Machado, E. Maev, K. Mahata, D. Mancusi, J. Marganiec, M. Martinez Perez, V. Marusov, D. Mengoni, B. Million, V. Morcelle, O. Moreno, A. Movsesyan, E. Nacher, M. Najafi, T. Nakamura, F. Naqvi, E. Nikolski, T. Nilsson, C. Nociforo, P. Nolan, B. Novatsky, G. Nyman, A. Ornelas, R. Palit, S. Pandit, V. Panin, C. Paradela, V. Parkar, S. Paschalis, P. Pawłowski, A. Perea, J. Pereira, C. Petrache, M. Petri, S. Pickstone, N. Pietralla, S. Pietri, Y. Pivovarov, P. Potlog, A. Prokofiev, G. Rastrepina, T. Rauscher, G. Ribeiro, M. Ricciardi, A. Richter, C. Rigollet, K. Riisager, A. Rios, C. Ritter, T. Rodríguez Frutos, J. Rodriguez Vignote, M. Röder, C. Romig, D. Rossi, P. Roussel-Chomaz, P. Rout, S. Roy, P. Söderström, M. Saha Sarkar, S. Sakuta, M. Salsac, J. Sampson, J. Sanchez del Rio Saez, J. Sanchez Rosado, S. Sanjari, P. Sarriguren, A. Sauerwein, D. Savran, C. Scheidenberger, H. Scheit, S. Schmidt, C. Schmitt, L. Schnorrenberger, P. Schrock, R. Schwengner, D. Seddon, B. Sherrill, A. Shrivastava, S. Sidorchuk, J. Silva, H. Simon, E. Simpson, P. Singh, D. Slobodan, D. Sohler, M. Spieker, D. Stach, E. Stan, M. Stanoiu, S. Stepantsov, P. Stevenson, F. Strieder, L. Stuhl, T. Suda, K. Sümmerer, B. Streicher, J. Taieb, M. Takechi, I. Tanihata, J. Taylor, O. Tengblad, G. Ter-Akopian, S. Terashima, P. Teubig, R. Thies, M. Thoennessen, T. Thomas, J. Thornhill, G. Thungstrom, J. Timar, Y. Togano, U. Tomohiro, T. Tornyi, J. Tostevin, C. Townsley, W. Trautmann, T. 
Trivedi, S. Typel, E. Uberseder, J. Udias, T. Uesaka, L. Uvarov, Z. Vajta, P. Velho, V. Vikhrov, M. Volknandt, V. Volkov, P. von Neumann-Cosel, M. von Schmid, A. Wagner, F. Wamers, H. Weick, D. Wells, L. Westerberg, O. Wieland, M. Wiescher, C. Wimmer, K. Wimmer, J. S. Winfield, M. Winkel, P. Woods, R. Wyss, D. Yakorev, M. Yavor, J. Zamora Cardona, I. Zartova, T. Zerguerras, I. Zgura, A. Zhdanov, M. Zhukov, M. Zieblinski, A. Zilges, K. Zuber
Oct. 6, 2013 nucl-ex, astro-ph.IM
The nucleosynthesis of elements beyond iron is dominated by neutron captures in the s and r processes. However, 32 stable, proton-rich isotopes cannot be formed during those processes, because they are shielded from the s-process flow and r-process beta-decay chains. These nuclei are attributed to the p and rp process. For all those processes, current research in nuclear astrophysics addresses the need for more precise reaction data involving radioactive isotopes. Depending on the particular reaction, direct or inverse kinematics, forward or time-reversed direction are investigated to determine or at least to constrain the desired reaction cross sections. The Facility for Antiproton and Ion Research (FAIR) will offer unique, unprecedented opportunities to investigate many of the important reactions. The high yield of radioactive isotopes, even far away from the valley of stability, allows the investigation of isotopes involved in processes as exotic as the r or rp processes.
Measurement of light charged particles in the decay channels of medium-mass excited compound nuclei (1309.2149)
S. Valdre', S. Barlini, G. Casini, G. Pasquali, S. Piantelli, S. Carboni, M. Cinausero, F.Gramegna, T. Marchi, G. Baiocco, L. Bardelli, G. Benzoni, M. Bini, N. Blasi, A. Bracco, S. Brambilla, M. Bruno, F. Camera, A. Corsi, F. Crespi, M. D Agostino, M. Degerlier, V. L. Kravchuk, S. Leoni, B. Million, D. Montanari, L. Morelli, A. Nannini, R. Nicolini, G. Poggi, G. Vannini, O. Wieland, P. Bednarczyk, M. Ciemała, J. Dudek, B. Fornal, M.Kmiecik, A. Maj, M. Matejska-Minda, K. Mazurek, W. Meczynski, S. Myalski, J. Styczen, M. Zieblinski
Sept. 9, 2013 nucl-ex
The 48Ti on 40Ca reactions have been studied at 300 and 600 MeV, focusing on the fusion-evaporation (FE) and fusion-fission (FF) exit channels. Energy spectra and multiplicities of the emitted light charged particles have been compared to Monte Carlo simulations based on the statistical model. Indeed, in this mass region (A about 100), models predict that shape transitions can occur at high spin values, and relatively scarce data exist in the literature on coincidence measurements between evaporation residues and light charged particles. Signals of shape transitions can be found in the variations of the lineshape of high-energy gamma rays emitted from the de-excitation of GDR states, gated on different regions of angular momenta. For this purpose it is important to keep the FE and FF processes under control, to tune the statistical model parameters, and to control the onset of possible pre-equilibrium emission from 300 to 600 MeV bombarding energy.
Characterization of Large Volume 3.5 x 8 inches LaBr3:Ce Detectors (1308.6085)
A. Giaz, L. Pellegri, S. Riboldi, F. Camera, N. Blasi, C. Boiano, A. Bracco, S. Brambilla, S. Ceruti, S. Coelli, F.C.L. Crespi, M. Csatlós, S. Frega, J. Gulyás, A. Krasznahorkay, S. Lodetti, B. Million, A. Owens, F. Quarati, L. Stuhl, O. Wieland
Aug. 28, 2013 physics.ins-det
The properties of large volume cylindrical 3.5 x 8 inches (89 mm x 203 mm) LaBr3:Ce scintillation detectors coupled to the Hamamatsu R10233-100SEL photo-multiplier tube were investigated. These crystals are among the largest ones ever produced and still need to be fully characterized to determine how these detectors can be utilized and in which applications. We tested the detectors using monochromatic gamma-ray sources and in-beam reactions producing gamma rays up to 22.6 MeV; we acquired PMT signal pulses and calculated detector energy resolution and response linearity as a function of gamma-ray energy. Two different voltage dividers were coupled to the Hamamatsu R10233-100SEL PMT: the Hamamatsu E1198-26, based on straightforward resistive network design, and the LABRVD, specifically designed for our large volume LaBr3:Ce scintillation detectors, which also includes active semiconductor devices. Because of the extremely high light yield of LaBr3:Ce crystals we observed that, depending on the choice of PMT, voltage divider and applied voltage, some significant deviation from the ideally proportional response of the detector and some pulse shape deformation appear. In addition, crystal non-homogeneities and PMT gain drifts affect the (measured) energy resolution especially in case of high-energy gamma rays. We also measured the time resolution of detectors with different sizes (from 1x1 inches up to 3.5x8 inches), correlating the results with both the intrinsic properties of PMTs and GEANT simulations of the scintillation light collection process. The detector absolute full energy efficiency was measured and simulated up to gamma-rays of 30 MeV
Identification and rejection of scattered neutrons in AGATA (1306.2788)
M. Şenyiğit, A. Ataç, S. Akkoyun, A. Kaşkaş, D. Bazzacco, J. Nyberg, F. Recchia, S. Brambilla, F. Camera, F.C.L. Crespi, E. Farnea, A. Giaz, A. Gottardo, R. Kempley, J. Ljungvall, D. Mengoni, C. Michelagnoli, B. Million, M. Palacz, L. Pellegri, S. Riboldi, E. Şahin, P.A. Söderström, J.J. Valiente Dobon, the AGATA collaboration
June 12, 2013 nucl-ex
Gamma rays and neutrons, emitted following spontaneous fission of 252Cf, were measured in an AGATA experiment performed at INFN Laboratori Nazionali di Legnaro in Italy. The setup consisted of four AGATA triple cluster detectors (12 36-fold segmented high-purity germanium crystals), placed at a distance of 50 cm from the source, and 16 HELENA BaF2 detectors. The aim of the experiment was to study the interaction of neutrons in the segmented high-purity germanium detectors of AGATA and to investigate the possibility of discriminating neutrons and gamma rays with the gamma-ray tracking technique. The BaF2 detectors were used for a time-of-flight measurement, which gave an independent discrimination of neutrons and gamma rays and which was used to optimise the gamma-ray tracking-based neutron rejection methods. It was found that standard gamma-ray tracking, without any additional neutron rejection features, effectively eliminates most of the interaction points due to recoiling Ge nuclei after elastic scattering of neutrons. Standard tracking also rejects a significant amount of the events due to inelastic scattering of neutrons in the germanium crystals. Further enhancements of the neutron rejection were obtained by setting conditions on the following quantities, which were evaluated for each event by the tracking algorithm: energy of the first and second interaction point, difference in the calculated incoming direction of the gamma ray, figure-of-merit value. The experimental results of tracking with neutron rejection agree rather well with Geant4 simulations.
AGATA - Advanced Gamma Tracking Array (1111.5731)
S. Akkoyun, A. Algora, B. Alikhani, F. Ameil, G. de Angelis, L. Arnold, A. Astier, A. Ataç, Y. Aubert, C. Aufranc, A. Austin, S. Aydin, F. Azaiez, S. Badoer, D. L. Balabanski, D. Barrientos, G. Baulieu, R. Baumann, D. Bazzacco, F. A. Beck, T. Beck, P. Bednarczyk, M. Bellato, M. A. Bentley, G. Benzoni, R. Berthier, L. Berti, R. Beunard, G. Lo Bianco, B. Birkenbach, P. G. Bizzeti, A. M. Bizzeti-Sona, F. Le Blanc, J. M. Blasco, N. Blasi, D. Bloor, C. Boiano, M. Borsato, D. Bortolato, A. J. Boston, H. C. Boston, P. Bourgault, P. Boutachkov, A. Bouty, A. Bracco, S. Brambilla, I. P. Brawn, A. Brondi, S. Broussard, B. Bruyneel, D. Bucurescu, I. Burrows, A. Bürger, S. Cabaret, B. Cahan, E. Calore, F. Camera, A. Capsoni, F. Carrió, G. Casati, M. Castoldi, B. Cederwall, J.-L. Cercus, V. Chambert, M. El Chambit, R. Chapman, L. Charles, J. Chavas, E. Clément, P. Cocconi, S. Coelli, P. J. Coleman-Smith, A. Colombo, S. Colosimo, C. Commeaux, D. Conventi, R. J. Cooper, A. Corsi, A. Cortesi, L. Costa, F. C. L. Crespi, J. R. Cresswell, D. M. Cullen, D. Curien, A. Czermak, D. Delbourg, R. Depalo, T. Descombes, P. Désesquelles, P. Detistov, C. Diarra, F. Didierjean, M. R. Dimmock, Q. T. Doan, C. Domingo-Pardo, M. Doncel, F. Dorangeville, N. Dosme, Y. Drouen, G. Duchêne, B. Dulny, J. Eberth, P. Edelbruck, J. Egea, T. Engert, M. N. Erduran, S. Ertürk, C. Fanin, S. Fantinel, E. Farnea, T. Faul, M. Filliger, F. Filmer, Ch. Finck, G. de France, A. Gadea, W. Gast, A. Geraci, J. Gerl, R. Gernhäuser, A. Giannatiempo, A. Giaz, L. Gibelin, A. Givechev, N. Goel, V. González, A. Gottardo, X. Grave, J. Grȩbosz, R. Griffiths, A. N. Grint, P. Gros, L. Guevara, M. Gulmini, A. Görgen, H. T. M. Ha, T. Habermann, L. J. Harkness, H. Harroch, K. Hauschild, C. He, A. Hernández-Prieto, B. Hervieu, H. Hess, T. Hüyük, E. Ince, R. Isocrate, G. Jaworski, A. Johnson, J. Jolie, P. Jones, B. Jonson, P. Joshi, D. S. Judson, A. Jungclaus, M. Kaci, N. Karkour, M. Karolak, A. Kaşkaş, M. Kebbiri, R. S. Kempley, A. Khaplanov, S. Klupp, M. Kogimtzis, I. Kojouharov, A. Korichi, W. Korten, Th. Kröll, R. Krücken, N. Kurz, B. Y. Ky, M. Labiche, X. Lafay, L. Lavergne, I. H. Lazarus, S. Leboutelier, F. Lefebvre, E. Legay, L. Legeard, F. Lelli, S. M. Lenzi, S. Leoni, A. Lermitage, D. Lersch, J. Leske, S. C. Letts, S. Lhenoret, R. M. Lieder, D. Linget, J. Ljungvall, A. Lopez-Martens, A. Lotodé, S. Lunardi, A. Maj, J. van der Marel, Y. Mariette, N. Marginean, R. Marginean, G. Maron, A. R. Mather, W. Mȩczyński, V. Mendéz, P. Medina, B. Melon, R. Menegazzo, D. Mengoni, E. Merchan, L. Mihailescu, C. Michelagnoli, J. Mierzejewski, L. Milechina, B. Million, K. Mitev, P. Molini, D. Montanari, S. Moon, F. Morbiducci, R. Moro, P. S. Morrall, O. Möller, A. Nannini, D. R. Napoli, L. Nelson, M. Nespolo, V. L. Ngo, M. Nicoletto, R. Nicolini, Y. Le Noa, P. J. Nolan, M. Norman, J. Nyberg, A. Obertelli, A. Olariu, R. Orlandi, D. C. Oxley, C. Özben, M. Ozille, C. Oziol, E. Pachoud, M. Palacz, J. Palin, J. Pancin, C. Parisel, P. Pariset, G. Pascovici, R. Peghin, L. Pellegri, A. Perego, S. Perrier, M. Petcu, P. Petkov, C. Petrache, E. Pierre, N. Pietralla, S. Pietri, M. Pignanelli, I. Piqueras, Z. Podolyak, P. Le Pouhalec, J. Pouthas, D. Pugnére, V. F. E. Pucknell, A. Pullia, B. Quintana, R. Raine, G. Rainovski, L. Ramina, G. Rampazzo, G. La Rana, M. Rebeschini, F. Recchia, N. Redon, M. Reese, P. Reiter, P. H. Regan, S. Riboldi, M. Richer, M. Rigato, S. Rigby, G. Ripamonti, A. P. Robinson, J. Robin, J. Roccaz, J.-A. Ropert, B. Rossé, C. Rossi Alvarez, D. Rosso, B. 
Rubio, D. Rudolph, F. Saillant, E. Şahin, F. Salomon, M.-D. Salsac, J. Salt, G. Salvato, J. Sampson, E. Sanchis, C. Santos, H. Schaffner, M. Schlarb, D. P. Scraggs, D. Seddon, M. Şenyiğit, M.-H. Sigward, G. Simpson, J. Simpson, M. Slee, J. F. Smith, P. Sona, B. Sowicki, P. Spolaore, C. Stahl, T. Stanios, E. Stefanova, O. Stézowski, J. Strachan, G. Suliman, P.-A. Söderström, J. L. Tain, S. Tanguy, S. Tashenov, Ch. Theisen, J. Thornhill, F. Tomasi, N. Toniolo, R. Touzery, B. Travers, A. Triossi, M. Tripon, K. M. M. Tun-Lanoë, M. Turcato, C. Unsworth, C. A. Ur, J. J.Valiente-Dobon, V. Vandone, E. Vardaci, R. Venturelli, F. Veronese, Ch. Veyssiere, E. Viscione, R. Wadsworth, P. M. Walker, N. Warr, C. Weber, D. Weisshaar, D. Wells, O. Wieland, A. Wiens, G. Wittwer, H. J. Wollersheim, F. Zocca, N. V. Zamfir, M. Ziȩbliński, A. Zucchiatti
Sept. 17, 2012 nucl-ex, physics.ins-det
The Advanced GAmma Tracking Array (AGATA) is a European project to develop and operate the next generation gamma-ray spectrometer. AGATA is based on the technique of gamma-ray energy tracking in electrically segmented high-purity germanium crystals. This technique requires the accurate determination of the energy, time and position of every interaction as a gamma ray deposits its energy within the detector volume. Reconstruction of the full interaction path results in a detector with very high efficiency and excellent spectral response. The realization of gamma-ray tracking and AGATA is a result of many technical advances. These include the development of encapsulated highly-segmented germanium detectors assembled in a triple cluster detector cryostat, an electronics system with fast digital sampling and a data acquisition system to process the data at a high rate. The crystals were fully characterized, and the measured responses were compared with detector-response simulations. This enabled pulse-shape analysis algorithms to be employed to extract energy, time and position. In addition, tracking algorithms for event reconstruction were developed. The first phase of AGATA is now complete and operational in its first physics campaign. In the future AGATA will be moved between laboratories in Europe and operated in a series of campaigns to take advantage of the different beams and facilities available to maximize its science output. The paper reviews all the achievements made in the AGATA project including all the necessary infrastructure to operate and support the spectrometer.
Response of AGATA Segmented HPGe Detectors to Gamma Rays up to 15.1 MeV (1209.1188)
F.C.L. Crespi, R. Avigo, F. Camera, S. Akkoyun, A. Atac, D. Bazzacco, M. Bellato, G. Benzoni, N. Blasi, D. Bortolato, S. Bottoni, A. Bracco, S. Brambilla, B. Bruyneel, S. Ceruti, M. Ciemala, S. Coelli, J. Eberth, C. Fanin, E. Farnea, A. Gadea, A. Giaz, A. Gottardo, H. Hess, M. Kmiecik, S. Leoni, A. Maj, D. Mengoni, C. Michelagnoli, B. Million, D. Montanari, L. Pellegri, F. Recchia, P. Reiter, S. Riboldi, C.A. Ur, V. Vandone, J.J. Valiente-Dobon, O. Wieland, A. Wiens, The AGATA Collaboration
The response of AGATA segmented HPGe detectors to gamma rays in the energy range 2-15 MeV was measured. The 15.1 MeV gamma rays were produced using the reaction d(11B,ng)12C at Ebeam = 19.1 MeV, while gamma rays between 2 and 9 MeV were produced using an Am-Be-Fe radioactive source. The energy resolution and linearity were studied, and the energy-to-pulse-height conversion was found to be linear within 0.05%. Experimental interaction multiplicity distributions are discussed and compared with the results of Geant4 simulations. It is shown that the application of gamma-ray tracking allows a suppression of background radiation following neutron capture by Ge nuclei. Finally, the Doppler correction for the 15.1 MeV gamma line, performed using the position information extracted with pulse-shape analysis, is discussed.
Neutron-skin thickness from the study of the anti-analog giant dipole resonance (1205.2325)
A. Krasznahorkay, L. Stuhl, M. Csatlós, A. Algora, J. Gulyás, J. Timár, N. Paar, D. Vretenar, K. Boretzky, M. Heil, Yu.A. Litvinov, D. Rossi, C. Scheidenberger, H. Simon, H. Weick, A. Bracco, S. Brambilla, N. Blasi, F. Camera, A. Giaz, B. Million, L. Pellegri, S. Riboldi, O. Wieland, S. Altstadt, M. Fonseca, J. Glorius, K. Göbel, T. Heftrich, A. Koloczek, S. Kräckmann, C. Langer, R. Plag, M. Pohl, G. Rastrepina, R. Reifarth, S. Schmidt, K. Sonnabend, M. Weigand, M.N. Harakeh, N. Kalantar-Nayestanaki, C. Rigollet, S. Bagchi, M.A. Najafi, T. Aumann, L. Atar, M. Heine, M. Holl, A. Movsesyan, P. Schrock, V. Volkov, F. Wamers, E. Fiori, B. Löher, J. Marganiec, D. Savran, H.T. Johansson, P. Diaz Fernández, U. Garg, D.L. Balabanski
The gamma-decay of the anti-analog of the giant dipole resonance (AGDR) has been measured to the isobaric analog state excited in the p($^{124}$Sn,n) reaction at a beam energy of 600 MeV/nucleon. The energy of the transition was also calculated with state-of-the-art self-consistent random-phase approximation (RPA) and turned out to be very sensitive to the neutron-skin thickness ($\Delta R_{pn}$). By comparing the theoretical results with the measured one, the $\Delta R_{pn}$ value for $^{124}$Sn was deduced to be $0.175 \pm 0.048$ fm, which agrees well with the previous results. The energy of the AGDR measured previously for $^{208}$Pb was also used to determine $\Delta R_{pn}$ for $^{208}$Pb. In this way a very precise neutron-skin thickness of $\Delta R_{pn} = 0.181 \pm 0.031$ fm has been obtained for $^{208}$Pb. The present method offers new possibilities for measuring the neutron-skin thicknesses of very exotic isotopes.
Search for the Jacobi shape transition in light nuclei (nucl-ex/0302004)
A. Maj, M. Kmiecik, M. Brekiesz, J. Grebosz, W. Meczynski, J. Styczen, M. Zieblinski, K. Zuber, A. Bracco, F. Camera, G. Benzoni, B. Million, N. Blasi, S. Brambilla, S. Leoni, M. Pignanelli, O. Wieland, J. Nyberg, M. Kicinska-Habior, C.M. Petrache, J. Dudek, K. Pomorski
Feb. 7, 2003 nucl-ex
The gamma rays following the reaction 105 MeV 18O + 28Si have been measured using the EUROBALL IV, HECTOR and EUCLIDES arrays in order to investigate the predicted Jacobi shape transition. The high-energy gamma-ray spectrum from the GDR decay indicates the presence of large deformations in the hot 46Ti nucleus, in agreement with new theoretical calculations based on the Rotating Liquid Drop model.
Progress in Earth and Planetary Science
Research article | Open | Published: 07 February 2017
Seismic and inter-seismic ground surface deformations of the Murono mud volcano (central Japan): a laser scanning approach
Yuichi S. Hayakawa1,
Shigekazu Kusumoto2 &
Nobuhisa Matta3
Progress in Earth and Planetary Science, volume 4, Article number: 3 (2017)
A small mud volcano in Murono, Niigata Prefecture, north-central Japan, shows active ground surface displacements, not only when large earthquakes occur in the region but also during quiescent periods between earthquake events. The site recently underwent abrupt deformations due to strong regional earthquakes in 2004, 2007, 2011, and 2014, while gradual surface deformations were reported during quiescent periods between the earthquakes. To detect the spatial distribution of the changes in the mud volcano's ground surface elevation, we carried out multi-temporal terrestrial laser scanning. Point cloud datasets acquired at different times were registered by minimizing the distances between the closest points on stable ground features, which revealed centimeter- to decimeter-scale deformations around the domain of conspicuous uplift. The spatial distribution of the deformation triggered by the earthquakes, including both central uplift and peripheral subsidence, exhibits an elliptical pattern, on which open crack fractures, associated with the earthquake-triggered uplift, were formed. The displacement and stress fields for the earthquakes were modeled numerically, and anomalously high pressure and/or weakening of the surficial materials was expected for the formation of fractures in the local domain. In contrast, continuous uplift was observed during the inter-seismic quiescent periods, the domain of which seems to have changed after the strong earthquake in 2014. In the coming years, further measurements will be necessary to unravel the physical subsurface mechanics of the mud volcano.
Mud volcanoes are characteristic landforms that occur in both subaerial and submarine areas, formed by the cumulative extrusion of liquid mud (Brown 1990; Hovland et al. 1997; Kopf 2002). There are thousands of onshore and offshore mud volcanoes on the Earth (Milkov 2000; Dimitrov 2003), which vent liquid mud, water, gas, and (occasionally) oil, either periodically or continuously (Hovland et al. 1997). Mud volcanoes are recognized as a significant source of gas emissions, including carbon dioxide and methane, into the atmosphere. This emitted gas is supplied from deeply buried sediments (Dimitrov 2002, 2003; Milkov 2005) and may contribute to global climate change (Etiope 2005; Judd 2005). The chemical and isotopic components of the mud, which include water and oil, can be an indicator of natural resources such as petroleum (Kopf 2002; Milkov 2005) and can be strongly affected by near-surface (several to tens of kilometers in depth) geological structures, particularly in compressive tectonic zones (Martinelli and Dadomo 2005b; Feyzullayev et al. 2005; Mazzini 2009).
The activity of mud volcanoes is strongly related to tectonic conditions such as the seismicity of the region (Panahi 2000, 2005; Martinelli and Dadomo 2005a; Mellors et al. 2007; Mazzini 2009). In many cases, large remote earthquakes trigger mud volcano eruptions (Chigira and Tanaka 1997; Mellors et al. 2007; Mori and Kano 2009). In rare cases, these eruptions can, in turn, trigger weak local earthquakes (Panahi 2005). Furthermore, surface deformation of and gas emission from mud volcanoes can occur not only periodically through the occurrence of earthquakes but also continuously in quiescent regimes (Dimitrov 2002; Etiope 2005; Moerz et al. 2005; Kusumoto et al. 2014). Among other factors, the surface deformation of mud volcanoes is one of the most distinct and clear indicators of their activity (Hovland et al. 1997). Therefore, investigating the morphological characteristics and dynamics of surface deformation is crucial in revealing the detailed mechanisms and future activity of mud volcanoes. In particular, although earthquake-triggered surface deformation of mud volcanoes is often obvious (e.g., Manga et al. 2009; Onishi et al. 2009; Rudolph and Manga 2012), inter-seismic deformation of mud volcanoes is relatively less well recognized, necessitating further investigation (Kusumoto et al. 2014).
The deformation of active mud volcanoes has been investigated using various approaches on a wide variety of scales (e.g., Kopf 2002; Wang and Manga 2010). For large mud volcanoes (on the scale of hundreds of meters to kilometers), surface deformation can be detected by long-range remote sensing techniques, including satellite interferometric synthetic aperture radar (InSAR) (Mellors et al. 2007; Fukushima et al. 2009; Antonielli et al. 2014), aerial photographs (Shakirov et al. 2004; Istadi et al. 2009), and airborne laser scanning (ALS) (Doshida et al. 2007). Detection of surface deformation of small mud volcanoes (smaller than hundreds of meters) requires finer measurements that may include leveling, total station, and the use of high-precision global navigation satellite system (GNSS) (e.g., Onishi et al. 2009; Kusumoto et al. 2014, 2015). Even these approaches are often limited in their ability to reveal the spatial variation of surface deformation because of the low spatial density of their measurement points. As an efficient approach for exploring spatially variable deformation of landforms, terrestrial laser scanning (TLS) has been applied to obtain high-definition topographic data by using dense point clouds on the ground surface (e.g., Heritage and Large 2009; Whitworth et al. 2006; Hayakawa and Oguchi 2016). TLS enables the detection of temporal changes in topography at millimeter to centimeter scales when the measurements are performed multiple times and the multi-temporal point clouds are accurately registered to each other (Lane et al. 2003; Teza et al. 2007; Olsen et al. 2009; Milan et al. 2011; DeLong et al. 2015). Thus, this approach is potentially advantageous in the detection of spatially variable surface deformation of small mud volcanoes, although such application of TLS on mud volcanoes has been limited so far.
Performing a preliminary analysis on TLS data, Hayakawa et al. (2016) measured temporal changes in the ground surface of a small mud volcano in Murono, central Japan, within an accuracy of centimeters. This study expands this analysis to a longer time period, including two major earthquakes that affected the study region, aiming to give a primitive discussion on the links between the surficial changes and subsurface fluid dynamics of the mud volcano, with and without the impact of earthquake events.
Methods/Experimental
The Murono mud volcano is located in Tokamachi City, Niigata Prefecture in north-central Japan (Fig. 1). The monthly mean air temperature ranges from −0.2 °C in January to 24.9 °C in August, and the mean annual precipitation is 2496.7 mm, about one third of which is supplied as snow in winter (Japan Meteorological Agency 2016). Many fold structures in the NE-SW direction are present in the area, and the study site is located in the vicinity of an unnamed anticline limb between the Gimyo anticline and the Naradate syncline (Noda 1962, Takeuchi et al. 2000) (Fig. 1b). The substrate rock in the study site is massive black mudstone of the Early Pliocene Sugawa Formation, which widely covers both the unnamed and Gimyo anticlines (Sm in Fig. 1b). Younger rock formations (Late Pliocene to Early Pleistocene) appear toward the Naradate syncline southeast (Fig. 1b), while older rocks appear in the farther southeast areas, toward another major anticline (the Matsunoyama anticline, outside the map boundary of Fig. 1b). Because of such geological structures, there is abundant production of petroleum and natural gas, as well as hot springs in the area. The activity of the Murono mud volcano can be a key to unraveling the subsurface structure and mud flow dynamics in the area. As a result, various studies have been carried out involving this mud volcano using geodetic, geophysical, and geochemical approaches (e.g., Onishi et al. 2009; Shinya and Tanaka 2009; Suzuki et al. 2009; Etiope et al. 2011). While convex-up relief is commonly observed in large mud volcanoes that are hundreds of meters in length and tens of meters in height (Higgins and Saunders 1974; Chigira and Tanaka 1997), the small Murono mud volcano has a relatively flat surface, with a very small mound having a height of only decimeters.
Study area: Murono mud volcano. a Overview. b Regional view on a background of a 1:50,000 geological map (after Takeuchi et al. 2000) with topographic hillshade. Black solid and dashed lines indicate the anticline and syncline structures, respectively. Sm massive black mudstone of the Sugawa Formation (Early Pliocene), Tm, Ta, Ts massive siltstone, interbedded sandstone and siltstone, and thick-bedded sandstone, respectively, of the Tamugigawa Formation (Late Pliocene), Hs interbedded sandy siltstone of the Higashigawa Formation (Late Pliocene), Uc marine sand with gravel of the Uonuma Formation (Late Pliocene to Early Pleistocene). c Aerial image of the study site. The present vent of the mud volcano is at the center, and this study's target zone is located in the eastern side of the area. Triangular arrows indicate locations and directions of photographs shown in Fig. 2
The altitude of the site is about 316 m above sea level (a.s.l.), and the main active area of the mud volcano is approximately 130 m × 180 m (Fig. 1c). The western side of the site is partially deformed, while the center of the site, which contains a vent, does not show obvious uplift (Kusumoto et al. 2014, 2015). In contrast, the eastern side has particularly large and frequent deformations. A portion of this side approximately 50 m × 60 m (Fig. 1c) was selected as the target zone of this study.
In the last decade, large earthquakes have hit the region frequently, and the mud volcano has been significantly deformed by the strong jolts. Such deformations of the site have been investigated by different methods of topographic measurements, including GNSS, leveling, and TLS surveys. At the time of the Niigata-ken Chuetsu-oki Earthquake (M w = 6.6, epicenter 44 km away from the site) in July 2007, the maximum acceleration at a monitoring station near the site (NIG021-Tokamachi, 21 km away) was recorded to be 3.02 cm/s2 (National Research Institute for Earth Science and Disaster Resilience 2016). Although there was the Niigata-ken Chuetsu Earthquake in October 2004 (M w = 6.7, epicenter 33 km away from the site) with the maximum acceleration of 17.5 cm/s2 at NIG021, the influence of this earthquake on the mud volcano is unknown.
Onishi et al. (2009) performed extensive GNSS surveys at the site with more than 4000 measurement points over the paved ground surface in June 2006 and September 2008. Although the direct influence of the 2007 earthquake is unknown due to the 1 year time lag between earthquake and the later measurement, it was assumed that the amount of vertical deformation of the mud volcano during this period was closely related to the 2007 earthquake. According to the widespread GNSS measurements, the cumulative vertical deformation of the ground surface in the period from 2006 to 2008 was spatially variable: the maximum uplift was observed to be approximately 400 mm near the central portion of the mud volcano (the northwestern half of the surveyed target zone; Fig. 1c), while the easternmost side of the mud volcano (the southeastern half of the surveyed target zone; Fig. 1c) showed less uplift (<100 mm) (Onishi et al. 2009). The area of high uplift is spatially correlated with a high attenuation zone for ground penetrating radar (Yokota et al. 2008), indicating the presence of very shallow mudstone layers decomposed into soft clay with high water content. On the other hand, the area with less uplift correlates well with a zone of high S-wave velocity in a very shallow (<20 m deep) area (Onishi et al. 2009), likely unaffected by the mud volcano activity.
The North Nagano Prefecture Earthquake (M w = 6.4, epicenter 16 km away, maximum acceleration of 3.08 cm/s2) in March 2011 and the Nagano-ken Kamishiro Fault Earthquake (M w = 6.7, epicenter 76 km away, maximum acceleration of 0.22 cm/s2) in November 2014 also caused remarkable vertical deformations of greater than 200 mm (revealed by laser scanning) and 46 mm (by leveling survey), respectively (Matta et al. 2012; Kusumoto et al. 2015; National Research Institute for Earth Science and Disaster Resilience 2016). Vertical deformation continues even in the inter-seismic period, with seasonal variations of about ±5 mm, as revealed by leveling surveys (Kusumoto et al. 2014). Although it is known that the land manager repaired the ground pavement after distinct cracks were identified subsequent to the earthquakes (Fig. 2), we suppose that artificial modifications had only limited effects on the surface elevation because their purpose was only to replace the fractured pavement, which would not change the surface elevation significantly. Indeed, the fractures formed in 2011 (Fig. 2a, c, e) had been almost completely repaired by 2013 (Fig. 2b, d, f), but the pavement surface was not truly flattened, and the uplifted deformations surrounding the paved surface remain.
Photographs of the site. a, c, e The open cracks on the surface pavement formed by the 2011 earthquake. b, d, f The repaired pavement. Photograph locations are shown in Fig. 1c
Data acquisition and analysis
Topographic measurements by terrestrial laser scanning
Field surveys were carried out four times between June 2011 and November 2015. The North Nagano Prefecture Earthquake (March 2011) occurred before any of these surveys, and the Kamishiro Earthquake (November 2014) occurred just before the third one.
The terrestrial laser scanners used for the measurements include a Topcon GLS-1500 scanner for the first set of measurements in 2011 and Trimble TX5 scanners for the other measurements in 2013 to 2015. The GLS-1500 is a medium-range scanner, with a maximum measurable distance of 500 m at a scan rate of 30,000 points per second and a range accuracy of 4 mm within 150 m of the unit (Topcon 2010). The TX5 is a lightweight short-range scanner with phase-based laser ranging capability, with a maximum measurable distance of 120 m at a maximum scan rate of 900,000 points per second, and a range accuracy of 0.3–1.1 mm at 10–25 m from the unit (Trimble Navigation Limited 2012). Topcon ScanMaster v.2.1 and Trimble RealWorks v.8.1 software, bundled with the scanners, were used for the data processing of the point clouds. Both scanners have the ability to adjust their horizons using built-in inclination sensors.
Because laser emissions are directional, measurements taken from only one scan position may result in insufficient point cloud coverage with a large fraction of shadows in the data (Hayakawa and Oguchi 2016). In order to cover the target area correctly with a laser scanner, multiple scan positions should be set in the field at the time of each measurement. The point clouds from the different scan positions must then be registered and merged to obtain point cloud data coverage of the entire target zone (internal registration). For this, we apply a cloud-based registration method, which utilizes partial point clouds that represent key morphological features, including the ground surface of the target area, tree trunks, and poles in the surrounding area. In this cloud-based registration, the iterative closest point (ICP) algorithm is performed to minimize the distances between the nearest points on key features (Besl and McKay 1992; Bergevin et al. 1996). In this algorithm, a point cloud is iteratively transformed to fit another reference point cloud, based on overlapping areas with the same morphological features. The amount of error of the cloud-based registration depends primarily on the density of the points, and accuracy on the scale of several millimeters can be obtained if the point density has millimeter-scale spacing (e.g., Teza et al. 2007). The registered point clouds are all merged into one point cloud data point for each survey time.
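As an illustration of this cloud-based ICP step, the following sketch uses the open-source Open3D library rather than the vendor software named above; the file names, correspondence threshold, and iteration cap are placeholders, not values from this study.

```python
import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("scan_position_2.ply")  # cloud to be moved
target = o3d.io.read_point_cloud("scan_position_1.ply")  # reference cloud

threshold = 0.02  # max correspondence distance [m]; tune to point spacing
init = np.eye(4)  # initial guess, e.g., from GNSS-based georeferencing

# Point-to-point ICP: iteratively minimizes distances between nearest points
result = o3d.pipelines.registration.registration_icp(
    source, target, threshold, init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint(),
    o3d.pipelines.registration.ICPConvergenceCriteria(max_iteration=200))

source.transform(result.transformation)  # apply the rigid-body transform
print("RMSE of inlier correspondences [m]:", result.inlier_rmse)
```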
The georeference of the point clouds, i.e., the external registration of the merged point cloud onto geographical coordinates, was primarily performed using several target references whose geographical coordinates (in Universal Transverse Mercator (UTM) Zone 54N, JGD2000 datum) are obtained from GNSS measurements with the capability to post-process carrier-phase corrections. We used a Trimble GeoExplorer 6000XH as a receiver, and the positioning log data were corrected using data from nearby GEONET (GNSS network in Japan) base stations, provided by the Geospatial Authority of Japan. The fix solution provides centimeter-level accuracies for the GNSS positioning. However, GNSS-based georeferencing errors of point clouds often exceed centimeters, which restrict accurate comparison of the point clouds at different times. Therefore, we refined the alignment of the point clouds at different times by means of cloud-based ICP registration based on features that are thought to stay in the same location, and do not change shape, distributed around the target zone. For this process, the changing main target area of measurement is cropped out, and stable areas that do not include changes in surrounding areas, such as major tree trunks, electric poles, and buildings, are used for the alignment. The ICP procedure was repeatedly applied to refine the external registration to minimize the point-to-point distances between the clouds. The third measurement dataset in 2014 was set as the reference, as it had the most accurate GNSS-based georeferencing. Each of the other datasets was successively aligned to its adjacent dataset.
To examine the topographic changes in the ground surface at the target zone, the zone was first extracted from the original point cloud, while unnecessary points representing vegetation and other artificial objects such as buildings and poles were removed. Then, the digital elevation model (DEM), a two-dimensional raster dataset representing the topography projected on the UTM coordinates, was generated from the extracted point cloud. Geographic information system (GIS) software (ESRI ArcGIS 10.3) was used for the DEM data processing. The resolution of DEMs is determined based on the spatial density of their point cloud data. In the conversion from point cloud to DEM, a triangular irregular network (TIN) model is generated to perform linear interpolation for the randomly distributed points. Furthermore, areas far from the scanner position, that have insufficient point density, less overlapping scan coverage, or vegetation (mostly low-height plants, <40 cm) where the ground surface is hard to detect, were cropped out by setting a mask on the DEM and excluded from the following analyses. Three section lines were then set in the target zone to extract topographic profiles from the DEMs.
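A minimal sketch of this rasterization step is given below; SciPy's 'linear' griddata interpolates on a Delaunay triangulation of the points, which corresponds to the TIN-based linear interpolation described above. The input file name and grid step are illustrative.

```python
import numpy as np
from scipy.interpolate import griddata

xyz = np.loadtxt("ground_points.txt")   # hypothetical N x 3 file (UTM x, y, z)
x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]

step = 0.01                              # 10 mm grid, as for the 2013-2015 data
gx = np.arange(x.min(), x.max() + step, step)
gy = np.arange(y.min(), y.max() + step, step)
GX, GY = np.meshgrid(gx, gy)

# Linear interpolation on the Delaunay triangulation of the points (a TIN);
# cells outside the convex hull are returned as NaN and can be masked out.
dem = griddata((x, y), z, (GX, GY), method="linear")
```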
Topographic data comparisons
A 2 m resolution DEM, obtained by ALS and used as the initial condition of the comparative analysis with TLS data, was also used for topographic data comparisons. The ALS measurements were performed in July 2004 by Kokusai-Kogyo Co., and a filtered digital terrain model (DTM) was derived, showing the ground surface in the region after removing ground objects such as vegetation and buildings (Fig. 1b).
Differences in DEMs were then computed for each period between consecutive survey times. The four periods are defined as period I, 2004 to 2011; period II, 2011 to 2013; period III, 2013 to 2014; and period IV, 2014 to 2015.
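For concreteness, the DEM-of-difference computation for one period can be sketched as follows (file and array names are hypothetical; both DEMs must share the same grid and mask, and the ±20 mm level of detection is the one used later in the text):

```python
import numpy as np

dem_2013 = np.load("dem_2013.npy")   # hypothetical rasters on a common grid
dem_2014 = np.load("dem_2014.npy")

dod = dem_2014 - dem_2013            # DEM of difference [m], period III
lod = 0.020                          # level of detection [m] (+/-20 mm)
significant = np.abs(dod) > lod      # cells with detectable change

mean_change = np.nanmean(dod)                # net mean elevation change
p01, p99 = np.nanpercentile(dod, [1, 99])    # proxies for min/max (cf. Table 2)
print(mean_change, p01, p99)
```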
Crack mapping and numerical modeling of stress field
Since the target zone exhibited apparent elliptical bulging of the ground surface with distinct open cracks in the paved surface at the time of the June 2011 measurement, the open cracks (fractures) were traced using the DEM generated for that time. Hillshade image and local variation of elevation calculated from the DEM (3 × 3 cell statistics) were supportively used to highlight the crack features to be manually extracted. The general orientation of the crack lines was then summarized. Cracks with orientations similar to those in 2011 were also visually observed at the time of measurement in December 2014, just after the Kamishiro Earthquake, although they could not be mapped because they were not distinctly open enough to be identified as a surficial shape in the point cloud.
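The 3 × 3 local-variation layer can be approximated, for example, as a focal elevation range; the sketch below is not the authors' exact implementation, and the input file is hypothetical.

```python
import numpy as np
from scipy import ndimage

dem = np.load("dem_2011.npy")  # hypothetical 50 mm DEM of the 2011 survey

# Focal elevation range over each 3x3 neighborhood; sharp vertical steps at
# open cracks stand out as high values and can be traced manually in GIS.
local_range = (ndimage.maximum_filter(dem, size=3)
               - ndimage.minimum_filter(dem, size=3))
```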
We performed numerical modeling of the vertical displacement and stress fields for crack formation associated with the elliptical uplift. The vertical displacement field of a uniformly loaded elliptical plate with a clamped edge is given as follows (e.g., Timoshenko and Woinowsky-Krieger 1959):
$$ {u}_z={w}_0{\left(1-\frac{x^2}{a^2}-\frac{y^2}{b^2}\right)}^2, $$
where a and b are the semi-major and semi-minor axes, respectively, and
$$ {w}_0=\frac{p}{D}{\left(\frac{24}{a^4}+\frac{16}{a^2{b}^2}+\frac{24}{b^4}\right)}^{-1}. $$
In Eq. (2), p is a load acting uniformly on the plate and D is the flexural rigidity of the plate defined by the following equation:
$$ D=\frac{E{ h}^3}{12\left(1-{v}^2\right)}. $$
Here, h, E, and v are the thickness, Young's modulus, and Poisson's ratio of the plate, respectively.
Since the horizontal displacement components of the vertical displacement are $u_x = -z\,(\partial u_z/\partial x)$ and $u_y = -z\,(\partial u_z/\partial y)$ (e.g., Timoshenko and Woinowsky-Krieger 1959; Nakahara et al. 2001), the horizontal displacement fields at the surface ($z = h/2$) are given as follows:
$$ {u}_x=\frac{2{w}_0}{a^2} h x\left(1-\frac{x^2}{a^2}-\frac{y^2}{b^2}\right), $$
$$ {u}_y=\frac{2{w}_0}{b^2} h y\left(1-\frac{x^2}{a^2}-\frac{y^2}{b^2}\right). $$
Stress fields ($\sigma_x$, $\sigma_y$, $\sigma_z$, $\tau_{xy}$, $\tau_{yz}$, $\tau_{zx}$) occurring on the plate surface, caused by bending due to the load p, are given as
$$ {\sigma}_x=\frac{3 p}{l{ h}^2}\left[\frac{1}{a^2}+\frac{v}{b^2}-\frac{1}{a^2}\left(\frac{3}{a^2}+\frac{v}{b^2}\right){x}^2-\frac{1}{b^2}\left(\frac{1}{a^2}+\frac{3 v}{b^2}\right){y}^2\right], $$
$$ {\sigma}_y=\frac{3 p}{l{ h}^2}\left[\frac{v}{a^2}+\frac{1}{b^2}-\frac{1}{a^2}\left(\frac{3 v}{a^2}+\frac{1}{b^2}\right){x}^2-\frac{1}{b^2}\left(\frac{v}{a^2}+\frac{3}{b^2}\right){y}^2\right], $$
$$ {\tau}_{xy}=\frac{-6\left(1- v\right)}{a^2{b}^2{h}^2 l} p x y, $$
$$ {\sigma}_z={\tau}_{yz}={\tau}_{z x}=0, $$
and l is defined by
$$ l=\frac{3}{a^4}+\frac{2}{a^2{b}^2}+\frac{3}{b^4}. $$
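The model of Eqs. (1)-(3), (6), and (10) is straightforward to evaluate numerically; the sketch below uses placeholder parameter values (not values fitted in this study) to compute the vertical displacement and the bending stress on a grid.

```python
import numpy as np

# Placeholder parameters (illustrative only, not fitted values from the paper)
a, b = 15.0, 10.0      # semi-major / semi-minor axes [m]
h = 2.0                # plate thickness [m]
E, v = 1.0e8, 0.3      # Young's modulus [Pa], Poisson's ratio
p = 5.0e3              # uniform load [Pa]

D = E * h**3 / (12.0 * (1.0 - v**2))                          # Eq. (3)
w0 = (p / D) / (24.0/a**4 + 16.0/(a**2 * b**2) + 24.0/b**4)   # Eq. (2)
l = 3.0/a**4 + 2.0/(a**2 * b**2) + 3.0/b**4                   # Eq. (10)

xx = np.linspace(-a, a, 201)
yy = np.linspace(-b, b, 201)
X, Y = np.meshgrid(xx, yy)
r2 = 1.0 - X**2/a**2 - Y**2/b**2        # elliptical coordinate factor

uz = np.where(r2 > 0.0, w0 * r2**2, 0.0)                      # Eq. (1)
sx = 3.0*p/(l*h**2) * (1.0/a**2 + v/b**2
                       - (3.0/a**2 + v/b**2) * X**2/a**2
                       - (1.0/a**2 + 3.0*v/b**2) * Y**2/b**2)  # Eq. (6)

print("w0 (max uplift) [m]:", w0)
print("sigma_x at plate center [Pa]:", sx[100, 100])
```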
Summary of topographic data
Based on the land cover type (paved or vegetation) and the availability of data as shown by the overlapping TLS point clouds, the target area for the analysis (2188 m2) was defined on the paved ground surface (Fig. 3a). Figure 4 shows an oblique view of the point clouds, and Table 1 summarizes the properties of the four point clouds for each TLS measurement. The density of the point clouds in the area of interest ranges from 583.9 to 35,814.1 points per square meter, which is equivalent to an average point spacing of 5.3–41.4 mm. Consequently, the DEM resolution achievable from these point clouds is 50 mm for the 2011 data and 10 mm for 2013, 2014, and 2015.
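For reference, the quoted average point spacing d follows from the point density ρ as (a worked check, not part of the original text):

$$ d \approx \frac{1}{\sqrt{\rho}}, \qquad \frac{1}{\sqrt{583.9\ \mathrm{m}^{-2}}} \approx 41.4\ \mathrm{mm}, \qquad \frac{1}{\sqrt{35{,}814.1\ \mathrm{m}^{-2}}} \approx 5.3\ \mathrm{mm}. $$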
Detailed view of the target zone in the Murono mud volcano. a Areas showing number of overlaps of the TLS measurements. The target zone is shown by a yellow polygon. b Location of the three profile sections (Fig. 7) in the target zone
Point clouds obtained by TLS in a 2011, b 2013, c 2014, and d 2015. The point clouds are shown in RGB colors, although the 2015 point cloud is in monochrome because color capturing was skipped in the field as there was insufficient sunlight after sunset
Table 1 Properties of TLS-derived point clouds for each measurement
The 2011 point cloud is relatively sparse compared to the others because of the limitations of the device used (medium-range GLS-1500 scanner). Although the point clouds from 2013 to 2015 were all obtained using the same TX5 short-range scanner, the point densities vary due to differences in the scanning settings and environmental conditions. Particularly, the relatively sparse densities for 2014 and 2015 are due to wet conditions on the ground surface from rain (Fig. 4c, d).
The accuracy of the internal registration ranges from 2.9 to 9.8 mm, which is sufficiently small relative to the average point spacing of the clouds. As noted before, the third measurement dataset in 2014 was set as the reference, and each dataset was successively aligned to the adjacent dataset. The external registration errors from the ICP algorithm were 6.2–19.8 mm for this dataset. These are comparable to the previous accuracy assessment for TLS measurement at this site reported by Hayakawa et al. (2016). Changes in the land surface exceeding these values (>20 mm) are discussed in the following sections.
Temporal changes in surface elevation
Differences in the elevations of the DEMs, rasterized from the point clouds, were computed for each period. The centimeter-scale vertical changes in the ground surface appear to be spatially variable (Fig. 5). Figure 6 shows the histograms of surface elevation changes, displaying their aerial percentage for each period (I–IV), and Table 2 shows the basic statistics (mean, standard deviation, 1st and 99th percentiles) and the mean rate of elevation change. Here, in order to avoid erroneous outliers, the 1st and 99th percentiles serve as proxies for the minimum and maximum values. Figure 7 illustrates the topographic profiles of the target area at the different times. The initial ground surface in 2004 appears to be relatively flat (Fig. 7), and changes in elevation around the dome-like uplift (Fig. 5) seem to accumulate, forming a symmetric pattern of profiles throughout the measurement periods (Fig. 7). On the other hand, the southeastern area is characterized by a continuous decrease in ground surface elevation since 2004 (Figs. 5 and 7c). More details of the temporal changes in the ground surface elevation for each period are described below.
Differences in DEMs for each period, showing temporal changes in ground surface elevation. Areas in blue indicate uplift, while areas in yellow to red show subsidence. a Period I (July 2004 to June 2011, 83 months). b Period II (June 2011 to December 2013, 29 months). c Period III (December 2013 to December 2014, 12 months). d Period IV (December 2014 to November 2015, 12 months)
Distribution of elevation differences within the target zone for each period
Table 2 Amount and rate of changes in ground surface elevation for each period
Topographic profiles along the sections in the target zone. Locations of the three section lines (A–A', B–B', and C–C') are shown in Fig. 3b. The areas outside the target zone are masked with gray in (c)
In period I, positive changes in elevation (uplift) are apparent around the center of the target zone where there was a concentration of distinct open cracks forming an elliptical pattern in 2011 (P in Fig. 5a). On the other hand, negative changes (subsidence) were also found in the surroundings (Fig. 5a). The most distinct, wide-ranging changes in both the positive and negative directions (Fig. 6) are likely due to both the longest elapsed time between measurements (approximately 7 years) and the distinct changes caused by the large earthquakes, with accelerations of 17.5 cm/s2 for the 2004 Chuetsu event and about 3.0 cm/s2 for the 2007 Chuetsu-oki and 2011 North Nagano events (Fig. 5a, Table 2). Over the entire period, uplift seems to have been more dominant than subsidence, resulting in a net elevation change of 44.1 mm with a maximum uplift of 397.5 mm (Table 2). Onishi et al. (2009) detected a maximum surface uplift of about 400 mm in the conspicuous uplift area between 2006 and 2008. Based on the map provided by Onishi et al. (2009), the spatially averaged uplift of this period (2006–2008) in the target zone is visually estimated to be around 200 ± 50 mm, which is considerably higher than the mean uplift of period I (44.1 mm) observed in this study (Table 2). Even if the subsided area is excluded, the mean value for only the uplifted area in period I is calculated to be 133 mm, which is still smaller than the assumed mean uplift during 2006–2008. This indicates that considerable subsidence could have occurred in the conspicuous uplift area during the inter-seismic period of 2008–2011. Slight post-earthquake subsidence was actually observed in the area after the 2011 earthquake as described later (period II, Fig. 7b), and such subsidence could have been dominant after the 2007 earthquake. Although the details are unknown, extensive repair work might also explain the ground surface reset after 2008. In any case, supposing that the ground surface elevation after 2008 decreased to the level of the mean uplift in 2006–2008 (~200 mm) due to either natural subsidence or artificial modification, the uplift directly attributable to the 2011 earthquake is assumed to be about 200 mm. This is the maximum value of the potential uplift by the 2011 earthquake. Since separating the effects of the 2007 and 2011 earthquakes precisely is difficult, we assume the potential uplift by the 2011 earthquake to be 0–200 mm.
On the other hand, the large area of subsidence seen in period I (partially reaching <−200 mm; Figs. 5a and 6, Table 2) was not fully recognized in 2006–2008 by Onishi et al. (2009). They only provided a map showing a limited area of subsidence in the peripheral area, although the exact locations of subsidence were not mentioned and are difficult to identify in their map. The subsidence could have occurred following the 2007 earthquake, as noted above. However, as in the case of period III described later, the subsidence could also be co-seismic. Our data for period I have insufficient temporal resolution to identify the exact timing and amount of such a co-seismic uplift and subsidence in 2007 and 2011, but the possibility that both uplift and subsidence occur at such a local scale as a result of earthquakes is worth further assessment with respect to the co-seismic activity of the mud volcano.
Although the time span of period II (30 months) is longer than those of periods III and IV (12 months each), the unchanged area is the largest in period II (white areas in Fig. 5b and the highest peak in Fig. 6), likely because of the lack of distinct earthquakes. The mean elevation change was slightly positive (4.7 mm; Table 2), however. Although this amount of uplift is below the change detection limit (20 mm), the spatial variation in the uplift indicates that partial uplift around the previous cracks (P in Fig. 5b) contributed to the net positive change. Note that the large maximum value of uplift in period II (59.7 mm, Table 2) could be due to the difference between the deep crack bottoms in 2011 and the repaired surface in 2013; the amount of natural uplift could be a few centimeters (Fig. 7a). In addition, the eastern side shows some negative changes (Q in Fig. 5b), which is also apparent in sections B–B′ and C–C′ (Fig. 7b, c). Although the area where subsidence exceeded the detectable level (20 mm) is limited, this demonstrates that post-earthquake uplift and subsidence in period II was spatially variable.
In period III, the elliptical pattern of uplift in the central portion is obvious (P in Fig. 5c), while subsidence also appears on the eastern side (R in Fig. 5c). The contrast between uplift and subsidence is also clear in profile section C–C′ (Fig. 7c). Compared with periods II and IV, the histogram of elevation change in period III has a relatively wide distribution (Fig. 6), but the mean change is nearly zero (Table 2). This suggests that the 2014 earthquake likely affected the spatial pattern, including both uplift and subsidence. The maximum uplift (30 mm) in this period roughly corresponds to that derived from a leveling survey (46 mm, Kusumoto et al. 2015) but is less than that observed in period I. This may be attributed to the relatively low acceleration experienced in the area (0.22 cm/s²) during the 2014 earthquake (Table 2).
Unlike the earlier periods, period IV shows less change in the central portion, with no detectable differences (±20 mm), while it shows positive surface changes (uplift) on the eastern side (S in Fig. 5d). Though not exceeding the significant level of ±20 mm, slight subsidence is also observed on the southwestern side of profile section A–A′ (Fig. 7a). Thus, the spatial pattern of uplift and subsidence in this period is quite different from the others. Furthermore, the histogram of elevation change in period IV shows a biased positive trend (Fig. 6), and the mean uplift is large (11.6 mm; Table 2). By contrast, detectable subsidence areas are almost absent in period IV (Fig. 5d). The mean uplift rate of period IV (11.7 mm/year over 12 months) is considerably higher than that of period II (1.9 mm/year over 30 months). Both are assumed to be gradual since there were no distinct earthquakes in either period.
Formation of surface fractures by earthquakes
In the earthquake-affected periods I and III, the areas of uplift show elliptical spatial patterns (Fig. 5a, c). The cumulative changes in surface elevation after the first TLS measurement, from 2011 to 2015, also reflect this elliptical pattern, whose major axis is oriented W36°N (Fig. 8). Figure 8 also shows the crack lines mapped on the paved ground surface in June 2011, whose average orientation of W37.5°N is almost the same as that of the elliptical uplift. This may reflect the underground dynamics of the fluid mud, likely driven by pressure variability triggered by the earthquakes. Here, we discuss the fracture patterns and pressure estimates related to the earthquakes. As noted, in these earthquake-affected periods, there were not only areas of uplift but also areas of subsidence, particularly on the southeastern side of the target zone. Although Onishi et al. (2009) argued that the southeastern area is unaffected by mud volcano activity because no significant surficial change occurred through the 2007 earthquake, the subsidence found in this study may suggest that the mud volcano affects the surrounding area extensively, inducing subsurface lateral migration of fluid mud. However, the target zone of this study does not seem to cover the entire peripheral area that may experience subsidence. It does include the area of conspicuous uplift, which exhibits the clearest features, comprising the elliptical uplift pattern and surficial fractures. Therefore, we simply focus on the pressure mechanisms of uplift and fracturing.
Cumulative surface elevation changes throughout periods II–IV based on TLS-derived data. Traces of the open cracks observed in 2011 are shown as solid black lines. Dashed line indicates the elliptical uplift zone
Although Kusumoto et al. (2015) pointed out that the uplift area was elliptical, with major and minor axes of about 80 and 40 m, respectively, the conspicuous, essential uplift area is narrower. Previous studies reported the conspicuous uplift areas in the same location, with a common size of about 40 m (major axis) by 30 m (minor axis). To focus on the mechanisms of uplift and fracture formation observed in this study, we therefore modeled the uplift area using semi-axes a = 20 m and b = 15 m in Eqs. (1)–(10).
In the study area, an elastic surface layer estimated to be about 1 m thick was identified using the surface wave method, which estimates shear-wave velocity distributions by surface wave inversion (Onishi et al. 2009). Because the layer is at the surface and its elastic constants have not been measured, we assumed a low Young's modulus of 1 GPa and a typical Poisson's ratio of 0.25 for the layer (e.g., Bell 2000).
We first estimated the magnitude of the overpressure that could have formed the uplift of up to 200 mm potentially caused by the 2011 North Nagano Prefecture Earthquake. To assess the maximum uplift, we set x = y = 0 in Eq. (1) and, combining Eqs. (1), (2), and (3), obtained the equation giving the maximum uplift,
$$ u_{z\_\max}=\frac{12\left(1-v^{2}\right)p}{Eh^{3}}\left(\frac{24}{a^{4}}+\frac{16}{a^{2}b^{2}}+\frac{24}{b^{4}}\right)^{-1}. $$
We rewrite this equation as
$$ p=\frac{Eh^{3}u_{z\_\max}}{12\left(1-v^{2}\right)}\left(\frac{24}{a^{4}}+\frac{16}{a^{2}b^{2}}+\frac{24}{b^{4}}\right), $$
and by taking a = 20 m, b = 15 m, v = 0.25, h = 1.0 m, E = 1 GPa, and $u_{z\_\max}$ = 200 mm, we obtained p = 14.26 kPa as the overpressure that gives an uplift of 200 mm.
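As a numerical check, the following short Python sketch (our own verification, not code from the original analysis) evaluates the overpressure expression above with the stated parameter values; it reproduces the quoted 14.26 kPa.

```python
import math

# Parameters as stated in the text for the clamped elliptical plate model
E = 1.0e9          # Young's modulus (Pa), assumed 1 GPa
h = 1.0            # surface-layer (plate) thickness (m)
v = 0.25           # Poisson's ratio
a, b = 20.0, 15.0  # semi-axes of the elliptical uplift area (m)
u_max = 0.200      # maximum uplift (m)

bracket = 24.0 / a**4 + 16.0 / (a**2 * b**2) + 24.0 / b**4
p = E * h**3 * u_max / (12.0 * (1.0 - v**2)) * bracket
print(f"overpressure p = {p / 1e3:.2f} kPa")  # ~14.26 kPa, matching the text
```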
In Fig. 9a, we show the vertical displacement field caused by the overpressure (p = 14.26 kPa). The calculated vertical displacement field turns out to be consistent with the observed vertical displacement field.
Modeled vertical displacement and stress fields. Vertical displacement and principal stress fields estimated for a uniformly loaded elliptical plate model with a clamped edge. a Vertical displacement field caused by an overpressure of 14.26 kPa (displacement in meters). The modeled plate is 1 m thick, with a Young's modulus of 1 GPa and a Poisson's ratio of 0.25. b Principal stress field caused by an overpressure of 14.26 kPa. Blue and red ticks indicate the maximum and minimum principal stress axes, respectively. Stresses are in megapascals
Figure 9b shows the distribution of the principal stress axes around the central part of the uplift. The maximum principal stress axes, σ1, are distributed along the major axis of the elliptical uplift area, and the minimum principal stress axes, σ3, are perpendicular to σ1, so they are distributed along the minor axis of the elliptical uplift area. This indicates that if the minimum principal stress exceeded the tensile strength of the medium, open fractures (cracks) would appear at the surface, with strike directions corresponding to the direction of the maximum principal stress axis, i.e., the major axis of the elliptical uplift area.
Fractures observed at the surface are distributed along the major axis of the elliptical uplift area (Fig. 8). This distribution pattern is consistent with the direction of the maximum principal stress axis shown in Fig. 9b. In addition, the observed fractures are basically open fractures, including some strike-slip fracturing, and the in situ tensile strength of most rocks is known to be between 0.5 and 6 MPa (e.g., Haimson and Rummel 1982; Amadei and Stephansson 1997; Schultz 1997; Gudmundsson 2011). The tensile strength of asphalt mixtures is generally around 4 MPa below 0 °C but less than 1 MPa above 10 °C (Yoshida et al. 2001). Since the maximum calculated value of σ3 was about 1.5 MPa (Fig. 9b), an uplift of 200 mm can form fractures in the paved asphalt surface, particularly at higher summer temperatures. From the observed data and simulated results, we conclude that an elliptical uplift potentially reaching 200 mm can be explained by an overpressure change as high as 14.26 kPa. Furthermore, we conclude that fractures formed where the minimum principal stress (up to ~1.5 MPa) exceeded the tensile strength at the surface of this elliptical area, and that they propagated in the direction of the maximum principal stress axes. Conversely, assuming that the tensile strength of the paved asphalt surface is as low as 1 MPa, fractures could have formed with an uplift of 133 mm or more. Although the exact amount of uplift caused by the 2011 earthquake is unknown, the calculation above indicates that the potential uplift at this time, given the extensive formation of fractures, was approximately 100–200 mm.
Although Onishi et al. (2009) do not report the occurrence of fractures, fractures would have formed and been observable at the surface, because the uplift, which exceeded 400 mm, would have produced a minimum principal stress of more than 3 MPa. On the other hand, since the uplift of 46 mm caused by the 2014 Kamishiro Fault Earthquake (Kusumoto et al. 2015) is estimated to have produced a minimum principal stress of only 0.35 MPa, that uplift would not have formed open fractures if the pavement's tensile strength at the site were 0.5 MPa or higher. However, many open fractures were actually observed at the surface, and their distribution pattern was similar to that shown in Fig. 8. Although the tensile strength of asphalt can drop to 0.2–0.4 MPa at temperatures around 20 °C (Yoshida et al. 2001), the ground surface temperature at the site on the autumn night of the earthquake (22:08 on November 22) could not have been that high. From the formation of open fractures under such low tensile stress, we infer a decrease in the tensile strength at this site, including in the subsurface rocks, caused by repeated tensile fracturing, since conspicuous earthquake-induced uplift and fracturing have always occurred in the same area, at least until the 2014 earthquake (Fig. 5).
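Because the plate model is linear, the peak value of σ3 scales in proportion to the maximum uplift. The following illustrative Python snippet (our interpretation of the numbers above, not code from the study) calibrates this proportionality on the modeled case and reproduces the stress values quoted for the other uplift amounts.

```python
# Linear plate model: peak tensile stress sigma_3 scales with maximum uplift.
# Calibration point from the text: sigma_3 ~ 1.5 MPa at 200 mm of uplift.
sigma_per_mm = 1.5 / 200.0  # MPa per mm of maximum uplift

for uplift_mm in (400, 200, 133, 46):
    sigma3 = sigma_per_mm * uplift_mm
    print(f"{uplift_mm:3d} mm uplift -> sigma_3 ~ {sigma3:.3f} MPa")
# 400 mm -> ~3 MPa (2006-2008 case), 133 mm -> ~1 MPa,
# 46 mm -> ~0.35 MPa (2014 earthquake case), as quoted in the text
```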
Seismic and inter-seismic surface deformations
The spatial pattern of the mud volcano's vertical displacement was found to be different for the earthquake-affected periods (I and III) and the inter-seismic periods (II and IV). The elliptical pattern of the uplift and the subsidence distribution, as well as the formation of surface fractures, characterizes the earthquake-affected periods (Fig. 5a, c), whereas the gradual changes in the inter-seismic periods are spatially variable but result in the dominance of uplift (Table 2).
Although the ground surface tends to rise gradually during the inter-seismic periods, the patterns of surface uplift differ between period II and period IV. In period II, the uplift pattern follows the cracks formed by the 2011 North Nagano Prefecture Earthquake, i.e., the largest uplifts are observed along the crack lines (Figs. 5b and 7a). Similarly, the center of uplift was concentrated in the northwestern area in periods I–III. In contrast, a widespread uplift pattern in period IV was found on the eastern side of the target area (Figs. 5d and 7b), while fewer changes were observed in the northwestern area where the conspicuous uplift had previously been observed (Figs. 5d and 7a). This shift in the location of uplift may indicate a change in local subsurface fluid dynamics after the 2014 Nagano-ken Kamishiro Fault Earthquake. The earthquake could have triggered a change in the local pressure field within the domain of the mud volcano.
A large-scale change in the center of activity of a mud volcano may be related to dynamics constrained by the lithological and tectonic structures at depths of hundreds of meters to kilometers (e.g., Kopf 2002; Planke et al. 2003; Istadi et al. 2009; Shinya and Tanaka 2009). Using the controlled source audio-frequency magneto-telluric (CSAMT) method, low-resistivity areas indicating mud chambers, located several hundred meters to kilometers below the surface, have been found around the Murono mud volcano (Suzuki et al. 2009). However, since the observed surface deformations of the Murono mud volcano are small, the slight shift of several meters in surface activity observed here is more likely related to shallower subsurface structure within tens of meters of the surface. In fact, very shallow low-velocity layers (1 to 5 m deep, or deeper than 13 m) were observed by Onishi et al. (2009).
Although Kusumoto et al. (2014) suggested that overpressure changes in fluid mud flow were a direct source of ground surface deformation during the quiescent phase, the details of the subsurface fluid dynamics remain to be clarified by geophysical measurements, and the relationship between fluid mud flow and surface deformation has not yet been directly revealed. As shown in this paper, subtle changes in the surface topography indicate changes in subsurface fluid activity, providing a guideline for further geophysical analyses. Also, to obtain a clearer picture of the shift of the central portion of uplift of the mud volcano, repeated and more widespread topographic measurements are necessary to monitor the activity in the coming years.
We performed TLS measurements on the small mud volcano at Murono, central Japan, revealing the spatial variations of its vertical displacements at the centimeter scale. The detection of such spatially variable small-scale changes including both central uplift and peripheral subsidence, as well as the mapping of small topographic features including open cracks, enabled us to discuss the detailed changes in both seismic and inter-seismic deformations of the ground surface of the mud volcano. We also quantified the magnitude of the pressure field induced by earthquakes by modeling the pressure required to produce the cracks observed in the conspicuous, elliptical uplift area. The cracks suggested the presence of a local pop-up of subsurface fluid induced by the earthquakes, as well as the weakening of surface materials by repeated uplift. The uplift pattern in the quiescent period was found to be similar during periods I–III but changed after the 2014 Nagano-ken Kamishiro Fault Earthquake, suggesting changes in subsurface fluid dynamics after that earthquake. Further topographic measurements, as well as other geophysical data, are expected to provide a better understanding of the local-scale subsurface mechanisms of the mud volcano. In particular, although the modeled uplift and fracture formation were limited to a small conspicuous area (20 m × 15 m), more widespread modeling efforts that include the surrounding subsidence area will further clarify the mechanisms of the mud volcano activity, for which expanding the target area of measurement would be necessary. Increased frequency of topographic measurements, particularly after a strong earthquake, would also be helpful in revealing detailed temporal changes in the mud volcano.
a.s.l.:
Above sea level
ALS:
Airborne laser scanning
CSAMT:
Controlled source audio-frequency magneto-telluric
DEM:
Digital elevation model
DTM:
Digital terrain model
GIS:
Geographic information system
GNSS:
Global navigation satellite system
ICP:
Iterative closest point
InSAR:
Interferometric synthetic aperture radar
TIN:
Triangular irregular network
TLS:
Terrestrial laser scanning
UTM:
Universal Transverse Mercator
Amadei B, Stephansson O (1997) Rock stress and its measurement. Chapman & Hall, London
Antonielli B, Monserrat O, Bonini M, Righini G, Sani F, Luzi G, Feyzullayev A, Aliyev C (2014) Pre-eruptive ground deformation of Azerbaijan mud volcanoes detected through satellite radar interferometry (DInSAR). Tectonophysics 637:163–177. doi:10.1016/j.tecto.2014.10.005
Bell FG (2000) Engineering properties of rocks, 4th edn. Blackwell, Oxford
Bergevin R, Soucy M, Qagnon H, Laurendeau D (1996) Towards a general multi-view registration technique. IEEE Trans Pattern Anal Mach Intell 18:540–547. doi:10.1109/34.494643
Besl PJ, McKay ND (1992) A method for registration of 3-D shapes. IEEE Trans Pattern Anal Mach Intell 14:239–256. doi:10.1109/34.121791
Brown KM (1990) The nature and hydrogeologic significance of mud diapirs and diatremes for accretionary systems. J Geophys Res 95:8969–8982. doi:10.1029/JB095iB06p08969
Chigira M, Tanaka K (1997) Structural features and the history of mud volcanoes in southern Hokkaido, northern Japan. J Geol Soc Japan 103:781–791. doi:10.5575/geosoc.103.781
DeLong SB, Lienkaemper JJ, Pickering AJ, Avdievitch NN (2015) Rates and patterns of surface deformation from laser scanning following the South Napa earthquake, California. Geosphere 11:1–17. doi:10.1130/GES01189.1
Dimitrov LI (2002) Mud volcanoes—the most important pathway for degassing deeply buried sediments. Earth Sci Rev 59:49–76. doi:10.1016/S0012-8252(02)00069-7
Dimitrov LI (2003) Mud volcanoes—a significant source of atmospheric methane. Geo-Mar Lett 23:155–161. doi:10.1007/s00367-003-0140-3
Doshida S, Chigira M, Nakamura T (2007) Morphological analysis of shallow landslides on mud volcanoes by using airborne laser scanner. Trans Jpn Geomorphol Union 28:23–39
Etiope G (2005) Methane emission from mud volcanoes. In: Martinelli G, Panahi B (eds) Mud volcanoes, Geodyn. Seism. Springer-Verlag, Berlin/Heidelberg, pp 141–146
Etiope G, Nakada R, Tanaka K, Yoshida N (2011) Gas seepage from Tokamachi mud volcanoes, onshore Niigata Basin (Japan): origin, post-genetic alterations and CH4-CO2 fluxes. Appl Geochem 26:348–359. doi:10.1016/j.apgeochem.2010.12.008
Feyzullayev AA, Kadirov FA, Aliyev CS (2005) Mud volcano model resulting from geophysical and geochemical research. In: Martinelli G, Panahi B (eds) Mud volcanoes, Geodyn. Seism. Springer-Verlag, Berlin/Heidelberg, pp 251–262
Fukushima Y, Mori J, Hashimoto M, Kano Y (2009) Subsidence associated with the LUSI mud eruption, East Java, investigated by SAR interferometry. Mar Pet Geol 26:1740–1750. doi:10.1016/j.marpetgeo.2009.02.001
Gudmundsson A (2011) Rock fractures in geological processes. Cambridge Univ. Press, Cambridge
Haimson BC, Rummel F (1982) Hydrofracturing stress measurements in the Iceland Research Drilling Project drill hole at Reydarfjordur, Iceland. J Geophys Res Solid Earth 87:6631–6649. doi:10.1029/JB087iB08p06631
Hayakawa YS, Oguchi T (2016) Applications of terrestrial laser scanning in geomorphology (in Japanese with English abstract). J Geogr (Chigaku Zasshi) 125:299–324. doi:10.5026/jgeography.125.299
Hayakawa YS, Kusumoto S, Matta N (2016) Application of terrestrial laser scanning for detection of ground surface deformation in small mud volcano (Murono, Japan). Earth Planets Space 68:114. doi:10.1186/s40623-016-0495-0
Heritage GL, Large ARG (2009) Laser scanning for the environmental sciences. Wiley-Blackwell, Oxford
Higgins G, Saunders J (1974) Mud volcanoes: their nature and origin. Verh Naturforsch Ges Basel 84:101–152
Hovland M, Hill A, Stokes D (1997) The structure and geomorphology of the Dashgil mud volcano, Azerbaijan. Geomorphology 21:1–15. doi:10.1016/S0169-555X(97)00034-2
Istadi BP, Pramono GH, Sumintadireja P, Alam S (2009) Modeling study of growth and potential geohazard for LUSI mud volcano: East Java, Indonesia. Mar Pet Geol 26:1724–1739. doi:10.1016/j.marpetgeo.2009.03.006
Japan Meteorological Agency (2016) Monthly climate: AMEDAS 54676-Tokamachi. http://www.data.jma.go.jp/obd/stats/etrn/view/nml_amd_ym.php?prec_no=54&block_no=0537&year=&month=&day=&view=. Accessed 30 Sep 2016.
Judd A (2005) Gas emissions from mud volcanoes. In: Martinelli G, Panahi B (eds) Mud volcanoes, Geodyn. Seism. Springer-Verlag, Berlin/Heidelberg, pp 147–157
Kopf AJ (2002) Significance of mud volcanism. Rev Geophys 40:1005. doi:10.1029/2000RG000093
Kusumoto S, Sudo K, Kawabata M, Uda T, Fukuda Y (2014) Vertical movement during the quiescent phase of the Murono mud volcano, Niigata, Japan. Earth Planets Space 66:14. doi:10.1186/1880-5981-66-14
Kusumoto S, Hamamoto T, Fukuda Y, Takahashi A (2015) Vertical movements of the Murono mud volcano in Japan caused by the Naganoken Kamishiro Fault Earthquake in 2014. Earth Planets Space 67:53. doi:10.1186/s40623-015-0223-1
Lane SN, Westaway RM, Hicks DM (2003) Estimation of erosion and deposition volumes in a large gravel-bed, braided river using synoptic remote sensing. Earth Surf Process Landf 28:249–271. doi:10.1002/esp.483
Manga M, Brumm M, Rudolph ML (2009) Earthquake triggering of mud volcanoes. Mar Pet Geol 26:1785–1798. doi:10.1016/j.marpetgeo.2009.01.019
Martinelli G, Dadomo A (2005a) Mud volcano monitoring and seismic events. In: Martinelli G, Panahi B (eds) Mud volcanoes, Geodyn. Seism. Springer-Verlag, Berlin/Heidelberg, pp 187–199
Martinelli G, Dadomo A (2005b) Geochemical model of mud volcanoes from reviewed worldwide data. In: Martinelli G, Panahi B (eds) Mud Volcanoes, Geodyn. Seism. Springer-Verlag, Berlin/Heidelberg, pp 211–220
Matta N, Hayakawa YS, Hori K, Kuo Y-T, Sugito N (2012) Uplift of the Matsudai mud volcano associated with the earthquake near the border of Nagano and Niigata Prefectures, measured by 3D laser scanner (in Japanese). Trans Jpn Geomorphol Union 33:94–95
Mazzini A (2009) Mud volcanism: processes and implications. Mar Pet Geol 26:1677–1680. doi:10.1016/j.marpetgeo.2009.05.003
Mellors R, Kilb D, Aliyev A, Gasanov A, Yetirmishli G (2007) Correlations between earthquakes and large mud volcano eruptions. J Geophys Res Solid Earth 112:B04304. doi:10.1029/2006JB004489
Milan DJ, Heritage GL, Large ARG, Fuller IC (2011) Filtering spatial error from DEMs: implications for morphological change estimation. Geomorphology 125:160–171. doi:10.1016/j.geomorph.2010.09.012
Milkov AV (2000) Worldwide distribution of submarine mud volcanoes and associated gas hydrates. Mar Geol 167:29–42. doi:10.1016/S0025-3227(00)00022-0
Milkov AV (2005) Global distribution of mud volcanoes and their significance in petroleum exploration as a source of methane in the atmosphere and hydrosphere and as a geohazard. In: Martinelli G, Panahi B (eds) Mud Volcanoes, Geodyn. Seism. Springer-Verlag, Berlin/Heidelberg, pp 29–34
Moerz T, Fekete N, Kopf A, Brueckmann W, Kreiter S, Huehnerbach V, Masson D, Hepp DA, Schmidt M, Kutterolf S, Sahling H, Abegg F, Spiess V, Suess E, Ranero C (2005) Styles and productivity of mud diapirism along the Middle American margin. In: Martinelli G, Panahi B (eds) Mud Volcanoes, Geodyn. Seism. Springer-Verlag, Berlin/Heidelberg, pp 49–76
Mori J, Kano Y (2009) Is the 2006 Yogyakarta earthquake related to the triggering of the Sidoarjo, Indonesia mud volcano? Chigaku Zasshi (J Geogr) 118:492–498. doi:10.5026/jgeography.118.492
Nakahara I, Shibuya T, Tsuchida E, Kasano H, Tsuji T, Inoue H (2001) Handbook of elasticity (in Japanese). Asakura Shoten, Tokyo
National Research Institute for Earth Science and Disaster Resilience (2016) Strong-motion seismograph networks (K-NET, KiK-net). http://www.kyoshin.bosai.go.jp/. Accessed 30 Aug 2016.
Noda H (1962) The geology and paleontology of the environs of Matsunoyama, Niigata Prefecture, with reference to the so-called black shale (in Japanese with English abstract). Sci Rep Res Inst Tohoku Univ 2:199–236
Olsen MJ, Johnstone E, Driscoll N, Ashford SA, Kuester F (2009) Terrestrial laser scanning of extended cliff sections in dynamic environments: parameter analysis. J Surv Eng 135:161–169
Onishi K, Sanada Y, Yokota T, Tokunaga T, Mogi K, Safani J, O'Neill A (2009) Investigation of subsurface s-wave velocity structures beneath a mud volcano in the Matsudai-Murono District by surface wave method (in Japanese with English abstract). Chigaku Zasshi (J Geogr) 118:390–407. doi:10.5026/jgeography.118.390
Panahi B (2000) On spatial and time correlation of earthquakes and mud volcano eruptions and seismic regime of Azerbaijan-Caspian Sea region. Geophys News Azerbaijan 1:26–29
Panahi BM (2005) Mud volcanism, geodynamics and seismicity of Azerbaijan and the Caspian Sea region. In: Martinelli G, Panahi B (eds) Mud Volcanoes, Geodyn. Seism. Springer-Verlag, Berlin/Heidelberg, pp 89–104
Planke S, Svensen H, Hovland M, Banks DA, Jamtveit B (2003) Mud and fluid migration in active mud volcanoes in Azerbaijan. Geo-Mar Lett 23:258–268. doi:10.1007/s00367-003-0152-z
Rudolph ML, Manga M (2012) Frequency dependence of mud volcano response to earthquakes. Geophys Res Lett 39:1–5. doi:10.1029/2012GL052383
Schultz RA (1997) Displacement-length scaling for terrestrial and Martian faults: implications for Valles Marineris and shallow planetary grabens. J Geophys Res Solid Earth 102(B6):12009–12015. doi:10.1029/97JB00751
Shakirov R, Obzhirov A, Suess E, Salyuk A, Biebow N (2004) Mud volcanoes and gas vents in the Okhotsk Sea area. Geo-Mar Lett 24:140–149. doi:10.1007/s00367-004-0177-y
Shinya T, Tanaka K (2009) Origin of materials erupting from mud volcano in Tokamachi City, Niigata Prefecture, Central Japan (in Japanese with English abstract). Chigaku Zasshi (J Geogr) 118:340–349. doi:10.5026/jgeography.118.340
Suzuki K, Tokuyasu S, Tanaka K (2009) Underground structure of mud volcanoes in Tokamachi City, Niigata Prefecture determined by electromagnetic exploration, and geographical and geological surveys (in Japanese with English abstract). Chigaku Zasshi (J Geogr) 118:373–389. doi:10.5026/jgeography.118.373
Takeuchi K, Yoshikawa T, Kamai T (2000) Geology of the Matsunoyama Onsen district with geological map at 1:50,000 (in Japanese with English abstract). Geological Survey of Japan, Tsukuba
Teza G, Galgaro A, Zaltron N, Genevois R (2007) Terrestrial laser scanner to detect landslide displacement fields: a new approach. Int J Remote Sens 28:3425–3446. doi:10.1080/01431160601024234
Timoshenko SP, Woinowsky-Krieger S (1959) Theory of plates and shells, 2nd edn. McGraw-Hill, New York
Topcon (2010) User manual: laser scanner GLS-1500 series (in Japanese), Topcon, Tokyo
Trimble Navigation Limited (2012) Datasheet Trimble TX5 scanner, Trimble Inc., Sunnyvale
Wang C-Y, Manga M (2010) Mud volcanoes. Earthquakes Water - Lect. Notes Earth Sci. 114. Springer, Berlin/Heidelberg, pp 33–43
Whitworth MZ, Giles D, Anderson I (2006) Terrestrial laser scanning for applied geoscience studies in the urban environment. In: Tenth IAEG Congr (ed) The Geological Society of London. The Geological Society of London, Nottingham, pp 1–9
Yokota T, Onishi K, Sanada Y (2008) Geophysical explorations of shallow structure of mud volcano using a ground penetrating radar system in Matsudai, Tokamachi City, Niigata Japan (in Japanese). Chishitsu News 644:25–32
Yoshida T, Moriyoshi A, Takano S (2001) Fracture properties of asphalt mixture in tension and application (in Japanese with English abstract). Sekiyu Gakkaishi 44:312–316
We thank the editor and the two anonymous reviewers for their critical but constructive comments, which greatly improved the manuscript. We would like to thank Editage (https://www.editage.jp) and FORTE (https://www.forte-science.co.jp/) for English language editing. This work is supported by JSPS KAKENHI Grant Number JP25702014 and is a part of the joint research of CSIS, The University of Tokyo.
The data used in this paper are available for research purposes at JoRAS (Joint Research Assist System): https://joras.csis.u-tokyo.ac.jp/dataset/show/id/15003201000.
YSH performed the TLS measurements in the field and analyzed the data, as well as drafting this manuscript. SK carried out numerical modeling, drafted the manuscript, and obtained permission for the field survey. NM planned the initial field measurements and discussed the results we obtained. All authors read and approved the final manuscript.
Center for Spatial Information Science, The University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa, Chiba, 277-8568, Japan
Yuichi S. Hayakawa
Graduate School of Science and Engineering for Research, University of Toyama, 3190 Gofuku, Toyama City, Toyama, 930-8555, Japan
Shigekazu Kusumoto
Graduate School of Education, Okayama University, 3-1-1 Tsushimanaka, Okayama City, Okayama, 700-8530, Japan
Nobuhisa Matta
Correspondence to Yuichi S. Hayakawa.
Mud volcano
Human geosciences
High-definition topographic and geophysical data in geosciences | CommonCrawl |
B. Parent • AE25225 Intermediate Thermodynamics
Intermediate Thermodynamics Assignment 5 — Second Law of Thermo
$\xi$ is a parameter related to your student ID, with $\xi_1$ corresponding to the last digit, $\xi_2$ to the last two digits, $\xi_3$ to the last three digits, etc. For instance, if your ID is 199225962, then $\xi_1=2$, $\xi_2=62$, $\xi_3=962$, $\xi_4=5962$, etc. Keep a copy of the assignment — the assignment will not be handed back to you. You must be capable of remembering the solutions you hand in.
For a system composed of $k=1..n$ bodies each with a temperature $T_k$, and after defining the entropy as $$ {\rm d} S_k\equiv \frac{\delta Q}{T_k} - \frac{\delta W}{T_k}$$ prove that, when $\delta W =0$, the entropy of the system always increases: $$ \sum_{k=1}^n {\rm d} S_k \ge 0$$ Note: this question is awarded double the points given to the other questions.
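As a sanity check on the statement to be proved (an illustration only, not a substitute for the proof), the following Python snippet evaluates the two-body case: when heat $\delta Q$ flows from a hotter body to a colder one with $\delta W = 0$, the total entropy change $\delta Q\,(1/T_{\rm cold} - 1/T_{\rm hot})$ is always non-negative.

```python
import random

# Two-body illustration: dS_total = dQ/T_cold - dQ/T_hot >= 0 whenever heat
# flows from the hotter body to the colder one with no work exchanged.
for _ in range(5):
    T_hot = random.uniform(400.0, 1000.0)   # K
    T_cold = random.uniform(200.0, T_hot)   # K, the cooler body
    dQ = 1.0                                # J, flowing hot -> cold
    dS_total = dQ / T_cold - dQ / T_hot     # dS_cold + dS_hot
    print(f"T_hot={T_hot:6.1f} K, T_cold={T_cold:6.1f} K, dS={dS_total:+.6f} J/K")
    assert dS_total >= 0.0
```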
Start from the $T{\rm d}s$ equations and derive the following expressions for the entropy change of a perfect gas: $$ s_2-s_1=c_v \cdot \ln \left( \frac{T_2}{T_1}\right) - R \cdot \ln \left( \frac{\rho_2}{\rho_1}\right)$$ $$ s_2-s_1=c_p \cdot \ln \left( \frac{T_2}{T_1}\right) - R \cdot \ln \left( \frac{P_2}{P_1}\right)$$ with $c_v$ and $c_p$ the specific heats at constant volume and pressure and $R$ the gas constant.
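A minimal Python sketch of the two derived expressions is given below; it also checks that they agree for an arbitrary pair of states via the ideal-gas law $\rho = P/(RT)$ and $c_p = c_v + R$. The numerical values used in the check are illustrative, not taken from any question.

```python
import math

def ds_from_T_rho(T1, T2, rho1, rho2, cv, R):
    """Entropy change from the (T, rho) form: cv*ln(T2/T1) - R*ln(rho2/rho1)."""
    return cv * math.log(T2 / T1) - R * math.log(rho2 / rho1)

def ds_from_T_P(T1, T2, P1, P2, cp, R):
    """Entropy change from the (T, P) form: cp*ln(T2/T1) - R*ln(P2/P1)."""
    return cp * math.log(T2 / T1) - R * math.log(P2 / P1)

# Consistency check for air (illustrative values; kJ/(kg*K), K, kPa)
cv, R = 0.7165, 0.287
cp = cv + R
T1, P1, T2, P2 = 300.0, 100.0, 450.0, 250.0
rho1, rho2 = P1 / (R * T1), P2 / (R * T2)
print(ds_from_T_rho(T1, T2, rho1, rho2, cv, R))  # both prints agree,
print(ds_from_T_P(T1, T2, P1, P2, cp, R))        # ~0.144 kJ/(kg*K)
```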
Using the expressions derived in the previous question, calculate the specific entropy change for the following changes of state. Consider one mole of N$_2$ at 1 bar and 300 K as the initial state.
(a) Isentropic compression to 10 bar.
(b) Constant volume heating to 2 bar.
(c) Constant pressure heating to 600 K.
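Worked numbers for the three cases are sketched below in Python. The assignment does not specify $c_v$ and $c_p$ for N$_2$, so typical textbook molar values are assumed here.

```python
import math

# Illustrative molar values for N2 (assumed, not given in the assignment)
R = 8.314            # J/(mol*K)
cv = 20.8            # J/(mol*K)
cp = cv + R          # ~29.1 J/(mol*K)
T1, P1 = 300.0, 1.0  # K, bar

ds_a = 0.0                        # (a) isentropic: ds = 0 by definition
ds_b = cv * math.log(2.0)         # (b) constant volume: T2/T1 = P2/P1 = 2
ds_c = cp * math.log(600.0 / T1)  # (c) constant pressure: T2/T1 = 2
print(ds_a, ds_b, ds_c)           # 0, ~14.4, ~20.2 J/(mol*K)
```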
(a) One kg of water at $300$ K is brought into contact with a heat reservoir at 600 K. When the water has reached 600 K, what is the entropy change of the water? Of the heat reservoir? Of the universe?
(b) If the water has been heated from 300 K to 600 K by first bringing it into contact with a reservoir at 400 K and then with a reservoir at 600 K, what would have been the entropy change of the universe?
(c) Explain how the water might be heated from 300 K to 600 K with almost no change of entropy of the universe.
Consider one kg of air ($R=0.287$ kJ/kgK, $c_p=1.0035$ kJ/kgK, $c_v=0.7165$ kJ/kgK) expanding from 2 bar and 600 K to 1 bar and $500$ K. Calculate the entropy change and verify it is the same for different paths of integration.
Use the following paths: constant volume followed by constant pressure (1-A-2), a reversible adiabat followed by constant pressure (1-B-2), and an isotherm followed by constant pressure (1-C-2).
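The following Python sketch evaluates the entropy change along each of the three paths by summing the segment contributions; all three agree with the direct formula (about 0.016 kJ/(kg·K)). The intermediate states are found from standard perfect-gas relations.

```python
import math

R, cp, cv = 0.287, 1.0035, 0.7165   # kJ/(kg*K), as given
T1, P1 = 600.0, 200.0               # state 1 (K, kPa)
T2, P2 = 500.0, 100.0               # state 2 (K, kPa)

# Direct formula
ds_direct = cp * math.log(T2 / T1) - R * math.log(P2 / P1)

# Path 1-A-2: constant volume to P2 (so T_A = T1*P2/P1), then constant pressure
T_A = T1 * P2 / P1
ds_A = cv * math.log(T_A / T1) + cp * math.log(T2 / T_A)

# Path 1-B-2: reversible adiabat to P2 (ds = 0, T_B = T1*(P2/P1)**(R/cp)),
# then constant pressure
T_B = T1 * (P2 / P1) ** (R / cp)
ds_B = 0.0 + cp * math.log(T2 / T_B)

# Path 1-C-2: isotherm to P2 (ds = -R*ln(P2/P1)), then constant pressure
ds_C = -R * math.log(P2 / P1) + cp * math.log(T2 / T1)

print(ds_direct, ds_A, ds_B, ds_C)  # all ~0.0160 kJ/(kg*K)
```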
A rigid nonconducting tank with a volume of $120$ cubic meters is divided into two equal parts by a thin membrane. Hydrogen gas is contained on one side of the membrane at $3.5$ bar and 80$^\circ$C. The other side is a perfect vacuum. The membrane is suddenly ruptured, and the H$_2$ gas fills the tank following the polytropic process $P V^{1.2}={\rm constant}$. What is the entropy change of the hydrogen? Consider hydrogen to be a perfect gas ($R=4.124$ kJ/kgK, $c_p=14.307$ kJ/kgK, $c_v=10.183$ kJ/kgK).
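A hedged Python sketch of this problem follows: the polytropic relation fixes the final temperature, and the perfect-gas entropy expression from question 2 then gives the result, consistent with answer 6 below.

```python
import math

# H2 properties as given in the question
R, cv = 4.124, 10.183      # kJ/(kg*K)
P1, T1 = 350.0, 353.15     # kPa, K (3.5 bar, 80 C)
V1, V2 = 60.0, 120.0       # m^3 (half tank -> full tank)
n = 1.2                    # polytropic exponent

m = P1 * V1 / (R * T1)                # mass of H2, ~14.4 kg
T2 = T1 * (V1 / V2) ** (n - 1.0)      # polytropic final temperature, ~307 K
dS = m * (cv * math.log(T2 / T1) + R * math.log(V2 / V1))
print(f"dS = {dS:.2f} kJ/K")          # ~20.8 kJ/K, matching answer 6
```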
Consider three water jets entering a mixing chamber as follows:
The properties of the water jets entering the chamber correspond to: $$ \begin{array}{llll} \hline ~ & \rm Jet~1 & \rm Jet~2 & \rm Jet~3 \\ \hline \dot{m} & \rm 1~kg/s & \rm 2~kg/s & \rm 3~kg/s \\ T & \rm 300~K & \rm 310~K & \rm 330~K\\ P & \rm 1~atm & \rm 1~atm & \rm 1~atm \\ \hline \end{array} $$ The chamber is sufficiently long that the 3 water jets mix completely with each other. This results in the water exiting the chamber having uniform properties. Knowing that the mixing chamber loses heat to the environment at a rate of 200 kW, determine the following:
(a) The final temperature of the mixed water
(b) The rate of change in entropy of the water within the chamber in W/K (that is, find the difference between the entropy of the mixed water and the sum of the entropies of the incoming 3 water jets).
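A sketch of the mass, energy, and entropy balances in Python follows; the specific heat of water is not given in the problem, so c = 4.186 kJ/(kg·K) is assumed, which reproduces the listed outlet temperature and an entropy rate close to answer 7.

```python
import math

c = 4.186                        # kJ/(kg*K), assumed specific heat of water
mdot = [1.0, 2.0, 3.0]           # kg/s
T_in = [300.0, 310.0, 330.0]     # K
Q_loss = 200.0                   # kW, heat lost to the environment

# (a) energy balance: sum(mdot*c*T_in) - Q_loss = mdot_total*c*T_out
T_out = (sum(m * c * T for m, T in zip(mdot, T_in)) - Q_loss) / (sum(mdot) * c)

# (b) entropy rate of the water streams: outflow minus inflow
dS = sum(m * c * math.log(T_out / T) for m, T in zip(mdot, T_in))
print(f"T_out = {T_out:.1f} K, dS = {dS * 1e3:.0f} W/K")
# ~310.4 K; ~ -618 W/K with this c (answer 7 quotes -614.5 W/K)
```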
Consider a 1 m$^3$ tank in which air is contained in three different zones separated by membranes, as follows:
Initially, the air within the three zones has the following properties: $$ \begin{array}{llll} \hline ~ & \rm Zone~A & \rm Zone~B & \rm Zone~C \\ \hline P & \rm 1~bar & \rm 2~bar & \rm 3~bar \\ T & \rm 300~K & \rm 300~K & \rm 300~K\\ V & \rm 0.2~m^3 & \rm 0.5~m^3 & \rm 0.3~m^3 \\ \hline \end{array} $$ The membranes are suddenly ruptured, mixing occurs between the zones, and after a large amount of time the properties of the air become uniform throughout the tank. Assuming no heat transfer from the air to the tank walls, calculate:
(a) The final temperature and pressure of the mixed air
(b) The change in entropy of the air within the tank in J/K (that is, find the difference between the entropy of the mixed air and the sum of the entropies of the air within the 3 zones)
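A Python sketch of the balances follows. With equal initial temperatures and a rigid, adiabatic tank, internal-energy conservation fixes the final temperature at 300 K; the gas constant of air from question 5 is assumed here.

```python
import math

R = 0.287                      # kJ/(kg*K), air (assumed, as in question 5)
P = [100.0, 200.0, 300.0]      # kPa
V = [0.2, 0.5, 0.3]            # m^3
T = 300.0                      # K, common initial temperature

m = [p * v / (R * T) for p, v in zip(P, V)]       # zone masses
# (a) U conserved -> T_f = 300 K; ideal gas law over the whole tank:
P_f = sum(p * v for p, v in zip(P, V)) / sum(V)   # = 210 kPa = 2.1 bar
# (b) isothermal entropy change per zone: ds = -R*ln(P_f/P_i)
dS = sum(mi * (-R) * math.log(P_f / pi) for mi, pi in zip(m, P))
print(f"P_f = {P_f:.0f} kPa, dS = {dS * 1e3:.1f} J/K")  # 210 kPa, ~41.3 J/K
```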
6. 20.83 kJ/K.
7. 310.4 K, $-614.5$ W/K.
8. 2.1 bar, 300 K, 41.28 J/K.
Due on Monday April 22nd at 16:30. Do Questions #6, #7, and #8 only. | CommonCrawl |
As somebody used to say:
Does research. Smokes. Battles administration. Smokes. Wishes he could stop battling administration so that he could have more time to do research. Smokes some more.
The same. Except I do not smoke.
Last seen Apr 4 '18 at 17:08
Mathematics: 255.7k reputation, 26 gold, 238 silver, 487 bronze badges
MathOverflow: 5.4k reputation, 1 gold, 22 silver, 33 bronze badges
Cross Validated: 1.5k reputation, 13 silver, 21 bronze badges
French Language: 247 reputation, 1 silver, 7 bronze badges
Physics: 148 reputation, 6 bronze badges
837 Construct a function which is continuous in $[1,5]$ but not differentiable at $2, 3, 4$
203 How to define a bijection between $(0,1)$ and $(0,1]$?
177 Continued fraction fallacy: $1=2$
164 Compute $ \lim\limits_{n \to \infty }\sin \sin \dots\sin n$
142 Evaluating $\lim\limits_{n\to\infty} e^{-n} \sum\limits_{k=0}^{n} \frac{n^k}{k!}$
101 Expected time to roll all 1 through 6 on a die
79 Find closed form for $1, 2, 2, 3, 4, 4, 5, 6, 6, 7, 8, 8, 9, 10, 10, \ldots$
anglicismes
14 Combinatoricien vs combinatoriste Nov 9 '11 | CommonCrawl |
Proceedings of the Royal Society of Edinburgh Section A: Mathematics
Investigating the multiplicity and concentration behaviour of solutions for a quasi-linear Choquard equation via the penalization method
Part of: Elliptic equations and systems; Partial differential equations
Published online by Cambridge University Press: 08 October 2015
Claudianor O. Alves, Departamento de Matemática, Universidade Federal de Campina Grande, CEP 58429-900, Campina Grande, PB, Brazil ([email protected])
Minbo Yang, Department of Mathematics, Zhejiang Normal University, Jinhua 321004, People's Republic of China ([email protected])
We study the multiplicity and concentration behaviour of positive solutions for the quasi-linear Choquard equation
$$ -\varepsilon^{p}\Delta_{p}u + V(x)|u|^{p-2}u = \varepsilon^{\mu-N}\left(\frac{1}{|x|^{\mu}} * F(u)\right)f(u) \quad \text{in } \mathbb{R}^{N}, $$
where Δ_p is the p-Laplacian operator, 1 < p < N, V is a continuous real function on ℝ^N, 0 < μ < N, F(s) is the primitive function of f(s), ε is a positive parameter and * represents the convolution between two functions. The question of the existence of semiclassical solutions for the semilinear case p = 2 has recently been posed by Ambrosetti and Malchiodi. We suppose that the potential satisfies the condition introduced by del Pino and Felmer, i.e. V has a local minimum. We prove the existence, multiplicity and concentration of solutions for the equation by the penalization method and Lyusternik–Schnirelmann theory, and show novel results even for the semilinear case p = 2.
Choquard equation; non-local nonlinearities; concentration; Lyusternik–Schnirelmann theory; variational methods
MSC classification
Primary: 35J50 (Variational methods for elliptic systems); 35J60 (Nonlinear elliptic equations); 35A15 (Variational methods)
Proceedings of the Royal Society of Edinburgh Section A: Mathematics , Volume 146 , Issue 1 , February 2016 , pp. 23 - 58
Copyright © Royal Society of Edinburgh 2016
npj quantum information
Dual-species, multi-qubit logic primitives for Ca+/Sr+ trapped-ion crystals
C. D. Bruzewicz1, R. McConnell1, J. Stuart2, J. M. Sage1,2 & J. Chiaverini1
npj Quantum Information volume 5, Article number: 102 (2019)
Quantum information
We demonstrate key multi-qubit quantum-logic primitives in a dual-species trapped-ion system based on \({}^{40}\)Ca\({}^{+}\) and \({}^{88}\)Sr\({}^{+}\) ions, using two optical qubits with quantum-logic-control frequencies in the red to near-infrared range. With all ionization, cooling, and control wavelengths in a wavelength band similar for the two species and centered in the visible, and with a favorable mass ratio for sympathetic cooling, this pair is a promising candidate for scalable quantum information processing. Same-species and dual-species two-qubit gates, based on the Mølmer–Sørensen interaction and performed in a cryogenic surface-electrode trap, are characterized via the fidelity of generated entangled states; we achieve fidelities of 98.8(2)% and 97.5(2)% in Ca\({}^{+}\)–Ca\({}^{+}\) and Sr\({}^{+}\)–Sr\({}^{+}\) gates, respectively. For a similar Ca\({}^{+}\)–Sr\({}^{+}\) gate, we achieve a fidelity of 94.3(3)%, and carrying out a Sr\({}^{+}\)–Sr\({}^{+}\) gate performed with a Ca\({}^{+}\) sympathetic cooling ion in a Sr\({}^{+}\)–Ca\({}^{+}\)–Sr\({}^{+}\) crystal configuration, we achieve a fidelity of 95.7(3)%. These primitives form a set of trapped-ion capabilities for logic with sympathetic cooling and ancilla readout or state transfer for general quantum computing and communication applications.
Quantum computing requires ancilla qubits as crucial components of quantum algorithmic primitives, such as quantum phase estimation,1 gate teleportation,2 and syndrome extraction during quantum error correction.3 In addition, particular physical implementations of quantum information processing (QIP) may utilize additional physical qubits to aid in the preparation, transport, and readout of quantum information. Particularly for trapped-ion qubits, one of the most promising quantum-computing modalities,4,5,6,7 quantum-logic operations on chains containing both computational and ancilla ions are a critical component of practical QIP systems.
In such systems, computational ions, which house the qubits primarily used for quantum-logic operations, will likely be paired with ancilla ions of a different species such that control fields applied to the ancilla ions will not affect or decohere the quantum states of the computational ions.8,9 An alternate single-species strategy based on shelving a subset of the ion register in nearby levels outside of the qubit state subspace during certain quantum operations has been successfully demonstrated.10 Although the shelving technique added a tolerable number of additional operations in that work, it would likely require significant additional overhead when manipulating large qubit registers. In a system equipped with multiple atomic species, however, ancillas could be used for sympathetic cooling,9,11 for remote-entanglement generation using photons at fiber-friendly wavelengths,12 or for ancilla-qubit readout without decoherence of unmeasured qubits due to scattered fluorescence photons13—all without the need for shelving operation overhead. Although different isotopes of the same element can provide some isolation, the isotope shifts of the relevant transitions are not typically large enough to reach levels required for high-fidelity quantum operations, much less fault tolerance. Desired properties of ion species pairs used for QIP include a high-coherence controllable qubit in the computational ion, similar masses of the computational and ancilla ions to allow efficient energy transfer for sympathetic cooling,8,14 and favorable control frequencies in both species. Control should be favorable not only in regards to the absolute frequency but also to overlap of the frequency ranges required for the two species. Especially in light of the potential use of integrated technologies for control-light distribution in trapped-ion quantum processors,15,16,17,18 ion pairs with similar control wavelengths in the visible to near-infrared portion of the spectrum may be preferable to minimize coupling and propagation losses in the optical components,19 while also keeping the number of different material systems needed to work across the required wavelength range to a minimum.
Here we demonstrate a set of quantum-logic primitives, using the \({}^{40}\)Ca\({}^{+}\)/\({}^{88}\)Sr\({}^{+}\) two-species system, which form the basis for a possible QIP architecture. Each of these ions houses a long-lived, optical-frequency qubit that has proven to be a workhorse for demanding QIP experiments and demonstrations.20,21 Their mass ratio near 2 allows for efficient momentum transfer for sympathetic cooling. Furthermore, the wavelengths required for production, cooling, logic, and readout in each species fall in the optical and near-infrared range, and their respective ranges have a high degree of overlap, minimizing required additional considerations for optical materials, etc., in the case of dual-species operation. Moreover, using these two optical qubits, the frequencies of control fields required for quantum logic, where the highest intensities and phase stability are required, are at very favorable wavelengths (729 nm and 674 nm for Ca\({}^{+}\) and Sr\({}^{+}\), respectively) for laser and optical component technology. As an additional benefit, light emitted by broad transitions in these species, as would be used for remote-entanglement generation, is in the blue and infrared parts of the spectrum. Collection, distribution, and controlled interference of this light is considerably simpler than when manipulating light from similar transitions in the ultraviolet. Moreover, the light from these broad transitions in Ca\({}^{+}\) and Sr\({}^{+}\) is also sufficiently separated in frequency to not appreciably affect the coherence of nearby ions of the other species.22 Our demonstration of a suite of two-qubit quantum-logic primitives using two optical-frequency qubits helps establish this ion-species pair as a useful system for scalable-QIP explorations.
Prior work in dual-species trapped-ion quantum logic includes demonstrations of so-called quantum-logic spectroscopy23 to enable the operation of optical clocks using ions with inaccessible or inconvenient transitions; these typically focus on the Al\({}^{+}\) clock ion with another species similar in mass used to manipulate and read out its state.24 Entangling quantum gates have also been performed with the ion systems Be\({}^{+}\)/Mg\({}^{+}\),13 Be\({}^{+}\)/Ca\({}^{+}\),25 \({}^{40}\)Ca\({}^{+}\)/\({}^{43}{{\rm{Ca}}}^{+}\),26 and Ba\({}^{+}\)/Yb\({}^{+}\).27 Although these pairs will likely find particular application, they all have drawbacks, such as technologically challenging wavelengths, insufficient separation in internal-state energy splittings, or large masses, and so it is beneficial to explore alternative dual-species systems that may have complementary strengths.
The Ca\({}^{+}\)/Sr\({}^{+}\) system has been considered recently for applications in QIP, although few quantum-logic operations in the combined system have been reported. For instance, these species form the basis of several analyses of large-scale quantum computing platforms, both as sympathetic cooling ancillas6 and for photon-emitting intermediaries in optically linked architectures;28 a method has also been suggested to perform an inter-species gate in the Ca\({}^{+}\)/Sr\({}^{+}\) system using a single laser wavelength.26 Although considerable work demonstrating same-species, two-qubit operations in Ca\({}^{+}\)20,29,30,31,32 and, to a lesser extent, Sr\({}^{+}\)21,33 exists, their use together has not been investigated widely. Due to its potential utility, it is important to explore the implications of quantum operations based on this species pair.
One potential architecture for QIP with trapped ions consists of a two-dimensional array of trapping zones, interconnected via transport regions, in which two species of ions are held9,34 (see Fig. 1a). Each trapping zone holds a few-ion, one-dimensional ion crystal, in which multi-qubit operations can be performed. Ions are moved between trapping zones via the transport regions to bring ions from different array sites into the same crystal, such that a high degree of connectivity of multi-qubit operations can be maintained across the array, limited only by the complexity of the transport region network. Housing one to four ions in each site simplifies the vibrational-mode spectrum when compared with keeping all ions in one crystal, while potentially permitting simultaneous individual addressing of many ions throughout an array. Ion crystals composed of ions from separate array sites, transported and then joined together, may acquire vibrational-mode excitation that can limit gate fidelity. Hence, any array site where multi-qubit gate operations will be performed contains one or two ancilla ions used primarily for sympathetic cooling prior to gate operations, allowing preparation of select vibrational modes of the crystal without affecting the internal state of the computational ions.
Trap array architecture with ion-transport-based connectivity. a Subsection of array showing segmented electrodes that create both static potential wells at multiple array sites and dynamically variable potentials for ion transport. Different zones depict various crystal configurations of computational ions (red) and ancilla ions (blue). b A larger section of such an array configured for quantum computation with surface-code error correction encoding; in this case, two-qubit gates are performed between nearest-neighbor computational ions after transporting lone computational ions to the zones (alternating in a checkerboard pattern) where a computational ion is housed with an ancilla. One step in the error-correction cycle is depicted. The ancilla is used to prepare the shared motional state before gate operations and the ancilla could also be used for periodic syndrome readout without detrimental effect on unmeasured computational ions due to photon scattering
Such an architecture could be flexible in terms of application. The segmented trap structure can be tailored in terms of connectivity from nearest-neighbor, e.g., for surface-code quantum-error correction or quantum emulation of solid-state Hamiltonians, to fully connected, e.g., for quantum chemistry simulations requiring Jordan–Wigner transformations between qubit and orbital bases. The connectivity can also be reconfigured dynamically to suit the particular entanglement-generation requirements of a quantum algorithm as it proceeds. In all these cases, however, the composition of each array site can be essentially identical, able to maintain one to two of each of the computational ions and ancilla ions in a linear crystal. We focus here on demonstrations of key two-qubit primitives that enable operations within these individual sites, e.g., architectural components useful for multiple applications, using the Ca\({}^{+}\)/Sr\({}^{+}\) system.
Basic two-qubit logic between ions of the same species forms the foundation of any QIP application that requires entanglement generation or qubit interaction, and we therefore begin with Ca\({}^{+}\)-Ca\({}^{+}\) and Sr\({}^{+}\)-Sr\({}^{+}\) two-qubit gates, which set the baseline capability of this system. These gates lead naturally to an architectural primitive in which a gate is performed between two ions of a single species in the presence of a sympathetic coolant ancilla of a second species. This operation would be the primary multi-qubit gate in a case in which all array sites contain a single computational ion, while a subset of the sites also contain a co-located coolant ancilla. To perform logic operations between pairs of computational ions, a lone computational ion is transported from its home site to the home site of a computational ion which houses an ancilla. All three ions are then joined in a single crystal. The ancilla is used to remove any unwanted motional excitation accrued during ion movement and crystal merging (the opposite operation, crystal separation, will in general occur before the next gate involving one of the current computational ions, and sympathetic cooling via an ancilla subsequent to transport and merging will also remove the kinetic energy acquired during this process). The gate is then performed between the computational ions. As a particular example, the surface code35 can be implemented in such an architectural scheme in which half the sites, in a checkerboard pattern, of a square array contain sympathetic-cooling ancillas (see Fig. 1b). Here we demonstrate each of the above-mentioned same-species, multi-ion, quantum-logic architectural components.
Dual-species operations provide additional key capabilities for trapped-ion-based quantum information processors and therefore form other primitives of interest. For instance, in a larger-scale system employing quantum error correction, subsets of the qubits must be measured during the computation. Moreover, measurement-based quantum computing also requires projecting a subset of qubits while maintaining coherence of the unmeasured remainder. In both cases, inter-species transfer of state population to ancillas just prior to measurement can avoid decoherence in computational ions due to resonant light scattered from measured ions in close proximity.22 Another important example is repeat-until-success remote-entanglement generation for linking modules of a composite quantum information processor;27 in this case, ions desired for long-distance communication, either due to particular wavelengths or beneficial level structures, may be different from the computational species. Transfer of the quantum state from a computational ion to one qubit of a Bell-entangled ancilla pair can allow for independent choices of ion species in these roles. For instance, a different communication-ion species may be desirable to enhance overall processor speed without affecting the coherence of neighboring computational ions during repeated entanglement-generation attempts. In contrast to the shelving of same-species computational-ion qubits to protect them from incident resonant photons, using a second species permits execution of logic operations and entanglement generation in parallel.
Transfer to ancillas to avoid decoherence during measurement does not necessarily need to preserve qubit phase information, and hence techniques based on quantum-logic spectroscopy23,36 can be utilized to accomplish this readout scheme. A full phase-preserving state swap, e.g., a series of three CNOT gates based on a dual-species entangling gate, can also fulfill this task;13 for state transfer into part of a remote entangled pair as a prerequisite for additional computation, only full quantum-state transfer will suffice. We have previously performed quantum-logic-assisted readout of a Ca\({}^{+}\) ion using a Sr\({}^{+}\) ion,22 a technique that may be useful for resonant-light-scatter-free syndrome extraction in a close-packed array. This operation required a pair of \(\pi\)-pulses on the sideband transitions corresponding to a shared vibrational mode after it was cooled to the ground state. Here we demonstrate a more general dual-species Mølmer-Sørensen (MS) entangling gate between Ca\({}^{+}\) and Sr\({}^{+}\), a method that has an advantage beyond phase preservation in that ground-state cooling is generally not required for MS gates.29 It could therefore be used for both syndrome extraction and remote-entanglement-generation applications.
We confine and manipulate \({}^{40}\)Ca\({}^{+}\) and \({}^{88}\)Sr\({}^{+}\) ions in a linear surface-electrode trap consisting of a 2 \(\mu\)m-thick, patterned aluminum layer on a sapphire substrate, similar to traps used in previous work.37,38 Single-ion axial trap frequencies in the \(0.5\)–\(2\) MHz range are produced via application of potentials to segmented electrodes defined along the axial direction; radial trapping at frequencies near 5 MHz is produced through application of a radiofrequency (RF) potential near 50 MHz to a subset of the electrodes. Ions are held 50 \(\mu\)m from the trap-chip surface in a cryogenic, ultra-high-vacuum chamber in which the trap is maintained at a temperature below 6 K. The ions are loaded into the trap via photoionization from neutral atomic beams produced via acceleration from co-located Ca and Sr two-dimensional magneto-optical traps to the remotely located trap chip.22,39
Environmental magnetic-field fluctuations are suppressed using a pair of superconducting niobium rings40 located above and below the trap chip. The rings are centered radially on the ion location, with one 10 mm-square ring attached just beneath the trap's sapphire substrate and one 65 mm-diameter octagonal ring attached to the radiation shield ~5 mm from the trap surface; this design is shown schematically in Fig. 2a. Induced supercurrents in the rings compensate variation in the local magnetic field41 in the direction of an applied, axial quantizing field of ~\(5\times 10^{-4}\) T. In practice, the quantizing magnetic field is produced using the supercurrent induced in the rings. To reduce field noise, the current in coils used to inject this supercurrent is removed once the rings are in the superconducting state. We have measured suppression of slow magnetic-field fluctuations by 17–20 dB using this technique.
Magnetic-field shielding, laser beam paths, and ion-crystal motional modes. a Self-shielding, superconducting ring geometry for suppressing magnetic-field noise. Fluctuations in the magnetic flux threading the rings induce persistent supercurrents that (partially) cancel the changes in magnetic flux. b Top view of the beam paths in the ion-trapping chamber. Arrows are offset in the figure for clarity, but the beams are largely overlapped in the experimental apparatus. c Ion-crystal configurations and axial normal vibrational modes pertaining to two- and three-ion chains employed in this work. Arrows (not drawn to scale) indicate the relative amplitudes and directions of the normalized motional eigenvectors for the different ion crystals
Qubits are defined using the electronic states in each ion, with an optical-frequency separation between the \(\left|0\right\rangle \equiv |{(n-1)}^{2}{D}_{5/2},{m}_{J}=-5/2\rangle\) and \(\left|1\right\rangle \equiv |n{}^{2}{S}_{1/2},{m}_{J}=-1/2\rangle\) states, where \(n=\{4,5\}\) for {Ca\({}^{+}\), Sr\({}^{+}\)}. Light for qubit manipulation and multi-qubit operations is derived from separate systems each consisting of an external-cavity-stabilized diode laser, frequency locked to an ultra-low-expansion glass cavity. Transmitted light, filtered by the cavity, is used to injection-lock one or more slave diode lasers21 and the output is amplified via one or more tapered optical amplifiers. Light is passed through several acousto-optic modulators (AOMs) to shift the frequency and modulate the amplitude and phase (and frequency for multi-qubit operations) before being delivered to the ions through windows in the vacuum chamber. Beam paths are shown in Fig. 2b.
Experimental trials each begin with Doppler cooling of the ions using light at 397 nm (422 nm) in conjunction with repumping light at 866 nm (1092 nm) for Ca\({}^{+}\)(Sr\({}^{+}\); the same construction will be used throughout this paragraph). Light at 854 nm (1033 nm) is also applied to quench any residual population in \(\left|0\right\rangle\) through the \(n{}^{2}{P}_{3/2}\) level. Ions are Doppler cooled for ~1 ms, after which resolved sideband cooling is performed to bring the ions to the motional ground state for a subset of the axial vibrational modes of the ions in the crystal. For same-species, two-qubit gates with two ions in the crystal, sideband cooling pulses at 729 nm (674 nm) interspersed with quenching pulses at 854 nm (1033 nm) are used to bring the in-phase (IP) and out-of-phase (OOP) modes (see Fig. 2c) to average occupation below ~0.05. Sideband cooling for dual-species, two-qubit gates and single-species, three-ion, two-qubit gates will be described below in the respective sections. Optical pumping after sideband cooling serves as state preparation, bringing the ions to \(\left|1\right\rangle\) using a combination of 729 nm and 854 nm (674 nm and 1033 nm) light. After state preparation, bichromatically modulated light is used to perform two-qubit entangling operations via the MS technique.42
State detection is performed by applying the wavelengths used for Doppler cooling, but without the quench light, such that an ion in \(\left|1\right\rangle\) will fluoresce at 397 nm (422 nm), whereas an ion in \(\left|0\right\rangle\) will not. Fluorescence is collected using a high-numerical-aperture objective outside the vacuum chamber and directed to an electron multiplying charge-coupled device or photo-multiplier tube (PMT) for imaging or state detection, respectively. For single-species gates, ions are detected simultaneously and the detection time, typically a few milliseconds, is set such that experimental photon-number histograms corresponding to 0, 1, or 2 ions in the scattering \(\left|1\right\rangle\) state are sufficiently separated to allow discrimination between these cases with error probability below ~0.001. It is noteworthy that of the four possible two-qubit state outcomes, two are indistinguishable when measured using the non-imaging PMT; hence, only the sum of the probabilities for the states \(\left|01\right\rangle\) and \(\left|10\right\rangle\) is measured. This is not a limitation for determination of the fidelities of the created Bell states, as only the populations of the \(\left|00\right\rangle\) and \(\left|11\right\rangle\) states, and the parity of the two-qubit state measured in an auxiliary experiment performed on the Bell state, are needed.43 In the mixed-species gate, fluorescence is detected from each ion sequentially, but the fidelity is calculated in the same way as in the single-species gates.
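As an illustration of how such detection thresholds can be chosen, the following minimal sketch assumes Poissonian photon-count statistics with purely illustrative mean counts (these numbers are assumptions, not measured values) and selects the two integer thresholds that best separate the 0-, 1-, and 2-bright-ion histograms:

```python
import numpy as np
from scipy.stats import poisson

# Illustrative mean counts per detection window (assumed, not measured):
dark, bright = 1.0, 30.0                                 # per-ion mean PMT counts
means = {0: 2 * dark, 1: dark + bright, 2: 2 * bright}   # 0, 1, or 2 bright ions

counts = np.arange(300)
pmf = {k: poisson.pmf(counts, mu) for k, mu in means.items()}

def mean_error(t01, t12):
    """Average misclassification probability for thresholds t01 < t12,
    assuming equal priors on 0, 1, or 2 bright ions."""
    e0 = pmf[0][t01:].sum()                      # 0 bright read as >= 1 bright
    e1 = pmf[1][:t01].sum() + pmf[1][t12:].sum() # 1 bright read as 0 or 2 bright
    e2 = pmf[2][:t12].sum()                      # 2 bright read as <= 1 bright
    return (e0 + e1 + e2) / 3

t01, t12 = min(((a, b) for a in range(1, 60) for b in range(a + 1, 120)),
               key=lambda t: mean_error(*t))
print(f"thresholds: {t01}, {t12}; mean error ~ {mean_error(t01, t12):.1e}")
```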
Mølmer–Sørensen logic gates and considerations pertaining to ion crystal configuration and motional modes
The MS gates demonstrated here are enacted through optical dipole forces applied at frequencies near a particular shared motional mode of the trapped-ion crystal. When bringing about these forces via a bichromatic light field with frequency components detuned by \(\delta\) above and below the blue and red motional sidebands (mode frequency \({\omega }_{\beta }\)), the interaction Hamiltonian for two ions is44
$${H}_{{\rm{I}}}(t)=\hslash \Omega \left({e}^{-i({\omega }_{\beta }+\delta )t}+{e}^{i({\omega }_{\beta }+\delta )t}\right){e}^{i\eta (a{e}^{-i{\omega }_{\beta }t}+{a}^{\dagger }{e}^{i{\omega }_{\beta }t})}\left({\sigma }_{+}^{(1)}+{\sigma }_{+}^{(2)}\right)+{\rm{h.c.}}$$
Here \(a\) is the annihilation operator of the vibrational mode of interest, \(\Omega\) is the \(\left|1\right\rangle \to \left|0\right\rangle\) transition Rabi frequency, \({\sigma }_{+}^{(j)}\) is the raising operator for the electronic spin qubit of ion \(j\) defined via the Pauli spin operators as \({\sigma }_{+}=({\sigma }_{x}+i{\sigma }_{y})/2\), and \(\eta =k\ {z}_{{\rm{RMS}}}\) is the Lamb-Dicke parameter, which expresses the ratio of the vibrational ground-state wavefunction size \({z}_{{\rm{RMS}}}=\sqrt{\hslash /(2m{\omega }_{\beta })}\) to the length scale of the electromagnetic-field gradient, set by the mode-direction projection of the wavevector \(k\) of the light used to drive the transition. We are here assuming a single-species MS gate for simplicity; below we generalize the Lamb-Dicke parameter to more complex configurations. Making the rotating-wave approximation and working in the Lamb-Dicke limit (\(\eta \sqrt{\langle {(a+{a}^{\dagger })}^{2}\rangle }\ll 1\)), this Hamiltonian becomes
$${H}_{{\rm{I}}}(t)=-\hslash \eta \Omega (a{e}^{-i\delta t}+{a}^{\dagger }{e}^{i\delta t})\left({\sigma }_{y}^{(1)}+{\sigma }_{y}^{(2)}\right).$$
This interaction couples the motional mode to the joint spin state of the two ions, and by driving off-resonance with detuning \(\delta\), the vibrational mode acquires a geometric phase that depends on the joint spin state as it traverses a curved path in the phase space of the mode. At drive times equal to multiples of \({t}_{g}=2\pi /\delta\), the curved path becomes a closed loop and the ions' spin and motion are disentangled, while a subset of the two-ion spin states acquire a phase relative to the others; by setting the interaction strength such that \(\eta \Omega =\delta /4\), a maximally (spin) entangled state can be created after a time \({t}_{g}\).
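To make these relations concrete, the following minimal sketch evaluates the one-loop gate time and the carrier Rabi frequency required by the condition \(\eta \Omega =\delta /4\); the values of \(\eta\) and \(\delta\) below are illustrative assumptions, not parameters quoted in this work.

```python
import numpy as np

# Illustrative values only -- not parameters reported in this work:
eta = 0.05                    # Lamb-Dicke parameter of the gate mode
delta = 2 * np.pi * 14e3      # detuning from the motional sidebands (rad/s)

t_g = 2 * np.pi / delta       # one-loop Molmer-Sorensen gate time
Omega = delta / (4 * eta)     # carrier Rabi frequency from eta * Omega = delta / 4

print(f"t_g = {t_g * 1e6:.1f} us")                            # ~71 us
print(f"Omega = 2*pi x {Omega / (2 * np.pi) / 1e3:.0f} kHz")  # ~70 kHz
```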
The mode structure affects the gate time through the Lamb-Dicke parameter, but this is rather straightforward in same-species gates. We now discuss considerations affecting the gate time for dual-species operations, where the Lamb-Dicke parameter is ion-dependent, and we consider errors due to excitation of spectator modes spectrally close to the gate-drive frequency, an effect that can be exacerbated in mixed-species systems. See Methods for the calculation of the mode structure in dual-species crystals.
In general, it is desirable to execute multi-qubit operations in the shortest possible time with the highest possible fidelity, and both of these design goals can be affected by the chosen motional mode used to execute the gate. For the optical qubits used here, the attainable sideband Rabi frequencies, which set the MS gate time, are determined by the electric quadrupole transition matrix elements between the chosen qubit states, the Lamb-Dicke parameters for the relevant motional mode, and the qubit laser intensities. These parameters can easily be made, to a very good approximation, equal for ions of the same species. In multi-species crystals, however, the values can vary significantly and each must be well characterized to achieve high-fidelity operation. For example, the electric quadrupole transition matrix element for the chosen qubit states in \({{\rm{Ca}}}^{+}\) is ~0.7 times that of \({{\rm{Sr}}}^{+}\).45 As shown in Methods, the Lamb-Dicke parameters vary for different ion species in different motional modes using different crystal configurations. The sideband Rabi frequencies can be equalized, however, by adjusting the gate-laser intensities at the ion locations, subject to the constraints imposed by the available laser power and the achievable beam waists. We also point out that the inclusion of a second atomic species in three-ion crystals brings about additional considerations, as each ion crystal configuration yields different normal mode frequencies and motional amplitudes.
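As a numerical illustration of this equalization, the sketch below assumes the sideband Rabi frequency of ion \(j\) scales as \(M_j\sqrt{I_j}\,\eta_j\), and combines the quoted matrix-element ratio of ~0.7 with illustrative Lamb-Dicke parameters (assumptions, not values from Table 2) to estimate the intensity ratio needed for equal sideband Rabi frequencies.

```python
# Sideband Rabi frequency of ion j is taken to scale as M_j * sqrt(I_j) * eta_j,
# with M_j the quadrupole matrix element, I_j the laser intensity at the ion,
# and eta_j the Lamb-Dicke parameter of ion j for the gate mode.

M_ratio = 0.7                   # M_Ca / M_Sr for the chosen qubit states (text, ref. 45)
eta_Ca, eta_Sr = 0.042, 0.064   # illustrative IP-mode Lamb-Dicke parameters

# Equal sideband Rabi frequencies require M_Ca*sqrt(I_Ca)*eta_Ca = M_Sr*sqrt(I_Sr)*eta_Sr:
I_ratio = (eta_Sr / (M_ratio * eta_Ca)) ** 2    # = I_Ca / I_Sr
print(f"I_Ca / I_Sr ~ {I_ratio:.1f}")           # Ca+ needs several times more intensity
```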
In addition to affecting the gate speed, motional mode choice can also determine the sensitivity of the ion chain to different types of electric-field noise. Homogeneous electric fields couple most strongly to IP motional modes. Hence, some level of motional-state heating suppression can be expected for modes with OOP ion motion.8,34 A heating rate of \(8.6\pm 0.4\) quanta \({{\rm{s}}}^{-1}\) was measured for a single Ca\({}^{+}\) ion at a trap frequency of \(2\pi \times 1.94\) MHz in the trap used here, and although the heating rates of IP axial modes for multi-ion chains are expected to be slightly larger than this, they are not currently limiting gate fidelity.
Lastly, the excitation spectrum becomes increasingly dense in larger ion crystals, especially when including the effects of higher-order motional sidebands. Nearby transitions can lead to unwanted state couplings and subsequent gate errors. This consideration is especially pertinent in the \({}^{40}{{\rm{Ca}}}^{+}\)/\({}^{88}{{\rm{Sr}}}^{+}\) system where the mass ratio \(\mu =2.2\) gives rise to a number of near-degeneracies, as shown in Methods.
A key example is the small energy splitting between the axial OOP mode first-order sideband and the axial IP-mode second-order sideband in this pair, due to the mode frequency ratio \({\omega }_{{\rm{OOP}}}/{\omega }_{{\rm{IP}}}=1.988\). Driving the gate on the OOP mode is desirable in some cases, as it is affected less than the IP mode by motional heating. For a typical axial trap frequency of \({\omega }_{{\rm{IP}}}\approx 2\pi \times 1\) MHz, the OOP-2IP splitting is only \(2\pi \times 12\) kHz, approximately equal to typical detunings \(\delta\) from the motional sidebands used during gate operations. Thus, a detuned drive of the OOP mode may be near resonant with the second-order IP sideband, potentially leading to error from unwanted displacement of this mode's motional state. This error is similar to off-resonant excitation of spectator modes on the first-order sideband,46 although it depends on the different carrier Rabi frequencies and mode-dependent Lamb-Dicke parameters for each species and has an extra factor of the Lamb-Dicke parameter due to the higher-order excitation. We therefore expect the error to be (for small displacements and mode heating rates, and ignoring other off-resonant terms)
$${\epsilon }_{2\times {\rm{IP}}}\approx | \alpha {| }^{2}\ \left({\bar{n}}_{{\rm{IP}}}+\frac{1}{2}\right)$$
where \({\bar{n}}_{{\rm{IP}}}\) is the average occupation of the IP mode before the gate and \(\alpha\) is the displacement of the OOP mode due to the drive, expressed as (assuming IP excitation of the two ions at frequency \(\omega\))
$$\alpha =\frac{1}{2}{\int _{0}^{{t}_{g}}}\left({\eta }_{{\rm{Ca,IP}}}^{2}{\Omega }_{{\rm{Ca}}}+{\eta }_{{\rm{Sr,IP}}}^{2}{\Omega }_{{\rm{Sr}}}\right){e}^{-i(\omega -2{\omega }_{{\rm{IP}}})t}\,dt.$$
Here, \({\eta }_{j,\beta }\) and \({\Omega }_{j}\) are the Lamb-Dicke parameter (see Methods for the definition in the multi-species case) and carrier Rabi frequency, respectively, for ion \(j\) in mode \(\beta\), and \({t}_{g}\) is the MS gate time. Evaluating this integral, we obtain
$$| \alpha {| }^{2}=\frac{{\left({\eta }_{{\rm{Ca,IP}}}^{2}{\Omega }_{{\rm{Ca}}}+{\eta }_{{\rm{Sr,IP}}}^{2}{\Omega }_{{\rm{Sr}}}\right)}^{2}{\sin }^{2}\left[(\omega -2{\omega }_{{\rm{IP}}})\frac{{t}_{g}}{2}\right]}{{(\omega -2{\omega }_{{\rm{IP}}})}^{2}}.$$
Applying this result to the worst-case scenario, which corresponds to on-resonant driving of the IP second-order sideband (\(\omega ={\omega }_{{\rm{OOP}}}+\delta =2{\omega }_{{\rm{IP}}}\) for the drive near the blue sideband), and for the dual-species gate conditions \({\eta }_{{\rm{Ca,OOP}}}{\Omega }_{{\rm{Ca}}}={\eta }_{{\rm{Sr,OOP}}}{\Omega }_{{\rm{Sr}}}=\delta /4\), we find \(\alpha =(\delta {t}_{g}/8)({\eta }_{{\rm{Ca,IP}}}^{2}/{\eta }_{{\rm{Ca,OOP}}}+{\eta }_{{\rm{Sr,IP}}}^{2}/{\eta }_{{\rm{Sr,OOP}}})\); the error for a gate that acquires a \(\pi\) phase due to displacement around one loop on the OOP mode (\({t}_{g}=2\pi /\delta\)) would then be
$${\epsilon }_{2\times {\rm{IP}}}=\frac{{\pi }^{2}}{16}\left({\bar{n}}_{{\rm{IP}}}+\frac{1}{2}\right){\left(\frac{{\eta }_{{\rm{Ca,IP}}}^{2}}{{\eta }_{{\rm{Ca,OOP}}}}\,+\,\frac{{\eta }_{{\rm{Sr,IP}}}^{2}}{{\eta }_{{\rm{Sr,OOP}}}}\right)}^{2}.$$
Even assuming negligible initial population in the motional modes, this error can be substantial; for a \({\omega }_{{\rm{IP}}}=2\pi \times 1\) MHz axial mode in the ground state, the error would be \({\epsilon }_{2\times {\rm{IP}}}=0.013\) for a \({}^{40}{{\rm{Ca}}}^{+}\)– \({}^{88}{{\rm{Sr}}}^{+}\) crystal. Moreover, as can be seen from the functional form of the acquired displacement (Eq. 5 with \(\omega ={\omega }_{{\rm{OOP}}}+\delta\)), the width in gate detuning of the effect of driving near this IP resonance is set by \(2{\omega }_{{\rm{IP}}}-{\omega }_{{\rm{OOP}}}\), so small relative changes in detuning will not be effective in significantly reducing this error. It is therefore prudent to avoid such near coincidences, either via judicious choices of detuning and laser intensity if available (although this removes a degree of freedom often used to optimize gate operation), trap frequency, mode of operation, or isotopes of the ions in the chain (see Methods for the mode frequency ratios for several different isotope combinations of Ca\({}^{+}\) and Sr\({}^{+}\)); the latter can be very effective, as the absolute mode frequency difference sets the relevant scale. In the work presented here, the modes used to execute the dual-species MS gates were chosen to be spectrally well separated from all other transitions for attainable gate speeds.
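The magnitude quoted above can be checked with a short calculation. The sketch below evaluates \({\epsilon }_{2\times {\rm{IP}}}\) for a ground-state-cooled \({}^{40}\)Ca\({}^{+}\)–\({}^{88}\)Sr\({}^{+}\) crystal using the closed-form two-ion mode expressions given in Methods (Eqs. 7–9), assuming both gate beams are directed fully along the trap axis; the small difference from the quoted 0.013 presumably reflects geometry details not fixed in this sketch.

```python
import numpy as np

hbar, amu = 1.054571817e-34, 1.66053906660e-27
m_Ca, m_Sr = 40 * amu, 88 * amu
k_Ca, k_Sr = 2 * np.pi / 729e-9, 2 * np.pi / 674e-9   # full axial projection assumed

w_IP = 2 * np.pi * 1e6                                # IP-mode frequency (rad/s)
mu = m_Sr / m_Ca                                      # mass ratio, 2.2

s = np.sqrt(1 - mu + mu**2)
w_OOP = w_IP * np.sqrt((1 + mu + s) / (1 + mu - s))   # Eq. 7: ratio ~1.988

def b_IP(mu_t):          # IP eigenvector amplitude for ion j, mu_t = m_i / m_j (Eq. 9)
    st = np.sqrt(1 - mu_t + mu_t**2)
    return np.sqrt((1 - mu_t + st) / (2 * st))

b = {('Ca', 'IP'): b_IP(m_Sr / m_Ca), ('Sr', 'IP'): b_IP(m_Ca / m_Sr)}
b[('Ca', 'OOP')] = np.sqrt(1 - b[('Ca', 'IP')] ** 2)
b[('Sr', 'OOP')] = np.sqrt(1 - b[('Sr', 'IP')] ** 2)

def eta(ion, mode):      # Lamb-Dicke parameter, Eq. 8
    m, k = (m_Ca, k_Ca) if ion == 'Ca' else (m_Sr, k_Sr)
    w = w_IP if mode == 'IP' else w_OOP
    return k * np.sqrt(hbar / (2 * m * w)) * b[(ion, mode)]

S = eta('Ca', 'IP')**2 / eta('Ca', 'OOP') + eta('Sr', 'IP')**2 / eta('Sr', 'OOP')
eps = (np.pi**2 / 16) * 0.5 * S**2        # ground-state IP mode (n_bar = 0)
print(f"w_OOP/w_IP = {w_OOP / w_IP:.3f}, eps ~ {eps:.3f}")   # ~1.988 and ~0.014
```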
Single-species two-qubit gates
Single-species quantum-logic gates are performed with either two Ca\({}^{+}\) or two Sr\({}^{+}\) ions in the trap such that the ions form a linear crystal oriented along the trap axis, with ions spaced a few microns apart. After sideband cooling and state preparation as described above, the MS interaction is brought about by applying light detuned near the IP vibrational-mode sidebands of the \(\left|1\right\rangle \to \left|0\right\rangle\) transition; two frequencies are simultaneously applied to a single-pass AOM to produce a bichromatic light field with components detuned by \(\delta\) above the blue sideband and below the red sideband of the mode \(\beta\) to be driven. This bichromatic field can be thought of as a drive resonant with the qubit carrier transition, but modulated at the beat frequency \({\omega }_{\beta }+\delta\). The bichromatic field is coupled into a single-mode, polarization-maintaining optical fiber and directed to the ions. Starting from \(\left|11\right\rangle\), the joint qubit state is coherently driven between \(\left|11\right\rangle\) and \(\left|00\right\rangle\) via multiple pathways through \(\left|10\right\rangle\) and \(\left|01\right\rangle\) using the joint motional state. If this evolution is stopped after a time \({t}_{g}\) as described above, the ions will be in a coherent superposition of \(\left|11\right\rangle\) and \(\left|00\right\rangle\), nominally the Bell state \(\left|{\Phi }_{+\phi }\right\rangle =\frac{1}{\sqrt{2}}(\left|00\right\rangle +{e}^{i\phi }\left|11\right\rangle )\) (the value of \(\phi\) can be adjusted via the phases of the AOM RF drive signals used to create the bichromatic field; it must have a constant relative phase relationship with subsequent analysis pulses). We apply the bichromatic gate pulses with an additional asymmetric detuning, calibrated separately, to compensate for AC-Stark shifts from other electronic levels, and we also shape the pulse amplitude in time to minimize dependence on the initial bichromatic phase.29
To characterize gate operation, we estimate the created Bell-state fidelity by measuring the four elements of the resulting two-qubit density matrix \(\rho\) that would be nonzero in the case of ideal creation of \(\left|{\Phi }_{+\phi }\right\rangle\). The two diagonal elements are computed from the probabilities to measure \(\left|00\right\rangle\) and \(\left|11\right\rangle\) after the entangling gate operation, \({P}_{00}\) and \({P}_{11}\), respectively. The state population measurements are typically repeated thousands of times to precisely determine these values. The off-diagonal elements are calculated using an auxiliary "parity-flopping" measurement in which a \(\pi /2\)-pulse around an axis in the equatorial plane of the Bloch sphere of varying phase angle \(\chi\) with respect to the \(X\) axis is applied uniformly to the ions in the created entangled state. This experiment effectively rotates the coherences into the populations, and the parity of the populations, defined as \(({P}_{\chi ,00}+{P}_{\chi ,11})-({P}_{\chi ,01}+{P}_{\chi ,10})\), will oscillate with a period of \(\pi\) in \(\chi\) for a two-qubit maximally entangled state. The amplitude \({C}_{{\rm{PF}}}\) of this oscillation gives a direct measure of the off-diagonal elements. We calculate the Bell-state fidelity as \({F}_{\left|{\Phi }_{+\phi }\right\rangle }\equiv \langle {\Phi }_{+\phi }| \rho | {\Phi }_{+\phi }\rangle =({P}_{00}+{P}_{11}+{C}_{{\rm{PF}}})/2\).
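A minimal sketch of this fidelity estimate, using synthetic population and parity data (the numbers below are illustrative, not measured values), is:

```python
import numpy as np
from scipy.optimize import curve_fit

np.random.seed(0)
# Synthetic parity data vs. analysis-pulse phase chi (radians):
chi = np.linspace(0, 2 * np.pi, 16)
parity = 0.96 * np.cos(2 * chi + 0.3) + 0.01 * np.random.randn(chi.size)

# Fit with the period constrained to pi, i.e., cos(2*chi + phase), as in the text
model = lambda x, C, phase: C * np.cos(2 * x + phase)
(C_pf, _), _ = curve_fit(model, chi, parity, p0=[0.9, 0.0])

P00, P11 = 0.49, 0.49              # illustrative measured populations
F = (P00 + P11 + abs(C_pf)) / 2    # Bell-state fidelity estimate
print(f"C_PF = {abs(C_pf):.3f}, fidelity = {F:.3f}")
```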
Figure 3 shows results for Ca\({}^{+}\) and Sr\({}^{+}\) single-species, two-qubit gates; both the measured state populations as a function of gate-interaction duration and the parity-flopping curves are shown. In the case of ideal evolution, the Bell state \(\left|{\Phi }_{+\phi }\right\rangle\) is created at the second zero of the combined population of the \(\left|10\right\rangle\) and \(\left|01\right\rangle\) states ("1-bright"). As can be seen, this time was (coincidentally) just over 70 \(\mu\)s in both cases; the achieved Bell-state error is 1.2(2)% and 2.5(2)% for Ca\({}^{+}\)–Ca\({}^{+}\) and Sr\({}^{+}\)–Sr\({}^{+}\), respectively. Leading error sources are believed to be state dephasing from cryocooler vibrations and laser phase fluctuations. Cryocooler vibrations, which cause the trap location, and hence the ion location, to oscillate on the 10–100 nm scale with respect to the delivery optics, can lead to shot-to-shot variations in the gate-laser phase at the ion location. The effects of these variations could be mitigated in future work by interferometrically stabilizing the optical path length to the ions.47,48 Laser instability is also a direct limit to optical qubit coherence time. From auxiliary measurements, we place an upper limit on the laser bandwidth of 50 Hz, but due to cryocooler vibrations and variability of lab environmental parameters, we have measured coherence times at the 1–2 ms level through single-ion Ramsey decay. Sources of error that we expect to enter at a lower level include intensity fluctuations, caused by power fluctuations and beam-pointing instability, which lead to variations of the Rabi frequency and to fluctuating AC-Stark shifts from additional levels in the ions' electronic structure.
Two-qubit entangling gates with ions of the same species. a Ca\({}^{+}\)–Ca\({}^{+}\) Mølmer-Sørensen (MS) gate performed on the in-phase (IP) mode at \(2\pi \times 1.2\) MHz; here, the measured state populations are plotted as a function of duration of the gate pulse. Starting with the product state \(\left|11\right\rangle\) (two bright ions) at time 0, a maximally entangled state is created after 71 \(\mu\)s. b Sr\({}^{+}\)–Sr\({}^{+}\) MS gate performed on the IP mode at \(2\pi \times 1.3\) MHz; the Bell state is created at a duration of 72 \(\mu\)s. c MS gate on two Sr\({}^{+}\) ions with initial sympathetic cooling performed using a central Ca\({}^{+}\) ancilla, i.e., a Sr\({}^{+}\)–Ca\({}^{+}\)–Sr\({}^{+}\) three-ion chain; the IP-mode frequency is \(2\pi \times 730\) kHz. The two Sr\({}^{+}\) ions are entangled after a gate duration of 61 \(\mu\)s. d Parity-flopping curve (see text) for the Ca\({}^{+}\)–Ca\({}^{+}\) gate shown in part (a). e Parity-flopping curve for the Sr\({}^{+}\)–Sr\({}^{+}\) gate shown in part (b). f Parity-flopping curve for the Sr\({}^{+}\)–Ca\({}^{+}\)–Sr\({}^{+}\) gate shown in part (c). Lines in d, e, and f are sinusoidal fits to the data with constrained periods; the best-fit amplitude is used to calculate the Bell-state fidelity. The offset phases in these plots are related to \(\phi\) and can be zeroed via adjustment of the bichromatic drive fields with respect to the carrier. Error bars reflect SEM
We have also performed an entangling quantum gate between two computational ions in the presence of a third ion of a different species, a primitive described above. Our implementation consists of an MS gate performed between two Sr\({}^{+}\) ions in a crystal with a Sr\({}^{+}\)–Ca\({}^{+}\)–Sr\({}^{+}\) configuration (see Fig. 2c). Initialization for these experiments begins with Doppler cooling using the Ca\({}^{+}\) ancilla and quenching of the \(\left|0\right\rangle\) states of the computational Sr\({}^{+}\) ions to prepare them in \(\left|11\right\rangle\), followed by resolved sideband cooling on the IP vibrational mode using the ancilla only. The gate is then performed with light resonant with the computational ions only, with the bichromatic field tuned near the IP mode, as in the Sr\({}^{+}\)–Sr\({}^{+}\) gate described above. Figure 3 shows the population-flopping and parity-flopping curves for this gate; we achieve a Bell state with infidelity of 0.043(3) in 61 \(\mu\)s. Beyond the sources of imperfection mentioned above when discussing the Sr\({}^{+}\)–Sr\({}^{+}\) gate, additional error sources include mode coupling to the uncooled spectator modes, particularly the stretch mode in which the center Ca\({}^{+}\) ion does not participate.
Dual-species two-qubit gates
We have also demonstrated a dual-species MS entangling operation. To create the Bell state \(\left|{\Phi }_{+\phi }\right\rangle\) between Ca\({}^{+}\) and Sr\({}^{+}\) in a two-ion crystal, we start by Doppler and resolved sideband cooling both the axial IP and OOP modes using the Sr\({}^{+}\) ion. Following optical pumping to bring the ions to the \(\left|11\right\rangle\) state, we simultaneously apply two bichromatic light fields, one to couple the internal state of each species to the shared motion, each detuned from the IP vibrational mode (see Fig. 4a) by \(\delta\). The 674 nm and 729 nm beams are oriented parallel to the trap axis and anti-parallel to each other. As in the single-species case, we ramp up and down the pulse amplitude at the beginning and end of the interaction to avoid both dependence on the initial phase between the red- and blue-sideband drives and off-resonant excitation of the \(\left|1\right\rangle \to \left|0\right\rangle\) carrier transition.29
Two-qubit entangling gate with ions of different species. a Level structure used in the Ca\({}^{+}\)–Sr\({}^{+}\) Mølmer-Sørensen (MS) gate. The \(\left|0\right\rangle ,\left|1\right\rangle\) notation has been replaced with \(\left|e\right\rangle ,\left|g\right\rangle\) here to avoid confusion with the shared motional state occupation denoted by \(n\). b Representative measured state populations are plotted as a function of duration of the gate pulse, here for a gate performed on the in-phase mode at \(2\pi \times 770\) kHz. A maximally entangled state is created here after 140 \(\mu\)s. The highest gate fidelities were achieved using a slightly lower Rabi frequency and correspondingly longer gate time of 160 \(\mu\)s. c Parity flopping for the 160 \(\mu\)s gate, where the line is a sinusoidal fit with constrained period. Error bars reflect SEM
An additional consideration with MS gates on two different optical transitions on two different ion species is the relative phase of the force on each ion; this relationship is dictated by the relative phase of the red- and blue-sideband phase differences on each bichromatic beam pair at the ions' location. The distance between the ions (~3 \(\mu\)m) and the difference in the optical path lengths between the two bichromatic pairs (~100 mm) are both very small compared with the distance between maxima of the amplitude-modulated waveform of each bichromatic pair (~100 m, set by \({\omega }_{\beta }+\delta\), which is on the order of megahertz), so when the RF phases of the fields driving the AOMs producing the bichromatic fields are in phase, the force on the ions is as well. The four AOM RF drive signals, two each driving the 674 nm and 729 nm AOMs, are all phase coherent, derived from the same clock, allowing control and maintenance of shot-to-shot phase coherence of the force on the two ions. Although the 674 nm and 729 nm lasers each need to be coherent over the course of each experiment such that the analysis pulse phase (and that of any subsequent algorithmic logic pulses) is coherent with the beams driving the gate, the relative optical phase between the two lasers does not need to be constant; only the relative phase between the red- and blue-sideband component phase differences must be maintained.
We extract the parity contrast by scanning the phases of \(\pi /2\) analysis pulses for Ca\({}^{+}\) and Sr\({}^{+}\) applied simultaneously after the completion of the MS gate. In Fig. 4b we plot a representative population-flopping curve (taken with a slightly higher Rabi frequency than in the dataset used to calculate the state fidelity) and in Fig. 4c the parity-flopping curve for which the produced Bell state has a measured error of 0.057(3) for a gate duration of 160 \(\mu\)s. The mixed-species gate speed is limited here by the achievable sideband Rabi frequency for the Ca\({}^{+}\) ion. The amplitude of the normalized eigenvector \({\text{b}}_{{\rm{Ca,IP}}}\) and the corresponding Lamb-Dicke parameter \({\eta }_{{\rm{Ca,IP}}}\) are significantly lower for the chosen IP mode than in the single-species chain (see Table 2 in Methods). We expect some level of rejection of gate-laser-field noise common to both ions in same-species gates. In contrast, for dual-species gates, the ions are driven by different lasers. Hence, we expect that effects such as differential phase and amplitude noise between the 674 nm and 729 nm light at the ion positions, leading to phase-space displacement variation and additional fluctuating AC-Stark shifts, are the primary causes of the larger error in dual-species MS gates.
The achieved infidelities and gate times for the Ca\({}^{+}\)/Sr\({}^{+}\) quantum-logic primitives demonstrated here are listed in Table 1 along with the previously reported quantum-logic-assisted readout for the same two species. The achieved error probabilities are not due to fundamental sources, and so we believe they can be reduced with technological improvements in qubit-laser frequency and amplitude stability at the ions' location. This, along with the relative convenience of control methodologies for this pair of ion qubits, leads us to expect that these primitives will form the basis for more complex ancilla-assisted QIP in larger Ca\({}^{+}\)/Sr\({}^{+}\) systems.
Table 1 Multi-qubit logic gates in the Ca\({}^{+}\)/Sr\({}^{+}\) system in this and previous work
A notable aspect of multi-qubit operations in this particular system is the presence of optical-frequency qubits in both species, as we have demonstrated here. The presence of metastable \(D\) states allows for high-efficiency electron-shelving-based state detection, with the added potential for relatively lower optical power requirements for 10–100 \(\mu\)s two-qubit gate durations when compared with Raman-based gates, or very low ultimate error rates for direct optical single- and two-qubit operations.7 In addition, Ca\({}^{+}\) and Sr\({}^{+}\) both possess optical qubits with qubit transition frequency in the red to near-infrared. This means that the control fields with the greatest requirements for optical power and frequency stability—those used for quantum gates—are at more technologically convenient wavelengths when compared with those used for Raman excitation, which are typically detuned from the higher-energy \(S\to P\) transitions. Optical elements such as crystals, fibers, and mirror coatings all perform better away from the ultraviolet. Furthermore, integrated waveguides and related photonics devices have lower loss at longer wavelengths,19 which is of particular interest for applications benefiting from control of large arrays of ions.
A consideration when employing ancilla ions of a different species from the logic ions is vibrational-mode structure. Driving coherent-displacement-based gates near accidental near-degeneracies should be avoided for the highest-fidelity logic-gate operation and, in some cases, species pairs may be chosen to avoid such coincidences. This appears to be straightforward for typical gate durations, but the problem could become more pronounced for high-speed operations, where more precisely tailored amplitude-shaped pulses may be required to account for driving multiple modes.32
Mode parameter calculation for dual-species ion chains and Lamb-Dicke parameters for various Ca\({}^{+}\)/Sr\({}^{+}\) crystals
For the two-ion mixed-species chain, there are only two axial modes and their frequency ratio is given by:14
$$\frac{{\omega }_{{\rm{OOP}}}}{{\omega }_{{\rm{IP}}}}=\sqrt{\frac{1+\mu +\sqrt{1-\mu +{\mu }^{2}}}{1+\mu -\sqrt{1-\mu +{\mu }^{2}}}},$$
where \(\mu\) is the ion mass ratio, and \({\omega }_{{\rm{IP}}}\) and \({\omega }_{{\rm{OOP}}}\) are the frequencies of the IP and OOP normal modes, respectively. The ion-dependent Lamb-Dicke parameter for ion \(j\) with respect to the normal mode \(\beta\) is given by:49
$${\eta }_{j,\beta }=\sqrt{\frac{\hslash }{2{m}_{j}{\omega }_{\beta }}}{{\bf{k}}}_{j}\cdot {{\bf{b}}}_{j,\beta },$$
where \({{\bf{k}}}_{j}\) is the wavevector of the qubit laser, \({{\bf{b}}}_{j,\beta }\) is the normalized motional eigenvector for ion \(j\) in normal mode \(\beta\), \({\omega }_{\beta }\) is the oscillation frequency, \({m}_{j}\) is the ion mass, and \(\hslash\) is the reduced Planck constant. The amplitudes of the motional eigenvectors \({\text{b}}_{j,\beta }\) can be calculated numerically for arbitrary chain lengths and configurations.49 In addition, simple closed-form expressions exist for the two-ion chain;14 specifically, for ions \(i,j\):
$$\begin{array}{lll}{\text{b}}_{j,{\rm{IP}}}=\sqrt{\frac{1-\tilde{\mu }+\sqrt{1-\tilde{\mu }+{\tilde{\mu }}^{2}}}{2\sqrt{1-\tilde{\mu }+{\tilde{\mu }}^{2}}}},\\ {\text{b}}_{j,{\rm{OOP}}}^{2}=1-{\text{b}}_{j,{\rm{IP}}}^{2},\end{array}$$
where \(\tilde{\mu }\) is the mass ratio expressed as \(\tilde{\mu }={m}_{i}/{m}_{j}\).
Analogous calculations of the normal mode frequencies and Lamb-Dicke parameters in a symmetric three-ion chain, such as the Sr\({}^{+}\)–Ca\({}^{+}\)–Sr\({}^{+}\) chain used here, can also be made. The mode frequencies for a symmetric chain \({i}_{1}\)–\(j\)–\({i}_{2}\) are given by8
$$\begin{array}{lll}{\omega }_{{\rm{IP}}}=\sqrt{\frac{13}{10}\,+\,\frac{21\,-\,\sqrt{441\,-\,34{\tilde{\mu }}^{-1}\,+\,169{\tilde{\mu }}^{-2}}}{10{\tilde{\mu }}^{-1}}}\ {\omega }_{i},\\ {\omega }_{{\rm{Stretch}}}=\sqrt{3}\ {\omega }_{i},\\ {\omega }_{{\rm{Alt}}}=\sqrt{\frac{13}{10}\,+\,\frac{21\,+\,\sqrt{441\,-\,34{\tilde{\mu }}^{-1}\,+\,169{\tilde{\mu }}^{-2}}}{10{\tilde{\mu }}^{-1}}}\ {\omega }_{i},\end{array}$$
where \({\omega }_{i}\) is the axial frequency of a single ion with mass \({m}_{i}\). Here, \({\omega }_{{\rm{Stretch}}}\) and \({\omega }_{{\rm{Alt}}}\) are mode frequencies for the second, breathing-type axial mode and the third, alternating-ion-motion axial mode, respectively (see Fig. 2c). The definition of \(\tilde{\mu }\) used here is the inverse of \(\mu\) used in ref. 8 but has been chosen to be consistent with the convention used in the previous discussion of two-ion chains.
In contrast to the two-ion mixed-species chain, the normalized mode eigenvectors of a symmetric three-ion chain are dependent on the normal mode frequencies. These eigenvectors \({{\bf{x}}}_{\beta }\) for ions \({i}_{1}\)–\(j\)–\({i}_{2}\) in mode \(\beta\) can be calculated as8
$${{\bf{x}}}_{{\rm{IP}}}={N}_{{\rm{IP}}}\left(1,\frac{13-5{({\omega }_{{\rm{IP}}}/{\omega }_{i})}^{2}}{8\sqrt{\tilde{\mu }}},1\right),$$
$${{\bf{x}}}_{{\rm{Stretch}}}={N}_{{\rm{Stretch}}}\left(1,0,-1\right),$$
$${{\bf{x}}}_{{\rm{Alt}}}={N}_{{\rm{Alt}}}\left(1,\frac{13-5{({\omega }_{{\rm{Alt}}}/{\omega }_{i})}^{2}}{8\sqrt{\tilde{\mu }}},1\right),$$
where \({{\bf{x}}}_{\beta }=({\text{b}}_{{i}_{1},\beta },{\text{b}}_{j,\beta },{\text{b}}_{{i}_{2},\beta })\) and \({N}_{\beta }\) are normalization constants. The Lamb-Dicke parameters can then be calculated using Eq. 8.
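A short numerical sketch of these expressions for the Sr\({}^{+}\)–Ca\({}^{+}\)–Sr\({}^{+}\) chain used here (with masses approximated by integer atomic mass units, an assumption adequate at this precision) is:

```python
import numpy as np

mu_t = 88 / 40                 # m_Sr / m_Ca for a Sr-Ca-Sr chain (integer amu)
im = 1 / mu_t
root = np.sqrt(441 - 34 * im + 169 * im**2)

w_IP = np.sqrt(13/10 + (21 - root) / (10 * im))   # in units of the single-Sr+ frequency
w_Alt = np.sqrt(13/10 + (21 + root) / (10 * im))  # Eq. 10
w_Str = np.sqrt(3)

def eigvec(w):                 # normalized (b_Sr, b_Ca, b_Sr), Eqs. 11 and 13
    v = np.array([1.0, (13 - 5 * w**2) / (8 * np.sqrt(mu_t)), 1.0])
    return v / np.linalg.norm(v)

print(f"w/w_Sr: IP {w_IP:.3f}, Stretch {w_Str:.3f}, Alt {w_Alt:.3f}")
print("x_IP  =", np.round(eigvec(w_IP), 3))       # middle Ca+ moves with the Sr+ ions
print("x_Alt =", np.round(eigvec(w_Alt), 3))      # middle Ca+ moves against them
```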
In Table 2, we list the mode parameters and Lamb-Dicke parameters (for a particular axial trap frequency) for dual-species, two- and three-ion crystals containing various common isotopes of Ca\({}^{+}\) and Sr\({}^{+}\). Besides allowing for the inclusion or exclusion of hyperfine structure in one or the other species, different isotopes lead to slightly different mode frequencies, providing the flexibility to avoid accidental overlaps between the target mode sidebands and higher-order sidebands of spectator modes. Small changes in mode frequencies can make a large difference, as the relevant comparison is between the frequency shift and the detuning from the sidebands, which is typically several kilohertz to a few tens of kilohertz (see main text).
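To illustrate this isotope dependence, the sketch below evaluates the two-ion mode-frequency ratio of Eq. 7 and the resulting OOP-2IP splitting for a few Ca\({}^{+}\)/Sr\({}^{+}\) isotope combinations, assuming integer atomic masses and a fixed 1 MHz IP frequency for every pair (the specific entries of Table 2 are not reproduced here).

```python
import numpy as np

def ratio(mu):                 # w_OOP / w_IP for a two-ion mixed-species chain, Eq. 7
    s = np.sqrt(1 - mu + mu**2)
    return np.sqrt((1 + mu + s) / (1 + mu - s))

f_IP = 1.0e6                   # assume a 1 MHz IP mode for every pair (illustrative)
for m_Ca, m_Sr in [(40, 84), (40, 86), (40, 88), (43, 88), (44, 88)]:
    r = ratio(m_Sr / m_Ca)
    print(f"{m_Ca}Ca/{m_Sr}Sr: w_OOP/w_IP = {r:.3f}, "
          f"2 f_IP - f_OOP = {(2 - r) * f_IP / 1e3:5.1f} kHz")
```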
Table 2 Selected mixed-species ion crystal parameters
The data that support the findings of this study are available from the corresponding authors upon reasonable request.
Kitaev, A. Y. Quantum measurements and the abelian stabilizer problem. Preprint at arXiv:quant-ph/9511026 (1995).
Gottesman, D. & Chuang, I. L. Demonstrating the viability of universal quantum computation using teleportation and single-qubit operations. Nature 402, 390 (1999).
Nielsen, M. A. & Chuang, I. L. Quantum Computation and Quantum Information (Cambridge Univ. Press, 2000).
Blatt, R. & Wineland, D. Entangled states of trapped atomic ions. Nature 453, 1008 (2008).
Monroe, C. & Kim, J. Scaling the ion trap quantum processor. Science 339, 1164–1169 (2013).
Bermudez, A. et al. Assessing the progress of trapped-ion processors towards fault-tolerant quantum computation. Phys. Rev. X 7, 041061 (2017).
Bruzewicz, C. D., Chiaverini, J., McConnell, R. & Sage, J. M. Trapped-ion quantum computing: progress and challenges. Appl. Phys. Rev. 6, 021314 (2019).
Kielpinski, D. et al. Sympathetic cooling of trapped ions for quantum logic. Phys. Rev. A 61, 032310 (2000).
Kielpinski, D., Monroe, C. & Wineland, D. J. Architecture for a large-scale ion-trap quantum computer. Nature 417, 709 (2002).
Nigg, D. et al. Quantum computations on a topologically encoded qubit. Science 345, 302–305 (2014).
Wang, Y. et al. Single-qubit quantum memory exceeding ten-minute coherence time. Nat. Photonics 11, 646 (2017).
Monroe, C. et al. Large-scale modular quantum-computer architecture with atomic memory and photonic interconnects. Phys. Rev. A 89, 022317 (2014).
Tan, T. R. et al. Multi-element logic gates for trapped-ion qubits. Nature 528, 380–383 (2015).
Wübbena, J. B., Amairi, S., Mandel, O. & Schmidt, P. O. Sympathetic cooling of mixed-species two-ion crystals for precision spectroscopy. Phys. Rev. A 85, 043412 (2012).
Kielpinski, D., Volin, C., Streed, E. W., Lenzini, F. & Lobino, M. Integrated optics architecture for trapped-ion quantum information processing. Quantum Inform. Process. 15, 5315–5338 (2016).
Mehta, K. K. & Ram, R. J. Precise and diffraction-limited waveguide-to-free-space focusing gratings. Sci. Rep. 7, 2019 (2017).
Ghadimi, M. et al. Scalable ion-photon quantum interface based on integrated diffractive mirrors. npj Quantum Inform. 3, 4 (2017).
West, G. N. et al. Low-loss integrated photonics for the blue and ultraviolet regime. APL Photonics 4, 026101 (2019).
Sorace-Agaskar, C. et al. Multi-layer integrated photonics from the ultraviolet to the infrared. Proc. SPIE 10510, 105100D (2018).
Benhelm, J., Kirchmair, G., Roos, C. F. & Blatt, R. Towards fault-tolerant quantum computing with trapped ions. Nat. Phys. 4, 463–466 (2008).
Akerman, N., Navon, N., Kotler, S., Glickman, Y. & Ozeri, R. Universal gate-set for trapped-ion qubits using a narrow linewidth diode laser. New J. Phys. 17, 113060 (2015).
Bruzewicz, C. D. et al. High-fidelity, single-shot, quantum-logic-assisted readout in a mixed-species ion chain. Preprint at arXiv:1706.05102 (2017).
Schmidt, P. O. et al. Spectroscopy using quantum logic. Science 309, 749–752 (2005).
Rosenband, T. et al. Frequency ratio of \({{\rm{Al}}}^{+}\) and \({{\rm{Hg}}}^{+}\) single-ion optical clocks; metrology at the 17\({}^{\rm{th}}\) decimal place. Science 319, 1808–1812 (2008).
Negnevitsky, V. et al. Repeated multi-qubit readout and feedback with a mixed-species trapped-ion register. Nature 563, 527 (2018).
Ballance, C. J. et al. Hybrid quantum logic and a test of Bell's inequality using two different atomic isotopes. Nature 528, 384 (2015).
Inlek, I. V., Crocker, C., Lichtman, M., Sosnova, K. & Monroe, C. Multispecies trapped-ion node for quantum networking. Phys. Rev. Lett. 118, 250502 (2017).
Nigmatullin, R., Ballance, C. J., de Beaudrap, N. & Benjamin, S. C. Minimally complex ion traps as modules for quantum communication and computing. New J. Phys. 18, 103028 (2016).
Kirchmair, G. et al. Deterministic entanglement of ions in thermal states of motion. New J. Phys. 11, 023002 (2009).
Ballance, C. J., Harty, T. P., Linke, N. M., Sepiol, M. A. & Lucas, D. M. High-fidelity quantum logic gates using trapped-ion hyperfine qubits. Phys. Rev. Lett. 117, 060504 (2016).
Harty, T. P. et al. High-fidelity trapped-ion quantum logic using near-field microwaves. Phys. Rev. Lett. 117, 140501 (2016).
Schäfer, V. M. et al. Fast quantum logic gates with trapped-ion qubits. Nature 555, 75 (2018).
Shapira, Y., Shaniv, R., Manovitz, T., Akerman, N. & Ozeri, R. Robust entanglement gates for trapped-ion qubits. Phys. Rev. Lett. 121, 180502 (2018).
Wineland, D. J. et al. Experimental issues in coherent quantum-state manipulation of trapped atomic ions. J. Res. Natl. Inst. Stand. Technol. 103, 259–328 (1998).
Fowler, A. G., Mariantoni, M., Martinis, J. M. & Cleland, A. N. Surface codes: Towards practical large-scale quantum computation. Phys. Rev. A 86, 032324 (2012).
Hume, D. B., Rosenband, T. & Wineland, D. J. High-fidelity adaptive qubit detection through repetitive quantum nondemolition measurements. Phys. Rev. Lett. 99, 120502 (2007).
Chiaverini, J. & Sage, J. M. Insensitivity of the rate of ion motional heating to trap-electrode material over a large temperature range. Phys. Rev. A 89, 012318 (2014).
Sedlacek, J. A. et al. Evidence for multiple mechanisms underlying surface electric-field noise in ion traps. Phys. Rev. A 98, 063430 (2018).
Bruzewicz, C. D., McConnell, R., Chiaverini, J. & Sage, J. M. Scalable loading of a two-dimensional trapped-ion array. Nat. Commun. 7, 13005 (2016).
Wang, S. X., Labaziewicz, J., Ge, Y., Shewmon, R. & Chuang, I. L. Demonstration of a quantum logic gate in a cryogenic surface-electrode ion trap. Phys. Rev. A 81, 062332 (2010).
Gabrielse, G. et al. A superconducting solenoid system which cancels fluctuations in the ambient magnetic field. J. Magn. Reson. (1969) 91, 564–572 (1991).
Sørensen, A. & Mølmer, K. Quantum computation with ions in thermal motion. Phys. Rev. Lett. 82, 1971–1974 (1999).
Sackett, C. A. et al. Experimental entanglement of four particles. Nature 404, 256–259 (2000).
Roos, C. F. Ion trap quantum gates with amplitude-modulated laser beams. New J. Phys. 10, 013002 (2008).
Safronova, U. I., Safronova, M. S. & Johnson, W. R. Forbidden \(M1\) and \(E2\) transitions in monovalent atoms and ions. Phys. Rev. A 95, 042507 (2017).
Ballance, C. J. High-Fidelity Quantum Logic in Ca \({}^{+}\). Ph.D. thesis, University of Oxford (2014).
Bergquist, J. C., Itano, W. M. & Wineland, D. J. Laser stabilization to a single ion. In Frontiers in Laser Spectroscopy 359–376 (North Holland, 1994).
Ma, L. S. et al. Delivering the same optical frequency at two places: accurate cancellation of phase noise introduced by an optical fiber or other time-varying path. Optics Lett. 19, 1777–1779 (1994).
Home, J. P. Quantum science and metrology with mixed-species ion chains. In Advances in Atomic, Molecular, and Optical Physics, Vol. 62, 231–277 (Elsevier, 2013).
We thank Vladimir Bolkhovsky for trap fabrication, George Fitch for layout assistance, and Peter Murphy, Chris Thoummaraj, and Karen Magoon for assistance with chip packaging. We are also grateful to Robert Niffenegger and Garrett Simon for comments on the manuscript. This work was sponsored by the Under Secretary of Defense for Research and Engineering under Air Force contract number FA8721-05-C-0002. Opinions, interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the United States Government.
Lincoln Laboratory, Massachusetts Institute of Technology, Lexington, Massachusetts, 02421, USA
C. D. Bruzewicz, R. McConnell, J. M. Sage & J. Chiaverini
Massachusetts Institute of Technology, Cambridge, Massachusetts, 02139, USA
J. Stuart & J. M. Sage
C. D. Bruzewicz
R. McConnell
J. Stuart
J. M. Sage
J. Chiaverini
C.D.B., J.C., and J.M.S. conceived of the work. C.D.B. performed the experiments, with assistance from R.M. and J.S. C.D.B. analyzed the data. All authors discussed the results and contributed to writing the paper.
Correspondence to C. D. Bruzewicz or J. Chiaverini.
Bruzewicz, C.D., McConnell, R., Stuart, J. et al. Dual-species, multi-qubit logic primitives for Ca+/Sr+ trapped-ion crystals. npj Quantum Inf 5, 102 (2019). https://doi.org/10.1038/s41534-019-0218-z
February 2016, 36(2): 785-803. doi: 10.3934/dcds.2016.36.785
$2\pi$-Periodic self-similar solutions for the anisotropic affine curve shortening problem II
Meiyue Jiang1 and Juncheng Wei2
LMAM, School of Mathematical Sciences, Peking University, Beijing, 100871, China
Department of Mathematics, Chinese University of Hong Kong, Shatin, New Territories, Hong Kong
Received April 2014; published August 2015
The existence of $2\pi$-periodic positive solutions of the equation $$ u'' + u = \displaystyle{\frac{a(x)}{u^3}} $$ is studied, where $a$ is a positive smooth $2\pi$-periodic function. Under some non-degenerate conditions on $a$, the existence of $2\pi$-periodic solutions to the equation is established.
Keywords: anisotropic affine curve shortening problem, self-similar solutions.
Mathematics Subject Classification: Primary: 34B15, 34B16; Secondary: 44J9.
Citation: Meiyue Jiang, Juncheng Wei. $2\pi$-Periodic self-similar solutions for the anisotropic affine curve shortening problem II. Discrete & Continuous Dynamical Systems - A, 2016, 36 (2) : 785-803. doi: 10.3934/dcds.2016.36.785
Methodology article
IDSSIM: an lncRNA functional similarity calculation model based on an improved disease semantic similarity method
Wenwen Fan1,
Junliang Shang (ORCID: orcid.org/0000-0002-8488-2228)1,
Feng Li1,
Yan Sun1,
Shasha Yuan1 &
Jin-Xing Liu1
It has been widely accepted that long non-coding RNAs (lncRNAs) play important roles in the development and progression of human diseases. Many association prediction models have been proposed for predicting lncRNA functions and identifying potential lncRNA-disease associations. Nevertheless, among them, little effort has been devoted to measuring lncRNA functional similarity, which is an essential part of association prediction models.
In this study, we presented an lncRNA functional similarity calculation model, IDSSIM for short, based on an improved disease semantic similarity method, the highlight of which is the introduction of an information content contribution factor into the semantic value calculation to take into account both the hierarchical structures of disease directed acyclic graphs and the disease specificities. IDSSIM and three state-of-the-art models, i.e., LNCSIM1, LNCSIM2, and ILNCSIM, were evaluated by applying their disease semantic similarity matrices and lncRNA functional similarity matrices, together with the corresponding matrices of human lncRNA-disease associations taken from either the lncRNADisease database or the MNDR database, to the association prediction method WKNKN for lncRNA-disease association prediction. In addition, case studies of breast cancer and adenocarcinoma were also performed to validate the effectiveness of IDSSIM.
Results demonstrated that in terms of ROC curves and AUC values, IDSSIM is superior to the compared models and can effectively improve the accuracy of disease semantic similarity, thereby increasing the association prediction ability of the IDSSIM-WKNKN model; in terms of case studies, most of the potential disease-associated lncRNAs predicted by IDSSIM can be confirmed by databases and the literature, implying that IDSSIM can serve as a promising tool for predicting lncRNA functions, identifying potential lncRNA-disease associations, and pre-screening candidate lncRNAs for biological experiments. The IDSSIM code, all experimental data, and prediction results are available online at https://github.com/CDMB-lab/IDSSIM.
Genome sequence analysis has shown that less than 2% of the human genome sequence encodes protein, corresponding to about 20,000 protein-coding genes, whereas more than 98% does not encode protein, yielding a great number of non-coding RNAs (ncRNAs) [1,2,3]. In fact, it has been widely acknowledged that ncRNAs also play key regulatory roles in various biological processes [4, 5]. As members of the ncRNA family, long non-coding RNAs (lncRNAs), defined as ncRNAs with more than 200 nucleotides in length, have more recently been suggested as potential drivers of several diseases [4, 6]. For instance, Gregory et al. reported that lncRNA HOTAIR promotes proliferation, survival, invasion, metastasis, and drug resistance in lung cancer cells [7]. Wang et al. summarized several lncRNAs that have been reported to be involved in the pathogenesis of Alzheimer's disease, Parkinson's disease, Huntington's disease, and amyotrophic lateral sclerosis [8]. Therefore, inferring lncRNA functions, as well as detecting lncRNA-disease associations, can help us to deeply understand the pathogenesis of human diseases [9, 10]. For inferring lncRNA functions, a simple but efficient way is to develop a functional similarity calculation model that infers lncRNA-lncRNA functional similarities using their known functions and associations with specific diseases. Compared with biological experiments, the functional similarity calculation model is a valuable supplement for characterizing lncRNA functions with less time and cost; its output can be further used by lncRNA-disease association detection methods to better understand the underlying genetic mechanisms of human diseases at the lncRNA level, leading to more accurate associations between lncRNAs and diseases being captured [11,12,13].
Many lncRNA functional similarity calculation models have been proposed so far [12,13,14,15,16], which mainly fall into four categories [17]. The first is based on the lncRNA expression profile. Since the lncRNA expression profile characterizes an lncRNA in digital form, the expression similarity between two lncRNAs can be calculated using correlation measures, and this similarity has a strong link to functional similarity. Chen et al. proposed the LRLSLDA method to predict lncRNA-disease associations, where the Spearman correlation coefficient was used to measure the expression similarity between the expression profiles of each lncRNA pair, which was combined with the lncRNA Gaussian interaction profile kernel similarity to obtain the lncRNA functional similarity [14]. The second is based on gene ontology (GO) terms, since many lncRNAs have been annotated with GO terms, which are broadly adopted for describing biological functions. Yu et al. utilized a Bayesian prior probability strategy, as well as associations between lncRNAs and GO terms, to measure the lncRNA functional similarity [15]. The third is based on lncRNA interactions with other biomolecules. It is believed that lncRNAs normally interact with other biomolecules, such as miRNAs and mRNAs, in complicated ways to jointly affect diseases. Therefore, measuring the lncRNA functional similarity through its interactions with other biomolecules is reasonable. Cheng et al. developed the IntNetLncSim model to calculate the lncRNA functional similarity based on the integration of two interaction networks (mRNA-mRNA, miRNA-mRNA) and the lncRNA-regulatory network [12]. The fourth is based on lncRNA-disease associations. Assuming that similar lncRNAs show similar functions and therefore affect similar diseases, the lncRNA functional similarity can be measured using lncRNA-disease associations and disease semantic similarity. Chen et al. proposed both the LNCSIM1 and LNCSIM2 models to measure the lncRNA functional similarity, the former based on directed acyclic graphs (DAGs) and the latter based on information content (IC) for calculating the disease semantic similarity [16]. Their reliable performance improvements have been demonstrated in both cross validation and case studies. Nevertheless, they also have several limitations that need to be addressed. For example, semantic contributions of different disease terms at the same layer cannot be effectively distinguished in LNCSIM1, and the accuracy of the IC value in LNCSIM2 always depends on the information integrity of the DAGs. Huang et al. therefore developed an edge-based calculation model, ILNCSIM, to measure the lncRNA functional similarity, the main improvement of which comes from combining the concept of IC with the hierarchical structure of DAGs for calculating disease semantic similarity [13].
In this study, inspired by previous models, especially LNCSIM1, LNCSIM2, and ILNCSIM, we present an lncRNA functional similarity calculation model, IDSSIM for short, based on an improved disease semantic similarity method. The highlight of the improved disease semantic similarity method is the introduction of an IC contribution factor into the semantic value calculation to take into account both the hierarchical structures of DAGs and the specificities of diseases. Experiments comparing IDSSIM with three state-of-the-art models, i.e., LNCSIM1, LNCSIM2, and ILNCSIM, were performed on both the lncRNADisease database and the MNDR database, using receiver operating characteristic (ROC) curves and area under the curve (AUC) values as evaluation measures. Results demonstrated that IDSSIM is superior to the compared models and can effectively improve the accuracy of disease semantic similarity, leading to an increase in the association prediction ability of our model. In addition, case studies of breast cancer and adenocarcinoma were conducted. Results showed that most of the potential disease-associated lncRNAs predicted by IDSSIM can be confirmed by databases and the literature, implying that IDSSIM can serve as a promising tool for predicting lncRNA functions, identifying potential lncRNA-disease associations, and pre-screening candidate lncRNAs for biological experiments.
Human lncRNA-disease associations
Two matrices that contain human lncRNA-disease associations were collected for the calculation of lncRNA functional similarities. The first matrix was obtained in October 2019 from the 2017 version of the lncRNADisease database [18] (http://www.cuilab.cn/lncrnadisease). In total, 116 lncRNAs were collected according to reference [19]. After performing quality control to exclude lncRNAs unrecorded in the lncRNADisease database and diseases with irregular names or lacking Medical Subject Headings (MeSH) tree numbers, 157 diseases, 82 lncRNAs, and 701 associations were retained. The second matrix was downloaded from the Mammalian ncRNA-disease repository (MNDR) database [20] (http://www.rna-society.org/mndr/index.html) in October 2019. After the same quality control, we obtained lncRNA-disease associations covering 89 diseases, 190 lncRNAs, and 1680 associations. In these two matrices, each row represents an lncRNA and each column represents a disease. If an lncRNA is associated with a disease, the corresponding matrix element is set to 1; otherwise, it is 0.
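To make the data representation concrete, a minimal sketch of building such a binary association matrix is shown below; the function name and arguments are illustrative and not taken from the released IDSSIM code.

```python
# A minimal sketch (illustrative names, not the IDSSIM release): rows are
# lncRNAs, columns are diseases, and an entry is 1 when the pair appears in
# the curated association list that passed quality control.
import numpy as np

def build_association_matrix(associations, lncRNAs, diseases):
    """associations: iterable of (lncRNA_name, disease_name) pairs."""
    row = {name: i for i, name in enumerate(lncRNAs)}
    col = {name: j for j, name in enumerate(diseases)}
    A = np.zeros((len(lncRNAs), len(diseases)), dtype=int)
    for lnc, dis in associations:
        if lnc in row and dis in col:  # drop names excluded by quality control
            A[row[lnc], col[dis]] = 1
    return A
```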
Disease semantic similarity
Disease semantic similarity between two diseases can be calculated using their DAGs, which are constructed by mapping disease names to MeSH descriptors. MeSH descriptors were obtained from the National Library of Medicine [21] (http://www.nlm.nih.gov/), of which the disease category was used here. For a disease A, its DAG can be denoted as DAGA = {TA, EA}, where TA is the set of ancestor nodes of A including A itself, and EA is the set of all edges in the DAG. A disease term t ∈ TA in DAGA makes a semantic contribution to the disease A, which is defined as the semantic value \( {SV}_A^1(t) \) of t to the disease A, and can be calculated in LNCSIM1 [16] by the following formula,
$$ {SV}_A^1(t)=\begin{cases}1 & t=A\\ \max\left\{\Delta \times {SV}_A^1(t^{\prime}) \mid t^{\prime}\in C(t)\right\} & t\ne A\end{cases} $$
where C(t) is the set of children of t, and Δ is the semantic contribution factor of the edges in EA linking t and t′, which is normally set to 0.5 [22].
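As a sketch of how this recursion can be evaluated, the following Python fragment assumes the DAG of disease A is given as a `children` mapping restricted to the terms in TA; the names are illustrative and not from the published implementation.

```python
DELTA = 0.5  # semantic contribution factor of LNCSIM1

def sv1(t, A, children, delta=DELTA):
    """Semantic value SV_A^1(t); children[t] lists the children of t within DAG_A."""
    if t == A:
        return 1.0
    # every ancestor of A has at least one child on a path toward A,
    # and the contribution decays by delta along each edge
    return max(delta * sv1(c, A, children, delta) for c in children[t])
```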
This formula interprets the DAG quantitatively under the assumption that disease terms at the same layer of DAGA make the same semantic contribution to the disease A. However, this assumption is sometimes problematic. For example, suppose disease terms t1 and t2 are at the same layer of DAGA, but compared with t2, t1 is a rarer disease and appears in fewer DAGs. In this case, concluding that t1 is a more specific disease term than t2 in DAGA, and therefore that \( {SV}_A^1\left({t}_1\right) \) should be higher than \( {SV}_A^1\left({t}_2\right) \), seems more reasonable than the assumption of LNCSIM1.
To consider this situation, LNCSIM2 used another formula to calculate the contribution of disease term t ∈ TA in DAGA to the semantic value of disease A,
$$ {SV}_A^2(t)=-\log \frac{Dags(t)}{D} $$
where D is the number of diseases in MeSH, and Dags(t) is the number of DAGs that include t. This IC strategy helps to retain disease specificity, and performs well when several diseases with significantly different DAG frequencies appear at the same layer of a DAG. However, its accuracy depends on the information integrity of the DAGs and easily suffers from information bias in the DAGs.
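The IC term itself is a one-liner; a sketch follows, assuming `dags_count[t]` holds the number of disease DAGs in MeSH that contain term t (a hypothetical pre-computed lookup).

```python
import math

def sv2(t, dags_count, D):
    """LNCSIM2 information-content contribution: rarer terms score higher."""
    return -math.log(dags_count[t] / D)
```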
In the IDSSIM model, we leveraged the advantages of both LNCSIM1 and LNCSIM2, and defined the contribution of disease term t ∈ TA in DAGA to the semantic value of disease A as,
$$ {SV}_A^3(t)=\begin{cases}1 & t=A\\ \max\left\{\left(\Delta +{P}_t\right)\times {SV}_A^3(t^{\prime}) \mid t^{\prime}\in C(t)\right\} & t\ne A\end{cases} $$
where Pt is the IC contribution factor, and defined as,
$$ {P}_t=\frac{\underset{k\in K}{\max}\left( Dags(k)\right)- Dags(t)}{D} $$
where K is the set of all diseases in MeSH. It should be noted that for a disease term t, its Pt value changes as the MeSH version is continuously updated.
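A sketch of the IDSSIM contribution, reusing the `children` and `dags_count` lookups assumed above, might look as follows; rarer terms (smaller Dags(t)) receive a larger Pt, so their edges contribute more than the flat factor of LNCSIM1.

```python
def p_factor(t, dags_count, D):
    """IC contribution factor P_t from the formula above."""
    return (max(dags_count.values()) - dags_count[t]) / D

def sv3(t, A, children, dags_count, D, delta=0.5):
    """Semantic value SV_A^3(t) with the term-specific factor delta + P_t."""
    if t == A:
        return 1.0
    return max((delta + p_factor(t, dags_count, D)) *
               sv3(c, A, children, dags_count, D, delta)
               for c in children[t])
```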
Then, the semantic value of disease A in IDSSIM is calculated in the same way as described in LNCSIM1, that is, as the summation of the contributions of all disease terms in DAGA to the disease A,
$$ SV(A)=\sum \limits_{t\in {T}_A}{SV}_A^3(t) $$
Furthermore, the disease semantic similarity between two diseases A and B is defined in a similar way to LNCSIM1, based on the disease terms shared by their DAGs,
$$ DSS\left(A,B\right)=\frac{\sum \limits_{t\in {T}_A\cap {T}_B}\left({SV}_A^3(t)+{SV}_B^3(t)\right)}{SV(A)+ SV(B)} $$
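Putting the pieces together, a sketch of DSS(A, B) reusing `sv3` from above could read as follows; `terms[A]` is the ancestor set TA and `children[A]` the child map of DAGA, both assumed to be pre-built from MeSH.

```python
def dss(A, B, terms, children, dags_count, D):
    """Disease semantic similarity from the shared terms of the two DAGs."""
    TA, TB = terms[A], terms[B]
    sv_A = {t: sv3(t, A, children[A], dags_count, D) for t in TA}
    sv_B = {t: sv3(t, B, children[B], dags_count, D) for t in TB}
    shared = TA & TB
    num = sum(sv_A[t] + sv_B[t] for t in shared)
    return num / (sum(sv_A.values()) + sum(sv_B.values()))
```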
To illustrate the calculation of the disease semantic similarity more clearly, an example is given in Fig. 1. First, the DAGs of two diseases, i.e., Pancreatic Neoplasms and Liver Neoplasms, were constructed using MeSH descriptors. The DAG of Pancreatic Neoplasms has 4 layers and 8 disease terms, and the DAG of Liver Neoplasms has 4 layers and 6 disease terms, among which 4 disease terms are shared by the two diseases. Second, D, Dags(t), and \( \underset{k\in K}{\max}\left( Dags(k)\right) \) were calculated using all disease DAGs, and the semantic contribution factor Δ was set to 0.5 [16, 22]. Note that disease terms in the same layer have different contribution factors Δ + Pt, and therefore make different semantic contributions \( {SV}_A^3(t) \) to the disease in each DAG. Third, the semantic values of these two diseases and their disease semantic similarity were calculated using the above formulas. As the example shows, the IDSSIM model takes into account both the hierarchical structures of DAGs and the specificities of diseases.
An example of calculating the disease semantic similarity in IDSSIM
LncRNA functional similarity
In the IDSSIM model, the lncRNA functional similarity is calculated in the same way as described in references [11, 13, 16]. An example explaining the calculation process is shown in Fig. 2.
An example of calculating the lncRNA functional similarity in IDSSIM
Suppose DG(u) and DG(v) are the disease groups of lncRNAs u and v, respectively, collected from the matrix of human lncRNA-disease associations. The lncRNA functional similarity between u and v can then be calculated using the semantic similarities of the diseases appearing in DG(u) and DG(v). More specifically, the disease semantic similarity sub-matrix is first constructed, where both rows and columns represent the diseases that appear in DG(u) ∪ DG(v), and each element is the disease semantic similarity between the corresponding diseases. Then, the similarity between a disease of one disease group and the other disease group is defined as,
$$ S\left({d}_u, DG(v)\right)=\underset{d\in DG(v)}{\max}\left( DSS\left({d}_u,d\right)\right) $$
$$ S\left({d}_v, DG(u)\right)=\underset{d\in DG(u)}{\max}\left( DSS\left({d}_v,d\right)\right) $$
where du and dv represent one disease in DG(u) and DG(v), respectively. Next, the similarities of two disease groups to each other were defined as,
$$ {S}_{u\to v}=\sum \limits_{d\in DG(u)}S\left(d, DG(v)\right) $$
$$ {S}_{v\to u}=\sum \limits_{d\in DG(v)}S\left(d, DG(u)\right) $$
Finally, the lncRNA functional similarity between u and v was defined as,
$$ FS\left(u,v\right)=\frac{S_{u\to v}+{S}_{v\to u}}{\left| DG(u)\right|+\left| DG(v)\right|} $$
where |⋅| denotes the number of diseases in the corresponding disease group.
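The whole group-to-group step can be condensed into a few lines; the sketch below assumes `DSS` is a precomputed nested lookup of pairwise disease similarities, as in the sub-matrix described above.

```python
def fs(DG_u, DG_v, DSS):
    """lncRNA functional similarity FS(u, v) from the two disease groups."""
    def best(d, group):                      # S(d, DG): best match in the other group
        return max(DSS[d][e] for e in group)
    s_uv = sum(best(d, DG_v) for d in DG_u)  # S_{u -> v}
    s_vu = sum(best(d, DG_u) for d in DG_v)  # S_{v -> u}
    return (s_uv + s_vu) / (len(DG_u) + len(DG_v))
```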
To evaluate the performance of IDSSIM, we compared it with three state-of-the-art models, i.e., LNCSIM1, LNCSIM2, and ILNCSIM, on both the lncRNADisease database and the MNDR database, using ROC curves and AUC values generated by a five-fold cross validation strategy [13] as evaluation measures.
Specifically, for each database, the original matrix of human lncRNA-disease associations was randomly divided into five groups; the scores of one group were changed to 0 while the others remained unchanged. These five changed association matrices, together with the results of each compared model, i.e., the disease semantic similarity matrix and the lncRNA functional similarity matrix, were applied in turn to the association prediction method WKNKN [23] to obtain five predicted matrices of human lncRNA-disease associations. WKNKN was used here because it was recently proposed, has been claimed to facilitate association prediction, and its package is available online. For the changed group in the original matrix of human lncRNA-disease associations, associations with scores equal to 1 were considered observed positives; otherwise, observed negatives. For the changed group in each predicted matrix of human lncRNA-disease associations, associations with scores higher than a threshold were considered predicted positives; otherwise, predicted negatives, where the threshold was set in turn to each predicted score in the changed group in descending order. Therefore, for each predicted matrix of human lncRNA-disease associations, true positive rates (TPR) and false positive rates (FPR) can be obtained with different thresholds. To reduce the error caused by random grouping, the five-fold cross validation was repeated 10 times for each compared model, and the average values of TPR and FPR were used to draw the ROC curve and calculate the AUC value.
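A rough sketch of this masking scheme is given below; WKNKN is treated as a black-box `predict` function, the 10 repetitions are omitted, and scikit-learn is used for the ROC/AUC step purely for brevity, so this is not the authors' exact pipeline.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def five_fold_auc(A, predict, seed=0):
    """A: binary lncRNA-disease matrix; predict: masked matrix -> score matrix."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(A.size)            # split all entries into five groups
    aucs = []
    for fold in np.array_split(idx, 5):
        masked = A.ravel().copy()
        truth = masked[fold].copy()          # observed labels of the held-out group
        masked[fold] = 0                     # hide this group's scores
        scores = predict(masked.reshape(A.shape)).ravel()[fold]
        aucs.append(roc_auc_score(truth, scores))  # needs both classes in `truth`
    return float(np.mean(aucs))
```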
ROC curves and AUC values of the compared models on the lncRNADisease database and the MNDR database are shown in Fig. 3. In terms of ROC curves and AUC values, IDSSIM performed best among all compared models on these two databases. For the lncRNADisease database, the AUC value of IDSSIM was 0.8966, which is 0.74%, 0.85%, and 1.00% higher than the AUC values of LNCSIM1, LNCSIM2, and ILNCSIM, respectively. Similarly, for the MNDR database, the AUC value of IDSSIM was 0.9302, an increase of 0.51%, 0.22%, and 0.35% over those of LNCSIM1, LNCSIM2, and ILNCSIM, respectively. These experimental results demonstrate that IDSSIM provides more accurate disease semantic similarity and lncRNA functional similarity matrices. Therefore, based on these two matrices, the performance of an association prediction method such as WKNKN can be further improved.
ROC curves and AUC values of compared models on lncRNADisease database and MNDR database
We applied the two similarity matrices generated by IDSSIM, namely the disease semantic similarity matrix and the lncRNA functional similarity matrix, together with the corresponding downloaded matrix of human lncRNA-disease associations from either the lncRNADisease database or the MNDR database, to the association prediction method WKNKN [23] to obtain two predicted matrices of human lncRNA-disease associations. In these two predicted matrices, several potential lncRNA-disease associations were identified, which might be useful for uncovering the underlying genetic mechanisms of diseases, although they require further bioinformatic study and experimental confirmation. In Fig. 4, the significant potential lncRNA-disease associations captured by IDSSIM are shown as networks. In each network, blue and red nodes represent lncRNAs and diseases, respectively, and each edge linking an lncRNA and a disease represents a captured significant potential lncRNA-disease association, whose score is higher than the threshold m(LDA) + 2 ⋅ sd(LDA), where LDA denotes the scores of all potential lncRNA-disease associations captured by IDSSIM, and m(⋅) and sd(⋅) are their mean and standard deviation. We believe that these two networks can provide important clues for the exploration of causative biomarkers of diseases.
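A sketch of this thresholding rule, with illustrative names, is shown below; `pred` is a predicted score matrix and `known` the original binary association matrix, so only previously unknown pairs are considered candidates.

```python
import numpy as np

def significant_pairs(pred, known):
    """Indices of potential associations scoring above mean + 2*std."""
    cand = pred[known == 0]                # scores of potential associations only
    thr = cand.mean() + 2 * cand.std()
    keep = (known == 0) & (pred > thr)
    return list(zip(*np.where(keep)))      # (lncRNA_index, disease_index) pairs
```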
The significant potential lncRNA-disease association networks captured by IDSSIM
Based on the predicted matrix of human lncRNA-disease associations for the lncRNADisease database, case studies, a routine evaluation method widely adopted by association prediction models [23, 24], were used to further validate the effectiveness of IDSSIM. Two diseases, i.e., breast cancer and adenocarcinoma, were taken as cases. For each disease, the top 20 predicted potential lncRNAs were recorded, as shown in Table 1 and Table 2, respectively. In the tables, the lncRNAs were examined one by one to confirm whether they associate with the disease, using the lncRNADisease (v2.0) database [25], the Lnc2Cancer database [26], and recently published literature.
Table 1 Top 20 predicted potential lncRNAs associated with breast cancer
Table 2 Top 20 predicted potential lncRNAs associated with adenocarcinoma
Breast cancer is one of the most common malignant tumors threatening the health of women, accounting for about 500,000 deaths per year worldwide [27]. Recent advances have suggested that dysregulation of lncRNAs is associated with breast cancer [28, 29]. Beyond the known associations between lncRNAs and breast cancer in the lncRNADisease database, we predicted 20 potential lncRNAs in Table 1 that might be involved in breast cancer. Among them, 8 lncRNAs have been confirmed by the lncRNADisease (v2.0) and Lnc2Cancer databases, and 4 lncRNAs have been reported in the literature to be implicated in breast cancer. Sarrafzadeh et al. demonstrated that significant up-regulation of PCAT1 has only been detected in a fraction of breast cancers and concluded that PCAT1 is possibly involved in the pathogenesis of a fraction of breast cancers [30]. Ma et al. declared that SNHG3 promotes cell proliferation and invasion through the miR-384/hepatoma-derived growth factor axis in breast cancer [31]. Wang et al. identified MIR100HG as a pro-oncogene for triple-negative breast cancer progression that promotes cell proliferation through triplex formation with p27 loci [32]. Silwal-Pandit et al. showed that the sub-cellular localization of the WRAP53 protein has a significant impact on breast cancer survival, and thus has potential as a clinical marker in diagnostics and treatment [33].
Adenocarcinoma is a type of malignant tumor that appears in many human organs, for example, the lung [34], prostate [35], stomach [36], and colon [37]. Among the top 20 predicted potential lncRNAs in Table 2, 11 lncRNAs have been reported in the literature to be associated with adenocarcinoma. Dong et al. showed that GAS5 is significantly downregulated in lung adenocarcinoma tissues and may represent a potential biomarker for the diagnosis of lung adenocarcinoma [38]. Lee et al. found that HOTAIR is involved in the inhibition of apoptosis and promotes invasiveness, supporting a role for HOTAIR in the carcinogenesis and invasion of gastric adenocarcinoma [39]. Tano et al. suggested that MALAT1 enhances the cell motility of lung adenocarcinoma cells by influencing the expression of motility-related genes [40]. Li et al. confirmed that MEG3 plays a promoting role in the proliferation, invasion, and angiogenesis of lung adenocarcinoma cells through the AKT pathway [41]. Liu et al. reasoned that H19 promotes the viability and epithelial-mesenchymal transition of lung adenocarcinoma cells by targeting miR-29b-3p and modifying STAT3 [42]. Lin et al. concluded that overexpression of CCAT1 promotes metastasis via epithelial-to-mesenchymal transition in lung adenocarcinoma [43]. Jiang et al. found that increased expression of PANDAR promotes cell proliferation and inhibits cell apoptosis in pancreatic ductal adenocarcinoma [44]. Xu et al. provided strong evidence that PVT1 confers an aggressive phenotype to esophageal adenocarcinoma [45]. Liu et al. suggested that the UCA1 axis plays a crucial role in the progression of pancreatic ductal adenocarcinoma and may serve as a target for new therapies [46]. Hu et al. showed that CCAT2 may act as a competitive endogenous RNA to regulate FOXC1 expression by competitively binding miR-23b-5p in lung adenocarcinoma [47]. Lu et al. suggested that DANCR might be an oncogenic lncRNA that regulates mTOR expression through directly binding to miR-496, and may therefore be regarded as a biomarker or therapeutic target for lung adenocarcinoma [48].
Though future studies are needed to confirm the above findings, the case studies suggest that IDSSIM is a promising model for lncRNA function prediction, and that the time and cost of biological experiments could be significantly reduced by building on the clues provided by IDSSIM.
To further validate the effectiveness of IDSSIM, Venn diagrams of the four compared models are illustrated in Fig. 5, each element of which can be written as |Lcon|/|Lall|, where Lall represents the potential disease-associated lncRNAs predicted by all of the corresponding models, Lcon represents those lncRNAs in Lall that can be confirmed to be associated with the disease by databases and the literature, and |⋅| denotes the number of lncRNAs in Lall or Lcon. The combination of IDSSIM and WKNKN predicted more confirmed disease-associated lncRNAs than the other combinations of compared models with WKNKN. For breast cancer, IDSSIM predicted 35 potential disease-associated lncRNAs in total, 16 of which have been confirmed. The corresponding ratios for LNCSIM1, LNCSIM2, and ILNCSIM were 15/35, 14/30, and 14/34, respectively. Similarly, for adenocarcinoma, the ratios for IDSSIM, LNCSIM1, LNCSIM2, and ILNCSIM were 18/33, 18/33, 16/30, and 6/13, respectively.
Venn diagrams of four compared models on breast cancer and adenocarcinoma
The lncRNA functional similarity calculation model plays an important role in predicting lncRNA functions and identifying potential lncRNA-disease associations. In this paper, we proposed an lncRNA functional similarity calculation model, IDSSIM for short, based on an improved disease semantic similarity method, the highlight of which is the introduction of an IC contribution factor into the semantic value calculation to take into account both the hierarchical structures of DAGs and the specificities of diseases. To evaluate the performance of IDSSIM, comparison experiments with three state-of-the-art models, LNCSIM1, LNCSIM2, and ILNCSIM, were performed on both the lncRNADisease database and the MNDR database, using ROC curves and AUC values as evaluation measures. Results demonstrated that IDSSIM is superior to the compared models and can effectively improve the accuracy of disease semantic similarity, leading to an increase in the association prediction ability of our model. In addition, case studies of breast cancer and adenocarcinoma were conducted. Results showed that most of the potential disease-associated lncRNAs predicted by IDSSIM can be confirmed by databases and the literature, implying that IDSSIM can serve as a promising tool for predicting lncRNA functions, identifying potential lncRNA-disease associations, and pre-screening candidate lncRNAs for biological experiments.
However, IDSSIM still has several limitations, which motivate our future work. Firstly, information biases regarding diseases and/or lncRNAs in databases, which are usually caused by uneven research attention, sometimes lead to inaccurate lncRNA-disease association scores. Secondly, prior knowledge of lncRNAs, as well as their interactions with other biomolecules, should be incorporated into IDSSIM to further improve its prediction accuracy. Thirdly, a software package or web application of IDSSIM should be provided in the future.
The IDSSIM code and experimental data, including the matrices of human lncRNA-disease associations from the lncRNADisease database and the MNDR database, the two corresponding disease semantic similarity matrices, the two corresponding lncRNA functional similarity matrices, and the two corresponding matrices of human lncRNA-disease associations predicted by WKNKN, are available online at https://github.com/CDMB-lab/IDSSIM.
LncRNA:
Long non-coding RNA
NcRNAs:
Non-coding RNAs
MeSH:
Medical Subject Headings
MNDR:
Mammalian NcRNA-Disease Repository
DAGs:
Directed Acyclic Graphs
IC:
Information Content
AUC:
Area Under the Curve
WKNKN:
Weighted K Nearest Known Neighbors
TPR:
True Positive Rates
FPR:
False Positive Rates
ROC:
Receiver Operating Characteristic
Kapranov P, Cheng J, Dike S, Nix DA, Duttagupta R, Willingham AT, Stadler PF, Hertel J, Hackermuller J, Hofacker IL, et al. RNA maps reveal new RNA classes and a possible function for pervasive transcription. Science. 2007;316(5830):1484–8.
Kapranov P, Willingham AT, Gingeras TR. Genome-wide transcription and the implications for genomic organization. Nat Rev Genet. 2007;8(6):413–23.
Mercer TR, Dinger ME, Mattick JS. Long non-coding RNAs: insights into functions. Nat Rev Genet. 2009;10(3):155–9.
Esteller M. Non-coding RNAs in human disease. Nat Rev Genet. 2011;12(12):861–74.
Taft RJ, Pang KC, Mercer TR, Dinger M, Mattick JS. Non-coding RNAs: regulators of disease. J Pathol. 2010;220(2):126–39.
Matjašič A, Glavač D. Long noncoding RNAs and tumorigenesis. eLS. 2015:1–10.
Loewen G, Jayawickramarajah J, Zhuo Y, Shan B. Functions of lncRNA HOTAIR in lung cancer. J Hematol Oncol. 2014;7:90.
Wang DQ, Fu P, Yao C, Zhu LS, Hou TY, Chen JG, Lu Y, Liu D, Zhu LQ. Long non-coding RNAs, novel culprits, or bodyguards in neurodegenerative diseases. Mol Ther Nucleic Acids. 2018;10:269–76.
Chen X. KATZLDA: KATZ measure for the lncRNA-disease association prediction. Sci Rep. 2015;5:16840.
Chen X, Yan CC, Zhang X, You ZH. Long non-coding RNAs and complex diseases: from experimental results to computational models. Brief Bioinform. 2017;18(4):558–76.
Chen X, Huang YA, Wang XS, You ZH, Chan KC. FMLNCSIM: fuzzy measure-based lncRNA functional similarity calculation model. Oncotarget. 2016;7(29):45948–58.
Cheng L, Shi H, Wang Z, Hu Y, Yang H, Zhou C, Sun J, Zhou M. IntNetLncSim: an integrative network analysis method to infer human lncRNA functional similarity. Oncotarget. 2016;7(30):47864–74.
Huang YA, Chen X, You ZH, Huang DS, Chan KC. ILNCSIM: improved lncRNA functional similarity calculation model. Oncotarget. 2016;7(18):25902–14.
Chen X, Yan GY. Novel human lncRNA-disease association inference based on lncRNA expression profiles. Bioinformatics. 2013;29(20):2617–24.
Yu G, Fu G, Lu C, Ren Y, Wang J. BRWLDA: bi-random walks for predicting lncRNA-disease associations. Oncotarget. 2017;8(36):60429–46.
Chen X, Yan CC, Luo C, Ji W, Zhang Y, Dai Q. Constructing lncRNA functional similarity network based on lncRNA-disease associations and disease semantic similarity. Sci Rep. 2015;5:11338.
Chen X, Sun YZ, Guan NN, Qu J, Huang ZA, Zhu ZX, Li JQ. Computational models for lncRNA function prediction and functional similarity calculation. Brief Funct Genomics. 2019;18(1):58–82.
Chen G, Wang Z, Wang D, Qiu C, Liu M, Chen X, Zhang Q, Yan G, Cui Q. LncRNADisease: a database for long-non-coding RNA-associated diseases. Nucleic Acids Res. 2013;41:D983–6.
Ding L, Wang M, Sun D, Li A. TPGLDA: novel prediction of associations between lncRNAs and diseases via lncRNA-disease-gene tripartite graph. Sci Rep. 2018;8(1):1065.
Cui T, Zhang L, Huang Y, Yi Y, Tan P, Zhao Y, Hu Y, Xu L, Li E, Wang D. MNDR v2.0: an updated resource of ncRNA-disease associations in mammals. Nucleic Acids Res. 2018;46(D1):D371–4.
Lipscomb CE. Medical subject headings (MeSH). Bull Med Libr Assoc. 2000;88(3):265–6.
Wang D, Wang J, Lu M, Song F, Cui Q. Inferring the human microRNA functional similarity and functional network based on microRNA-associated diseases. Bioinformatics. 2010;26(13):1644–50.
Ezzat A, Zhao P, Wu M, Li XL, Kwoh CK. Drug-target interaction prediction with graph regularized matrix factorization. IEEE/ACM Trans Comput Biol Bioinform. 2017;14(3):646–56.
Yao D, Zhan X, Zhan X, Kwoh CK, Li P, Wang J. A random forest based computational model for predicting novel lncRNA-disease associations. BMC Bioinformatics. 2020;21(1):126.
Bao Z, Yang Z, Huang Z, Zhou Y, Cui Q, Dong D. LncRNADisease 2.0: an updated database of long non-coding RNA-associated diseases. Nucleic Acids Res. 2019;47(D1):D1034–7.
Gao Y, Wang P, Wang Y, Ma X, Zhi H, Zhou D, Li X, Fang Y, Shen W, Xu Y, et al. Lnc2Cancer v2.0: updated database of experimentally supported long non-coding RNAs in human cancers. Nucleic Acids Res. 2019;47(D1):D1028–33.
Benson JR, Jatoi I, Keisch M, Esteva FJ, Makris A, Jordan VC. Early breast cancer. Lancet. 2009;373:1463–79.
Fan H, Yuan J, Li X, Ma Y, Wang X, Xu B, Li X. LncRNA LINC00173 enhances triple-negative breast cancer progression by suppressing miR-490-3p expression. Biomed Pharmacother. 2020;125:109987.
Zheng S, Jiang F, Ge D, Tang J, Chen H, Yang J, Yao Y, Yan J, Qiu J, Yin Z, et al. LncRNA SNHG3/miRNA-151a-3p/RAB22A axis regulates invasion and migration of osteosarcoma. Biomed Pharmacother. 2019;112:108695.
Sarrafzadeh S, Geranpayeh L, Ghafouri-Fard S. Expression analysis of long non-coding PCAT-1 in breast cancer. Int J Hematol Oncol Stem Cell Res. 2017;11(3):185–91.
Ma Q, Qi X, Lin X, Li L, Chen L, Hu W. LncRNA SNHG3 promotes cell proliferation and invasion through the miR-384/hepatoma-derived growth factor axis in breast cancer. Hum Cell. 2020;33(1):232–42.
Wang S, Ke H, Zhang H, Ma Y, Ao L, Zou L, Yang Q, Zhu H, Nie J, Wu C, et al. LncRNA MIR100HG promotes cell proliferation in triple-negative breast cancer through triplex formation with p27 loci. Cell Death Dis. 2018;9(8):805.
Silwal-Pandit L, Russnes H, Borgen E, Skarpeteig V, Moen Vollan HK, Schlichting E, Karesen R, Naume B, Borresen-Dale AL, Farnebo M, et al. The sub-cellular localization of WRAP53 has prognostic impact in breast Cancer. PLoS One. 2015;10(10):e0139965.
Collisson EA, Rosenberg M, Balasundaram M, Chin E, Curley E, Saller C. Comprehensive molecular profiling of lung adenocarcinoma. Nature. 2014;511(7511):543–50.
Cho N-Y, Choi M, Kim B-H, Cho Y-M, Moon KC, Kang GH. BRAF and KRAS mutations in prostatic adenocarcinoma. Int J Cancer. 2006;119(8):1858–62.
Matsuyama S, Ohkura Y, Eguchi H, Kobayashi Y, Akagi K, Uchida K, Nakachi K, Gustafsson JA, Hayashi S. Estrogen receptor beta is expressed in human stomach adenocarcinoma. J Cancer Res Clin Oncol. 2002;128(6):319–24.
Reedijk M, Odorcic S, Zhang H, Chetty R, Tennert C, Dickson BC, Lockwood G, Gallinger S, Egan SE. Activation of notch signaling in human colon adenocarcinoma. Int J Oncol. 2008;33(6):1223–9.
Dong S, Qu X, Li W, Zhong X, Li P, Yang S, Chen X, Shao M, Zhang L. The long non-coding RNA, GAS5, enhances gefitinib-induced cell death in innate EGFR tyrosine kinase inhibitor-resistant lung adenocarcinoma cells with wide-type EGFR via downregulation of the IGF-1R expression. J Hematol Oncol. 2015;8:43.
Lee NK, Lee JH, Park CH, Yu D, Lee YC, Cheong JH, Noh SH, Lee SK. Long non-coding RNA HOTAIR promotes carcinogenesis and invasion of gastric adenocarcinoma. Biochem Biophys Res Commun. 2014;451(2):171–8.
Tano K, Mizuno R, Okada T, Rakwal R, Shibato J, Masuo Y, Ijiri K, Akimitsu N. MALAT-1 enhances cell motility of lung adenocarcinoma cells by influencing the expression of motility-related genes. FEBS Lett. 2010;584(22):4575–80.
Li H, Wang J, Lv S, Zhang Y, Zhang C, Lige B, Dan S, Sun Y. Long noncoding RNA MEG3 plays a promoting role in the proliferation, invasion, and angiogenesis of lung adenocarcinoma cells through the AKT pathway. J Cell Biochem. 2019;120(9):16143–52.
Liu L, Liu L, Lu S. lncRNA H19 promotes viability and epithelial-mesenchymal transition of lung adenocarcinoma cells by targeting miR-29b-3p and modifying STAT3. Int J Oncol. 2019;54(3):929–41.
Lin H, Cheng W, Yan H, Zhang X. Overexpression of the long noncoding RNA CCAT1 promotes metastasis via epithelial-to-mesenchymal transition in lung adenocarcinoma. Oncol Lett. 2018;16(2):1809–14.
Jiang Y, Feng E, Sun L, Jin W, You Y, Yao Y, Xu Y. An increased expression of long non-coding RNA PANDAR promotes cell proliferation and inhibits cell apoptosis in pancreatic ductal adenocarcinoma. Biomed Pharmacother. 2017;95:685–91.
Xu Y, Li Y, Jin J, Han G, Sun C, Pizzi MP, Huo L, Scott A, Wang Y, Ma L, et al. LncRNA PVT1 up-regulation is a poor prognosticator and serves as a therapeutic target in esophageal adenocarcinoma. Mol Cancer. 2019;18(1):141.
Liu Y, Feng W, Gu S, Wang H, Zhang Y, Chen W, Xu W, Lin C, Gong A, Xu M. The UCA1/KRAS axis promotes human pancreatic ductal adenocarcinoma stem cell properties and tumor growth. Am J Cancer Res. 2019;9(3):496–510.
Hu GD, Wang CX, Wang HY, Wang YQ, Hu S, Cao ZW, Min B, Li L, Tian XF, Hu HB. Long noncoding RNA CCAT2 functions as a competitive endogenous RNA to regulate FOXC1 expression by sponging miR-23b-5p in lung adenocarcinoma. J Cell Biochem. 2018.
Lu QC, Rui ZH, Guo ZL, Xie W, Shan S, Ren T. LncRNA-DANCR contributes to lung adenocarcinoma progression by sponging miR-496 to modulate mTOR expression. J Cell Mol Med. 2018;22(3):1527–37.
We are grateful to the anonymous reviewers whose suggestions and comments contributed to the significant improvement of this paper.
This work was supported by the National Science Foundation of China (61972226, 61902216, 61701279, and 61872220) and the China Postdoctoral Science Foundation (2018M642635).
School of Information Science and Engineering, Qufu Normal University, Rizhao, 276826, China
Wenwen Fan, Junliang Shang, Feng Li, Yan Sun, Shasha Yuan & Jin-Xing Liu
Wenwen Fan
Junliang Shang
Yan Sun
Shasha Yuan
Jin-Xing Liu
WF and JS jointly contributed to the design of the study. WF designed and implemented IDSSIM, performed the experiments, and drafted the manuscript. FL participated in designing evaluation criteria. YS, SY and J-X L contributed to the data analysis. All authors read and approved the final manuscript.
Correspondence to Junliang Shang.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
Fan, W., Shang, J., Li, F. et al. IDSSIM: an lncRNA functional similarity calculation model based on an improved disease semantic similarity method. BMC Bioinformatics 21, 339 (2020). https://doi.org/10.1186/s12859-020-03699-9
Received: 30 April 2020
Accepted: 23 July 2020
DOI: https://doi.org/10.1186/s12859-020-03699-9
lncRNA-disease associations
Novel computational methods for the analysis of biological systems | CommonCrawl |
arXiv:1301.6085 (hep-ph)
[Submitted on 25 Jan 2013]
Title:Supersymmetric SO(10) GUTs with sliding scales
Authors:C. Arbelaez, R. M. Fonseca, M. Hirsch, J. C. Romao
Abstract: We construct lists of supersymmetric models with extended gauge groups at intermediate steps, all of which are based on SO(10) unification. We consider three different kinds of setups: (i) The model has exactly one additional intermediate scale with a left-right (LR) symmetric group; (ii) SO(10) is broken to the LR group via an intermediate Pati-Salam (PS) scale; and (iii) the LR group is broken into $SU(3)_{c} \times SU(2)_{L} \times U(1)_{R} \times U(1)_{B-L}$, before breaking to the SM group. We use sets of conditions, which we call the "sliding mechanism", which yield unification with the extended gauge group(s) allowed at arbitrary intermediate energy scales. All models thus can have new gauge bosons within the reach of the LHC, in principle. We apply additional conditions, such as perturbative unification, renormalizability and anomaly cancellation and find that, despite these requirements, for the ansatz (i) with only one additional scale still around 50 different variants exist that can have an LR symmetry below 10 TeV. For the more complicated schemes (ii) and (iii) literally thousands of possible variants exist, and for scheme (ii) we have also found variants with very low PS scales. We also discuss possible experimental tests of the models from measurements of SUSY masses. Assuming mSugra boundary conditions we calculate certain combinations of soft terms, called "invariants", for the different classes of models. Values for all the invariants can be classified into a small number of sets, which contain information about the class of models and, in principle, the scale of beyond-MSSM physics, even in case the extended gauge group is broken at an energy beyond the reach of the LHC.
Subjects: High Energy Physics - Phenomenology (hep-ph)
Report number: CFTP/13-002, IFIC/13-02
Cite as: arXiv:1301.6085 [hep-ph]
(or arXiv:1301.6085v1 [hep-ph] for this version)
From: Carolina Arbelaez
[v1] Fri, 25 Jan 2013 16:39:12 UTC (2,856 KB) | CommonCrawl |
Chapter 3: Kan Complexes
Section 3.1: The Homotopy Theory of Kan Complexes
Subsection 3.1.5: Homotopy Equivalences and Weak Homotopy Equivalences
3.1.5 Homotopy Equivalences and Weak Homotopy Equivalences
Let $f: X_{} \rightarrow Y_{}$ be a morphism of Kan complexes. We will say that $f$ is a homotopy equivalence if the homotopy class $[f]$ is an isomorphism in the homotopy category $\mathrm{h} \mathit{\operatorname{Kan}}$ of Construction 3.1.4.10. This definition can be extended to more general simplicial sets in multiple ways.
Definition 3.1.5.1. Let $f: X_{} \rightarrow Y_{}$ be a morphism of simplicial sets. We will say that a morphism $g: Y_{} \rightarrow X_{}$ is a homotopy inverse to $f$ if the compositions $g \circ f$ and $f \circ g$ are homotopic to the identity morphisms $\operatorname{id}_{X_{}}$ and $\operatorname{id}_{ Y_{} }$, respectively (in the sense of Definition 3.1.4.2). We say that $f: X_{} \rightarrow Y_{}$ is a homotopy equivalence if it admits a homotopy inverse $g$.
Example 3.1.5.2. Let $f: X \rightarrow Y$ be a homotopy equivalence of topological spaces. Then the induced map of singular simplicial sets $\operatorname{Sing}_{\bullet }(f): \operatorname{Sing}_{\bullet }(X) \rightarrow \operatorname{Sing}_{\bullet }(Y)$ is a homotopy equivalence (see Example 3.1.4.6).
Remark 3.1.5.3. Let $f: X_{} \rightarrow Y_{}$ be a morphism of simplicial sets. The condition that $f$ is a homotopy equivalence depends only on the homotopy class $[f] \in \pi _0( \operatorname{Fun}(X_{}, Y_{} ) )$. Moreover, if $f$ is a homotopy equivalence, then its homotopy inverse $g: Y_{} \rightarrow X_{}$ is determined uniquely up to homotopy.
Remark 3.1.5.4. Let $f: X_{} \rightarrow Y_{}$ be a morphism of Kan complexes. If $f$ is a homotopy equivalence, then the induced map of fundamental groupoids $\pi _{\leq 1}(f): \pi _{\leq 1}(X) \rightarrow \pi _{\leq 1}(Y)$ is an equivalence of categories. In particular, $f$ induces a bijection $\pi _0(f): \pi _0( X_{} ) \rightarrow \pi _0( Y_{} )$.
Remark 3.1.5.5. Let $f: X_{} \rightarrow Y_{}$ be a morphism of simplicial sets. The following conditions are equivalent:
The morphism $f$ is a homotopy equivalence.
For every simplicial set $Z_{}$, composition with $f$ induces a bijection $\pi _0( \operatorname{Fun}(Y_{}, Z_{})) \rightarrow \pi _0( \operatorname{Fun}( X_{}, Z_{}) )$.
For every simplicial set $W_{}$, composition with $f$ induces a bijection $\pi _0( \operatorname{Fun}(W_{}, X_{} ) ) \rightarrow \pi _0( \operatorname{Fun}( W_{}, Y_{} ))$.
In particular (taking $W_{} = \Delta ^{0}$), if $f$ is a homotopy equivalence, then the induced map $\pi _0(f): \pi _0( X_{} ) \rightarrow \pi _0( Y_{} )$ is a bijection.
Remark 3.1.5.6 (Two-out-of-Six). Let $f: W_{} \rightarrow X_{}$, $g: X_{} \rightarrow Y_{}$, and $h: Y_{} \rightarrow Z_{}$ be morphisms of simplicial sets. If $g \circ f$ and $h \circ g$ are homotopy equivalences, then $f$, $g$, and $h$ are all homotopy equivalences.
Remark 3.1.5.7 (Two-out-of-Three). Let $f: X_{} \rightarrow Y_{}$ and $g: Y_{} \rightarrow Z_{}$ be morphisms of simplicial sets. If any two of the morphisms $f$, $g$, and $g \circ f$ are homotopy equivalences, then so is the third.
We now give some more examples of homotopy equivalences.
Proposition 3.1.5.8. Let $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be a functor between categories, and suppose that $F$ admits either a left or a right adjoint. Then the induced map $\operatorname{N}_{\bullet }(F): \operatorname{N}_{\bullet }(\operatorname{\mathcal{C}}) \rightarrow \operatorname{N}_{\bullet }(\operatorname{\mathcal{D}})$ is a homotopy equivalence of simplicial sets.
Proof. Without loss of generality, we may assume that $F$ admits a right adjoint $G: \operatorname{\mathcal{D}}\rightarrow \operatorname{\mathcal{C}}$. Then there exist natural transformations $u: \operatorname{id}_{\operatorname{\mathcal{C}}} \rightarrow G \circ F$ and $v: F \circ G \rightarrow \operatorname{id}_{\operatorname{\mathcal{D}}}$ witnessing an adjunction between $F$ and $G$, so that the maps $\operatorname{N}_{\bullet }(F)$ and $\operatorname{N}_{\bullet }(G)$ are homotopy inverses by virtue of Example 3.1.4.7. $\square$
Proposition 3.1.5.9. Let $f: X_{} \rightarrow S_{}$ be a trivial Kan fibration of simplicial sets. Then $f$ is a homotopy equivalence.
Proof. Since $f$ is a trivial Kan fibration, the lifting problem
\[ \xymatrix@R =50pt@C=50pt{ \emptyset \ar [r] \ar [d] & X_{} \ar [d]^{f} \\ S_{} \ar [r]^-{\operatorname{id}} \ar@ {-->}[ur] & S_{} } \]
admits a solution (Proposition 1.4.5.3). We can therefore choose a morphism of simplicial sets $g: S_{} \rightarrow X_{}$ which is a section of $f$: that is, $f \circ g$ is the identity morphism from $S_{}$ to itself. We will complete the proof by showing that $g$ is a homotopy inverse to $f$. In fact, we claim that there exists a homotopy $h$ from $\operatorname{id}_{X_{}}$ to the composition $g \circ f$. This follows from the solubility of the lifting problem
\[ \xymatrix@C =100pt{ \{ 0,1\} \times X_{} \ar [r]^-{(\operatorname{id}, g \circ f)} \ar [d] & X_{} \ar [d]^{f} \\ X_{} \ar [r]^-{f} \ar@ {-->}[ur]^-{h} & S_{}. } \]
$\square$
When working with simplicial sets which are not Kan complexes, it is usually better to work with a more liberal notion of homotopy equivalence.
Definition 3.1.5.10. Let $f: X_{} \rightarrow Y_{}$ be a morphism of simplicial sets. We will say that $f$ is a weak homotopy equivalence if, for every Kan complex $Z_{}$, precomposition with $f$ induces a bijection $\pi _0( \operatorname{Fun}(Y_{}, Z_{} ) ) \rightarrow \pi _0( \operatorname{Fun}( X_{}, Z_{} ) )$.
Proposition 3.1.5.11. Let $f: X_{} \rightarrow Y_{}$ be a morphism of simplicial sets. If $f$ is a homotopy equivalence, then it is a weak homotopy equivalence. The converse holds if $X_{}$ and $Y_{}$ are Kan complexes.
Proof. The first assertion follows from Remark 3.1.5.5. For the second, assume that $f$ is a weak homotopy equivalence. If $X_{}$ is a Kan complex, then precomposition with $f$ induces a bijection $\pi _0( \operatorname{Fun}(Y_{}, X_{} ) ) \rightarrow \pi _0( \operatorname{Fun}( X_{}, X_{} ) )$. We can therefore choose a map of simplicial sets $g: Y_{} \rightarrow X_{}$ such that $g \circ f$ is homotopic to the identity on $X_{}$. It follows that $f \circ g \circ f$ is homotopic to $f = \operatorname{id}_{Y_{}} \circ f$. Invoking the injectivity of the map $\pi _0( \operatorname{Fun}(Y_{}, Y_{} ) ) \xrightarrow {\circ f} \pi _0( \operatorname{Fun}( X_{}, Y_{} ) )$, we conclude that $f \circ g$ is homotopic to $\operatorname{id}_{ Y_{} }$, so that $g$ is a homotopy inverse to $f$. $\square$
Proposition 3.1.5.12. Let $f: A_{} \hookrightarrow B_{}$ be an anodyne morphism of simplicial sets. Then $f$ is a weak homotopy equivalence.
Remark 3.1.5.13. We will later prove a (partial) converse to Proposition 3.1.5.12: if a monomorphism of simplicial sets $f: A_{} \hookrightarrow B_{}$ is a weak homotopy equivalence, then $f$ is anodyne (see Corollary 3.3.7.5).
Proof of Proposition 3.1.5.12. Let $i: A_{} \hookrightarrow B_{}$ be an anodyne morphism of simplicial sets; we wish to show that $i$ is a weak homotopy equivalence. Let $X_{}$ be any Kan complex. It follows from Corollary 3.1.3.6 that the restriction map $\theta : \operatorname{Fun}( B_{}, X_{} ) \rightarrow \operatorname{Fun}(A_{}, X_{} )$ is a trivial Kan fibration. In particular, $\theta $ is a homotopy equivalence (Proposition 3.1.5.9), and therefore induces a bijection on connected components $\pi _0( \operatorname{Fun}( B_{}, X_{} ) ) \rightarrow \pi _0( \operatorname{Fun}( A_{}, X_{} ) )$ (Remark 3.1.5.5). $\square$
Remark 3.1.5.14 (Two-out-of-Six). Let $f: W_{} \rightarrow X_{}$, $g: X_{} \rightarrow Y_{}$, and $h: Y_{} \rightarrow Z_{}$ be morphisms of simplicial sets. If $g \circ f$ and $h \circ g$ are weak homotopy equivalences, then $f$, $g$, and $h$ are all weak homotopy equivalences.
Remark 3.1.5.15 (Two-out-of-Three). Let $f: X_{} \rightarrow Y_{}$ and $g: Y_{} \rightarrow Z_{}$ be morphisms of simplicial sets. If any two of the morphisms $f$, $g$, and $g \circ f$ are weak homotopy equivalences, then so is the third.
Proposition 3.1.5.16. Let $f: X \rightarrow Y$ be a weak homotopy equivalence of simplicial sets. Then the induced map of normalized chain complexes $\mathrm{N}_{\ast }(X; \operatorname{\mathbf{Z}}) \rightarrow \mathrm{N}_{\ast }(Y; \operatorname{\mathbf{Z}})$ is a chain homotopy equivalence. In particular, $f$ induces an isomorphism of homology groups $\mathrm{H}_{\ast }(X;\operatorname{\mathbf{Z}}) \rightarrow \mathrm{H}_{\ast }(Y; \operatorname{\mathbf{Z}})$.
Proof. Let $M_{\ast }$ be a chain complex of abelian groups. We wish to show that precomposition with $\mathrm{N}_{\ast }(f; \operatorname{\mathbf{Z}})$ induces a bijection
\[ \xymatrix { \{ \text{Chain homotopy classes of maps $\mathrm{N}_{\ast }(Y; \operatorname{\mathbf{Z}}) \rightarrow M_{\ast }$} \} \ar [d]^{\theta } \\ \{ \text{Chain homotopy classes of maps $\mathrm{N}_{\ast }(X; \operatorname{\mathbf{Z}}) \rightarrow M_{\ast }$} \} . } \]
Let $\mathrm{K}(M_{\ast })$ denote the Eilenberg-MacLane space associated to $M_{\ast }$ (Construction 2.5.6.3). Using Example 3.1.4.8, we can identify $\theta $ with the map
\[ \pi _{0}(\operatorname{Fun}(Y, \mathrm{K}(M_{\ast } ) )) \rightarrow \pi _0( \operatorname{Fun}(X, \mathrm{K}(M_{\ast } ) ) ) \]
given by precomposition with $f$. This map is bijective because $f$ is a weak homotopy equivalence (by assumption) and $\mathrm{K}(M_{\ast })$ is a Kan complex (Remark 2.5.6.4). $\square$
Remark 3.1.5.17. There is a partial converse to Proposition 3.1.5.16. If $f: X \rightarrow Y$ is a morphism between simply-connected simplicial sets and the induced map $\mathrm{H}_{\ast }(X; \operatorname{\mathbf{Z}}) \rightarrow \mathrm{H}_{\ast }(Y; \operatorname{\mathbf{Z}})$ is an isomorphism, one can show that $f$ is a weak homotopy equivalence. Beware that this is not necessarily true if $X$ and $Y$ are not simply connected (see § for further discussion).
Remark 3.1.5.18 (Coproducts of Weak Homotopy Equivalences). Let $\{ f(i): X(i) \rightarrow Y(i) \} _{i \in I}$ be a collection of weak homotopy equivalences of simplicial sets indexed by a set $I$. For every Kan complex $Z$, we have a commutative diagram of Kan complexes
\[ \xymatrix { \operatorname{Fun}( \coprod _{i \in I} Y(i), Z) \ar [r] \ar [d]^{\sim } & \operatorname{Fun}( \coprod _{i \in I} X(i), Z) \ar [d]^{\sim } \\ \prod _{i \in I} \operatorname{Fun}( Y(i), Z) \ar [r] & \prod _{i \in I} \operatorname{Fun}(X(i), Z), } \]
where the vertical maps are isomorphisms. Passing to the connected components (and using the fact that the functor $Q \mapsto \pi _0(Q)$ preserves products when restricted to Kan complexes; see Corollary 1.1.9.11), we deduce that the map $\pi _0( \operatorname{Fun}( \coprod _{i \in I} Y(i), Z) ) \rightarrow \pi _0( \operatorname{Fun}(\coprod _{i \in I} X(i), Z) )$ is bijective. Allowing $Z$ to vary, we conclude that the induced map $\coprod _{i \in I} X(i) \rightarrow \coprod _{i \in I} Y(i)$ is also a weak homotopy equivalence.
Exercise 3.1.5.19. Let $G$ be the directed graph depicted in the diagram
\[ \xymatrix@R =50pt@C=50pt{ 0 \ar [r] & 1 \ar [r] & 2 \ar [r] & 3 \ar [r] & 4 \ar [r] & \cdots } \]
and let $G_{}$ denote the associated $1$-dimensional simplicial set (see Warning 1.1.6.27). Show that the projection map $G_{} \rightarrow \Delta ^{0}$ is a weak homotopy equivalence, but not a homotopy equivalence.
Warning 3.1.5.20. Let $X_{}$ and $Y_{}$ be simplicial sets. The existence of a weak homotopy equivalence $f: X_{} \rightarrow Y_{}$ does not guarantee the existence of a weak homotopy equivalence $g: Y_{} \rightarrow X_{}$. | CommonCrawl |
Reconfigurable Stochastic neurons based on tin oxide/MoS2 hetero-memristors for simulated annealing and the Boltzmann machine
Xiaodong Yan ORCID: orcid.org/0000-0002-7737-69841 na1,
Jiahui Ma1 na1,
Tong Wu ORCID: orcid.org/0000-0001-6018-02832,
Aoyang Zhang1,
Jiangbin Wu ORCID: orcid.org/0000-0002-8751-70821,
Matthew Chin3,
Zhihan Zhang4,
Madan Dubey3,
Wei Wu ORCID: orcid.org/0000-0001-6404-03171,
Mike Shuo-Wei Chen1,
Jing Guo ORCID: orcid.org/0000-0003-4009-30562 &
Han Wang ORCID: orcid.org/0000-0001-5121-33621,5
Neuromorphic hardware implementation of the Boltzmann Machine (BM) using a network of stochastic neurons can allow non-deterministic polynomial-time (NP) hard combinatorial optimization problems to be efficiently solved. Efficient implementation of such a Boltzmann Machine with simulated annealing requires the statistical parameters of the stochastic neurons to be dynamically tunable; however, there has been limited research on stochastic semiconductor devices with controllable statistical distributions. Here, we demonstrate a reconfigurable tin oxide (SnOx)/molybdenum disulfide (MoS2) heterogeneous memristive device that can realize tunable stochastic dynamics in its output sampling characteristics. The device can sample exponential-class sigmoidal distributions analogous to the Fermi-Dirac distribution of physical systems, with a quantitatively defined, tunable "temperature" effect. A BM composed of these tunable stochastic neuron devices, which enables simulated annealing with designed "cooling" strategies, is used to solve MAX-SAT, a representative NP-hard combinatorial optimization problem. Quantitative insights into the effect of different "cooling" strategies on improving the efficiency of the BM optimization process are also provided.
Stochastic neuron devices are essential for the neural network implementation of key emerging non-von-Neumann computing concepts such as the Boltzmann machine (BM), a recurrent artificial neural network with stochastic features analogous to the thermodynamics of real-world physical systems. The BM can be used to solve a broad range of combinatorial optimization problems1,2 with applications in classification3, pattern recognition4, feature learning, and other emerging computing systems. Deriving its name from the Boltzmann distribution of statistical mechanics, the BM possesses an artificial notion of "temperature", and the controlled evolution of this "temperature" parameter during the optimization process5,6, i.e., the "cooling" strategy, can affect the convergence efficiency of the BM and its chance of reaching a better cost-energy minimum (or maximum, depending on the problem definition). To realize a hardware implementation of the BM that also allows "temperature" control, and hence the precise execution of a desired "cooling" strategy, it is essential to have electronic devices that can generate exponential-class stochastic samples with dynamically tunable distribution parameters.
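In software terms, the targeted behavior resembles a Gibbs-style update in which each neuron fires with a sigmoidal probability whose effective temperature is lowered over the run. The sketch below is only a conceptual illustration of such a "cooling" loop; the exponential schedule and all parameter values are assumptions, not the device-level protocol reported here.

```python
import numpy as np

def anneal(W, b, T0=4.0, alpha=0.999, steps=5000, seed=1):
    """Binary-state annealing: W symmetric weights, b biases, T0 start temperature."""
    rng = np.random.default_rng(seed)
    s = rng.integers(0, 2, size=len(b)).astype(float)
    T = T0
    for _ in range(steps):
        i = rng.integers(len(b))
        local = W[i] @ s - W[i, i] * s[i] + b[i]   # local field at neuron i
        p = 1.0 / (1.0 + np.exp(-local / T))       # Fermi-Dirac-like firing probability
        s[i] = 1.0 if rng.random() < p else 0.0    # stochastic neuron update
        T *= alpha                                 # exponential "cooling" (illustrative)
    return s
```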
The properties of memristors in their deterministic form have been commonly used in applications such as multiply-and-accumulate matrix calculation7 and resistor-logic demultiplexers8,9,10. Their stochastic properties are often intentionally suppressed11,12,13 in such applications for the purpose of achieving accurate and reproducible computational results14,15. On the other hand, the rich stochastic properties of memristors, which rely on ensembles of random movements of atoms and ions, offer opportunities in energy-efficient computing applications16,17,18,19,20. With these stochastic properties, one can generate random numbers21 to encrypt information, implement physical unclonable functions22, and realize artificial neurons23 with integrate-and-fire activations. Furthermore, emerging computing schemes can use stochastic memristive devices as building blocks to emulate biological neural networks24,25, whose functions, such as decision-making, can leverage the stochastic dynamics of neurons and synapses. However, a common challenge with previous stochastic memristors is the lack of means to precisely control and modulate the probability distribution associated with their randomness. Realizing such devices has been difficult because many device-generated random features in stochastic memristors or oscillators lack a stable probability distribution, which limits the chance of controlling it experimentally19,26,27. Additionally, with only two terminals in a common memristor, where the probability distribution can only be influenced through the two-terminal bias, the probability distribution of the device output cannot be tuned flexibly and precisely.
In this work, we overcome this challenge with a three-terminal stochastic hetero-memristor based on a tin oxide/MoS2 heterostructure, which demonstrates tunable statistical distributions enabled by gate modulation. The inherent exponential-class stochastic characteristics of the device, arising from the intrinsic randomness and energy distribution of its ionic motions, are exploited to sample exponential-class sigmoidal distributions that resemble the Fermi–Dirac distribution in physical systems. The device incorporates gate modulation that allows efficient control of the stochastic features in its output characteristics. The device enables the realization of a reconfigurable stochastic neuron and the implementation of a Boltzmann machine in which the reconfigurable statistics of the device allow different "cooling" strategies to be implemented during the optimization process. The effect of different "cooling" strategies on improving the optimization efficiency of the BM is demonstrated experimentally.
Figure 1a shows the schematic of this reconfigurable heteromemristor, where tin oxide serves as the filament-switching layer and is sandwiched between a MoS2 layer and Cr/Au top electrodes (TE). The Si substrate serves as a gate whose bias (Vg) can influence the filament-formation dynamics in the tin oxide layer. The high-resolution scanning transmission electron microscopy (HR-STEM) image in Fig. 1b shows the cross section of the fabricated device and reveals that the tin oxide layer is amorphous. An energy-dispersive X-ray spectroscopy (EDX) scan in Fig. 1c indicates the elemental composition. Figure 1d plots the Raman spectra of the SnSe sample before and after oxidation, which leads to the formation of the SnOx layer. All signature modes of SnSe, including the shear mode Ag1, the in-plane modes Ag2 and B3g, and the out-of-plane mode Ag3, which are observed before oxidation, are not detected after oxidation, indicating the full oxidation and amorphization of the SnSe sample28. The tin oxide film can also be synthesized using atomic-layer deposition (ALD)29,30,31, which produces films of similar quality to the direct oxidation method.
Fig. 1: Device structure and electrical characteristics.
a Schematic of the heteromemristive device. b The HR-STEM image of the fabricated device cross section. The scale bar is 5 nm. c EDX scan indicates the elemental composition. d Raman spectra for the SnSe sample before and after oxidation. The missing-signature modes after oxidation indicate the full oxidation and amorphization of the SnSe sample. e Unipolar electrical switching characteristics of the device at Vg = 0 V. The set and reset voltages in positive scan are 3.2 V and 2.8 V, and in negative scan are −3.4 V and −3 V. f Modulation of the set voltage by the gate bias. When Vg decreases from 30 V to −20 V, the set voltage increases.
Unipolar electrical switching characteristics of the device at Vg = 0 V are shown in Fig. 1e. It sets and resets at around 3.2 V and 2.8 V, respectively, in the positive bias, and at −3.4 V and −3 V, respectively, in the negative bias32. Both Joule heating and electric-field-driven effects can play roles in the device operation. The filament-formation operation can be attributed to a breakdown-like process with random creation of voltage-stress-induced vacancy or defect sites, which is electric-field driven, while Joule heating can be the main effect in filament rupturing. The insertion of the MoS2 layer in the device makes it possible to adjust the electron energy level in MoS2 by externally modulating the gate bias Vg, which can modulate both the contact-energy barrier between the MoS2 and SnOx and the conductivity of the MoS2 sheet itself (see supplementary information section 4). Hence, as shown in Fig. 1f, as the gate bias decreases from 30 V to −20 V, the electrostatic doping in MoS2 and the associated energy level decrease, leading to a reduction in the series conductivity and hence a gradual increase in the set voltage.
The filament-formation process is stochastic due to the inherent random motion of oxygen ions. To quantify this stochastic property, a statistical study is carried out on the set process. As shown in Fig. 2, the device is initially reset to the high-resistance state and a bias VTE is applied to the device for up to 2 s. During each set process, it takes a certain amount of time t (t ≤ 2 s) after the bias voltage is applied for the device to set, and this required bias time until set is stochastic in each trial. Furthermore, there is a certain chance that the device remains in the high-resistance state after 2 s. Figure 2a plots the device current as a function of time when this reset-and-set process was repeated 30 times at VTE = 6 V, 5 V, 4 V, and 3 V, respectively, with Vg fixed at 0 V. At VTE = 6 V, the device is successfully set within the first 2 s in all 30 trials. At VTE = 5 V, 4 V, and 3 V, the device failed to set within the first 2 s in some cases. Figure 2b shows the histogram of the probability distribution, extracted from 30 trials, of the time required until the device sets. If we treat t as a random variable, the probability that the set occurs within an infinitesimal interval \(\Delta t\) at time t can be described by an exponential-class distribution33 function \(P=\frac{\Delta t}{\tau }\, e^{-t/\tau }\); that is, the set events follow a Poisson process, so the wait time t is exponentially distributed (see supplementary information section 6), and this fits the experimental data well (red lines, Fig. 2b). This Poisson-like random wait time underlying the filament-formation process in the tin oxide memristive device is indicative of its exponential-class stochastic nature.
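To make the exponential-wait-time picture concrete, the following minimal Python sketch samples hypothetical set events; the mean wait times τ assigned to each VTE are illustrative placeholders, not fitted values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def set_probability_within(t_max, tau, n_trials=30):
    """Fraction of trials in which an exponentially distributed wait
    time (mean tau) falls within the t_max observation window."""
    wait_times = rng.exponential(tau, size=n_trials)
    return float(np.mean(wait_times <= t_max))

# Placeholder mean wait times tau (seconds) for decreasing V_TE; a larger
# tau at lower bias means the device is less likely to set within 2 s.
for v_te, tau in [(6, 0.1), (5, 0.5), (4, 1.5), (3, 5.0)]:
    p = set_probability_within(2.0, tau)
    print(f"V_TE = {v_te} V: estimated P(set within 2 s) = {p:.2f}")
```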
Fig. 2: Sampling of exponential-class sigmoidal distribution.
a The set process under different VTE. The initial state is reset to the high-resistance state and a bias VTE is applied to the device for 2 s. b The experimentally extracted probability distribution of the bias time until set occurrence for VTE = 3 V, 4 V, 5 V, and 6 V, respectively. c Pss,t<2s as a function of VTE under different gate voltages, showing an exponential-class sigmoidal distribution. Experimental results are shown as symbols, and the analytical model fit as lines. d Experimental results (dots) and model fit (line) showing the relation between Teff and the gate bias Vg.
Moreover, Fig. 2c plots Pss,t<2s as a function of VTE−VTE0 under different gate voltages, which follows an exponential-class sigmoidal distribution. Here, Pss,t<2s is the probability that the device successfully sets within 2 s and VTE0 is the 50% probability bias-voltage point, i.e., Pss,t<2s (VTE = VTE0) = 0.5. With the gate voltage fixed, the chance of the device being set within t < 2 s becomes higher with increasing VTE, following a sigmoidal distribution. This shows that VTE can tune the stochastic property of the set event when Vg is fixed. Microscopically, VTE tunes the filament-formation process by modulating the vacancy-hopping barrier height and thus the ion-hopping rate; the device is therefore understandably easier to set at high VTE than at low VTE. Under different gate voltages, Pss,t<2s shows a sharper 0-to-1 transition when Vg is 30 V and a wider spread in its 0-to-1 transition as Vg decreases. Here Vg tunes the Fermi level and charge density in the MoS2 layer, which modulates the potential distribution between the MoS2 and tin oxide layers under VTE bias. VTE is more effective in modulating the device when Vg is higher, i.e., when the MoS2 layer has a higher electron carrier density and higher conductivity, which leads to a sharper 0-to-1 transition in the sigmoidal distribution curve.
The set process is achieved by filament formation through stochastic vacancy generation and hopping-transport processes. Applying a voltage reduces the generation and hopping barrier heights and exponentially enhances the generation and hopping rates. Analytically, the set probability can be derived as \(P_{\mathrm{ss},\,t<2\,\mathrm{s}} = 1-e^{-\beta e^{\alpha (V_{\mathrm{TE}}-V_{\mathrm{TE}0})}}\), where \(\alpha\) and \(\beta\) are parameters related to the material and device structure (see supplementary information section 7). After further approximation, Pss,t<2s can be simplified to a distribution function that resembles the Fermi–Dirac distribution (see supplementary information section 8):
$$P_{\mathrm{ss},\,t<2\,\mathrm{s}}\approx \frac{1}{1+\exp \left(-\frac{V_{\mathrm{TE}}-V_{\mathrm{TE}0}}{T_{\mathrm{eff}}}\right)}$$
where Teff is an effective "temperature" term that can be tuned by the gate bias. This expression fits the experimental data in Fig. 2c very well. The above analytical description is also in agreement with kinetic Monte Carlo simulations, which describe the microscopic stochastic processes of vacancy generation, hopping, and recombination in filament formation34,35. Teff corresponding to various gate voltages is extracted from the fitting, and Fig. 2d plots Teff versus gate voltage Vg. A behavioral model is developed to understand the dependence of Teff on the gate-bias voltage. The device is modeled as a memristor in series with a MoS2 layer whose resistance (both the sheet resistance and its contact property with the memristive filament) can be modulated by the gate electric field. As a result, Teff can be expressed as \(T_{\mathrm{eff}}(V_{\mathrm{g}})=T_{V0}\left[1+\frac{Z}{V_{\mathrm{g}}-V_{\mathrm{T}}}\right]\), where \(T_{V0}\) and Z are constants and VT is the threshold voltage (see supplementary information section 9). As shown in Fig. 2d, this model fits the experimental data well and describes the modulation of Teff by Vg. We note that Teff has the unit of volts; however, to avoid confusion with the actual electrical bias voltages applied to the device, the unit of Teff is omitted in the subsequent discussions. The stochastic filament-formation process discussed above, together with the gate-voltage-dependent "temperature" effect, can be used to construct exponential-class distribution sampling, which has broad applications in statistical modeling and computing, with the Boltzmann machine as a typical example.
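As a numerical illustration (not the authors' fitting code), the sketch below evaluates the Fermi–Dirac-like set probability and the behavioral Teff(Vg) model from the text; the parameter values TV0, Z, and VT are hypothetical placeholders chosen only to reproduce the qualitative trend that Teff decreases as Vg increases.

```python
import numpy as np

def p_set(v_te, v_te0, t_eff):
    """Fermi-Dirac-like set probability from the equation above."""
    return 1.0 / (1.0 + np.exp(-(v_te - v_te0) / t_eff))

def t_eff(v_g, t_v0=5.0, z=50.0, v_t=-30.0):
    """Behavioral model T_eff(V_g) = T_V0 * [1 + Z / (V_g - V_T)];
    t_v0, z, v_t are placeholder constants, not fitted values."""
    return t_v0 * (1.0 + z / (v_g - v_t))

for v_g in (-20.0, 0.0, 20.0):
    te = t_eff(v_g)
    print(f"V_g = {v_g:+.0f} V -> T_eff = {te:.1f}, "
          f"P_set at V_TE - V_TE0 = 5 V: {p_set(5.0, 0.0, te):.2f}")
```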
To demonstrate the unique advantages of these tunable exponential-class stochastic hetero-memristors in computing applications, a version of the Boltzmann machine containing a network of stochastic neurons is implemented. The stochastic neurons may fire in response to the input signals and thus drive the searching dynamics of the BM. The BM iterates through candidate solutions, searching for the best one by minimizing the system energy function. Hardware implementations36,37 of such a BM are challenging with conventional transistors and would require a large number of devices and complex circuitry. Here we build a BM in which each stochastic neuron is based on a single tin oxide/MoS2 hetero-memristor serving as the stochastic switching element, together with simple peripheral circuitry (more details in Methods: BM construction). This BM is used to solve a maximum satisfiability problem (MAX-SAT), an NP-hard combinatorial optimization problem underlying a wide range of key applications, including Max-Clique38, correlation clustering39, treewidth computation40, Bayesian network structure learning41, and argumentation dynamics42.
Given a set of Boolean clauses, where each clause is a disjunction of Boolean variables and their negations, the MAX-SAT problem43 aims to maximize the number of clauses that are true when truth values are assigned to the Boolean variables. Without loss of generality, the set of Boolean clauses to be solved in this work is selected to be \(\{C_i \mid i=1,2,\ldots ,5\}\), where clause C1 is \((x\vee y\vee z)\); C2 is \((x'\vee y\vee z)\); C3 is \((x'\vee y'\vee z)\); C4 is \((x\vee y'\vee z')\); and C5 is \((x'\vee y\vee z')\) (shown in Fig. 3a; the Boolean variable \(x'\) is the negation of the Boolean variable \(x\)). The optimization task is to find a state vector \(\mathbf{X}=(x_{1},\cdots ,x_{6})=(x,y,z,x',y',z')\) that maximizes the number of true clauses. A MAX-SAT problem can be converted equivalently into a problem solvable by the BM44,45. Six stochastic units are used in the BM to realize the activation of each Boolean variable in the state vector \(\mathbf{X}=(x_{1},\cdots ,x_{6})\). We then build a weight matrix W, where the weight \(w_{ij}\) between every two Boolean variables is assigned based on the MAX-SAT problem. Solving the MAX-SAT is equivalent to minimizing the total energy \(E=\mathbf{X}^{\mathrm{T}}\mathbf{W}\mathbf{X}\) of the BM, where \(\mathbf{X}^{\mathrm{T}}\) is the transpose of \(\mathbf{X}\).
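As a concrete illustration of the MAX-SAT objective itself (the specific weight matrix W used in the hardware follows refs. 44,45 and is not reproduced here), a short sketch can count how many clauses a candidate assignment satisfies:

```python
# Literal indices 0..5 correspond to (x, y, z, x', y', z').
clauses = [(0, 1, 2), (3, 1, 2), (3, 4, 2), (0, 4, 5), (3, 1, 5)]

def num_satisfied(state, clauses):
    """Count clauses containing at least one true literal."""
    return sum(any(state[i] for i in clause) for clause in clauses)

# Evaluate the converged state reported in the text, X = (0,1,1,1,0,0);
# the complement constraint x' = 1 - x, etc. is respected.
state = (0, 1, 1, 1, 0, 0)
print(num_satisfied(state, clauses))  # clauses C1, C2, C3 and C5 are true
```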
Fig. 3: Boltzmann machine implementation using tin oxide/MoS2 heteromemristor.
a Flow chart showing the steps in mapping a MAX-SAT problem to an equivalent form solvable using the Boltzmann machine. b The PCB evaluation board of BM-integrated system, including the packaged tin oxide/MoS2 memristive units and CMOS peripheral circuits. c Schematic of the BM circuit blocks with six tin oxide/MoS2 heteromemristors as the artificial neurons. d The experimentally obtained evolution of state vector and total energy when the BM was started from three different initial states, resulting in the same optimal solution. e Experimentally obtained energy evolution in the BM optimization process with Vg = −20 V, 0 V, and 20 V, respectively. f The success rate of the BM optimization process under different Vg.
The constructed BM utilizing the tin oxide/MoS2 hetero-memristors is shown in Fig. 3b, and the schematic of the circuit blocks with six stochastic neurons is shown in Fig. 3c. In each iteration step, if the hetero-memristor sets, the Boolean value of \(x_i\) is flipped; if it does not set, the stochastic neuron does not fire and \(x_i\) remains the same. The stochastic neurons are sequentially updated until the BM reaches the optimal solution. In Fig. 3d, we experimentally demonstrate the evolution of the state vector and total energy when the BM is started from three different initial states and finds the same optimal solution, \(\mathbf{X}=(x,y,z,x',y',z')=(0,1,1,1,0,0)\).
As previously shown in Fig. 2d, Vg can tune the tin oxide/MoS2 hetero-memristor to have different Teff during the BM optimization process. The Teff of the BM describes the average behavior of all the stochastic units, in close analogy to the temperature parameter in the Boltzmann distribution, which describes the average behavior of particles at different thermal equilibrium states in physical systems. Thus, by controlling Teff during the optimization process via tuning Vg, it is possible to avoid premature convergence and to improve the convergence efficiency of the BM. Figure 3e shows the effect of different Vg biases on the BM optimization process. During these three different runs of the BM, all the tin oxide/MoS2 stochastic hetero-memristors are biased at Vg = −20 V, 0 V, and 20 V, respectively, and the energy evolves differently in each run. The BM is at Teff = 7 when Vg = 20 V and converges easily for this particular problem. On the other hand, the BM is at Teff = 50 when Vg = −20 V and is less efficient in reaching convergence. For Vg = 0 V, the BM is at Teff = 10 and converges at an intermediate rate among the three cases. By counting how many times the BM reaches the global optimal solution out of 50 trial runs, the success rate as a function of Vg and hence Teff is statistically obtained, as shown in Fig. 3f. It indicates that Vg, and hence Teff, can substantially affect the performance of the BM.
Simulated annealing46,47 can be implemented with our BM, where Teff is gradually changed during the optimization process to emulate different "cooling" strategies. It is an important approach for efficiently reaching better optimization solutions and avoiding premature convergence. Using the gate-tunable tin oxide/MoS2 device, such "cooling" procedures can be implemented quantitatively during the simulated annealing by translating the designated sequential evolution of Teff into the corresponding series of gate-bias conditions following the relation in Fig. 2d. To study the effect of different "cooling" strategies on the efficiency of the BM, four Teff variation strategies were experimentally applied to the BM. Strategy 1: high Teff in the first three iteration steps followed by low Teff for the remaining iterations of one optimization process (HT to LT); Strategy 2: low Teff in the first three iterations followed by high Teff for the remaining iterations (LT to HT); Strategy 3: maintaining a low Teff throughout the optimization process (LT); and Strategy 4: maintaining a high Teff throughout the optimization process (HT). Figure 4a shows a qualitative schematic of how the system energy (colored dots) evolves while searching for optimal solutions among multiple possible energy minima (gray line). To analyze the effect of these "cooling" strategies, typical evolutions of the energy (cost function) during the BM optimization process were experimentally obtained for the four strategies. As shown in Fig. 4b, using the HT strategy (Teff = 50), the BM is highly active but loses the selectivity needed for proper convergence. Using the LT strategy (Teff = 5), the BM is significantly less active but possesses higher selectivity, which facilitates convergence, often to a premature state. Finally, simulated annealing using a "cooling" strategy (HT to LT) enables active initial searches at HT (Teff = 50) and then steady convergence to the minimum energy state at LT (Teff = 5), as shown in the experimental results. Furthermore, Figs. 4c and 4d show the experimentally obtained statistics of the success rate in finding the global optimal solution when the different "cooling" strategies are used; different initial values of the state vector are used in Figs. 4c and 4d to show the effect of the initial conditions. Both figures indicate that the HT to LT strategy has the highest success rate in reaching the global optimal solution for this particular problem, while the HT strategy has the lowest. The results are consistent with the simulated performance of the BM (see supplementary information section 10).
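The sketch below emulates these four schedules in software; it is a conceptual stand-in, not the hardware: a clause-counting energy replaces the device's E = X^T W X, a Glauber-style flip probability replaces the measured device sigmoid, and the Teff values are placeholders in the units of this surrogate energy rather than the volt-scaled device Teff.

```python
import numpy as np

rng = np.random.default_rng(1)
clauses = [(0, 1, 2), (3, 1, 2), (3, 4, 2), (0, 4, 5), (3, 1, 5)]

def energy(state):
    """Surrogate energy: number of unsatisfied clauses."""
    return sum(not any(state[i] for i in c) for c in clauses)

def run_bm(t_schedule):
    xyz = rng.integers(0, 2, size=3)
    state = np.concatenate([xyz, 1 - xyz])   # enforce x' = 1 - x, etc.
    for t_eff in t_schedule:
        for j in rng.permutation(3):         # sequential neuron updates
            cand = state.copy()
            cand[j], cand[j + 3] = 1 - cand[j], 1 - cand[j + 3]
            d_e = energy(cand) - energy(state)
            # Stochastic neuron: sigmoidal flip probability at "temperature" t_eff.
            if rng.random() < 1.0 / (1.0 + np.exp(d_e / t_eff)):
                state = cand
    return energy(state)

schedules = {"HT to LT": [2.0] * 3 + [0.1] * 7,
             "LT to HT": [0.1] * 3 + [2.0] * 7,
             "LT": [0.1] * 10, "HT": [2.0] * 10}
for name, sched in schedules.items():
    finals = [run_bm(sched) for _ in range(50)]
    print(f"{name:>8}: mean final energy over 50 runs = {np.mean(finals):.2f}")
```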
Fig. 4: Implementing simulated annealing in the tin oxide/MoS2-based BM.
a Conceptual schematic illustrating the evolution of the solution and energy states during the optimization process under the four different variation strategies. b Experimentally obtained energy evolution in the BM optimization process for the four different strategies. c, d Experimentally extracted success rate of the BM in reaching the global optimal solution using the four strategies of Teff variation during the optimization process: HT to LT, LT to HT, LT, and HT. Different initial states are used in c and d. Teff = 50 for HT and Teff = 5 for LT in b, c and d.
To quantitatively understand why Teff makes such a significant difference in the BM optimization process, we analyze the Russel–Rao (RR) similarity48 between all the clauses of this particular MAX-SAT problem. As illustrated in Fig. 5a, all five clauses C1–C5 bear inherent similarity to each other due to two constraints: the variable constraint and the clause constraint. On the variable side, a Boolean variable and its negation (two variables connected by red lines) are always logically opposite; for example, \(x\) and \(x'\) always have opposite values. On the clause side, the chance of two clauses both being true is lower if the clauses contain more complementary Boolean variables. By assigning true values to the variables \(x\), \(y'\), and \(z'\) (yellow circles), the number of complementary variables (blue circles) between clauses can be easily observed. Counting the number of complementary variables directly reflects the inner connection and constraint of the clauses. In Fig. 5a, for example, if clause C4: \((x\vee y'\vee z')\) is true, then the probability that clause C2: \((x'\vee y\vee z)\) is also true is much smaller than for the other three clauses, since C4 and C2 contain three pairs of complementary variables.
Fig. 5: Russel–Rao similarity matrix of the clauses under different "cooling" strategies in a MAX-SAT problem.
a Schematic showing that the five clauses in the MAX-SAT problem are correlated with each other through the variable constraint and the clause constraint. For illustration, the yellow-circled variables are assigned true values, thus making the blue-circled variables false. b–d Russel–Rao similarity matrices between the five clauses when the BM runs the optimization process at Teff = 50, 20, and 5, respectively. e The evolution of the Russel–Rao similarity matrix in a BM optimization process when Teff is decreased linearly with each iteration step.
With the BM set to different Teff, the RR similarity matrices among the five clauses, constructed from the experimental data, are shown in Figs. 5b, 5c and 5d. The color and number in each cell quantify the similarity between the pair of clauses indexed by the row and column, representing the fraction of runs in which both clauses are true. For example, an RR similarity of 0.84 between C1 and C2 in Fig. 5b means that, over 50 repeated runs of the BM at Teff = 50, C1 and C2 were both true at the end of 42 of the 50 runs.
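The RR similarity matrix itself is straightforward to compute from run outcomes; a minimal sketch follows, with random placeholder outcomes standing in for the experimental data.

```python
import numpy as np

def russel_rao(truth_table):
    """truth_table: (n_runs, n_clauses) boolean array; entry [r, c] is
    True if clause c is satisfied at the end of run r. The RR similarity
    of clauses i and j is the fraction of runs where both are true."""
    t = np.asarray(truth_table, dtype=float)
    return (t.T @ t) / t.shape[0]

rng = np.random.default_rng(2)
outcomes = rng.random((50, 5)) < 0.8          # placeholder outcomes
print(np.round(russel_rao(outcomes), 2))
# An entry of 0.84 between C1 and C2 would mean both clauses were true
# at the end of 42 of the 50 runs, as in Fig. 5b.
```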
The effect of Teff can be explained as follows. We view the RR similarity as a distance measure of the statistical relationship between each pair of clauses (distance = 1 − RR coefficient) in the solution space49. In other words, clauses with RR similarity close to 1 are closely clustered, while clauses with RR similarity close to 0 are farthest apart. When Teff is tuned to 50 (Fig. 5b), all the clauses have similar distances in the solution space, since all pairs show comparable RR similarity. As a consequence, the BM tends to search widely in the solution space with high robustness, high stochasticity, and low selectivity, since any solution looks much the same to the BM. When Teff is 20 (Fig. 5c), clauses with small distances are closely clustered, giving RR similarity close to unity for pairs of clauses that can easily be satisfied simultaneously, such as C1 and C2, and low RR similarity for pairs of clauses that can hardly be satisfied at the same time, such as C1 and C4. At Teff = 20, the BM gains more selectivity in the solution space. When Teff is 5 (Fig. 5d), all the clauses are either strongly clustered or well separated, with RR similarity of distinctly either 1 or 0, and the BM behaves more like a deterministic "machine". This tends to cause premature convergence, as the BM is significantly less active.
Next, a simulated annealing process in the BM with linear cooling is shown in Fig. 5e. The evolution of the RR similarity matrix indicates that the BM evolves through all the cases discussed above, from fully stochastic toward nearly deterministic, as Teff decreases linearly. Thus, the simulated annealing process of a BM can be understood as follows: at high Teff, the BM searches the solution space globally with high robustness and low selectivity, allowing large energy descents; as the BM cools down, it gains selectivity toward some solutions and can still jump out of local minima since Teff provides enough perturbation; as the BM cools to the limit, it exhibits stronger selectivity than robustness, preventing it from jumping out of the optimal zone. Hence, more efficient performance of the BM can be achieved with an appropriate "cooling" strategy.
In summary, tunable stochastic behavior is demonstrated in the tin oxide/MoS2 hetero-memristor, showing inherent exponential-class statistical characteristics. The device can sample exponential-class sigmoidal distributions resembling the Fermi–Dirac distribution in physical systems, with tunable distribution parameters that emulate "temperature" effects. Simulated annealing with control of the "cooling" strategy is demonstrated in the implemented Boltzmann machine for solving a combinatorial optimization (MAX-SAT) problem. These stochastic neurons based on tin oxide/MoS2 hetero-memristors with reconfigurable statistical behavior pave the way for implementing selected "cooling" strategies in BMs to reach optimal convergence efficiency and can find broad applications in energy-efficient computing for learning, clustering, and classification.
Device fabrication
A thin MoS2 layer is first deposited on a Si wafer with a 285-nm thermally grown SiO2 layer on top. The sample is then treated in an Ar/H2 mixed-gas environment at 350 °C to clean the MoS2 surface. Subsequently, a thin tin oxide layer, oxidized from SnSe, is deposited on the MoS2 and serves as the filament-switching layer. Electron-beam lithography is then used to define the patterns, followed by the evaporation of a 10-nm/40-nm Cr/Au metal stack, which forms the top electrode.
STEM and EDX
An FEI Titan Themis G2 system with four detectors and spherical-aberration correction was used to acquire the HR-STEM images. To observe the cross-section image, the sample was pretreated by depositing chromium and carbon capping layers and then thinned by a focused-ion beam (FIB, FEI Helios 450 S) at an acceleration voltage of 30 kV. The HR-STEM image was acquired at an acceleration voltage of 200 kV. EDX signals, collected with a detector integrated in the STEM system, were used to identify the elemental composition of the cross section.
Raman spectroscopy
A Renishaw inVia Qontor system, equipped with a ×100 objective lens, a grating (1800 grooves mm−1), and a charge-coupled device camera, was used to measure the Raman spectra. The wavelength of the excitation laser was 532 nm (from a solid-state laser). The Raman spectral resolution is 1.2 cm−1 per pixel.
BM construction
The implemented BM prototype contains 24 5-bit digital-to-analog converters (DACs). The digital pattern generation interface (DPGI) and training data acquisition interface (TDAI) are controlled by a Xilinx ML605 FPGA board that carries out information storage and computation; together they form a feedback loop that adjusts both input and output patterns at each BM iteration. Depending on the input signals, the BM system adjusts the corresponding output training data accordingly. The BM prototype has six stochastic units, each containing a tin oxide/MoS2 hetero-memristor with an approximately sigmoidal switching probability as a function of the applied voltage, together with peripheral circuitry. The peripheral circuitry consists of four DACs that read digital voltage values and apply them to the hetero-memristor, a dynamic comparator for generating the discrete-state readout, and output-level shifters.
Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
References
Kirkpatrick, S., Gelatt, C. D. & Vecchi, M. P. Optimization by simulated annealing. Science 220, 671–680 (1983).
Smith, K. A. Neural networks for combinatorial optimization: a review of more than a decade of research. INFORMS J. Comput. 11, 15–34 (1999).
Larochelle, H., Mandel, M., Pascanu, R. & Bengio, Y. Learning algorithms for the classification restricted Boltzmann machine. J. Mach. Learn. Res. 13, 643–669 (2012).
Fischer, A. & Igel, C. In Iberoamerican Congress on Pattern Recognition, 14–36 (Springer, 2012).
Li, G. et al. Temperature based restricted Boltzmann machines. Sci. Rep. 6, 19133 (2016).
Salazar, D. S. Nonequilibrium thermodynamics of restricted Boltzmann machines. Phys. Rev. E 96, 022131 (2017).
Prezioso, M. et al. Training and operation of an integrated neuromorphic network based on metal-oxide memristors. Nature 521, 61–64 (2015).
Kuekes, P. J. et al. Resistor-logic demultiplexers for nanoelectronics based on constant-weight codes. Nanotechnology 17, 1052 (2006).
Kuekes, P. J., Robinett, W. & Williams, R. S. Improved voltage margins using linear error-correcting codes in resistor-logic demultiplexers for nanoelectronics. Nanotechnology 16, 1419 (2005).
Pan, C. et al. Reconfigurable logic and neuromorphic circuits based on electrically tunable two-dimensional homojunctions. Nat. Electron. 3, 383–390 (2020).
Ambrogio, S. et al. Equivalent-accuracy accelerated neural-network training using analogue memory. Nature 558, 60–67 (2018).
Boybat, I. et al. Neuromorphic computing with multi-memristive synapses. Nat. Commun. 9, 1–12 (2018).
Sangwan, V. K. & Hersam, M. C. Neuromorphic nanoelectronic materials. Nat. Nanotechnol. 15, 517–528 (2020).
Wong, H.-S. P. & Salahuddin, S. Memory leads the way to better computing. Nat. Nanotechnol. 10, 191–194 (2015).
Yu, S., Wu, Y., Jeyasingh, R., Kuzum, D. & Wong, H.-S. P. An electronic synapse device based on metal oxide resistive switching memory for neuromorphic computation. IEEE Trans. Electron Devices 58, 2729–2737 (2011).
Hu, M., Wang, Y., Wen, W., Wang, Y. & Li, H. Leveraging stochastic memristor devices in neuromorphic hardware systems. IEEE J. Emerg. Sel. Top. Circuits Syst. 6, 235–246 (2016).
Gaba, S., Sheridan, P., Zhou, J., Choi, S. & Lu, W. Stochastic memristive devices for computing and neuromorphic applications. Nanoscale 5, 5872–5878 (2013).
Gaba, S., Knag, P., Zhang, Z. & Lu, W. In 2014 IEEE International Symposium on Circuits and Systems (ISCAS). 2592–2595 (IEEE, 2014).
Cai, F. et al. Power-efficient combinatorial optimization using intrinsic noise in memristor Hopfield neural networks. Nat. Electron. 3, 409–418 (2020).
Zhu, X., Li, D., Liang, X. & Lu, W. D. Ionic modulation and ionic coupling effects in MoS2 devices for neuromorphic computing. Nat. Mater. 18, 141–148 (2019).
Jiang, H. et al. A novel true random number generator based on a stochastic diffusive memristor. Nat. Commun. 8, 1–9 (2017).
Zhang, R. et al. Nanoscale diffusive memristor crossbars as physical unclonable functions. Nanoscale 10, 2721–2726 (2018).
Wang, Z. et al. Fully memristive neural networks for pattern classification with unsupervised learning. Nat. Electron. 1, 137–145 (2018).
Zhang, W. et al. Neuro-inspired computing chips. Nat. Electron. 3, 371–382 (2020).
Baek, E. et al. Intrinsic plasticity of silicon nanowire neurotransistors for dynamic memory and learning functions. Nat. Electron. 3, 398–408 (2020).
Serb, A. et al. Unsupervised learning in probabilistic neural networks with multi-state metal-oxide memristive synapses. Nat. Commun. 7, 1–9 (2016).
Huang, C.-Y., Shen, W. C., Tseng, Y.-H., King, Y.-C. & Lin, C.-J. A contact-resistive random-access-memory-based true random number generator. IEEE Electron Device Lett. 33, 1108–1110 (2012).
Zhao, S. et al. Controlled synthesis of single-crystal SnSe nanoplates. Nano Res. 8, 288–295 (2015).
Park, B.-E. et al. Phase-controlled synthesis of SnOx thin films by atomic layer deposition and post-treatment. Appl. Surf. Sci. 480, 472–477 (2019).
Lee, J.-H. et al. Selective SnOx atomic layer deposition driven by oxygen reactants. ACS Appl. Mater. interfaces 10, 33335–33342 (2018).
Hoffmann, L. et al. Atmospheric pressure plasma enhanced spatial atomic layer deposition of SnOx as conductive gas diffusion barrier. J. Vac. Sci. Technol. A Vac. Surf. Films 36, 01A112 (2018).
Nagashima, K., Yanagida, T., Oka, K. & Kawai, T. Unipolar resistive switching characteristics of room temperature grown SnO2 thin films. Appl. Phys. Lett. 94, 242902 (2009).
Jo, S. H., Kim, K.-H. & Lu, W. Programmable resistance switching in nanoscale two-terminal devices. Nano Lett. 9, 496–500 (2009).
Sadi, T., Badami, O., Georgiev, V. & Asenov, A. In International Conference on Large-Scale Scientific Computing. 429–437 (Springer, 2019).
Wu, T., Zhao, H., Liu, F., Guo, J. & Wang, H. Machine Learning Approach for Device-Circuit Co-Optimization of Stochastic-Memristive-Device-Based Boltzmann Machine. arXiv preprint arXiv:1905.04431 (2019).
Kim, S. K., McAfee, L. C., McMahon, P. L. & Olukotun, K. In 2009 International Conference on Field Programmable Logic and Applications. 367–372 (IEEE, 2009).
Kim, L.-W., Asaad, S. & Linsker, R. A fully pipelined fpga architecture of a factored restricted Boltzmann machine artificial neural network. ACM Trans. Reconfigurable Technol. Syst. 7, 1–23 (2014).
Heras, F. & Larrosa, J. In International Conference on Theory and Applications of Satisfiability Testing. 139–152 (Springer, 2008).
Berg, J. & Järvisalo, M. In 2013 IEEE 13th International Conference on Data Mining Workshops. 750–757 (IEEE, 2013).
Berg, J. & Järvisalo, M. In 2014 IEEE 26th International Conference on Tools with Artificial Intelligence. 328–335 (IEEE, 2014).
Cussens, J. Bayesian network learning by compiling to weighted MAX-SAT. arXiv preprint arXiv:1206.3244 (2012).
Wallner, J. P., Niskanen, A. & Järvisalo, M. Complexity results and algorithms for extension enforcement in abstract argumentation. J. Artif. Intell. Res. 60, 1–40 (2017).
Ansótegui, C., Bonet, M. L. & Levy, J. SAT-based MaxSAT algorithms. Artif. Intell. 196, 77–105 (2013).
d'Anjou, A., Grana, M., Torrealdea, F. J. & Hernandez, M. Solving satisfiability via Boltzmann machines. IEEE Trans. Pattern Anal. Mach. Intell. 15, 514–521 (1993).
Bojnordi, M. N. & Ipek, E. In 2016 IEEE International Symposium on High Performance Computer Architecture (HPCA). 1–13 (IEEE, 2016).
Shin, J. H., Jeong, Y. J., Zidan, M. A., Wang, Q. & Lu, W. D. In 2018 IEEE International Electron Devices Meeting (IEDM). pp. 3 (IEEE, 2018).
Yang, K. et al. Transiently chaotic simulated annealing based on intrinsic nonlinearity of memristors for efficient solution of optimization problems. Sci. Adv. 6, eaba9901 (2020).
Zhang, B. & Srihari, S. N. In Document Recognition and Retrieval X, Vol. 5010, 28–38 (International Society for Optics and Photonics, 2003).
Finch, H. Comparison of distance measures in cluster analysis with dichotomous data. J. Data Sci. 3, 85–100 (2005).
Acknowledgements
This work is supported in part by the Army Research Office (grant no. W911NF-21-2-0128) and National Science Foundation (grant no. CMMI-2036359). T.W. and J.G. acknowledge support by National Science Foundation (grant no. 1809770 and 1904580). W.W. acknowledges the support from Air Force Research Laboratory (grant no. FA8750-19-1-0503).
These authors contributed equally: Xiaodong Yan, Jiahui Ma.
Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, 90089, USA
Xiaodong Yan, Jiahui Ma, Aoyang Zhang, Jiangbin Wu, Wei Wu, Mike Shuo-Wei Chen & Han Wang
Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL, 32611, USA
Tong Wu & Jing Guo
Sensors and Electron Devices Directorate, U.S. Army Research Laboratory, Adelphi, MD, 20723, USA
Matthew Chin & Madan Dubey
School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, 30332, USA
Zhihan Zhang
Mork Family Department of Chemical Engineering and Materials Science, University of Southern California, Los Angeles, CA, 90089, USA
Han Wang
Author contributions
X.Y., J.M., and H.W. conceived the project idea. X.Y., J.M. and J.W. fabricated the devices, characterized their electrical performance, and constructed and measured the BM circuit. A.Z., X.Y., M.S.-W.C., and Z.Z. contributed to the design of the BM circuit. M.C. and M.D. contributed to the device fabrication. W.W. contributed to the understanding of the device operation. T.W., X.Y., J.M., and J.G. led the simulation and modeling of the device and BM circuit. H.W. coordinated and supervised the overall research activities. All coauthors contributed to the discussion of the data. X.Y., J.M., T.W., J.G., and H.W. cowrote the paper with inputs from all coauthors.
Correspondence to Han Wang.
Competing interests
The authors declare the following competing interests: H.W. currently also leads the low-dimensional materials research at Taiwan Semiconductor Manufacturing Company (TSMC) Corporate Research. All other authors declare no competing interests.
Peer review information Nature Communications thanks Gunuk Wang and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Yan, X., Ma, J., Wu, T. et al. Reconfigurable Stochastic neurons based on tin oxide/MoS2 hetero-memristors for simulated annealing and the Boltzmann machine. Nat Commun 12, 5710 (2021). https://doi.org/10.1038/s41467-021-26012-5
[Submitted on 20 Jul 2020 (v1), last revised 18 Mar 2021 (this version, v2)]
Title:A little FABLE: exploring AGN feedback in dwarf galaxies with cosmological simulations
Authors:Sophie Koudmani, Nicholas A. Henden, Debora Sijacki
Abstract: Contrary to the standard lore, there is mounting observational evidence that feedback from active galactic nuclei (AGN) may also play a role at the low-mass end of the galaxy population. We investigate this using the cosmological simulation suite FABLE, with a particular focus on the dwarf regime ($M_\mathrm{stellar} < 10^{9.5} \ \mathrm{M_{\odot}}$). We find that overmassive black holes (BHs), with respect to the mean scaling relations with their host galaxies, drive hotter and faster outflows and lead to significantly reduced gas mass fractions. They are also more likely to display a kinematically misaligned ionized gas component in our mock MaNGA velocity maps, although we caution that cosmic inflows and mergers contribute to misalignments as well. While in the local Universe the majority of AGN in dwarfs are much dimmer than the stellar component, for $z \geq 2$ there is a significant population that outshines their hosts. These high-redshift overmassive BHs contribute to the quenching of dwarfs, whereas at late cosmic times supernova (SN) feedback is more efficient. While our results are overall in good agreement with X-ray observations of AGN in dwarfs, the lack of high-luminosity X-ray AGN in FABLE at low redshifts highlights an interesting possibility that SN feedback could be too strong in FABLE's dwarfs, curtailing AGN growth and feedback. We predict that future observations may uncover many more AGN in dwarfs with lower luminosities and at higher redshifts.
Comments: Accepted for publication in MNRAS; 24 pages, 12 figures, 1 appendix
Subjects: Astrophysics of Galaxies (astro-ph.GA)
Related DOI: https://doi.org/10.1093/mnras/stab677
From: Sophie Koudmani
[v1] Mon, 20 Jul 2020 18:00:01 UTC (3,978 KB)
[v2] Thu, 18 Mar 2021 18:07:30 UTC (3,879 KB)
January 2013, 18(1): 237-258. doi: 10.3934/dcdsb.2013.18.237
On the multiple spike solutions for singularly perturbed elliptic systems
Weichung Wang 1, Tsung-Fang Wu 2 and Chien-Hsiang Liu 2
Department of Mathematics, National Taiwan University, Taipei 106, Taiwan
Department of Applied Mathematics, National University of Kaohsiung, Kaohsiung 811, Taiwan
Received: June 2011; Revised: June 2012; Published: September 2012.
In this paper we study the multiplicity of positive solutions for two coupled nonlinear Schrödinger equations in bounded domains. By using the Nehari manifold and Lusternik–Schnirelmann category, we prove the existence of multiple positive solutions for two coupled nonlinear Schrödinger equations in bounded domains. We also propose a numerical scheme that leads to various new numerical predictions regarding the solution characteristics.
Keywords: Singularly perturbed elliptic systems, Nehari manifold, Lusternik–Schnirelmann category.
Mathematics Subject Classification: Primary: 35J47, 35J50; Secondary: 35J5.
Citation: Weichung Wang, Tsung-Fang Wu, Chien-Hsiang Liu. On the multiple spike solutions for singularly perturbed elliptic systems. Discrete & Continuous Dynamical Systems - B, 2013, 18 (1) : 237-258. doi: 10.3934/dcdsb.2013.18.237
An Almost Constant Lower Bound of the Isoperimetric Coefficient in the KLS Conjecture
Yuansi Chen1
Geometric and Functional Analysis volume 31, pages 34–61 (2021)
We prove an almost constant lower bound of the isoperimetric coefficient in the KLS conjecture. The lower bound has the dimension dependency \(d^{-o_d(1)}\). When the dimension is large enough, our lower bound is tighter than the previous best bound which has the dimension dependency \(d^{-1/4}\). Improving the current best lower bound of the isoperimetric coefficient in the KLS conjecture has many implications, including improvements of the current best bounds in Bourgain's slicing conjecture and in the thin-shell conjecture, better concentration inequalities for Lipschitz functions of log-concave measures and better mixing time bounds for MCMC sampling algorithms on log-concave measures.
Given a distribution, the isoperimetric coefficient of a subset is the ratio of the measure of the subset boundary to the minimum of the measures of the subset and its complement. Taking the minimum of such ratios over all subsets defines the isoperimetric coefficient of the distribution, also called the Cheeger isoperimetric coefficient of the distribution.
Kannan, Lovász and Simonovits (KLS) [12] conjecture that for any distribution that is log-concave, the Cheeger isoperimetric coefficient equals to that achieved by half-spaces up to a universal constant factor. If the conjecture is true, the Cheeger isoperimetric coefficient can be determined by going through all the half-spaces instead of all subsets. For this reason, the KLS conjecture is also called the KLS hyperplane conjecture. To make it precise, we start by formally defining log-concave distributions and then we state the conjecture.
A probability density function \(p: \mathbb {R}^d\rightarrow \mathbb {R}\) is log-concave if its logarithm is concave, i.e., for any \(x, y \in \mathbb {R}^{d} \times \mathbb {R}^{d}\) and for any \(\lambda \in [0, 1]\),
$$\begin{aligned} p(\lambda x + (1 - \lambda ) y) \ge p(x)^\lambda p(y)^{1-\lambda }. \end{aligned}$$
Common probability distributions such as Gaussian, exponential and logistic are log-concave. This definition also includes any uniform distribution over a convex set defined as follows. A subset \(K \subset \mathbb {R}^d\) is convex if \(\forall x, y \in K \times K, z \in [x, y] \implies z \in K\). The isoperimetric coefficient \(\psi (p)\) of a density p in \(\mathbb {R}^d\) is defined as
$$\begin{aligned} \psi (p) :=\inf _{S \subset \mathbb {R}^d}\frac{p^+(\partial S)}{\min (p(S), p(S^c))} \end{aligned}$$
where \(p(S) = \int _{x \in S} p(x) dx\) and the boundary measure of the subset is
$$\begin{aligned} p^+(\partial S) :=\underset{\epsilon \rightarrow 0^+}{\lim \inf }\ \frac{p\left( \left\{ x: {\mathbf {d}}(x, S) \le \epsilon \right\} \right) - p(S)}{\epsilon }, \end{aligned}$$
where \({\mathbf {d}}(x, S)\) is the Euclidean distance between x and the subset S.
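For intuition, the boundary measure in Equation (2) can be estimated numerically from the ε-neighborhood definition. The sketch below (an illustrative Monte Carlo estimate, not part of any proof) does this for the half-space S = {x₁ ≤ 0} under the standard Gaussian; only the first coordinate matters, which is why the resulting ratio ≈ 2φ(0) ≈ 0.80 is independent of the dimension.

```python
import numpy as np

rng = np.random.default_rng(0)

def halfspace_ratio(eps=1e-2, n=10**6):
    """Monte Carlo estimate of p^+(dS) / min(p(S), p(S^c)) for
    S = {x_1 <= 0} under a standard Gaussian; p(S) = 1/2 by symmetry."""
    x1 = rng.standard_normal(n)
    boundary = np.mean((x1 > 0.0) & (x1 <= eps)) / eps  # eps-neighborhood
    return boundary / 0.5

print(halfspace_ratio())  # ~ 2 / sqrt(2*pi) ~ 0.798, in any dimension
```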
The KLS conjecture is stated by Kannan, Lovász and Simonovits [12] as follows.
Conjecture 1
There exists a universal constant c, such that for any log-concave density p in \(\mathbb {R}^d\), we have
$$\begin{aligned} \psi (p) \ge \frac{c}{\sqrt{\rho \left( p \right) }}, \end{aligned}$$
where \(\rho \left( p \right) \) is the spectral norm of the covariance matrix of p. In other words, \(\rho \left( p \right) = \left\| A\right\| _{2}\), where \(A = {{\,\mathrm{Cov}\,}}_{X \sim p} (X)\) is the covariance matrix.
An upper bound of \(\psi (p)\) of the same form is relatively easy and it was shown to be achieved by half-spaces [12]. Proving the lower bound on \(\psi (p)\) up to some small factors in Conjecture 1 is the main goal of this paper. We say a log-concave density is isotropic if its mean \({\mathbb {E}}_{X\sim p} [X]\) equals to 0 and its covariance \({{\,\mathrm{Cov}\,}}_{X\sim p}(X)\) equals to \(\mathbb {I}_d\). In the case of isotropic log-concave densities, the KLS conjecture states that any isotropic log-concave density has its isoperimetric coefficient lower bounded by a universal constant.
There are many attempts trying to lower bound the Cheeger isoperimetric coefficient in the KLS conjecture. We refer readers to the survey paper by Lee and Vempala [18] for a detailed exposition of these attempts. In particular, the original KLS paper [12] (Theorem 5.1) shows that for any log-concave density p with covariance matrix A,
$$\begin{aligned} \psi (p) \ge \frac{\log (2)}{\sqrt{{{\,\mathrm{Tr}\,}}\left( A \right) }}. \end{aligned}$$
The original KLS paper [12] only deals with uniform distributions over convex sets, but their proof techniques can be easily extended to show that the same results hold for all log-concave densities. Remark that Equation (3) implies \(\psi (p) \ge \frac{\log (2)}{d^{1/2} \cdot \sqrt{\rho \left( p \right) }}\). The current best bound is shown in Lee and Vempala [17], where they show that there exists a universal constant c such that for any log-concave density p with covariance matrix A,
$$\begin{aligned} \psi (p) \ge \frac{c}{\left( {{\,\mathrm{Tr}\,}}\left( A^2 \right) \right) ^{1/4}}. \end{aligned}$$
It implies that \(\psi (p) \ge \frac{c}{d^{1/4} \cdot \sqrt{\rho \left( p \right) }}\). Note that in Lee and Vempala [17], their notation of \(\psi (p)\) is the reciprocal of ours and it is later switched in Theorem 32 of the survey paper [18] by the same authors. As a result, the above bound is not a misstatement of the results in Lee and Vempala [17] and it is simply translated into our notations. In this paper, we improve the dimension dependency \(d^{-1/4}\) to \(d^{-o_d(1)}\) in the lower bound of the isoperimetric coefficient.
There are many implications of improving the lower bound in the KLS conjecture. The two closely related conjectures are Bourgain's slicing conjecture [3, 4] and the thin-shell conjecture [2]. It is worth noting that Bourgain [4] stated the slicing conjecture earlier than the introduction of the KLS conjecture. In terms of their connections to the KLS conjecture, Eldan and Klartag [9] proved that the thin-shell conjecture implies Bourgain's slicing conjecture up to a universal constant factor. Later, Eldan [8] showed that the inverse of an lower bound of the isoperimetric coefficient is equivalent to an upper bound of the thin-shell constant in the thin-shell conjecture. Combining these two results, we have that an lower bound in the KLS conjecture implies upper bounds in the thin-shell conjecture and in Bourgain's slicing conjecture.
The current best upper bound of the thin-shell constant has the dimension dependency \(d^{1/4}\) due to Lee and Vempala's [17] improvement in the KLS conjecture. The current best bound of the slicing constant in Bourgain's slicing conjecture also has the dimension dependency \(d^{1/4}\), proved by Klartag [13] without using the KLS conjecture. Klartag's slicing constant bound is a slight improvement over Bourgain's earlier slicing bound [4] which has the dimension dependency \(d^{1/4}\log (d)\). Given the current best bounds in these three conjectures and the relation among them, we conclude that improving the current best lower bound in the KLS conjecture improves the current best bounds for the other two conjectures, as noted in Lee and Vempala [18]. For a detailed exposition of the three conjectures and related results since the introduction of Bourgain's slicing conjecture, we refer readers to Klartag and Milman [14].
Additionally, improving the lower bound in the KLS conjecture also improves concentration inequalities for Lipschitz functions of log-concave measures. It also leads to faster mixing time bounds of Markov chain Monte Carlo (MCMC) sampling algorithms on log-concave measures. Despite the great importance of these results, deriving these results from our new bound in the KLS conjecture is not the main focus of our paper. We refer readers to Milman [20] and Lee and Vempala [18] for more details about the abundant implications of the KLS conjecture.
Notation For two sequences \(a_n\) and \(b_n\) indexed by an integer n, we say that \(a_n = o_n(b_n)\) if \(\lim _{n \rightarrow \infty } \frac{a_n}{b_n} = 0\). The Euclidean norm of a vector \(x \in \mathbb {R}^d\) is denoted by \(\left\| x\right\| _{2}\). The spectral norm of a square matrix \(A \in \mathbb {R}^{d\times d}\) is denoted by \(\left\| A\right\| _{2}\). The Euclidean ball with center x and radius r is denoted by \(\mathbb {B}(x, r)\). For a real number \(x \in \mathbb {R}\), we denote its ceiling by \(\lceil x \rceil = \min \left\{ m \in \mathbb {Z} \mid m \ge x \right\} \). We say a density p is more log-concave than a Gaussian density \(\varphi \) if p can be written as a product form \(p = \nu \cdot \varphi \) where \(\varphi \) is the Gaussian density and \(\nu \) is a log-concave function (that is, \(\nu \) is proportional to a log-concave density). For a martingale \((M_t,\ t \in \mathbb {R}_+)\), we use \(\left[ M \right] _t\) to denote its quadratic variation, defined as
$$\begin{aligned} \left[ M \right] _t = \sup _{k \in \mathbb {N}} \sup _{0 = t_0 \le t_1 \le \cdots \le t_k \le t} \sum _{i=1}^k \left( M_{t_i} - M_{t_{i-1}} \right) ^2. \end{aligned}$$
We prove the following lower bound on the isoperimetric coefficient of any log-concave density.
Theorem 1
There exists a universal constant c such that for any log-concave density p in \(\mathbb {R}^d\) and any integer \(\ell \ge 1\), we have
$$\begin{aligned} \psi (p) \ge \frac{1}{\left[ c \cdot \ell \left( \log (d)+1 \right) \right] ^{\ell /2} d^{16/\ell } \cdot \sqrt{\rho \left( p \right) }} \end{aligned}$$
where \(\rho \left( p \right) \) is the spectral norm of the covariance matrix of p.
As a corollary, take \(\ell = \left\lceil \left( \frac{\log (d)}{\log \log (d)} \right) ^{1/2} \right\rceil \), then there exists a constant \(c'\) such that
$$\begin{aligned} \psi (p) \ge \frac{1}{d^{c' \left( \frac{\log \log (d)}{\log {d}} \right) ^{1/2}} \cdot \sqrt{\rho \left( p \right) }}. \end{aligned}$$
Since \(\lim _{d\rightarrow \infty } \frac{\log \log (d)}{\log (d)} = 0\), for \(d\) large enough, the above lower bound is better than any lower bound of the form \(\frac{1}{d^{c''} \sqrt{\rho \left( p \right) }} \) (\(c''\) is a positive constant) in terms of dimension \(d\) dependency.
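A quick numerical check (treating the unspecified universal constant \(c'\) as 1, which is only a placeholder) shows how slowly the new exponent decays, and hence why the comparison with \(d^{-1/4}\) only kicks in for very large \(d\):

```python
import math

for k in (3, 6, 12, 24, 48):          # d = 10**k
    log_d = k * math.log(10.0)
    new_exp = math.sqrt(math.log(log_d) / log_d)   # c' = 1 placeholder
    print(f"d = 1e{k}: new exponent ~ {new_exp:.3f} vs old 0.250")
```

With \(c' = 1\) the new exponent drops below 1/4 only around \(d \approx 10^{30}\); a smaller constant moves the crossover earlier, consistent with the statement that the new bound wins only for \(d\) large enough.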
The proof of the main theorem uses the stochastic localization scheme introduced by Eldan [8]. Eldan uses this stochastic localization scheme to show that the thin-shell conjecture is equivalent to the KLS conjecture up to a logarithmic factor. The construction of the stochastic localization scheme uses elementary properties of semimartingales and stochastic integration. The main idea of Eldan's proof to derive the KLS conjecture from the thin-shell conjecture is to smoothly multiply the log-concave density by a Gaussian factor, so that the modified density is more log-concave than a Gaussian density. When the Gaussian part is large enough, one can then easily prove the isoperimetric inequality.
The same scheme was refined in Lee and Vempala [17] to obtain the current best lower bound in the KLS conjecture. Lee and Vempala directly attack the KLS conjecture while following the same stochastic localization scheme to smoothly multiply a Gaussian part into the log-concave density. Their use of a new potential function leads to the current best lower bound in the KLS conjecture. The proof in this paper builds on the refinements of Eldan's method by Lee and Vempala [17], while improving the handling of several quantities involved in the stochastic localization scheme. Figure 1 provides a diagram showing the relationship between the main lemmas.
Proof sketch.
To ensure the existence and the uniqueness of the stochastic localization construction, we first prove a lemma that deals with log-concave densities with compact support. Then we relate back to the main theorem by finding a compact support which contains most of the probability measure for a log-concave density.
Lemma 1
There exists a universal constant c such that for any log-concave density p in \(\mathbb {R}^d\) with compact support and any integer \(\ell \ge 1\), we have
$$\begin{aligned} \psi (p) \ge \frac{1}{\left[ c \cdot \ell \left( \log (d)+1 \right) \right] ^{\ell /2} d^{16/\ell } \cdot \sqrt{\rho \left( p \right) }}. \end{aligned}$$
The proof of Lemma 1 is provided in Section 2.5 after we introduce the intermediate lemmas. The use of the integer \(\ell \) in the lemma indicates that we control the Cheeger isoperimetric coefficient in an iterative fashion. In fact, we prove Lemma 1 by induction over \(\ell \), starting from the known bound in Equation (3). For this, we define the infimum of the product of the isoperimetric coefficient and the square root of the spectral norm of the covariance, over all log-concave densities in \(\mathbb {R}^d\) with compact support:
$$\begin{aligned} \psi _d= \inf _{ \begin{array}{c} \text{ log-concave } \text{ density }\ p\ \text{ in }\ \mathbb {R}^d\\ \text{ with } \text{ compact } \text{ support } \end{array}} \psi (p) \sqrt{\rho \left( p \right) }. \end{aligned}$$
Then we prove the following lemma on the lower bound of \(\psi _d\), which serves as the main induction argument.
Lemma 2
Suppose that \(\psi _k \ge \frac{1}{\alpha k^\beta }\) for all \(k \le d\), for some \(0 < \beta \le \frac{1}{2}\) and \(\alpha \ge 1\). Take \(q = \lceil \frac{1}{\beta } \rceil + 1\). Then there exists a universal constant c such that
$$\begin{aligned} \psi _d\ge \frac{1}{c \cdot q^{1/2} \alpha \log (d)^{1/2} d^{\beta - \beta / (8q) }}. \end{aligned}$$
The proof of Lemma 2 is provided towards the end of this section, in Section 2.4. To explain how we get there, we start by introducing the stochastic localization scheme of Eldan [8].
Eldan's stochastic localization scheme.
Given a log-concave density p in \(\mathbb {R}^d\) with covariance matrix \(A\), we define the following stochastic differential equation (SDE)
$$\begin{aligned} dc_t&= C_t^{1/2}dW_t + C_t \mu _t dt,\quad c_0 = 0,\nonumber \\ dB_t&= C_t dt,\quad B_0 = 0, \end{aligned}$$
where \(W_t\) is the Wiener process, the matrix \(C_t\), the density \(p_t\), the mean \(\mu _t\) and the covariance \(A_t\) are defined as follows
$$\begin{aligned} C_t&= A^{-1}, \end{aligned}$$
$$\begin{aligned} p_t(x)&= \frac{e^{c_t^\top x - \frac{1}{2}x^\top B_t x} p(x)}{\int _{\mathbb {R}^d} e^{c_t ^\top y - \frac{1}{2}y^\top B_t y} p(y) dy}, \quad \text {for}\ x \in \mathbb {R}^d, \end{aligned}$$
$$\begin{aligned} \mu _t&= \int _{\mathbb {R}^d} x p_t(x)dx, \end{aligned}$$
$$\begin{aligned} A_t&= \int _{\mathbb {R}^d} \left( x - \mu _t \right) \left( x - \mu _t \right) p_t(x) dx. \end{aligned}$$
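For readers who prefer to see the scheme in action, the following is a minimal Euler–Maruyama simulation sketch, assuming the density p is approximated by a finite weighted point cloud; all names and parameters here are illustrative and not part of the paper.

```python
import numpy as np

# Minimal Euler-Maruyama sketch of the stochastic localization above,
# assuming p is approximated by a weighted point cloud {x_i} (illustrative).
rng = np.random.default_rng(0)
d, n = 2, 4000
X = rng.standard_normal((n, d))        # points approximating p
w0 = np.full(n, 1.0 / n)               # uniform initial weights, p_0 = p

A = np.cov(X, rowvar=False)            # covariance matrix A of p
C = np.linalg.inv(A)                   # C_t = A^{-1}, constant in t
L = np.linalg.cholesky(C)              # L L^T = C, so L dW ~ N(0, C dt)

dt, T = 1e-3, 0.5
c = np.zeros(d)                        # c_0 = 0
B = np.zeros((d, d))                   # B_0 = 0
for _ in range(int(round(T / dt))):
    # tilted density p_t(x) proportional to exp(c_t^T x - x^T B_t x / 2) p(x)
    logw = X @ c - 0.5 * np.einsum('ni,ij,nj->n', X, B, X)
    w = w0 * np.exp(logw - logw.max())
    w /= w.sum()
    mu = w @ X                         # mean mu_t of p_t
    dW = np.sqrt(dt) * rng.standard_normal(d)
    c += L @ dW + (C @ mu) * dt        # dc_t = C_t^{1/2} dW_t + C_t mu_t dt
    B += C * dt                        # dB_t = C_t dt, hence B_t = t A^{-1}

print("B_T matches T * A^{-1}:", np.allclose(B, T * C))
```

The point-cloud weights play the role of \(p_t\): as t grows, the Gaussian part \(B_t = t A^{-1}\) accumulates exactly as in the construction above.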
The next lemma shows the existence and the uniqueness of the SDE solution.
Lemma 3
Given a density p in \(\mathbb {R}^d\) with compact support whose covariance matrix \(A\) is invertible, the SDE (8) is well defined and has a unique solution on the time interval [0, T] for any time \(T > 0\). Additionally, for any \(x \in \mathbb {R}^d\), \(p_t(x)\) is a martingale with
$$\begin{aligned} dp_t(x) = \left( x - \mu _t \right) ^\top A^{-1/2} dW_t p_t(x). \end{aligned}$$
The proof of Lemma 3 follows from the standard existence and uniqueness theorem of SDE (Theorem 5.2 in Øksendal [21]). The proof is provided in Appendix A.
Before we dive into the proof of Lemma 2, we discuss how the stochastic localization scheme allows us to control the boundary measure of a subset. First, according to the concavity of the isoperimetric profile (Theorem 2.8 in Sternberg and Zumbrun [25] or Theorem 1.8 in Milman [20]), it is sufficient to consider subsets of measure 1/2 in the definition of the isoperimetric coefficient in Equation (2). Second, the density \(p_t\) is log-concave and it is more log-concave than the Gaussian density proportional to \(e^{-\frac{1}{2}x^\top B_t x}\). It can be shown via the KLS localization lemma [12] that a density which is more log-concave than a Gaussian has an isoperimetric coefficient lower bound that depends on the covariance of the Gaussian (see e.g. Theorem 2.7 in Ledoux [16] or Theorem 4.4 in Cousins and Vempala [7]). Third, given an initial subset E of \(\mathbb {R}^d\) with measure \(p(E) = \frac{1}{2}\), using the martingale property of \(p_t(E)\), we observe that
$$\begin{aligned} p(\partial E)&= {\mathbb {E}}\left[ p_t(\partial E) \right] \\&{\mathop {\ge }\limits ^{\mathrm{(i)}}} {\mathbb {E}}\left[ \frac{1}{2}\left\| B_t^{-1}\right\| _{2}^{-1/2} \min \left( p_t(E), p_t(E^c) \right) \right] \\&{\mathop {\ge }\limits ^{\mathrm{(ii)}}} \frac{1}{4}\cdot \frac{1}{2}\left\| B_t^{-1}\right\| _{2}^{-1/2} {\mathbb {P}}\left( \frac{1}{4}\le p_t(E) \le \frac{3}{4}\right) \\&= \frac{1}{4}\left\| B_t^{-1}\right\| _{2}^{-1/2} {\mathbb {P}}\left( \frac{1}{4}\le p_t(E) \le \frac{3}{4}\right) \cdot \min \left\{ p(E), p(E^c) \right\} . \end{aligned}$$
Inequality (i) uses the isoperimetric inequality for a log-concave density which is more log-concave than a Gaussian density proportional to \(e^{-\frac{1}{2}x^\top B_t x}\) [7, 16]. Inequality (ii) uses the fact that \(\min \left( p_t(E), p_t(E^c) \right) \ge \frac{1}{4}\) on the event \(\left\{ \frac{1}{4}\le p_t(E) \le \frac{3}{4}\right\} \) and that \(p_t(E)\) is nonnegative.
Based on the above observation, the high level idea of the proof requires two main steps:
There exists some time \(t > 0\), such that the Gaussian component \(\frac{1}{2}x^\top B_t x\) of the density \(p_t\) is large enough, so that we can apply the known isoperimetric inequality for densities more log-concave than a Gaussian.
We need to control the quantity \(p_t(E)\) so that the obtained isoperimetric inequality at time t can be related back to that at time 0.
The first step is immediate since our construction explicitly enforces the density \(p_t\) to have a Gaussian component \(\frac{1}{2}x^\top B_t x\) in Equation (9). The remaining question is whether we can run the SDE long enough to make the Gaussian component large enough, while still keeping \(p_t(E)\) of the same order as \(p(E) = \frac{1}{2}\) with large probability.
Control the evolution of the measure of a subset.
Lemma 4
Under the same assumptions as in Lemma 3, for any measurable subset E of \(\mathbb {R}^d\) with \(p(E) = \frac{1}{2}\) and any \(t > 0\), the solution \(p_t\) of the SDE (9) satisfies
$$\begin{aligned} {\mathbb {P}}\left( \frac{1}{4} \le p_t(E) \le \frac{3}{4} \right) \ge \frac{9}{10} - {\mathbb {P}}\left( \int _0^t \left\| A^{-1/2} A_s A^{-1/2}\right\| _{2} ds \ge \frac{1}{64} \right) . \end{aligned}$$
This lemma is proved in Lemma 29 of Lee and Vempala [17]. We provide a proof here for completeness.
Proof of Lemma 4. Let \(g_t = p_t(E)\). Using Equation (13), we obtain the following derivative of \(g_t\)
$$\begin{aligned} d g_t&= \int _E (x - \mu _t)^\top A^{-1/2} dW_t p_t(x) dx. \end{aligned}$$
Its quadratic variation is
$$\begin{aligned} d\left[ g \right] _t&= \left\| \int _E A^{-1/2} (x - \mu _t) p_t(x) dx \right\| _{2}^2 dt \\&= \max _{\left\| \xi \right\| _{2} \le 1} \left( \int _E \xi ^\top A^{-1/2} (x - \mu _t) p_t(x) dx \right) ^2 dt \\&\le \max _{\left\| \xi \right\| _{2} \le 1} \left( \int _E \left( \xi ^\top A^{-1/2} (x - \mu _t) \right) ^2 p_t(x) dx \right) \left( \int _E p_t(x) dx \right) dt \\&\le \max _{\left\| \xi \right\| _{2} \le 1} \xi ^\top A^{-1/2} A_t A^{-1/2} \xi dt \\&= \left\| A^{-1/2} A_t A^{-1/2}\right\| _{2} dt, \end{aligned}$$
where the inequality follows from the Cauchy–Schwarz inequality. Applying the Dambis–Dubins–Schwarz theorem (see e.g. Revuz and Yor [23], Section V.1, Theorem 1.7), there exists a Wiener process \({\tilde{W}}_t\) such that \(g_t - g_0\) has the same distribution as \({\tilde{W}}_{[g]_t}\). Since \(g_0 = \frac{1}{2}\), we obtain
$$\begin{aligned} {\mathbb {P}}\left( \frac{1}{4} \le p_t(E) \le \frac{3}{4} \right)&= {\mathbb {P}}\left( -\frac{1}{4} \le {\tilde{W}}_{[g]_t} \le \frac{1}{4} \right) \\&\ge 1 - {\mathbb {P}}\left( \max _{0 \le s \le \frac{1}{64}} \left| {\tilde{W}}_s \right|> \frac{1}{4} \right) - {\mathbb {P}}\left( [g]_t> \frac{1}{64}\right) \\&\ge 1 - 4 {\mathbb {P}}\left( {\tilde{W}}_{\frac{1}{64}}> \frac{1}{4} \right) - {\mathbb {P}}\left( [g]_t> \frac{1}{64} \right) \\&\ge \frac{9}{10} - {\mathbb {P}}\left( \int _0^t \left\| A^{-1/2} A_s A^{-1/2}\right\| _{2} ds > \frac{1}{64} \right) , \end{aligned}$$
where the last inequality follows from the fact that \({\tilde{W}}_{\frac{1}{64}}\) has standard deviation \(\frac{1}{8}\) and \({\mathbb {P}}\left( \xi > 2 \right) < 0.023\) for \(\xi \) following the standard Gaussian distribution.\(\square \)
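The final numerical step can be verified directly; here is a one-line illustrative sanity check of the Gaussian tail constant used above:

```python
from math import erfc, sqrt

# Since tilde{W}_{1/64} ~ N(0, 1/64), P(tilde{W}_{1/64} > 1/4) = P(xi > 2)
# for a standard Gaussian xi; check the constants used in Lemma 4.
p_tail = 0.5 * erfc(2 / sqrt(2))   # P(xi > 2) = 0.02275...
assert p_tail < 0.023
assert 1 - 4 * p_tail > 9 / 10
```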
Control the evolution of the spectral norm.
According to Lemma 4, to control the evolution of the measures of subsets, we need to control the spectral norm of \(A^{-1/2} A_t A^{-1/2}\). The following lemma serves the purpose.
Lemma 5
In addition to the assumptions of Lemma 3, suppose that \(\psi _k \ge \frac{1}{\alpha k^\beta }\) for all \(k \le d\), for some \(0 < \beta \le \frac{1}{2}\) and \(\alpha \ge 1\). Then there exists a universal constant c such that for \(q = \lceil \frac{1}{\beta } \rceil + 1\), \(d\ge 3\) and \(T_2 = \frac{1}{ c \cdot q \alpha ^2\log (d) d^{2\beta - \beta /(4q)}}\), we have
$$\begin{aligned} {\mathbb {P}}\left( \int _{0}^{T_2} \left\| A^{-1/2} A_t A^{-1/2}\right\| _{2} dt \ge \frac{1}{64} \right) < \frac{4}{10}. \end{aligned}$$
Direct control of the largest eigenvalue of \(A^{-1/2} A_t A^{-1/2}\) is not trivial; instead, we use the potential function \(\Gamma _t\) to upper bound the largest eigenvalue. Define
$$\begin{aligned} Q_t&= A^{-1/2} A_t A^{-1/2} \nonumber \\ \Gamma _t&= {{\,\mathrm{Tr}\,}}\left( Q_t^q \right) . \end{aligned}$$
It is clear that \(\Gamma _t^{1/q} \ge \left\| A^{-1/2} A_t A^{-1/2}\right\| _{2}\). So in order to upper bound \(\left\| A^{-1/2} A_t A^{-1/2}\right\| _{2}\), it is sufficient to upper bound \(\Gamma _t^{1/q}\). The advantage of using \(\Gamma _t\) is that it is a smooth function of \(Q_t\), so its evolution can be tracked via Itô's formula. We have the following differentials for \(A_t\) and \(\Gamma _t\):
$$\begin{aligned} dA_t&= \int (x - \mu _t) (x - \mu _t)^\top \left( (x-\mu _t)^\top A^{-1/2}dW_t \right) p_t(x) dx - A_tA^{-1}A_t dt, \end{aligned}$$
$$\begin{aligned} d\Gamma _t&= q \int \left( x-\mu _t \right) ^\top A^{-1/2} \left( Q_t \right) ^{q-1} A^{-1/2} \left( x-\mu _t \right) \left( x-\mu _t \right) ^\top A^{-1/2} dW_t p_t(x) dx \nonumber \\&\quad - q {{\,\mathrm{Tr}\,}}\left( Q_t^{q+1} \right) dt + \frac{q}{2} \sum _{a = 0}^{q-2} \int \int \left( x-\mu _t \right) ^\top A^{-1/2} Q_t^{a} A^{-1/2} \left( y-\mu _t \right) \nonumber \\&\quad \cdot \left( x-\mu _t \right) ^\top A^{-1/2} Q_t^{q-2-a} A^{-1/2} \left( y-\mu _t \right) \left( x-\mu _t \right) ^\top A^{-1} \left( y-\mu _t \right) p_t(x) p_t(y) dx dy dt. \end{aligned}$$
Obtaining these differentials uses Itô's formula and the proofs are provided in Appendix A.
The next lemma upper bounds the terms in the potential \(\Gamma _t\).
Lemma 6
Under the same assumptions as in Lemma 5, the differential of the potential \(\Gamma _t\) defined in Equation (14) can be written as follows
$$\begin{aligned} d\Gamma _t = v_t^\top dW_t + \delta _t dt, \end{aligned}$$
where \(v_t \in \mathbb {R}^d\) and \(\delta _t \in \mathbb {R}\) satisfy
$$\begin{aligned} \left\| v_t\right\| _{2}&\le 16 q \Gamma _t^{1 + 1/(2q)}, \text { and } \\ \delta _t&\le \min \left\{ 64 q^2 \alpha ^2 \log (d) d^{2\beta -1/q}\Gamma _t^{1 + 1/q}, \frac{2q^2}{t} \Gamma _t \right\} . \end{aligned}$$
The proof of Lemma 6 is provided in Section 3.1. Remark that bounds similar to the first bound of \(\delta _t\) in Lemma 6 have appeared in Lee and Vempala [17], whereas the second bound of \(\delta _t\) in Lemma 6 is novel. The second bound of \(\delta _t\) also leads to the following Lemma 8 which gives better control of the potential than the previous proof by Lee and Vempala [17] when t is large.
Using the bounds in Lemma 6, we state the two lemmas which control the potential \(\Gamma _t\) in two ways.
Lemma 7
Under the same assumptions as in Lemma 6, using the following transformation
$$\begin{aligned} h: \mathbb {R}_+&\rightarrow \mathbb {R}\\ a&\mapsto -(a+1)^{-1/q} \end{aligned}$$
we have
$$\begin{aligned} {\mathbb {P}}\left( \max _{t \in [0, T_1]} h(\Gamma _t) \ge - \frac{1}{2}\left( d+1 \right) ^{-1/q} \right) \le \exp \left( -\frac{2}{3} q\log (d)\right) \le \frac{3}{10} \end{aligned}$$
where \(T_1 = \frac{1}{32768 q \alpha ^2 \log (d) d^{2\beta }}\).
Lemma 8
Under the same assumptions as in Lemma 6, using the following transformation
$$\begin{aligned} f: \mathbb {R}_+&\rightarrow \mathbb {R}\\ a&\mapsto a^{1/q} \end{aligned}$$
we have
$$\begin{aligned} {\mathbb {E}}f(\Gamma _{t_2}) \le {\mathbb {E}}f(\Gamma _{t_1}) \left( \frac{t_2}{t_1} \right) ^{2q}, \quad \forall t_2> t_1 > 0. \end{aligned}$$
The proofs of Lemmas 7 and 8 are provided in Section 3.2.
Now we are ready to prove Lemma 5.
Proof of Lemma 5. We take
$$\begin{aligned} T_1 = \frac{1}{32768 q \alpha ^2 \log (d) d^{2\beta }}, \quad T_2 = \frac{d^{\beta /(4q)}}{40} T_1 = \frac{1}{ 1310720 q \alpha ^2\log (d) d^{2\beta - \beta /(4q)}}. \end{aligned}$$
We bound the spectral norm of \(A^{-1/2}A_t A^{-1/2}\) in two time intervals via Lemma 7 and Lemma 8. In the first time interval \([0, T_1]\), we have
$$\begin{aligned} {\mathbb {P}}\left( \int _0^{T_1} \left\| A^{-1/2}A_t A^{-1/2}\right\| _{2} dt \ge \frac{1}{128} \right)&\le {\mathbb {P}}\left( \max _{t \in [0, T_1]} \left\| A^{-1/2}A_t A^{-1/2}\right\| _{2} \ge \frac{1}{128T_1} \right) \nonumber \\&\quad {\mathop {\le }\limits ^{\mathrm{(i)}}} {\mathbb {P}}\left( \max _{t \in [0, T_1]} \left\| A^{-1/2}A_t A^{-1/2}\right\| _{2} \ge 3 d^{1/q} \right) \nonumber \\&\quad {\mathop {\le }\limits ^{\mathrm{(ii)}}} {\mathbb {P}}\left( \max _{t \in [0, T_1]} \Gamma _t \ge 3^{q} d \right) \nonumber \\&\quad {\mathop {\le }\limits ^{\mathrm{(iii)}}} {\mathbb {P}}\left( \max _{t \in [0, T_1]} \Gamma _t + 1 \ge 2^{q} (d+1) \right) \nonumber \\&\quad = {\mathbb {P}}\left( \max _{t \in [0, T_1]} h(\Gamma _t) \ge -\frac{1}{2} \left( d+1 \right) ^{-1/q} \right) \nonumber \\&\quad {\mathop {\le }\limits ^{\mathrm{(iv)}}} \frac{3}{10}. \end{aligned}$$
Inequality (i) follows from the condition \(\beta q \ge 1\). (ii) follows from the fact that \({{\,\mathrm{Tr}\,}}\left( A^q \right) ^{1/q} \ge \left\| A\right\| _{2}\). (iii) holds because \(3^q d\ge 2^q (d+ 1)\) when \(q \ge 2\) and \(d\ge 1\), and the subsequent equality uses the definition of h in Lemma 7. (iv) follows from Lemma 7.
In the first time interval, we can also bound the expectation of \(\Gamma _{T_1}^{1/q}\). Since the density \(p_{T_1}\) is more log-concave than a Gaussian density with covariance matrix \(\frac{A}{T_1}\), the covariance matrix of \(p_{T_1}\) is upper bounded as follows (see Theorem 4.1 in Brascamp and Lieb [5] or Lemma 5 in Eldan and Lehec [10])
$$\begin{aligned} A_{T_1} \preceq \frac{A}{T_1}. \end{aligned}$$
Consequently, all the eigenvalues of \(Q_{T_1}\) are less than \(\frac{1}{T_1}\) and \(\Gamma _{T_1}\) is upper bounded by \(\frac{d}{T_1^{q}}\). Using the above bound, we can bound the expectation of \(\Gamma _{T_1}^{1/q}\) as follows
$$\begin{aligned} {\mathbb {E}}\left[ \Gamma _{T_1}^{1/q} \right]&= {\mathbb {E}}\left[ \mathbb {1}_{\Gamma _{T_1} \ge 3^q d} \Gamma _{T_1}^{1/q} + \mathbb {1}_{\Gamma _{T_1} < 3^q d} \Gamma _{T_1}^{1/q} \right] \nonumber \\&{\mathop {\le }\limits ^{\mathrm{(i)}}} \frac{d^{1/q}}{T_1} \exp \left( - \frac{2}{3} q \log (d) \right) + 3 d^{1/q} \nonumber \\&{\mathop {\le }\limits ^{\mathrm{(ii)}}} 32768 d^{1/q} q \alpha ^2 + 4 d^{1/q} \nonumber \\&\le 40000 d^{1/q} q \alpha ^2. \end{aligned}$$
Inequality (i) follows from Lemma 7, the inequality \(3^qd\ge 2^q(d+1)\) (similar to what we did in the last four steps of Equation (17)) and Equation (18). (ii) follows from \(q \ge 2\), \(\beta \le {1/2}\) and \(d^{1/2} \ge \log (d)\) for \(d\ge 3\).
In the second time interval, for \(t \in [T_1, T_2]\), we have
$$\begin{aligned} {\mathbb {E}}\left[ \left\| A^{-1/2} A_{t} A^{-1/2}\right\| _{2} \right]&\le {\mathbb {E}}\left[ \Gamma _{t}^{1/q} \right] \nonumber \\&{\mathop {\le }\limits ^{\mathrm{(i)}}} {\mathbb {E}}\left[ \Gamma _{T_1}^{1/q} \right] \left( \frac{t}{T_1} \right) ^{2q} \nonumber \\&{\mathop {\le }\limits ^{\mathrm{(ii)}}} {\mathbb {E}}\left[ \Gamma _{T_1}^{1/q} \right] \left( \frac{T_2}{T_1} \right) ^{2q} \nonumber \\&{\mathop {\le }\limits ^{\mathrm{(iii)}}} 1000 d^{\beta /2 + 1/q} q \alpha ^2. \end{aligned}$$
Inequality (i) follows from Lemma 8. (ii) is because \(t \le T_2\). (iii) follows from \(T_2 = \frac{d^{\beta /(4q)}}{40} T_1\). Using the above bound, we control the spectral norm in the second time interval via Markov's inequality
$$\begin{aligned} {\mathbb {P}}\left( \int _{T_1}^{T_2} \left\| A^{-1/2}A_t A^{-1/2}\right\| _{2} dt \ge \frac{1}{128} \right)&{\mathop {\le }\limits ^{\mathrm{(i)}}} \frac{{\mathbb {E}}\left[ \int _{T_1}^{T_2} \left\| A^{-1/2}A_t A^{-1/2}\right\| _{2} dt \right] }{1/128} \nonumber \\&{\mathop {\le }\limits ^{\mathrm{(ii)}}} T_2 \cdot 1000 d^{\beta /2 + 1/q} q \alpha ^2 \cdot 128 \nonumber \\&{\mathop {<}\limits ^{\mathrm{(iii)}}} \frac{1}{10}, \end{aligned}$$
where inequality (i) follows from Markov's inequality and (ii) follows from Equation (20). (iii) follows from the definition of \(T_2\) and \(\frac{\beta }{2}+\frac{1}{q} \le 2\beta -\beta /(4q)\) when \(\beta q \ge 1\) and \(q \ge 2\).
Combining the bounds in the first and second time intervals in Equation (17) and (21), we obtain
$$\begin{aligned} {\mathbb {P}}\left( \int _{0}^{T_2} \left\| A^{-1/2}A_t A^{-1/2}\right\| _{2} dt \ge \frac{1}{64} \right)&\le {\mathbb {P}}\left( \int _{0}^{T_1} \left\| A^{-1/2}A_t A^{-1/2}\right\| _{2} dt \ge \frac{1}{128} \right) \nonumber \\&\quad + {\mathbb {P}}\left( \int _{T_1}^{T_2} \left\| A^{-1/2}A_t A^{-1/2}\right\| _{2} dt \ge \frac{1}{128} \right) \le \frac{4}{10}. \end{aligned}$$
\(\square \)
Proof of Lemma 2.
The proof of Lemma 2 follows the strategy described after Lemma 3. We make the arguments rigorous here. We consider a log-concave density p in \(\mathbb {R}^d\) with compact support. Without loss of generality, we can assume that the covariance matrix A of the density p is invertible. Otherwise, the density p is degenerate and we can instead prove the results in a lower dimension.
According to the concavity of the isoperimetric profile (Theorem 2.8 in Sternberg and Zumbrun [25] or Theorem 1.8 in Milman [20]), it is sufficient to consider subsets of measure 1/2 in the definition of the isoperimetric coefficient in Equation (2). Given an initial subset E of \(\mathbb {R}^d\) with \(p(E) = \frac{1}{2}\), using the martingale property of \(p_{T_2}(E)\), we have
$$\begin{aligned} p(\partial E)&= {\mathbb {E}}\left[ p_{T_2}(\partial E) \right] \\&{\mathop {\ge }\limits ^{\mathrm{(i)}}} {\mathbb {E}}\left[ \frac{1}{2}\left\| B_{T_2}^{-1}\right\| _{2}^{-1/2} \min \left( p_{T_2}(E), p_{T_2}(E^c) \right) \right] \\&{\mathop {\ge }\limits ^{\mathrm{(ii)}}} \frac{1}{4}\cdot \frac{1}{2}\left\| B_{T_2}^{-1}\right\| _{2}^{-1/2} {\mathbb {P}}( \frac{1}{4}\le p_{T_2}(E) \le \frac{3}{4})\\&= \frac{1}{4}\left\| B_{T_2}^{-1}\right\| _{2}^{-1/2} {\mathbb {P}}( \frac{1}{4}\le p_{T_2}(E) \le \frac{3}{4}) \cdot \min \left\{ p(E), p(E^c) \right\} \\&{\mathop {\ge }\limits ^{\mathrm{(iii)}}} \frac{1}{8}\left\| B_{T_2}^{-1}\right\| _{2}^{-1/2} \cdot \min \left\{ p(E), p(E^c) \right\} \\&{\mathop {=}\limits ^{\mathrm{(iv)}}} \frac{1}{8}T_2^{1/2}\left\| A\right\| _{2}^{-1/2} \cdot \min \left\{ p(E), p(E^c) \right\} . \end{aligned}$$
Inequality (i) uses the isoperimetric inequality for a log-concave density which is more log-concave than a Gaussian density proportional to \(e^{-\frac{1}{2}x^\top B_t x}\) (see e.g. Theorem 2.7 in Ledoux [16] or Theorem 4.4 in Cousins and Vempala [7]). Inequality (ii) uses the fact that \(\min \left( p_{T_2}(E), p_{T_2}(E^c) \right) \ge \frac{1}{4}\) on the event \(\left\{ \frac{1}{4}\le p_{T_2}(E) \le \frac{3}{4}\right\} \) and that \(p_{T_2}(E)\) is nonnegative. (iii) follows from Lemma 4 and Lemma 5 (for \(d\ge 3\)). (iv) follows from the construction that \(B_t = t A^{-1}\). We conclude the proof since \(T_2\) is taken as \(\frac{1}{ c \cdot q \alpha ^2\log (d) d^{2\beta - \beta /(4q)}}\) with c a constant. The above proof only works for \(d\ge 3\); it is easy to verify that Lemma 2 still holds for the cases \(d= 1, 2\) from the original KLS bound in Equation (3).\(\square \)
Proof of Lemma 1. The proof consists of applying Lemma 2 recursively. We define
$$\begin{aligned} \alpha _1 = 4, \beta _1 = \frac{1}{2}. \end{aligned}$$
For \(\ell \ge 1\), we define \(\alpha _\ell \) and \(\beta _\ell \) recursively as follows:
$$\begin{aligned} \alpha _{\ell +1}&= 2c \cdot \alpha _\ell \beta _\ell ^{-1/2}, \nonumber \\ \beta _{\ell +1}&= \beta _\ell - \beta _\ell ^2/16, \end{aligned}$$
where c is the constant in Lemma 2. It is not difficult to show by induction that \(\alpha _\ell \) and \(\beta _\ell \) satisfy
$$\begin{aligned} \frac{1}{\ell +1}&\le \beta _\ell \le \frac{16}{\ell } \nonumber \\ \alpha _\ell&\le \left( 4c^2 \ell \right) ^{\ell /2}. \end{aligned}$$
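The \(\beta _\ell \) bounds are straightforward to verify numerically; a quick illustrative check of the first part of Equation (24):

```python
# Check 1/(l+1) <= beta_l <= 16/l for the recursion
# beta_1 = 1/2, beta_{l+1} = beta_l - beta_l^2 / 16.
beta = 0.5
for l in range(1, 10_001):
    assert 1.0 / (l + 1) <= beta <= 16.0 / l, l
    beta -= beta * beta / 16.0
print("beta_l bounds hold for all l <= 10000")
```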
We start with a known bound from the original KLS paper [12]
$$\begin{aligned} \psi _d\ge \frac{1}{\alpha _1 d^{\beta _1}},\quad \forall d\ge 1. \end{aligned}$$
In the induction, suppose that we have
$$\begin{aligned} \psi _d\ge \frac{1}{\alpha _\ell \left( \log (d)+1 \right) ^{\ell /2} d^{\beta _\ell }},\quad \forall d\ge 1. \end{aligned}$$
From the above inequality, we obtain for any \(1 \le k \le d\),
$$\begin{aligned} \psi _k \ge \frac{1}{\alpha _\ell ' k^{\beta _\ell }}, \end{aligned}$$
with \(\alpha _\ell ' = \alpha _\ell \left( \log (d) +1 \right) ^{\ell /2}\). Using the above lower bounds for \(\psi _k\), we can apply Lemma 2. For integer \(\ell +1\), we have
$$\begin{aligned} \psi _d&{\mathop {\ge }\limits ^{(i)}} \frac{1}{c \cdot q^{1/2} \alpha _\ell \left( \log (d)+1 \right) ^{\ell /2} \log (d)^{1/2} d^{\beta _\ell - \beta _\ell / (8q) }}\\&{\mathop {\ge }\limits ^{(ii)}} \frac{1}{2c \cdot \alpha _\ell \beta _\ell ^{-1/2} \left( \log (d)+1 \right) ^{(\ell +1)/2} d^{\beta _\ell - \beta _\ell ^2 / 16 }} \\&= \frac{1}{\alpha _{\ell +1} \left( \log (d)+1 \right) ^{(\ell +1)/2} d^{\beta _{\ell +1}}} \end{aligned}$$
where inequality (i) follows from Lemma 2, inequality (ii) follows from \(q \le \frac{2}{\beta _\ell }\), and the last equality follows from the definition of \(\alpha _{\ell +1}\) and \(\beta _{\ell +1}\). We conclude Lemma 1 using the \(\alpha _\ell \) and \(\beta _\ell \) bounds in Equation (24).\(\square \)
Proof of Theorem 1.
To derive Theorem 1 from Lemma 1, it is sufficient to show that for any log-concave density p in \(\mathbb {R}^d\), most of its probability measure lies on a compact support. Let \(\mu \) be the mean of the density p. Since \(r \mapsto p(\mathbb {B}\left( \mu , r \right) ^c)\) is a non-increasing function of r with limit 0 at \(\infty \), there exists a radius \(R > 0\) such that \(p(\mathbb {B}\left( \mu , R \right) ^c) \le 0.2\). Note that it is possible to get a better bound via e.g. the log-concave concentration bounds of Paouris [22], but the existence of such a radius R is sufficient for the proof here.
Denote \(B = \mathbb {B}\left( \mu , R \right) \). Then \(p(B^c)\le 0.2\). Let \(\varrho \) be the density obtained by truncating p to the ball B. Then \(\varrho \) is log-concave and has compact support. For a subset \(E \subset \mathbb {R}^d\) with \(p(E) = \frac{1}{2}\), we have
$$\begin{aligned} p(\partial E)&\ge \varrho (\partial E) p(B) \\&\ge \psi (\varrho ) \min \left( \varrho (E), \varrho (E^c) \right) p(B) \\&= \psi (\varrho ) \min \left( p(E \cap B), p(B \cap E^c) \right) \\&\ge \psi (\varrho ) \min \left( p(E) - p(B^c), p(E^c) - p(B^c) \right) \\&\ge \frac{1}{2} \psi (\varrho ) \min \left( p(E), p(E^c) \right) . \end{aligned}$$
The last inequality follows because both \(p(E) - p(B^c)\) and \(p(E^c) - p(B^c)\) are at least \(0.5 - 0.2 \ge \frac{1}{4} = \frac{1}{2}\min \left( p(E), p(E^c) \right) \). Since it is sufficient to consider subsets of measure 1/2 in the definition of the isoperimetric coefficient [20, 25], we conclude that the isoperimetric coefficient of p is lower bounded by half of that of \(\varrho \). Applying Lemma 1 to the isoperimetric coefficient of \(\varrho \), we obtain Theorem 1.\(\square \)
Proof of auxiliary lemmas
In this section, we prove auxiliary Lemmas 6, 7 and 8.
Tensor bounds and proof of Lemma 6.
In this subsection, we prove Lemma 6. Since Lemma 6 involves the third-order moment tensor of a log-concave density, we define the following 3-Tensor for any probability density p on \(\mathbb {R}^d\) with mean \(\mu \) to simplify notation.
$$\begin{aligned}&\mathcal {T}_p: \quad \mathbb {R}^{d\times d} \times \mathbb {R}^{d\times d} \times \mathbb {R}^{d\times d} \rightarrow \mathbb {R}\nonumber \\&\quad (A, B, C) \mapsto \int \int (x-\mu )^\top A (y-\mu )\nonumber \\&\quad \cdot (x-\mu )^\top B (y - \mu ) \cdot (x - \mu ) ^\top C (y - \mu ) p(x) p(y) dx dy. \end{aligned}$$
For A, B, C three matrices in \(\mathbb {R}^{d\times d}\), we can write \(\mathcal {T}_p(A, B, C)\) equivalently as
$$\begin{aligned} \mathcal {T}_p(A, B, C) = {\mathbb {E}}_{X, Y \sim p} (X-\mu ) ^\top A (Y-\mu ) \cdot (X-\mu ) ^\top B (Y-\mu ) \cdot (X-\mu ) ^\top C (Y-\mu ). \end{aligned}$$
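As a concrete illustration (a Monte Carlo sketch, not used anywhere in the proofs), the 3-Tensor vanishes for densities symmetric about their mean, but it picks up the skewness of an asymmetric log-concave density such as the centered exponential:

```python
import numpy as np

# Monte Carlo sketch of T_p(I, I, I) for p with i.i.d. centered Exp(1)
# coordinates. Independence, E z = 0 and E z^3 = 2 give the exact value
# sum_i (E z^3)^2 = 4 d. All choices here are illustrative.
rng = np.random.default_rng(1)
d, n = 3, 400_000
X = rng.exponential(size=(n, d)) - 1.0    # X ~ p, mean zero
Y = rng.exponential(size=(n, d)) - 1.0    # Y ~ p, independent of X
s = np.einsum('ni,ni->n', X, Y)           # (X - mu)^T (Y - mu)
print("estimate:", np.mean(s ** 3).round(2), " exact:", 4 * d)
```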
Before we prove Lemma 6, we prove the following properties related to the 3-Tensor.
Lemma 9
Suppose p is a log-concave density with mean \(\mu \) and covariance A. Then for any positive semi-definite matrices B and C, we have
$$\begin{aligned} \left\| \int B^{1/2} (x - \mu ) (x - \mu ) ^\top C (x - \mu ) p(x)dx\right\| _{2} \le 16 \left\| A^{1/2}B A^{1/2}\right\| _{2}^{1/2} {{\,\mathrm{Tr}\,}}\left( A^{1/2} C A^{1/2} \right) . \end{aligned}$$
Lemma 10
Suppose that \(\psi _k \ge \frac{1}{\alpha k^\beta }\) for all \(k \le d\), for some \(0 < \beta \le \frac{1}{2}\) and \(\alpha \ge 1\). Suppose p is a log-concave density in \(\mathbb {R}^d\) with covariance A and A is invertible. Then for \(q \ge \frac{1}{2\beta }\), we have
$$\begin{aligned} \mathcal {T}_p(A^{q-2}, \mathbb {I}_d, \mathbb {I}_d) \le 128 \alpha ^2 \log (d) d^{2\beta - 1/q} {{\,\mathrm{Tr}\,}}(A^q) ^{1 + 1/q}. \end{aligned}$$
Lemma 11
Given \(\tau > 0\), suppose p is a log-concave density which is more log-concave than \(\mathcal {N}(0, \frac{1}{\tau } \mathbb {I}_d)\), and let A be its covariance matrix. If A is invertible, then for \(q \ge 3\), we have
$$\begin{aligned} \mathcal {T}_p(A^{q-2}, \mathbb {I}_d, \mathbb {I}_d) \le \frac{4}{\tau } {{\,\mathrm{Tr}\,}}\left( A^{q} \right) . \end{aligned}$$
Lemma 12
Suppose p is a log-concave density in \(\mathbb {R}^d\). Then for any \(\delta \in [0, 1]\) and any positive semi-definite matrices A, B, C, we have
$$\begin{aligned} \mathcal {T}_{p}(B^{1/2}A^\delta B^{1/2}, B^{1/2}A^{1-\delta }B^{1/2}, C) \le \mathcal {T}_{p}(B^{1/2}AB^{1/2}, B, C). \end{aligned}$$
The proofs of the above lemmas are provided in Section 3.3.
Proof of Lemma 6. We first prove the bound on \(\left\| v_t\right\| _{2}\), where
$$\begin{aligned} v_t = q \int A^{-1/2} \left( x-\mu _t \right) \left( x-\mu _t \right) ^\top A^{-1/2} \left( Q_t \right) ^{q-1} A^{-1/2} \left( x-\mu _t \right) p_t(x) dx. \end{aligned}$$
Applying Lemma 9 and knowing the covariance of \(p_t\) is \(A_t\), we obtain
$$\begin{aligned} \left\| v_t\right\| _{2}&\le 16 q \left\| A_t^{1/2} A^{-1} A_t^{1/2}\right\| _{2}^{1/2} {{\,\mathrm{Tr}\,}}\left( A_t^{1/2} A^{-1/2} Q_t^{q-1} A^{-1/2} A_t^{1/2} \right) \\&{\mathop {=}\limits ^{\mathrm{(i)}}} 16 q \left\| A_t^{1/2} A^{-1} A_t^{1/2}\right\| _{2}^{1/2} {{\,\mathrm{Tr}\,}}\left( Q_t^{q} \right) \\&{\mathop {=}\limits ^{\mathrm{(ii)}}} 16 q \left\| Q_t\right\| _{2}^{1/2} {{\,\mathrm{Tr}\,}}\left( Q_t^{q} \right) \\&{\mathop {\le }\limits ^{\mathrm{(iii)}}} 16 q \left[ {{\,\mathrm{Tr}\,}}\left( Q_t^{q} \right) \right] ^{1+1/(2q)}. \end{aligned}$$
Equality (i) uses the definition of \(Q_t = A^{-1/2} A_t A^{-1/2}\). Equality (ii) uses the fact that \(\left\| MM^\top \right\| _{2} = \left\| M^\top M\right\| _{2}\) for any square matrix \(M \in \mathbb {R}^{d\times d}\). Inequality (iii) uses that \(\left\| M\right\| _{2} \le {{\,\mathrm{Tr}\,}}\left( M^q \right) ^{1/q}\) for any positive semi-definite matrix M.
Next, we bound \(\delta _t\) in two ways. We can ignore the negative term in \(\delta _t\) to obtain the following:
$$\begin{aligned} \delta _t&\le \frac{q}{2} \sum _{a = 0}^{q-2} \int \int \left( x-\mu _t \right) ^\top A^{-1/2} Q_t^{a} A^{-1/2} \left( y-\mu _t \right) \nonumber \\&\quad \cdot \left( x-\mu _t \right) ^\top A^{-1/2} Q_t^{q-2-a} A^{-1/2} \left( y-\mu _t \right) \left( x-\mu _t \right) ^\top A^{-1} \left( y-\mu _t \right) p_t(x) p_t(y) dx dy \nonumber \\&= \frac{q}{2} \sum _{a = 0}^{q-2} \mathcal {T}_{\varrho _t}(Q_t^{a}, Q_t^{q-2-a}, \mathbb {I}_d), \end{aligned}$$
where \(\varrho _t\) is the density of the linearly transformed random variable \(A^{-1/2}\left( X-\mu _t \right) \) for X drawn from \(p_t\), with \(\mu _t\) the mean of \(p_t\). \(\varrho _t\) is still log-concave since any linear transformation of a log-concave density is log-concave (see e.g. Saumard and Wellner [24]). \(\varrho _t\) has covariance \(A^{-1/2} A_t A^{-1/2}\), which is exactly \(Q_t\). For \(a \in \left\{ 0, \cdots , q-2 \right\} \), we have
$$\begin{aligned} \mathcal {T}_{\varrho _t}(Q_t^{a}, Q_t^{q-2-a}, \mathbb {I}_d)&{\mathop {\le }\limits ^{\mathrm{(i)}}} \mathcal {T}_{\varrho _t}(Q_t^{q-2}, \mathbb {I}_d, \mathbb {I}_d) \\&{\mathop {\le }\limits ^{\mathrm{(ii)}}} 128 \alpha ^2 \log (d) d^{2\beta - 1/q} \left[ {{\,\mathrm{Tr}\,}}\left( Q_t^q \right) \right] ^{1+1/q}. \end{aligned}$$
Inequality (i) follows from Lemma 12. Inequality (ii) follows from Lemma 10. Since there are \(q-1\) terms in the sum, we conclude the first part of the bound for \(\delta _t\).
On the other hand, since \(p_t\) is more log-concave than the Gaussian density proportional to \(e^{-\frac{t}{2} (x-\mu _t)^\top A^{-1} (x-\mu _t)}\), \(\varrho _t\) is more log-concave than the Gaussian density proportional to \(e^{-\frac{t}{2} x^\top x}\). Applying Lemma 12 and Lemma 11 to each term in Equation (27), we obtain
$$\begin{aligned} \delta _t&\le \frac{q^2}{2} \mathcal {T}_{\varrho _t}(Q_t^{q-2}, \mathbb {I}_d, \mathbb {I}_d) \\&\le \frac{2q^2}{t} {{\,\mathrm{Tr}\,}}\left( Q_t^{q} \right) . \end{aligned}$$
This concludes the second part of the bound for \(\delta _t\).\(\square \)
Control of the potential in two time intervals.
In this subsection, we prove Lemma 7 and Lemma 8.
Proof of Lemma 7. The function h has the following derivatives
$$\begin{aligned} \frac{d h}{d a} = \frac{1}{q} \left( a + 1 \right) ^{-1/q - 1}, \quad \frac{d^2 h}{da^2} = -\frac{q+1}{q^2} \left( a + 1 \right) ^{-1/q - 2}. \end{aligned}$$
Using Itô's formula, we obtain
$$\begin{aligned} d h(\Gamma _t)&= \left. \frac{d h}{d a}\right| _{\Gamma _t} d\Gamma _t + \frac{1}{2} \left. \frac{d^2 h}{d a^2}\right| _{\Gamma _t} d\left[ \Gamma \right] _t \\ {}&= \frac{1}{q (\Gamma _t+1)^{1/q+1}} d\Gamma _t - \frac{1}{2} \frac{q+1}{q^2 (\Gamma _t+1)^{1/q+2}} \left\| v_t\right\| _{2}^2 dt \\ {}&\le \frac{1}{q (\Gamma _t+1)^{1/q+1}} d\Gamma _t \\ {}&{\mathop {\le }\limits ^{\mathrm{(i)}}}\ 64 q \alpha ^2 \log (d) d^{2\beta -1/q} dt + \frac{v_t^\top dW_t}{q \left( \Gamma _t + 1 \right) ^{1/q+1}}, \end{aligned}$$
where inequality (i) plugs in the bounds in Lemma 6.
Define a martingale \(Y_t\) such that
$$\begin{aligned} dY_t = \frac{v_t^\top dW_t}{q \left( \Gamma _t + 1 \right) ^{1/q+1}}, \end{aligned}$$
with \(Y_0 = 0\). According to the \(\left\| v_t\right\| _{2}\) upper bound in Lemma 6, we have
$$\begin{aligned} \left\| \frac{1}{q \left( \Gamma _t + 1 \right) ^{1 + 1/q}}v_t\right\| _{2}^2&\le 256. \end{aligned}$$
Hence the martingale \(Y_t\) is well-defined. According to the Dambis–Dubins–Schwarz theorem (see e.g. Revuz and Yor [23], Section V.1, Theorem 1.7), there exists a Wiener process \({\tilde{W}}_t\) such that \(Y_t\) has the same distribution as \({\tilde{W}}_{[Y]_t}\). Then we have for any \(\gamma > 0\),
$$\begin{aligned} {\mathbb {P}}\left( \max _{t \in [0, T]} Y_t \ge \gamma \right) \le {\mathbb {P}}\left( \max _{t \in [0, T]} {\tilde{W}}_{256t} \ge \gamma \right) \le \exp \left( -\frac{\gamma ^2}{512 T} \right) , \end{aligned}$$
where the first step uses \([Y]_t \le 256 t\) and the second step follows from Doob's maximal inequality applied to the exponential martingale \(\exp \left( \lambda {\tilde{W}}_s - \frac{\lambda ^2 s}{2} \right) \) with \(\lambda = \frac{\gamma }{256 T}\).
Set \(T = \frac{1}{32768 q \alpha ^2 \log (d) d^{2\beta }}\) and \(\Psi = \frac{1}{2} \left( d+1 \right) ^{-1/q}\). Observe that \(\Gamma _0 = d\) and as a result \(h(\Gamma _0) = -\left( d+1 \right) ^{-1/q}\). Then we have
$$\begin{aligned} {\mathbb {P}}\left( \max _{t \in [0, T]} h(\Gamma _t) \ge -\Psi \right)&\le {\mathbb {P}}\left( \max \right) _{t \in [0, T]} Y_t \ge -\Psi + \left( d+1 \right) ^{-1/q}\\ {}&\quad - \int _0^T 64q \alpha ^2 \log (d) d^{2\beta - 1/q} dt \\ {}&{\mathop {\le }\limits ^{\mathrm{(i)}}}\ {\mathbb {P}}\left( \max _{t \in [0, T]} Y_t \ge \frac{\Psi }{4} \right) \\ {}&{\mathop {\le }\limits ^{\mathrm{(ii)}}} \exp \left( -\frac{\Psi ^2}{8192T} \right) \\ {}&{\mathop {\le }\limits ^{\mathrm{(iii)}}} \exp \left( -\frac{2}{3} q \alpha ^2 \log (d) d^{2\beta - 2/q} \right) \nonumber \\ {}&{\mathop {<}\limits ^{\mathrm{(iv)}}} \frac{3}{10}. \end{aligned}$$
Inequality (i) follows from the choice of T. (ii) uses Equation (28). (iii) follows by plugging in \(\Psi = \frac{1}{2}\left( d+1 \right) ^{-1/q}\) and \(3^q d^2 \ge 2^q (d+ 1)^2\). (iv) follows from \(\beta q \ge 1\), \(d\ge 3\), \(q\ge 2\) and \(3^{-4/3} < 0.3\).\(\square \)
Proof of Lemma 8. The function f has the following derivatives
$$\begin{aligned} \frac{d f(a)}{d a} = \frac{1}{q} a^{1/q-1}, \quad \frac{d^2 f(a)}{d a^2} = -\frac{q-1}{q^2} a^{1/q-2}. \end{aligned}$$
$$\begin{aligned} d f\left( \Gamma _t \right)&= \left. \frac{df}{da} \right| _{\Gamma _t} d\Gamma _t + \frac{1}{2} \left. \frac{d^2 f}{ d^2 a }\right| _{\Gamma _t} d \left[ \Gamma \right] _t \\&= \frac{1}{q} \Gamma _t^{1/q-1} \left( v_t^\top dW_t + \delta _t dt \right) - \frac{q-1}{2q^2} \Gamma _t^{1/q-2} \left\| v_t\right\| _{2}^2 dt. \end{aligned}$$
Using the bounds in Lemma 6 and the martingale property of the term \(\frac{1}{q} \Gamma _t^{1/q-1} v_t^\top dW_t\), we obtain
$$\begin{aligned} d {\mathbb {E}}f(\Gamma _t) \le \frac{2q}{t} {\mathbb {E}}f(\Gamma _t) dt. \end{aligned}$$
Solving the above differential inequality via Grönwall's argument, we obtain \({\mathbb {E}}f(\Gamma _{t_2}) \le {\mathbb {E}}f(\Gamma _{t_1}) \left( \frac{t_2}{t_1} \right) ^{2q}\) for all \(t_2> t_1 > 0\), which concludes the proof of Lemma 8.\(\square \)
Proof of tensor bounds.
In this subsection, we prove Lemmas 9, 10, 11 and 12.
Proof of Lemma 9. Since C is positive semi-definite, we can write its eigenvalue decomposition as follows \(C = \sum _{i=1}^d\lambda _i v_i v_i^\top \), with \(\lambda _i \ge 0\). Then,
$$\begin{aligned}&\left\| \int B^{1/2} (x-\mu ) (x-\mu )^\top C (x-\mu ) p(x) dx\right\| _{2} \\&\quad = \left\| \sum _{i=1}^d\int B^{1/2} (x-\mu ) \lambda _i \left( (x-\mu )^\top v_i \right) ^2 p(x) dx\right\| _{2}\\&\quad {\mathop {\le }\limits ^{\mathrm{(i)}}} \sum _{i=1}^d\lambda _i \left\| \int B^{1/2} (x-\mu ) \left( (x-\mu )^\top v_i \right) ^2 p(x) dx\right\| _{2}\\&\quad = \sum _{i=1}^d\lambda _i \max _{\left\| \xi \right\| _{2}\le 1} \int \xi ^\top B^{1/2} (x-\mu ) \left( (x-\mu )^\top v_i \right) ^2 p(x) dx \\&\quad {\mathop {\le }\limits ^{\mathrm{(ii)}}} \sum _{i=1}^d\lambda _i \max _{\left\| \xi \right\| _{2}\le 1} \left( \int \left( \xi ^\top B^{1/2} (x-\mu ) \right) ^2 p(x) dx \right) ^{1/2} \left( \int \left( (x-\mu )^\top v_i \right) ^4 p(x) dx \right) ^{1/2} \\&\quad {\mathop {\le }\limits ^{\mathrm{(iii)}}} 16 \sum _{i=1}^d\lambda _i \max _{\left\| \xi \right\| _{2}\le 1} \left( \int \left( \xi ^\top B^{1/2} (x-\mu ) \right) ^2 p(x) dx \right) ^{1/2} \left( \int \left( (x-\mu )^\top v_i \right) ^2 p(x) dx \right) \\&\quad = 16\left\| B^{1/2} A B^{1/2} \right\| _{2}^{1/2} {{\,\mathrm{Tr}\,}}\left( A^{1/2}CA^{1/2} \right) . \end{aligned}$$
Inequality (i) follows from the triangle inequality. (ii) follows from the Cauchy–Schwarz inequality. (iii) follows from the statement below, which upper bounds the fourth moment of a log-concave density via its second moment.\(\square \)
For any log-concave density \(\nu \) and any vector \(\theta \in \mathbb {R}^{d}\), we have
$$\begin{aligned} \left( \int \left( (x-\mu _\nu )^\top \theta \right) ^a \nu (x) dx \right) ^{1/a} \le 2 \frac{a}{b} \left( \int \left( (x-\mu _\nu )^\top \theta \right) ^b \nu (x) dx \right) ^{1/b} \end{aligned}$$
for \(a \ge b > 0\), where \(\mu _\nu \) is the mean of \(\nu \). Equation (29) is proved e.g. in Corollary 5.7 of Guédon et al. [11], and the exact constant is provided in Proposition 3.8 of Latała and Wojtaszczyk [15].
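As a concrete example (for \(X \sim \mathrm {Exp}(1)\), a log-concave distribution with central moments \({\mathbb {E}}(X-1)^2 = 1\) and \({\mathbb {E}}(X-1)^4 = 9\)), the case \(a = 4\), \(b = 2\) of Equation (29) reads

$$\begin{aligned} \left( {\mathbb {E}}\left( X - 1 \right) ^4 \right) ^{1/4} = 9^{1/4} \approx 1.73 \le 4 = 2 \cdot \frac{4}{2} \cdot \left( {\mathbb {E}}\left( X - 1 \right) ^2 \right) ^{1/2}. \end{aligned}$$

Squaring the general case \(a = 4\), \(b = 2\) yields the factor \(4^2 = 16\) appearing in inequality (iii) of the proof of Lemma 9.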
In order to prove Lemma 10, we need to introduce one additional lemma as follows.
Lemma 13
Suppose that \(\psi _k \ge \frac{1}{\alpha k^\beta }\) for all \(k \le d\), for some \(0 < \beta \le \frac{1}{2}\) and \(\alpha \ge 1\). For an isotropic log-concave density p in \(\mathbb {R}^d\) and a unit vector \(v \in \mathbb {R}^d\), define \(\Delta = {\mathbb {E}}_{X \sim p} \left( X^\top v \right) \cdot XX^\top \). Then we have:
For any orthogonal projection matrix \(P \in \mathbb {R}^{d\times d}\) with rank r, we have
$$\begin{aligned} {{\,\mathrm{Tr}\,}}\left( \Delta P \Delta \right) \le 16 \psi ^{-2}_{\min (2r, d)}. \end{aligned}$$
For any positive semi-definite matrix A, we have
$$\begin{aligned} {{\,\mathrm{Tr}\,}}\left( \Delta A \Delta \right) \le 128 \alpha ^2 \log (d) \left( {{\,\mathrm{Tr}\,}}\left( A^{1/(2\beta )} \right) \right) ^{2\beta }. \end{aligned}$$
This lemma was proved as Lemma 41 in an older version (arXiv version 2) of Lee and Vempala [17]. The main proof idea for the first part of Lemma 13 appeared in Eldan [8] (Lemma 6). We provide a proof here for completeness.
Proof of Lemma 13. For the first part, we have
$$\begin{aligned} {{\,\mathrm{Tr}\,}}\left( \Delta P \Delta \right) = {\mathbb {E}}_{X \sim p} X^\top \Delta P X \cdot X ^\top v. \end{aligned}$$
Since \({\mathbb {E}}_{X\sim p} X^\top v = 0\), we can subtract the mean of the first term \(X^\top \Delta P X\) without changing the value of \({{\,\mathrm{Tr}\,}}\left( \Delta P \Delta \right) \). Then
$$\begin{aligned} {{\,\mathrm{Tr}\,}}\left( \Delta P \Delta \right)&= {\mathbb {E}}_{X\sim p} \left[ \left( X^\top \Delta P X - {\mathbb {E}}_{Y \sim p} Y ^\top \Delta P Y \right) \cdot X ^\top v \right] \\&{\mathop {\le }\limits ^{\mathrm{(i)}}} \left( {\mathbb {E}}_{X\sim p}(X^\top v)^2 \right) ^{1/2} \left( {{\,\mathrm{Var}\,}}_{X \sim p }\left( X ^\top \Delta P X \right) \right) ^{1/2} \\&{\mathop {\le }\limits ^{\mathrm{(ii)}}} 2 \psi _{\min (2r, d)}^{-1} \left( {\mathbb {E}}_{X \sim p} \left\| \Delta P X + P^\top \Delta ^\top X\right\| _{2}^2 \right) ^{1/2} \\&\le 4 \psi _{\min (2r, d)}^{-1} \left( {{\,\mathrm{Tr}\,}}\left( \Delta P \Delta \right) \right) ^{1/2}. \end{aligned}$$
Inequality (i) follows from the Cauchy–Schwarz inequality. Inequality (ii) follows from the fact that \({\mathbb {E}}_{X\sim p}(X^\top v)^2 = 1\) as p is isotropic, and from the fact that the inverse Poincaré constant is upper bounded by twice the inverse of the squared isoperimetric coefficient (also known as Cheeger's inequality [6, 19], or Theorem 1.1 in Milman [20]); the matrix \(\Delta P + P^\top \Delta \) has rank at most \(\min (2r, d)\). Rearranging the terms in the above equation, we conclude the first part of Lemma 13.
For the second part, we write the matrix A in its eigenvalue decomposition and group the terms by eigenvalues. We have
$$\begin{aligned} A = \sum _{i=1}^d\lambda _i v_i v_i^\top = \sum _{j=1}^J A_j + B, \end{aligned}$$
where \(A_j\) has eigenvalues in the interval \((\left\| A\right\| _{2} e^{j-1} /d, \left\| A\right\| _{2} e^{j} /d]\) and B has eigenvalues smaller than or equal to \(\left\| A\right\| _{2}/d\). Because the right endpoints of the intervals increase exponentially, we have \(J = \lceil \log (d) \rceil \). Let \(P_j\) be the orthogonal projection matrix formed by the eigenvectors of \(A_j\). Then we have
$$\begin{aligned} {{\,\mathrm{Tr}\,}}\left( \Delta A_j \Delta \right) \le \left\| A_j\right\| _{2} {{\,\mathrm{Tr}\,}}\left( \Delta P_j \Delta \right) {\mathop {\le }\limits ^{\mathrm{(i)}}} 16 \left\| A_j\right\| _{2} \psi ^{-2}_{\min (2 \text {rank}(A_j), d)} {\mathop {\le }\limits ^{\mathrm{(ii)}}} 16 \alpha ^2 \left\| A_j\right\| _{2} \cdot \left( 2 \text {rank}(A_j) \right) ^{2\beta }, \end{aligned}$$
where inequality (i) follows from the first part of Lemma 13 and inequality (ii) follows from the hypothesis of Lemma 13. Similarly for matrix B, we have
$$\begin{aligned} {{\,\mathrm{Tr}\,}}\left( \Delta B \Delta \right) {\mathop {\le }\limits ^{\mathrm{(i)}}} 16 \alpha ^2 \left\| B\right\| _{2} \left( 2\text {rank}(B) \right) ^{2\beta } {\mathop {\le }\limits ^{\mathrm{(ii)}}} 32 \alpha ^2 \left\| A\right\| _{2}, \end{aligned}$$
where inequality (i) follows from the hypothesis of Lemma 13 and inequality (ii) follows from the fact that \(\left\| B\right\| _{2} \le \left\| A\right\| _{2}/d\) and \(2\beta \le 1\). Putting the bounds (30) and (31) together, we have
$$\begin{aligned} {{\,\mathrm{Tr}\,}}\left( \Delta A \Delta \right)&= \sum _{j=1}^J {{\,\mathrm{Tr}\,}}\left( \Delta A_j\Delta \right) + {{\,\mathrm{Tr}\,}}\left( \Delta B \Delta \right) \\&\le 16 \alpha ^2 \left( \sum _{j=1}^J \left\| A_j\right\| _{2} \cdot \left( 2\text {rank}(A_j) \right) ^{2\beta } + 2\left\| A\right\| _{2} \right) \\&{\mathop {\le }\limits ^{\mathrm{(i)}}} 16 \alpha ^2 \left[ \left( \sum _{j=1}^J \left\| A_j\right\| _{2}^{1/(2\beta )} \cdot \left( 2\text {rank}(A_j) \right) \right) ^{2\beta } \cdot \left( J \right) ^{1-2\beta } + 2\left\| A\right\| _{2} \right] \\&{\mathop {\le }\limits ^{\mathrm{(ii)}}} 16 \alpha ^2\left[ \left( 2 e {{\,\mathrm{Tr}\,}}\left( A^{1/(2\beta )} \right) \right) ^{2\beta } \cdot \left( J \right) ^{1-2\beta } + 2 \left\| A\right\| _{2} \right] \\&\le 128 \alpha ^2 \log (d) \left( {{\,\mathrm{Tr}\,}}\left( A^{1/(2\beta )} \right) \right) ^{2\beta }. \end{aligned}$$
Inequality (i) follows from Hölder's inequality and inequality (ii) follows from the fact that \(\left\| A_j\right\| _{2}^{1/(2\beta )} \text {rank}(A_j) \le e {{\,\mathrm{Tr}\,}}\left( A_{j}^{1/(2\beta )} \right) \) due to the construction of \(A_j\). This concludes the second part of Lemma 13.\(\square \)
Proof of Lemma 10. Let \(\mu \) be the mean of p. First, for X a random vector in \(\mathbb {R}^d\) drawn from p, we define the standardized random variable \(A^{-1/2} (X - \mu )\) and its density \(\varrho \). \(\varrho \) is an isotropic log-concave density. Then through a change of variable, we have
$$\begin{aligned}&\mathcal {T}_p \left( A^{q-2}, \mathbb {I}_d, \mathbb {I}_d \right) \\&\quad = \int \int (x-\mu )^\top A^{q-2} (y-\mu ) \cdot (x-\mu )^\top (y-\mu ) \cdot (x-\mu ) ^\top (y-\mu ) p(x) p(y) dx dy \\&\quad = \int \int \left( x^\top A^{q-1} y \right) (x^\top A y) (x ^\top A y) \varrho (x) \varrho (y) dx dy \\&\quad \le \int \int \left( x^\top A^{q} y \right) (x^\top A y) (x ^\top y) \varrho (x) \varrho (y) dx dy \\&\quad = \mathcal {T}_\varrho \left( A^{q}, A, \mathbb {I}_d \right) , \end{aligned}$$
where the last inequality follows from Lemma 12. \(A^q\) is positive semi-definite and we write down its eigenvalue decomposition \(A^q = \sum _{i=1}^d\lambda _i v_i v_i ^\top \) with \(\lambda _i \ge 0\). Since \(\varrho \) is isotropic, we can rewrite the 3-Tensor into a summation form and apply Lemma 13.
$$\begin{aligned}&\mathcal {T}_\varrho \left( A^{q}, A, \mathbb {I}_d \right) \\&\quad = \int \int \left( x ^\top A^q y \right) \left( x ^\top A y \right) \left( x^\top y \right) \varrho (x) \varrho (y) dx dy \\&\quad = \sum _{i=1}^d\lambda _i \int \int \left( x ^\top v_i \right) \left( y ^\top v_i \right) \left( x ^\top A y \right) \left( x^\top y \right) \varrho (x) \varrho (y) dx dy \\&\quad = \sum _{i=1}^d\lambda _i {{\,\mathrm{Tr}\,}}\left( \Delta _i A \Delta _i \right) \\&\quad {\mathop {\le }\limits ^{\mathrm{(i)}}} 128 \alpha ^2 \log (d) \left( {{\,\mathrm{Tr}\,}}(A^{1/2\beta }) \right) ^{2\beta } \left( \sum _{i=1}^d\lambda _i \right) \\&\quad = 128 \alpha ^2 \log (d) \left( {{\,\mathrm{Tr}\,}}(A^{1/2\beta }) \right) ^{2\beta } {{\,\mathrm{Tr}\,}}(A^q) \\&\quad {\mathop {\le }\limits ^{\mathrm{(ii)}}} 128 \alpha ^2 \log (d) {{\,\mathrm{Tr}\,}}(A^q) \left[ {{\,\mathrm{Tr}\,}}\left( A^q \right) ^{1/(2\beta q)} \left( d \right) ^{1 - 1/(2\beta q)} \right] ^{2\beta } \\&\quad = 128 \alpha ^2 \log (d) d^{2\beta - 1/q} {{\,\mathrm{Tr}\,}}(A^q) ^{1 + 1/q}, \end{aligned}$$
where we define \(\Delta _i = \int (x^\top v_i) x x^\top \varrho (x) dx\); inequality (i) follows from Lemma 13 and the fact that \(\varrho \) is isotropic; inequality (ii) follows from Hölder's inequality and the assumption that \(q \ge \frac{1}{2\beta }\).\(\square \)
Proof of Lemma 11. Without loss of generality, we can assume that the density p has mean 0. Its covariance matrix A is positive semi-definite and invertible. We can write its eigenvalue decomposition as \(A = \sum _{i=1}^d\lambda _i v_i v_i^\top \) with \(\lambda _i > 0\) and unit-norm eigenvectors \(v_i\). Then \(A^{q}\) has an eigenvalue decomposition with the same eigenvectors, \(A^q = \sum _{i=1}^d\lambda _i^q v_i v_i^\top \). Define \(\Delta _i = {\mathbb {E}}_{X \sim p} (X^\top A^{-1/2}v_i) X X ^\top \), then
$$\begin{aligned} \mathcal {T}_p\left( A^{q-2}, \mathbb {I}_d, \mathbb {I}_d \right)&= {\mathbb {E}}_{X, Y \sim p} \left( X^\top A^{q-2} Y \right) (X^\top Y) (X ^\top Y) \nonumber \\&= \sum _{i=1}^d\lambda _i^{q-1} {{\,\mathrm{Tr}\,}}\left( \Delta _i \Delta _i \right) . \end{aligned}$$
Next we bound the terms \({{\,\mathrm{Tr}\,}}\left( \Delta _i \Delta _i \right) \). We have
$$\begin{aligned} {{\,\mathrm{Tr}\,}}\left( \Delta _i \Delta _i \right)&= {\mathbb {E}}_{X \sim p} \left( X ^\top A^{-1/2} v_i \right) X^\top \Delta _i X \\&{\mathop {=}\limits ^{\mathrm{(i)}}} {\mathbb {E}}_{X \sim p} \left( X ^\top A^{-1/2} v_i \right) \left( X^\top \Delta _i X - {\mathbb {E}}_{Y \sim p} \left[ Y^\top \Delta _i Y \right] \right) \\&{\mathop {\le }\limits ^{\mathrm{(ii)}}} \left( {\mathbb {E}}_{X \sim p} \left( X ^\top A^{-1/2} v_i \right) ^2 \right) ^{1/2} \left( {{\,\mathrm{Var}\,}}\left( X ^\top \Delta _i X \right) \right) ^{1/2} \\&{\mathop {=}\limits ^{\mathrm{(iii)}}} \left( {{\,\mathrm{Var}\,}}_{X \sim p}\left( X ^\top \Delta _i X \right) \right) ^{1/2} \\&{\mathop {\le }\limits ^{\mathrm{(iv)}}} \left( {\mathbb {E}}_{X \sim p} \frac{1}{\tau } \left\| \Delta _i X + \Delta _i X\right\| _{2}^2 \right) ^{1/2} \\&{\mathop {\le }\limits ^{\mathrm{(v)}}} \left( \frac{4}{\tau } {{\,\mathrm{Tr}\,}}\left( A \Delta _i \Delta _i \right) \right) ^{1/2}. \end{aligned}$$
Equality (i) holds because \({\mathbb {E}}_{X \sim p} X = 0\). Inequality (ii) follows from the Cauchy–Schwarz inequality. Equality (iii) follows from the definition of the covariance matrix \({\mathbb {E}}_{X\sim p} XX^\top = A\). Inequality (iv) follows from the Brascamp–Lieb inequality (or Hessian Poincaré, see Theorem 4.1 in Brascamp and Lieb [5]) together with the assumption that p is more log-concave than \(\mathcal {N}(0, \frac{1}{\tau }\mathbb {I}_d)\). Inequality (v) again uses \({\mathbb {E}}_{X\sim p} XX^\top = A\).
Plugging the bounds of the terms \({{\,\mathrm{Tr}\,}}\left( \Delta _i \Delta _i \right) \) into Equation (32), we obtain
$$\begin{aligned} \mathcal {T}_p\left( A^{q-2}, \mathbb {I}_d, \mathbb {I}_d \right)&= \sum _{i=1}^d\lambda _i^{q-1} {{\,\mathrm{Tr}\,}}\left( \Delta _i \Delta _i \right) \\&\le \sum _{i=1}^d\lambda _i^{q-1} \left( \frac{4}{\tau } {{\,\mathrm{Tr}\,}}\left( A \Delta _i \Delta _i \right) \right) ^{1/2} \\&{\mathop {\le }\limits ^{\mathrm{(i)}}} \frac{2}{\tau ^{1/2}} \left( \sum _{i=1}^d\lambda _i^{q} \right) ^{1/2} \left( \sum _{i=1}^d\lambda _i^{q-2} {{\,\mathrm{Tr}\,}}\left( A \Delta _i \Delta _i \right) \right) ^{1/2} \\&= \frac{2}{\tau ^{1/2}} \left( {{\,\mathrm{Tr}\,}}\left( A^q \right) \right) ^{1/2} \left( {\mathbb {E}}_{X, Y \sim p} \left( X^\top A^{q-3} Y \right) (X^\top A Y) (X ^\top Y) \right) ^{1/2} \\&{\mathop {\le }\limits ^{\mathrm{(ii)}}} \frac{2}{\tau ^{1/2}} \left( {{\,\mathrm{Tr}\,}}\left( A^q \right) \right) ^{1/2} \left( {\mathbb {E}}_{X, Y \sim p} \left( X^\top A^{q-2} Y \right) (X^\top Y) (X ^\top Y) \right) ^{1/2} \\&= \frac{2}{\tau ^{1/2}} \left( {{\,\mathrm{Tr}\,}}\left( A^q \right) \right) ^{1/2} \left[ \mathcal {T}_p\left( A^{q-2}, \mathbb {I}_d, \mathbb {I}_d \right) \right] ^{1/2}. \end{aligned}$$
Inequality (i) follows from Cauchy–Schwarz inequality. For \(q \ge 3\), inequality (ii) follows from Lemma 12. From the above equation, after rearranging the terms, we obtain
$$\begin{aligned} \mathcal {T}_p\left( A^{q-2}, \mathbb {I}_d, \mathbb {I}_d \right) \le \frac{4}{\tau } {{\,\mathrm{Tr}\,}}\left( A^q \right) . \end{aligned}$$
\(\square \)
Proof of Lemma 12. This lemma is proved as Lemma 43 in an older version (arXiv version 2) of Lee and Vempala [17]; we provide a proof here for completeness.
Without loss of generality, we can assume that the density p has mean 0. For \(i \in \left\{ 1, \cdots , d \right\} \), we define \(\Delta _i = {\mathbb {E}}_{X\sim p} B^{1/2} X X ^\top B^{1/2} X^\top C^{1/2} e_i\) where \(e_i \in \mathbb {R}^d\) is the vector with ith coordinate 1 and 0 elsewhere. We have \(\sum _{i=1}^de_i e_i ^\top = \mathbb {I}_d\). We can rewrite the tensor on the left hand side as a sum of traces.
$$\begin{aligned}&\mathcal {T}_{p}(B^{1/2}A^\delta B^{1/2}, B^{1/2}A^{1-\delta }B^{1/2}, C) \nonumber \\&\quad = {\mathbb {E}}_{X, Y \sim p} X^\top B^{1/2}A^\delta B^{1/2} Y \cdot X^\top B^{1/2}A^{1-\delta }B^{1/2} Y \cdot X ^\top C Y \nonumber \\&\quad = \sum _{i=1}^d{\mathbb {E}}_{X, Y \sim p} X^\top B^{1/2}A^\delta B^{1/2} Y \cdot X^\top B^{1/2}A^{1-\delta }B^{1/2} Y \cdot X^\top C^{1/2} e_i \cdot Y^\top C^{1/2} e_i \nonumber \\&\quad = \sum _{i=1}^d{{\,\mathrm{Tr}\,}}\left( A^{\delta } \Delta _i A^{1-\delta } \Delta _i \right) . \end{aligned}$$
For any symmetric matrix F, a positive-semidefinite matrix G and \(\delta \in [0, 1]\), we have
$$\begin{aligned} {{\,\mathrm{Tr}\,}}\left( G^\delta F G^{1-\delta } F \right) \le {{\,\mathrm{Tr}\,}}\left( G F^2 \right) . \end{aligned}$$
Applying the above trace inequality (34), which we prove later for completeness (see also Lemma 2.1 in Allen-Zhu et al. [1]), we obtain
$$\begin{aligned} {{\,\mathrm{Tr}\,}}\left( A^{\delta } \Delta _i A^{1-\delta } \Delta _i \right) \le {{\,\mathrm{Tr}\,}}\left( A \Delta _i \Delta _i \right) . \end{aligned}$$
Writing the sum of traces in Equation (33) back to the 3-Tensor form, we conclude Lemma 12.
It remains to prove the trace inequality in Equation (34). Without loss of generality, we can assume G is diagonal. Hence, we have
$$\begin{aligned} {{\,\mathrm{Tr}\,}}\left( G^\delta F G^{1-\delta } F \right)&= \sum _{i = 1}^d\sum _{j = 1}^dG_{ii}^\delta G_{jj}^{1-\delta } F_{ij}^2 \\&\le \sum _{i=1}^d\sum _{j=1}^d\left( \delta G_{ii} + (1-\delta ) G_{jj} \right) F_{ij}^2 \\&= \delta \sum _{i=1}^d\sum _{j=1}^dG_{ii} F_{ij}^2 + (1-\delta ) \sum _{i=1}^d\sum _{j=1}^dG_{jj} F_{ij}^2 \\&= {{\,\mathrm{Tr}\,}}\left( G F^2 \right) , \end{aligned}$$
where the inequality follows from Jensen's inequality and the fact that the logarithm function is concave (or the inequality of arithmetic and geometric means).\(\square \)
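The trace inequality is also easy to probe numerically; here is an illustrative random check of Equation (34) (all names below are ours, not the paper's):

```python
import numpy as np

# Random check of Tr(G^delta F G^{1-delta} F) <= Tr(G F^2)
# for symmetric F and positive semi-definite G, delta in [0, 1].
rng = np.random.default_rng(2)
for _ in range(200):
    dim, delta = 5, rng.uniform()
    F = rng.standard_normal((dim, dim)); F = (F + F.T) / 2
    M = rng.standard_normal((dim, dim)); G = M @ M.T
    w, V = np.linalg.eigh(G)
    w = np.clip(w, 0.0, None)              # guard tiny negative eigenvalues
    Gd  = (V * w**delta) @ V.T             # G^delta
    G1d = (V * w**(1 - delta)) @ V.T       # G^{1-delta}
    lhs = np.trace(Gd @ F @ G1d @ F)
    rhs = np.trace(G @ F @ F)
    assert lhs <= rhs + 1e-8 * (1 + abs(rhs))
```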
Z. Allen-Zhu, Y.T. Lee, and L. Orecchia. Using optimization to obtain a width-independent, parallel, simpler, and faster positive SDP solver. In: Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms. SIAM (2016), pp. 1824–1831.
M. Anttila, K. Ball, and I. Perissinaki. The central limit problem for convex bodies. Transactions of the American Mathematical Society, (12)355 (2003), 4723–4735
K. Ball. Logarithmically concave functions and sections of convex sets in \(\mathbb {R}^n\). Studia Math, (1)88 (1988), 69–84
J. Bourgain. On high dimensional maximal functions associated to convex bodies. American Journal of Mathematics, (6)108 (1986), 1467–1476
H.J. Brascamp and E.H. Lieb. On extensions of the Brunn–Minkowski and Prékopa–Leindler theorems, including inequalities for log concave functions, and with an application to the diffusion equation. In: Inequalities. Springer (2002), pp. 441–464.
J. Cheeger. A lower bound for the smallest eigenvalue of the Laplacian. In: Proceedings of the Princeton Conference in Honor of Professor S. Bochner (1969), pp. 195–199.
B. Cousins and S. Vempala. A cubic algorithm for computing Gaussian volume. In: Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms. Society for Industrial and Applied Mathematics (2014), pp. 1215–1228.
R. Eldan. Thin shell implies spectral gap up to polylog via a stochastic localization scheme. Geometric and Functional Analysis, (2)23 (2013), 532–569
R. Eldan and B. Klartag. Approximately Gaussian marginals and the hyperplane conjecture. Concentration, Functional Inequalities and Isoperimetry, 545 (2011), 55–68
R. Eldan and J. Lehec. Bounding the norm of a log-concave vector via thin-shell estimates. In: Geometric Aspects of Functional Analysis. Springer (2014), pp. 107–122.
O. Guédon, P. Nayar, and T. Tkocz. Concentration inequalities and geometry of convex bodies. Analytical and Probabilistic Methods in the Geometry of Convex Bodies, 2 (2014), 9–86
R. Kannan, L. Lovász, and M. Simonovits. Isoperimetric problems for convex bodies and a localization lemma. Discrete & Computational Geometry, (3–4)13 (1995), 541–559
B. Klartag. On convex perturbations with a bounded isotropic constant. Geometric & Functional Analysis GAFA, (6)16 (2006), 1274–1290
B. Klartag and V. Milman. The slicing problem by Bourgain. In: (To Appear) Analysis at Large, A Collection of Articles in Memory of Jean Bourgain. Springer (2021).
R. Latała and J. Wojtaszczyk. On the infimum convolution inequality. Studia Mathematica, (189)2 (2008), 147–187
M. Ledoux. The Concentration of Measure Phenomenon, Number 89. American Mathematical Society (2001).
Y.T. Lee and S.S. Vempala. Eldan's stochastic localization and the KLS hyperplane conjecture: an improved lower bound for expansion. In: 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS). IEEE (2017), pp. 998–1007.
Y.T. Lee and S.S. Vempala. The Kannan–Lovász–Simonovits conjecture. arXiv preprint arXiv:1807.03465 (2018).
V.G. Maz'ya. Classes of domains and imbedding theorems for function spaces. In: Doklady Akademii Nauk, Vol. 133. Russian Academy of Sciences (1960), pp. 527–530.
E. Milman. On the role of convexity in isoperimetry, spectral gap and concentration. Inventiones Mathematicae, (1)177 (2009), 1–43
B. Øksendal. Stochastic Differential Equations. Springer, Berlin (2003).
G. Paouris. Concentration of mass on convex bodies. Geometric & Functional Analysis GAFA, (5)16 (2006), 1021–1049
D. Revuz and M. Yor. Continuous Martingales and Brownian Motion, Vol. 293. Springer, Berlin (2013).
A. Saumard and J.A. Wellner. Log-concavity and strong log-concavity: a review. Statistics Surveys, 8 (2014), 45
P. Sternberg and K. Zumbrun. On the connectivity of boundaries of sets minimizing perimeter subject to a volume constraint. Communications in Analysis and Geometry, (1)7 (1999), 199–220
Yuansi Chen has received funding from the European Research Council under the Grant Agreement No 786461 (CausalStats - ERC-2017-ADG). We acknowledge scientific interaction and exchange at "ETH Foundations of Data Science". We thank Peter Bühlmann and Bin Yu for their continuous support and encouragement. We thank Afonso Bandeira, Raaz Dwivedi, Ronen Eldan, Yin Tat Lee and Martin Wainwright for helpful discussions. We thank Bo'az Klartag and Joseph Lehec for pointing out a mistake in the previous revision. We also thank anonymous reviewers for their careful reading of our manuscript and their suggestions on presentation and writing.
Open access funding provided by Swiss Federal Institute of Technology Zurich.
Seminar for Statistics, ETH, Zurich, Switzerland
Yuansi Chen
Correspondence to Yuansi Chen.
Proof of Lemma 3 and derivatives
In this section, we first prove the existence and uniqueness of the SDE solution in Lemma 3 and then derive the derivatives of \(p_t\), \(A_t\) and \(\Gamma _t\) in Equations (13), (15) and (16) using Itô's calculus. Similar results are also proved in Eldan [8] and Lee and Vempala [17], as a similar stochastic localization scheme is used there. We provide a proof here for completeness.
Proof of Lemma 3. We can rewrite the stochastic differential equation (8) as follows to make the dependency clear:
$$\begin{aligned} dc_t&= A^{-1/2} dW_t + A^{-1} \mu \left( c_t, B_t \right) dt\\ dB_t&= A^{-1} dt, \end{aligned}$$
where
$$\begin{aligned} \mu (c, B)&= \int x \varrho (c, B, x) dx, \\ \varrho (c, B, x)&= \frac{e^{c^\top x - \frac{1}{2}x^\top B x} p(x)}{\int _{\mathbb {R}^d} e^{c ^\top y - \frac{1}{2}y^\top B y} p(y) dy}. \end{aligned}$$
Since p has a compact support, given \(x \in \mathbb {R}^d\), \(\varrho (\cdot , \cdot , x)\) as a function of (c, B) is Lipschitz in c and B. Similarly, \(\mu \) is also Lipschitz in c and B. Consequently, \(A^{-1/2}\), \(A^{-1}\mu (c_t, B_t)\) and \(A^{-1}\) are all bounded and Lipschitz in \(c_t\) and \(B_t\) on the compact support. Applying the existence and uniqueness theorem for SDE solutions (Theorem 5.2 in Øksendal [21]), we conclude that the SDE solution exists and is unique on the time interval [0, T] for any \(T > 0\).
Next, we derive the derivative of \(p_t\). Define
$$\begin{aligned} G_t(x)&= e^{c_t^\top x - \frac{1}{2} x^\top B_t x} p(x), \\ V_t&= \int G_t(x) dx. \end{aligned}$$
Then \(p_t(x)\) can be written as \(\frac{G_t(x)}{V_t}\). Let \(S_t(x)\) denote the quadratic variation of the process \(c_t^\top x\). We have
$$\begin{aligned} d S_t(x) = x^\top A^{-1} x dt. \end{aligned}$$
Using Itô's formula, we have
$$\begin{aligned} dG_t(x)&= \left( x^\top (dc_t) - \frac{1}{2} x^\top dB_t x + \frac{1}{2} dS_t \right) G_t(x) \\&= \left( x^\top A^{-1/2}dW_t + x^\top A^{-1} \mu _t dt \right) G_t(x), \\ dV_t&= \int dG_t(x) dx = V_t \left( \mu _t^\top A^{-1/2}dW_t + \mu _t^\top A^{-1} \mu _t dt \right) . \end{aligned}$$
Using Itô's formula on the inverse of \(V_t\), we have
$$\begin{aligned} d V_t^{-1}&= -\frac{dV_t}{V_t^2} + \frac{d \left[ V \right] _t}{V_t^3} \\&= - V_t^{-1} \left[ \mu _t^\top A^{-1/2} dW_t + \mu _t^\top A^{-1} \mu _t dt \right] + V_t^{-1} \mu _t^\top A^{-1} \mu _t dt \\&= - V_t^{-1} \mu _t^\top A^{-1/2} dW_t. \end{aligned}$$
Using Itô's formula on \(p_t\), with the above derivatives, we obtain
$$\begin{aligned} dp_t(x)&= d \left( V_t^{-1} G_t(x) \right) \\&= \left( G_t(x) dV_t^{-1} + V_t^{-1}dG_t(x) + d\left[ V^{-1}, G(x) \right] _t \right) \\&= \left( x-\mu _t \right) ^\top A^{-1/2} dW_t p_t(x). \end{aligned}$$
Then we derive the derivative of \(A_t\). By the definition of \(A_t\), we have
$$\begin{aligned} A_t = \int \left( x - \mu _t \right) \left( x - \mu _t \right) ^\top p_t(x) dx, \end{aligned}$$
where \(\mu _t = \int _{\mathbb {R}^d} x p_t(x) dx\). Using Itô's formula on \(\mu _t\), we obtain
$$\begin{aligned} d\mu _t&= \int x\, d p_t(x) dx \\&= \int x (x- \mu _t)^\top A^{-1/2} dW_t\, p_t(x) dx \\&= \int (x - \mu _t) (x- \mu _t)^\top A^{-1/2} dW_t\, p_t(x) dx\\&= A_t A^{-1/2} dW_t. \end{aligned}$$
Using Itô's formula on \(A_t\) and viewing it as a function of \(\mu _t\) and \(p_t\), we obtain
$$\begin{aligned} dA_t =&\int \left( x - \mu _t \right) \left( x - \mu _t \right) ^\top dp_t(x) dx - \int d\mu _t\left( x - \mu _t \right) ^\top p_t(x) dx\\&- \int \left( x - \mu _t \right) \left( d\mu _t \right) ^\top p_t(x) dx \\&-\frac{1}{2}\cdot 2 \int \left( x - \mu _t \right) d\left[ \mu _t^\top , p_t(x) \right] _t dx - \frac{1}{2}\cdot 2 \int d\left[ \mu _t, p_t(x) \right] _t \left( x - \mu _t \right) ^\top dx \\&+ \frac{1}{2} \cdot 2 d\left[ \mu _t, \mu _t^\top \right] _t \int p_t(x) dx. \end{aligned}$$
We observe that \(\int d\mu _t\left( x - \mu _t \right) ^\top p_t(x) dx = 0\) and \(\int \left( x - \mu _t \right) \left( d\mu _t \right) ^\top p_t(x) dx = 0\). Then,
$$\begin{aligned} d\left[ \mu _t^\top , p_t(x) \right] _t&= \left( x - \mu _t \right) ^\top A^{-1} A_t p_t(x) dt,\\ d\left[ \mu _t, p_t(x) \right] _t&= A_t A^{-1} \left( x - \mu _t \right) p_t(x) dt, \\ d\left[ \mu _t, \mu _t^\top \right] _t&= A_t A^{-1} A_t dt. \end{aligned}$$
Combining all the terms together, we have
$$\begin{aligned} dA_t = \int \left( x - \mu _t \right) \left( x - \mu _t \right) ^\top \left( \left( x-\mu _t \right) ^\top A^{-1/2} dW_t \right) p_t(x) dx - A_tA^{-1} A_t dt. \end{aligned}$$
Finally, we derive the derivative of \(\Gamma _t\). Define the function \(\Gamma : \mathbb {R}^{d\times d} \mapsto \mathbb {R}\) as \(\Gamma (X) = {{\,\mathrm{Tr}\,}}\left( X^q \right) \). The first-order and second-order derivatives of \(\Gamma \) are given by
$$\begin{aligned} \left. \frac{\partial \Gamma }{\partial X}\right| _{H} = q {{\,\mathrm{Tr}\,}}\left( X^{q-1} H \right) , \left. \frac{\partial ^2 \Gamma }{\partial X \partial X}\right| _{H_1, H_2} = q \sum _{a=0}^{q-2} {{\,\mathrm{Tr}\,}}\left( X^a H_2 X^{q-2-a} H_1 \right) . \end{aligned}$$
Using the above derivatives and Itô's formula, we obtain
$$\begin{aligned} d\Gamma _t = d {{\,\mathrm{Tr}\,}}\left( Q_t^q \right) = q {{\,\mathrm{Tr}\,}}\left( Q_t^{q-1} d Q_t \right) + \frac{q}{2} \sum _{a = 0}^{q-2} \sum _{i,j,k,l=1}^{d} {{\,\mathrm{Tr}\,}}\left( Q_t^a E_{ij} Q_t^{q-2-a} E_{kl} \right) d\left[ Q_{ij}, Q_{kl} \right] _t, \end{aligned}$$
where \(E_{ij}\) is the matrix that takes 1 at the entry (i, j) and 0 otherwise and \(Q_{ij, t}\) is the stochastic process defined by the (i, j) entry of \(Q_t\). Using the derivative of \(A_t\) in Equation (15), we have
$$\begin{aligned} dQ_t&= \int A^{-1/2} \left( x - \mu _t \right) \left( x - \mu _t \right) ^\top A^{-1/2} \left( \left( x-\mu _t \right) ^\top A^{-1/2} dW_t \right) p_t(x) dx\\&\quad - Q_t^2 dt, \\ d\left[ Q_{ij}, Q_{kl} \right] _t&= \int \int z(x)_i z(x)_j z(y)_k z(y)_l (x-\mu _t)^\top A^{-1} (y-\mu _t) p_t(x) p_t(y)dx dy dt, \end{aligned}$$
where \(z(x)_i\) is the ith coordinate of \(\left[ A^{-1/2}(x-\mu _t) \right] \). Plugging the expressions of \(dQ_t\) and \(d\left[ Q_{ij}, Q_{kl} \right] _t \) into Equation (35), we obtain Equation (16).
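As an illustrative aside (not part of the paper), the matrix derivative formulas for \(\Gamma (X) = {{\,\mathrm{Tr}\,}}(X^q)\) used above can be checked numerically by comparing the first-order directional derivative against a central finite difference; the matrices below are random and purely hypothetical.

```python
import numpy as np

# Sanity check: the directional derivative of Gamma(X) = Tr(X^q) in direction H
# is q * Tr(X^{q-1} H); compare it with a central finite difference.

rng = np.random.default_rng(1)
d, q = 4, 3
X = rng.normal(size=(d, d)); X = X + X.T   # symmetric, like covariance-type matrices
H = rng.normal(size=(d, d)); H = H + H.T

gamma = lambda M: np.trace(np.linalg.matrix_power(M, q))

eps = 1e-6
fd = (gamma(X + eps * H) - gamma(X - eps * H)) / (2 * eps)
analytic = q * np.trace(np.linalg.matrix_power(X, q - 1) @ H)
print(fd, analytic)  # the two values should agree to roughly 1e-6
```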
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Chen, Y. An Almost Constant Lower Bound of the Isoperimetric Coefficient in the KLS Conjecture. Geom. Funct. Anal. 31, 34–61 (2021). https://doi.org/10.1007/s00039-021-00558-4
Revised: 24 December 2020
Issue Date: February 2021
Problem with Maxwell's theory
What exactly is the problem with classical Maxwell theory and the blowing up of energy at $r=0$? Does it have any other problems on the classical level?
classical-electrodynamics maxwell-equations singularities
Fluctuations
$\begingroup$ What exactly is "classical Maxwell theory" for you (I suppose you mean classical electrodynamics)? The "blowing up of energy" refers to two charges spatially coinciding? What do you consider "problems"? $\endgroup$ – ACuriousMind♦ Dec 4 '14 at 20:40
$\begingroup$ @ACuriousMind, photonicboom answered what exactly I was asking about. But isn't classical Maxwell theory the same thing as classical electrodynamics? Isn't the theory of classical electrodynamics called Maxwell's? $\endgroup$ – Fluctuations Dec 4 '14 at 21:08
$\begingroup$ Related: physics.stackexchange.com/q/11939/2451 and links therein. $\endgroup$ – Qmechanic♦ Dec 4 '14 at 21:47
There is nothing wrong with classical electrodynamics as such. Electromagnetism is an effective theory in the sense that it provides an almost exact description of physics at our everyday energy scales, but it has a few technical problems like the ones you mention. This is because classical electromagnetism is just a very good approximation to a deeper theory, one we now recognise as quantum electrodynamics (QED), which uses the relativistic quantum mechanical description of Maxwell's equations to tackle these problems.
For the above problem, physicists have realised that in the proper quantum mechanical description, charged particles that interact via the electromagnetic force do so through the emission and absorption of virtual gauge bosons (in the case of QED, photons). This means the separation between two particles cannot physically go to $0$, because they will interact and exchange momentum (via the gauge boson) well before they occupy the same spatial state. There are of course other ways this could be avoided, such as the exclusion principle preventing this from happening to two interacting fermions, or the discrete energy states of electrons in atoms (and thus a stable ground state at a distance $r > 0$) in the case of electrostatics.
PhotonBoom
$\begingroup$ Aha, thanks! Can you please illustrate the exact problem, the one I mentioned? I mean, I have never really tackled it, I only heard about it. @photonicboom $\endgroup$ – Fluctuations Dec 4 '14 at 21:04
$\begingroup$ @Fluctuations hopefully if I understand your problem correctly you will find this edit useful. $\endgroup$ – PhotonBoom Dec 4 '14 at 21:15
$\begingroup$ Is it here where Renormalization can come in handy? @photonicboom $\endgroup$ – Fluctuations Dec 4 '14 at 21:19
$\begingroup$ If you are familiar with QFT and the perturbative expansion of Feynman diagrams this en.wikipedia.org/wiki/… link explains it pretty well $\endgroup$ – PhotonBoom Dec 4 '14 at 21:35
One issue with classical electromagnetism is mentioned in Introduction to Electrodynamics, Third Edition by David J. Griffiths, in the section starting on p. 465 about the "radiation reaction", which Griffiths describes this way:
According to the laws of classical electromagnetism, an accelerating charge radiates. This radiation carries off energy, which must come at the expense of the particle's kinetic energy. Under the influence of a given force, therefore, a charged particle accelerates less than a neutral one of the same mass. The radiation evidently exerts a force ($\textbf{F}_{rad}$) back on the charge—a recoil force, rather like that of a bullet on a gun.
He then goes on to show that the Abraham-Lorentz formula, $\textbf{F}_{rad} = \frac{\mu_0 q^2}{6 \pi c} \dot{\textbf{a}}$, "represents the simplest form the radiation reaction force could take, consistent with conservation of energy". He notes that he hadn't performed a true derivation, but that "As we'll see in the next section, there are other reasons for believing in the Abraham-Lorentz formula." He goes on to say:
The Abraham-Lorentz formula has disturbing implications, which are not entirely understood nearly a century after the law was first proposed. For suppose a particle is subject to no external forces; then Newton's second law says
$F_{rad} = \frac{\mu_0 q^2}{6 \pi c} \dot{a} = ma$
from which it follows that
$a(t) = a_0 e^{t/\tau}$,
$\tau \equiv \frac{\mu_0 q^2}{6 \pi m c}$
(In the case of the electron, $\tau = 6 \times 10^{-24}$ s.) The acceleration spontaneously increases exponentially with time! This absurd conclusion can be avoided if we insist that $a_0 = 0$, but it turns out that the systematic exclusion of such runaway solutions has an even more unpleasant consequence: If you do apply an external force, the particle starts to respond before the force acts! (See Prob. 11.19.) This acausal preacceleration jumps the gun by only a short time $\tau$; nevertheless, it is (to my mind) philosophically repugnant that the theory should countenance it at all.
Then he adds a footnote saying:
These difficulties persist in the relativistic version of the Abraham-Lorentz equation, which can be derived by starting with Liénard's formula instead of Larmor's (see Prob. 12.70). Perhaps they are telling us that there can be no such thing as a point charge in classical electrodynamics, or maybe they presage the onset of quantum mechanics. For guides to the literature see Philip Pearle's chapter in D. Teplitz, ed., Electromagnetism: Paths to Research (New York: Plenum, 1982) and F. Rohrlich, Am. J. Phys. 65, 1051 (1997).
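As a quick numerical check on the figure quoted above (not part of Griffiths' text), the time constant $\tau = \frac{\mu_0 q^2}{6 \pi m c}$ can be evaluated with standard SI constants:

```python
import math

# Abraham--Lorentz time constant tau = mu_0 q^2 / (6 pi m c) for the electron.
mu0 = 4e-7 * math.pi          # vacuum permeability (SI)
q = 1.602176634e-19           # elementary charge (C)
m = 9.1093837015e-31          # electron mass (kg)
c = 2.99792458e8              # speed of light (m/s)

tau = mu0 * q**2 / (6 * math.pi * m * c)
print(f"tau = {tau:.3e} s")   # ~6.27e-24 s, matching the quoted 6e-24 s
```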
Hypnosifl
Title: An Improved Algorithm for Computing Approximate Equilibria in Weighted Congestion Games
Authors: Yiannis Giannakopoulos, Georgy Noarov, Andreas S. Schulz
(Submitted on 30 Oct 2018 (v1), last revised 24 Aug 2019 (this version, v4))
Abstract: We present a deterministic polynomial-time algorithm for computing $d^{d+o(d)}$-approximate (pure) Nash equilibria in weighted congestion games with polynomial cost functions of degree at most $d$. This is an exponential improvement of the approximation factor with respect to the previously best algorithm. An appealing additional feature of our algorithm is that it uses only best-improvement steps in the actual game, as opposed to earlier approaches that first had to transform the game itself. Our algorithm is an adaptation of the seminal algorithm by Caragiannis et al. [FOCS'11, TEAC 2015], but we utilize an approximate potential function directly on the original game instead of an exact one on a modified game.
A critical component of our analysis, which is of independent interest, is the derivation of a novel bound of $[d/\mathcal{W}(d/\rho)]^{d+1}$ for the Price of Anarchy (PoA) of $\rho$-approximate equilibria in weighted congestion games, where $\mathcal{W}$ is the Lambert-W function. More specifically, we show that this PoA is exactly equal to $\Phi_{d,\rho}^{d+1}$, where $\Phi_{d,\rho}$ is the unique positive solution of the equation $\rho (x+1)^d=x^{d+1}$. Our upper bound is derived via a smoothness-like argument, and thus holds even for mixed Nash and correlated equilibria, while our lower bound is simple enough to apply even to singleton congestion games.
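As an illustrative aside (not part of the paper), the root $\Phi_{d,\rho}$ of $\rho (x+1)^d = x^{d+1}$ is easy to compute numerically. The sketch below uses plain bisection; for $d=1$, $\rho=1$ it recovers the golden ratio, and $\Phi_{d,\rho}^{d+1}$ gives the corresponding PoA value of about 2.618.

```python
# Solve rho*(x+1)^d = x^{d+1} for its unique positive root Phi_{d,rho}
# by bisection. Purely a numerical illustration of the abstract's equation.

def phi(d, rho, lo=1e-9, hi=None, iters=200):
    f = lambda x: rho * (x + 1) ** d - x ** (d + 1)
    if hi is None:                 # grow hi until f changes sign
        hi = 2.0
        while f(hi) > 0:
            hi *= 2
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for d in (1, 2, 3):
    x = phi(d, rho=1.0)            # rho = 1 corresponds to exact equilibria
    print(d, x, x ** (d + 1))      # x^{d+1} is the stated PoA value
```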
Subjects: Computer Science and Game Theory (cs.GT)
Cite as: arXiv:1810.12806 [cs.GT]
(or arXiv:1810.12806v4 [cs.GT] for this version)
From: Yiannis Giannakopoulos [view email]
[v1] Tue, 30 Oct 2018 15:19:35 UTC (31 KB)
[v2] Wed, 31 Oct 2018 00:52:41 UTC (31 KB)
[v3] Mon, 18 Feb 2019 11:19:22 UTC (31 KB)
[v4] Sat, 24 Aug 2019 23:17:39 UTC (31 KB)
Results for 'Katherine W. Robinson'
Environmental Aesthetics and Public Environmental Philosophy. Katherine W. Robinson & Kevin C. Elliott - 2011 - Ethics, Policy and Environment 14 (2):175-191.
We argue that environmental aesthetics, and specifically the concept of aesthetic integrity, should play a central role in a public environmental philosophy designed to communicate about environmental problems in an effective manner. After developing the concept of the 'aesthetic integrity' of the environment, we appeal to empirical research to show that it contributes significantly to people's sense of place, which is, in turn, central to their well-being and motivational state. As a result, appealing to aesthetic integrity in policy contexts is both strategically and morally advisable. To provide a concrete illustration of the ways in which such appeals can play a role in policy making, we examine a specific case study in which attention to aesthetic integrity contributed to blocking a proposed development. The case yields at least four lessons: (1) aesthetic integrity can be a practically effective framing device; (2) local deliberative settings are particularly conducive for addressing it; (3) it can serve as an umbrella under which multiple other issues can be brought to the fore; and (4) judgments about aesthetic integrity need not be entirely objective in order for them to play a productive role in the policy sphere.
Environmental Philosophy in Philosophy of Biology
Communication, Competition, and Secrecy: The Production and Dissemination of Research-Related Information in Genetics. Katherine W. McCain - 1991 - Science, Technology and Human Values 16 (4):491-516.
The dissemination of experimental materials, instruments, and methods is central to the progress of research in genetics. In recent years, competition for research funding and intellectual property issues have increasingly presented barriers to the dissemination of this "research-related information." Information gathered in interviews with experimental geneticists and analysis of acknowledgment patterns in published genetics research are used to construct a series of basic scenarios for the exchange of genetic materials and research methods. The discussion focuses on factors affecting individuals' behavior and expectations as information requesters and information providers.
Implementation Science Can Do Even More for Translational Ethics. Katherine W. Saylor & Megan C. Roberts - 2020 - American Journal of Bioethics 20 (4):83-85.
Volume 20, Issue 4, May 2020, Page 83-85.
Biomedical Ethics in Applied Ethics
Wilhelm Wundt in History: The Making of a Scientific Psychology. Robert W. Rieber & David K. Robinson (eds.) - 2001 - Kluwer Academic/Plenum Publishers.
In an extensive revision of this important book, first published by Plenum in 1980, a distinguished roster of contributors reconsider this much heralded ...
Introspection and Introspectionism in Philosophy of Cognitive Science
Excavations at Olynthus. Part II: Architecture and Sculpture. W. L. & David M. Robinson - 1931 - Journal of Hellenic Studies 51:114.
Hellenistic and Later Ancient Philosophy in Ancient Greek and Roman Philosophy
Hellenistic and Later Ancient Philosophy, Misc in Ancient Greek and Roman Philosophy
Robert W. Robinson. Simplicity of Recursively Enumerable Sets. The Journal of Symbolic Logic, Vol. 32, pp. 162–172. - Robert W. Robinson. Two Theorems on Hyperhypersimple Sets. Transactions of the American Mathematical Society, Vol. 128, pp. 531–538. - A. H. Lachlan. On the Lattice of Recursively Enumerable Sets. Transactions of the American Mathematical Society, Vol. 130, pp. 1–37. - A. H. Lachlan. The Elementary Theory of Recursively Enumerable Sets. Duke Mathematical Journal, Vol. 35, pp. 123–146. [REVIEW] James C. Owings - 1970 - Journal of Symbolic Logic 35 (1):153-155.
Logic and Philosophy of Logic, Miscellaneous in Logic and Philosophy of Logic
Speculative and Practical. †Peter W. Robinson, S.J. - 1968 - Heythrop Journal 9 (1):037–049.
The Impact of Anxiety Upon Cognition: Perspectives From Human Threat of Shock Studies. Oliver J. Robinson, Katherine Vytal, Brian R. Cornwell & Christian Grillon - 2013 - Frontiers in Human Neuroscience 7.
Philosophy of Neuroscience in Philosophy of Cognitive Science
Some Magnetic Properties of Dilute Ferromagnetic Alloys II. B. W. Lothian, A. C. Robinson & W. Sucksmith - 1958 - Philosophical Magazine 3 (33):999-1012.
Selected Papers of Abraham Robinson: Nonstandard Analysis and Philosophy. W. A. J. Luxemburg & A. Robinson - 1982 - Journal of Symbolic Logic 47 (1):203-210.
'Woe Betides Anybody Who Tries to Turn Me Down.' A Qualitative Analysis of Neuropsychiatric Symptoms Following Subthalamic Deep Brain Stimulation for Parkinson's Disease. Philip E. Mosley, Katherine Robinson, Terry Coyne, Peter Silburn, Michael Breakspear & Adrian Carter - forthcoming - Neuroethics:1-17.
Deep brain stimulation of the subthalamic nucleus for the treatment of Parkinson's disease can lead to the development of neuropsychiatric symptoms. These can include harmful changes in mood and behaviour that alienate family members and raise ethical questions about personal responsibility for actions committed under stimulation-dependent mental states. Qualitative interviews were conducted with twenty participants following subthalamic DBS at a movement disorders centre, in order to explore the meaning and significance of stimulation-related neuropsychiatric symptoms amongst a purposive sample of persons with PD and their spousal caregivers. Interview transcripts underwent inductive thematic analysis. Clinical and experiential aspects of post-DBS neuropsychiatric symptoms were identified. Caregivers were highly burdened by these symptoms and both patients and caregivers felt unprepared for their consequences, despite having received information prior to DBS, desiring greater family and peer engagement prior to neurosurgery. Participants held conflicting opinions as to whether emergent symptoms were attributable to neurostimulation. Many felt that they reflected aspects of the person's "real" or "younger" personality. Those participants who perceived a close relationship between stimulation changes and changes in mental state were more likely to view these symptoms as inauthentic and uncontrollable. Unexpected and troublesome neuropsychiatric symptoms occurred despite a pre-operative education programme that was delivered to all participants. This suggests that such symptoms are difficult to predict and manage even if best practice guidelines are followed by experienced centres. Further research aimed at predicting these complications may improve the capacity of clinicians to tailor the consent process.
Neuroethics in Applied Ethics
Parkinson's DIsease in Philosophy of Science, Misc
TEXTS ON DEMOCRACY. E. W. Robinson (ed.): Ancient Greek Democracy. Readings and Sources. Pp. xiv + 326, maps, ill. Malden, MA and Oxford: Blackwell Publishing, 2004. Paper, £17.99/US$34.95 (Cased, £60/US$69.95). ISBN: 0-631-23394-6 (0-631-23393-8 Hbk). [REVIEW] Peter Liddel - 2004 - The Classical Review 54 (02):458-.
Ancient Greek and Roman Philosophy, Miscellaneous in Ancient Greek and Roman Philosophy
Classics in Arts and Humanities
Democracy in Social and Political Philosophy
17: Three Views of the Agentic Self: A Developmental Synthesis. Todd D. Little, Patricia H. Hawley, Christopher C. Henrich & Katherine W. Marsland - 2002 - In Edward L. Deci & Richard M. Ryan (eds.), Handbook of Self-Determination Research. University of Rochester Press.
Evolutionary Developmental Biology in Philosophy of Biology
Community Engagement and the Human Infrastructure of Global Health Research. Katherine F. King, Pamela Kolopack, Maria W. Merritt & James V. Lavery - 2014 - BMC Medical Ethics 15 (1):84.
Biomedical research is increasingly globalized with ever more research conducted in low and middle-income countries. This trend raises a host of ethical concerns and critiques. While community engagement has been proposed as an ethically important practice for global biomedical research, there is no agreement about what these practices contribute to the ethics of research, or when they are needed.
Medical Ethics in Applied Ethics
Hypermnesia in the Eyewitness to a Crime. Paul Eugenio, Robert Buckhout, Stephen Kostes & Katherine W. Ellison - 1982 - Bulletin of the Psychonomic Society 19 (2):83-86.
Memory in Philosophy of Mind
To: "Surface to Subsurface Correlation of the Middle-Upper Triassic Shublik Formation Within a Revised Sequence Stratigraphic Framework," William A. Rouse, Katherine J. Whidden, Julie A. Dumoulin, and David W. Houseknecht, Interpretation, 8, No. 2, SJ1–SJ16, Doi: 10.1190/INT-2019-0195.1. [REVIEW]William A. Rouse, Katherine J. Whidden, Julie A. Dumoulin & David W. Houseknecht - 2020 - Interpretation 8 (3):Y1-Y1.details
The Logic of 'Solemn' Believing: W. D. ROBINSON.W. D. Robinson - 1977 - Religious Studies 13 (4):409-416.details
It is sometimes suggested that the logic of religious language differs from other kinds of language. Or it is said that each 'language-game' has its own 'logic' and that, whatever usual language-games are played in the context of religion, there is something that could be called the 'religious language-game' which does not correspond to any other and, therefore, has its own peculiar logic. In either case, religious people are urged to make clear what this logic is, so that their utterances (...) may be understood and evaluated. (shrink)
Law and the Lawyers.H. W. S. & Edward Stevens Robinson - 1936 - Journal of Philosophy 33 (5):137.details
Legal Ethics in Applied Ethics
The Facsimile Edition of the Nag Hammadi Codices: Introduction. Dwight W. Young, James M. Robinson & Stephen Emmel - 1987 - Journal of the American Oriental Society 107 (4):836.
Consent Related Challenges for Neonatal Clinical Trials. Katherine F. Guttmann, Yvonne W. Wu, Sandra E. Juul & Elliott M. Weiss - 2020 - American Journal of Bioethics 20 (5):38-40.
Volume 20, Issue 5, June 2020, Page 38-40.
"What Is the FDA Going to Think?": Negotiating Values Through Reflective and Strategic Category Work in Microbiome Science.Pamela L. Sankar, Mildred K. Cho, Angie M. Boyce & Katherine W. Darling - 2015 - Science, Technology, and Human Values 40 (1):71-95.details
The US National Institute of Health's Human Microbiome Project aims to use genomic techniques to understand the microbial communities that live on the human body. The emergent field of microbiome science brought together diverse disciplinary perspectives and technologies, thus facilitating the negotiation of differing values. Here, we describe how values are conceptualized and negotiated within microbiome research. Analyzing discussions from a series of interdisciplinary workshops conducted with microbiome researchers, we argue that negotiations of epistemic, social, and institutional values were inextricable (...) from the reflective and strategic category work that defined and organized the microbiome as an object of study and a potential future site of biomedical intervention. Negotiating the divergence or tension between emerging scientific and regulatory classifications also activated "values levers" and opened up reflective discussions of how classifications embody values and how these values might differ across domains. These data suggest that scholars at the intersections of science and technology studies, ethics, and policy could leverage such openings to identify and intervene in the ways that ethical/regulatory and scientific/technical practices are coproduced within unfolding research. (shrink)
Excavations at Olynthus. Part VIII: The Hellenic House. By D. M. Robinson and J. W. Graham. Pp. Xxi + 370; 110 Pl. Baltimore: Johns Hopkins Press , 1938. £3 7s. 6d. [REVIEW]D. S. Robertson, D. M. Robinson & J. W. Graham - 1939 - Journal of Hellenic Studies 59 (1):146-147.details
Selecting Participants Fairly for Controlled Human Infection Studies.Douglas MacKay, Nancy S. Jecker, Punnee Pitisuttithum & Katherine W. Saylor - 2020 - Bioethics 34 (8):771-784.details
Controlled human infection (CHI) studies involve the deliberate exposure of healthy research participants to infectious agents to study early disease processes and evaluate interventions under controlled conditions with high efficiency. Although CHI studies expose participants to the risk of infection, they are designed to offer investigators unique advantages for studying the pathogenesis of infectious diseases and testing potential vaccines or treatments in humans. One of the central challenges facing investigators involves the fair selection of research subjects to participate in CHI (...) studies. While there is widespread agreement that investigators have a duty to select research participants fairly, this principle also yields conflicting ethical imperatives, for example requiring investigators to both exclude potential participants with co‐morbidities since they face increased risks, but also to include them in order to ensure generalizability. In this paper we defend an account of fair subject selection that is tailored to the context of CHI studies. We identify the considerations of fairness that bear directly on selecting participants for CHI studies and provide investigators and members of IRBs and RECs with a principled way to navigate the conflicting imperatives to which these considerations give rise. (shrink)
Infectious Diseases, Misc in Philosophy of Science, Misc
Book Review Symposium. [REVIEW] W. Bradley Wendel, Katherine R. Kruse, Eli Wald, Russell G. Pearce & Charles R. Mendez - 2014 - Legal Ethics 17 (2):313-369.
Democracy Beyond Athens: Popular Government in the Greek Classical Age by Eric W. Robinson. Mirko Canevaro - 2014 - Classical World: A Quarterly Journal on Antiquity 107 (3):424-427.
Marine Cartography in Britain: A History of the Sea Chart to 1855. A. H. W. Robinson. G. E. R. Deacon - 1965 - Isis 56 (2):226-228.
Beth E. W. Sur le Parallélisme Logico-Mathématique. Les Méthodes Formelles En Axiomatique, Paris, Décembre 1950, Colloques Internationaux du Centre National de la Recherche Scientifique No. 36, Paris 1953, pp. 27–32. Bernays Paul, Beth E. W., Robinson Abraham. Discussion. Ibid., pp. 32–33. [REVIEW] Leon Henkin - 1955 - Journal of Symbolic Logic 20 (2):184-185.
Review: The Persian Book of Kings: An Epitome of the Shahnama of Firdawsi * Basil W. Robinson: The Persian Book of Kings: An Epitome of the Shahnama of Firdawsi. [REVIEW] J. S. Meisami - 2004 - Journal of Islamic Studies 15 (2):226-227.
The New England Glass Co. vs George W. Robinson, Machinist. Kirk J. Nelson - 1990 - Acorn: Journal of the Sandwich Glass Museum 1:51-64.
The Diary of Robert Hooke. Henry W. Robinson, Walter Adams. Early Science in Oxford, Vol. X, The Life and Work of Robert Hooke. R. T. Gunther. [REVIEW] J. Pelseneer - 1936 - Isis 25 (2):466-470.
17th/18th Century British Philosophy, Misc in 17th/18th Century Philosophy
Simplicity of Recursively Enumerable Sets. Robert W. Robinson - 1967 - Journal of Symbolic Logic 32 (2):162-172.
A Dichotomy of the Recursively Enumerable Sets. Robert W. Robinson - 1968 - Zeitschrift für mathematische Logik und Grundlagen der Mathematik 14 (21-24):339-356.
Areas of Mathematics in Philosophy of Mathematics
The Role of Words in Cognitive Tasks: What, When, and How? Christopher W. Robinson, Catherine A. Best, Wei Deng & Vladimir M. Sloutsky - 2012 - Frontiers in Psychology 3.
Analytica Priora Et Posteriora. [REVIEW] David B. Robinson, Aristotle, W. D. Ross & L. Minio-Paluello - 1966 - Journal of Hellenic Studies 86:192-193.
Bimodal Presentation Speeds Up Auditory Processing and Slows Down Visual Processing. Christopher W. Robinson, Robert L. Moore & Thomas A. Crook - 2018 - Frontiers in Psychology 9.
A Dichotomy of the Recursively Enumerable Sets. Robert W. Robinson - 1968 - Mathematical Logic Quarterly 14 (21-24):339-356.
Annals of Science: A Quarterly Review of the History of Science Since the Renaissance, by D. McKie, Harcourt Brown & H. W. Robinson. [REVIEW] George Sarton - 1936 - Isis 25:488-489.
History of Science in General Philosophy of Science
Studies in Fifteenth-Century Stagecraft. J. W. Robinson. Edgar Schell - 1993 - Speculum 68 (3):869-870.
15th/16th Century Philosophy in Medieval and Renaissance Philosophy
Degrees Joining to 0'. [REVIEW] David B. Posner & Robert W. Robinson - 1981 - Journal of Symbolic Logic 46 (4):714-722.
It is shown that if A and C are sets of degrees uniformly recursive in 0' with $\mathbf{0} \notin \mathscr{C}$ then there is a degree b with b' = 0', b ∪ c = 0' for every c ∈ C, and $\mathbf{a} \nleq \mathbf{b}$ for every a ∈ A ∼ {0}. The proof is given as an oracle construction recursive in 0'. It follows that any nonrecursive degree below 0' can be joined to 0' by a degree strictly below 0'. Also, if $\mathbf{a} < \mathbf{0}'$ and $\mathbf{a}'' = \mathbf{0}''$ then there is a degree b such that a ∪ b = 0' and a ∩ b = 0.
Knowing When You Don't Know Enough: Children's Judgements About Ambiguous Information. E. J. Robinson & W. P. Robinson - 1982 - Cognition 12 (3):267-280.
Philosophy of Psychology in Philosophy of Cognitive Science
Are Self-Deceivers Enhancing Positive Affect or Denying Negative Affect? Toward an Understanding of Implicit Affective Processes. Michael D. Robinson, Sara K. Moeller & Paul W. Goetz - 2009 - Cognition and Emotion 23 (1):152-180.
Emotion and Consciousness in Psychology in Philosophy of Cognitive Science
What is the Shape of Developmental Change? Karen E. Adolph, Scott R. Robinson, Jesse W. Young & Felix Gill-Alvarez - 2008 - Psychological Review 115 (3):527-543.
The Role of Words and Sounds in Infants' Visual Processing: From Overshadowing to Attentional Tuning. Vladimir M. Sloutsky & Christopher W. Robinson - 2008 - Cognitive Science 32 (2):342-365.
The Role of Values in a Community-Based Conservation Initiative in Northern Ghana. Lance W. Robinson & Kwame Ampadu Sasu - 2013 - Environmental Values 22 (5):647-664.
Environmental Ethics in Applied Ethics
Topics in Environmental Ethics in Applied Ethics
The Cultural Revolution in China. Thomas W. Robinson - 1973 - Science and Society 37 (1):91-94.
Evidence for Auditory Dominance in a Passive Oddball Task. Christopher W. Robinson, Nayef Ahmar & Vladimir M. Sloutsky - 2010 - In S. Ohlsson & R. Catrambone (eds.), Proceedings of the 32nd Annual Conference of the Cognitive Science Society. Cognitive Science Society. pp. 2644-2649.
Aspects of Consciousness in Philosophy of Mind
Philosophy of Consciousness in Philosophy of Mind
One Hundred Years of General Relativity: Albert Einstein: Relativity: The Special and the General Theory, 100th Anniversary Edition. Princeton, NJ: Princeton University Press, 2015, 320 pp, £19.95 HB; Andrew Robinson, Einstein. A Hundred Years of Relativity. Princeton, NJ: Princeton University Press, 2015, 256 pp, £18.95 PB. Katherine Brading, Sebastián Murgueitio Ramírez & Laura Wells - 2017 - Metascience 26 (1):49-57.
General Relativity in Philosophy of Physical Science
Using Awake, Behaving Animals to Study the Brain. David Lee Robinson & John W. McClurkin - 1987 - Behavioral and Brain Sciences 10 (1):129-129.
Science of Consciousness in Philosophy of Cognitive Science
Visual Statistical Learning: Getting Some Help From the Auditory Modality. Christopher W. Robinson & Vladimir M. Sloutsky - 2007 - In McNamara D. S. & Trafton J. G. (eds.), Proceedings of the 29th Annual Cognitive Science Society. Cognitive Science Society. pp. 611-616.
Belief Without Credence. J. Adam Carter, Benjamin W. Jarvis & Katherine Rubin - 2016 - Synthese 193 (8):2323-2351.
One of the deepest ideological divides in contemporary epistemology concerns the relative importance of belief versus credence. A prominent consideration in favor of credence-based epistemology is the ease with which it appears to account for rational action. In contrast, cases with risky payoff structures threaten to break the link between rational belief and rational action. This threat poses a challenge to traditional epistemology, which maintains the theoretical prominence of belief. The core problem, we suggest, is that belief may not be enough to register all aspects of a subject's epistemic position with respect to any given proposition. We claim this problem can be solved by introducing other doxastic attitudes—genuine representations—that differ in strength from belief. The resulting alternative picture, a kind of doxastic states pluralism, retains the central features of traditional epistemology—most saliently, an emphasis on truth as a kind of objective accuracy—while adequately accounting for rational action.
Degrees of Belief in Philosophy of Probability
Assessment of trace elements pollution in the sea ports of New South Wales (NSW), Australia using oysters as bioindicators
Sayka Jahan &
Vladimir Strezov (ORCID: orcid.org/0000-0002-9129-9284)
Scientific Reports volume 9, Article number: 1416 (2019)
In this study, Sydney rock oysters (S. glomerata) from six major sea ports of NSW, Australia were used as bioindicators to assess the distribution and levels of trace element accumulation in the ports. Substantial enrichment of Cu, Pb and Zn in the oysters of the sea ports was detected when compared to the background samples and the US Environmental Protection Agency (USEPA) provisional tolerable intake standard. Enrichment of As, Al, Fe, Mn, Br and Sr was also found in the oysters at the port areas. The bioconcentration ratios of the trace elements illustrated significant Fe, Cu, Zn, As, Mn, Al, Pb and Cr accumulation in S. glomerata. The biota sediment accumulation factor suggested Cu, Mn and Zn accumulation at two of the ports (Yamba and Botany), indicating that the oysters act as strong accumulators of these metals. In addition, the integrated metal contamination illustrated notable Fe, Zn, Cu and Al contamination in the port environments, whereas cluster analysis portrayed the interconnections between the contaminants and the study sites.
Trace element contamination is considered one of the major issues in marine and estuarine environments due to its diverse sources, persistence, bioaccumulation, non-degradability and harmful effects on biota1,2,3,4,5,6,7. The ecological status of the aquatic environment can be evaluated by analyzing the distribution of trace elements in water, sediments and marine organisms8. Contaminated site assessment typically demands analysis of water and sediments to measure total trace element concentrations, but this is often not a sufficient predictor of trace element toxicity to biota9,10. To overcome this problem, biomonitoring offers an advantage, as marine organisms (oysters, mussels and clams) manifest greater spatial tolerance to elemental toxicity compared to water and sediments and have therefore gained universal acceptance as the most reliable medium for ascertaining sources of biologically available trace element contamination6,8,11. Bivalve mollusks in particular are considered among the best bioindicators for coastal pollution studies due to their specific life traits, such as sessile, filter-feeding behavior, wide geographical distribution, abundance, sedentary habit and relative resilience to pollutants6,12. Moreover, bivalve mollusks can accumulate chemical compounds at levels 10³ to 10⁵ times higher than other species13. Hence, since the 1970s a worldwide scheme for monitoring ocean health using mussels and oysters has been in place14,15,16,17.
The specific ability of oysters to accumulate pollutants makes them candidate species for biomonitoring that links contaminant exposure to its potential biological effects6,12,18,19. Furthermore, oysters are often used as sentinel organisms due to their rapid adaptive capacity to new environments. In Australia, an integrated approach, including analysis of oysters as bioindicators and quantification of elements in biota, has been applied to monitor the impact of trace elements on port ecosystems20,21. As oysters are a common food source, the impact of marine activities on trace element levels in oysters also warrants investigation.
Australian sea ports, which accommodate industry, commerce, tourism and recreation, often exacerbate trace element contamination through different port-related activities (transport and storage of hazardous materials, industrial installations, recreational shipping, etc.)22,23,24,25,26,27,28. This influences the growth rate and fecundity of marine biota, ultimately reducing population diversity26,27,28,29,30,31 and the suitability of the biota as a food source for humans32.
Among Pacific oysters, S. glomerata has been shown to be one of the most suitable organisms for biomonitoring chemical contamination in coasts and estuaries. It has preferentially been selected as a sentinel organism for its capability to concentrate pollutants, its sedentary habit, its limited ability to metabolize accumulated contaminants, and its abundance, persistence and ease of collection, all of which make it a good, stable assimilator of its environment33. In the present study, S. glomerata was used as the bioindicator to investigate the levels of trace element contamination in the port environment, as also applied by Goldberg et al.34 and Thompson et al.35. Evidence also suggests that S. glomerata is a widely distributed species along the coastal belts of NSW and a potential accumulator of trace elements35,36.
The objective of this study was to assess the trace element concentrations in the oysters of NSW sea ports, to determine the variation of trace element bioaccumulation in the oysters under different port activities, and to explore the level of trace element pollution in oysters, which ultimately indicates the level of stress on the port environments. Finally, the present study offers a new perspective for biomonitoring and risk assessment of trace elements in aquatic ecosystems using principal component and hierarchical cluster analysis methods.
The field study was conducted at the six major seaports in NSW, Australia, namely Port Jackson, Botany, Kembla, Newcastle, Yamba and Eden (Fig. 1). These ports are located 23 km to 1,198 km apart and are engaged in different shipping activities. Port Jackson of Sydney Harbour, which accommodates cruise shipping, pleasure boating and water sports, is a well-mixed estuary37. Port Botany is another important port of Sydney, mainly engaged in shipping of containers, crude oil, fossil fuels, chemicals and bio-fuels. Port Kembla is a prime export location for coal, grain, bulk liquids, oil, fertiliser, pulp and steel products; the port is also important for the export and import of different mineral ores and petroleum products. Port Newcastle is the world's largest port for coal export by tonnage and is also engaged in the export and import of raw materials for steelworks, fertiliser and aluminium industries, grain, steel products, mineral sands and woodchips. Port Yamba is the easternmost sea port of New South Wales, located at the mouth of the Clarence River; it is the second largest fishing port of New South Wales, dealing with container, liquid berth, livestock and explosive products. The Port of Eden is located in the South Coast region of New South Wales; it is the largest fishing port of New South Wales and is also engaged in the export and import of woodchips, break bulk, and machinery and equipment for the oil and gas industry22,38. The study locations are shown in Fig. 1.
Map of the study area showing study ports of NSW.
Sample collection and processing
Oyster samples known as Sydney rock oyster (S. glomerata) of different shell sizes (3 cm–7 cm) were collected from the six sea ports from April–June 2017. Three sampling points from each port were selected to collect samples among which one is background point selected from the same hydrogeological area but away from any influence of the port and other industrial activities. In this study, >40 indigenous oysters from each sampling point were collected by hand from dock columns and rocks in surface water (0–1 m). Immediately after collection, the oysters were stored in bags in a cooler box with ice and transported to the laboratory. About 20 oyster samples were selected from each sampling point and were weighed, and their tissues and shells were separateted. The tissues were then dried in an oven at 105 ± 5 °C for 8 hours to a constant weight39. The soft tissue, after the removal of the liquid, was then weighted. Prior to analysis, the dried samples were ground and the powdered sample then used for analysis where each analysis was replicated twice.
Analytical procedure
The oyster tissue samples (0.05 g) were digested in 1 mL concentrated HNO3 at 80 °C on a hot plate for 24 hours until the samples were completely digested. The sample solutions were then diluted three times with Milli-Q water. The concentrations of metals and major elements (twenty-three in total)21 in the samples were determined by inductively coupled plasma mass spectrometry (Agilent 7700X ICP-MS) and inductively coupled plasma atomic emission spectrometry (Varian Vista-Pro ICP-AES), respectively, while mercury was determined by cold vapour atomic absorption spectroscopy (CV-AAS) to reach the practical quantitation limits (PQLs). The quality and accuracy of the experimental procedure and the equipment were ensured using replicate analyses, a certified reference material (CRM) (oyster tissue, SRM 1566b) and sample spikes. The recovery of all trace metals in the CRM was 90–110%, and the analytical precision, expressed as the coefficient of variance, was <10% for all metals based on replicate analysis. The detection limit of the method (MDL) was estimated as the standard error of 10 blank replicates40. The recovery percentages and detection limits of all trace elements are presented in Table 1.
Table 1 Recovery (%) and practical quantification limit (mg/kg dry wt.) of analyzed trace elements.
Bioconcentration Ratio (BCR)
Bioconcentration is a process in which biological organisms absorb a chemical compound from their surrounding environment through different body parts41. It is a quantitative measure of the biota's bioaccumulative capacity42. The measured bioconcentration ratios also form the basis for assessing the risk of adverse effects of hazardous substances on specific biota43. The extent of bioconcentration is calculated using formula (1)44:
$$BCR=\frac{{C}_{Organism}}{{C}_{Water}}$$
where Corganism is the concentration (mg/kg) of an element in the oyster, measured in this study, while CWater is the concentration (mg/l) of the same element in the water at the same study locations, derived from the mean values published by Jahan and Strezov37. A BCR greater than 1 indicates bioaccumulation.
Biota sediment accumulation factor (BSAF)
The biota sediment accumulation factor (BSAF) is the ratio of the concentration of an element in biota to the concentration of the same element in sediment45. The BSAF for each element in the sample is calculated with equation (2):
$$BSAF=\frac{{C}_{organism}}{{C}_{Sediment}}$$
where Corganism and Csediment are the concentrations (mg/kg) of trace elements in the oyster and in the sediment46. Typically, a BSAF value >1 indicates bioaccumulation of a trace element. In this study, the sedimentary trace element concentrations for the same study locations were derived from the mean values published by Jahan and Strezov22.
Integrated metal contamination (IMC)
The severity of metal pollution can be determined using the integrated metal contamination equation (3) given by Liu and Wang47.
$$IMC=\sum _{i=0}^{m}{C}_{Contaminated}^{i}-{C}_{Clean}^{i}$$
where CiContaminated is the concentration (mg/kg) of metal i in a contaminated oyster obtained from the port area, Ciclean is a reference value (mg/kg) for metal i in oysters obtained from the background site of each port, and m is the number of metals investigated, which is m = 13 for this calculation.
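For illustration only, a minimal sketch of Equations (1)–(3) is given below; all concentrations are made-up values, not data from this study.

```python
import numpy as np

# Minimal sketch of the three indices in Equations (1)-(3), on fabricated data.

def bcr(c_org, c_water):
    """Bioconcentration ratio, Eq. (1); values > 1 indicate bioaccumulation."""
    return c_org / c_water

def bsaf(c_org, c_sed):
    """Biota-sediment accumulation factor, Eq. (2); > 1 indicates accumulation."""
    return c_org / c_sed

def imc(c_port, c_background):
    """Integrated metal contamination, Eq. (3): summed port-minus-background
    differences over the m metals investigated."""
    return float(np.sum(np.asarray(c_port) - np.asarray(c_background)))

c_org = np.array([35.0, 6.0, 2.5])       # hypothetical Cu, As, Pb in oyster tissue (mg/kg)
c_wat = np.array([0.005, 0.002, 0.001])  # hypothetical water concentrations (mg/l)
c_sed = np.array([12.0, 8.0, 10.0])      # hypothetical sediment concentrations (mg/kg)
c_bg  = np.array([10.0, 2.0, 0.5])       # hypothetical background-site oyster tissue (mg/kg)

print("BCR :", bcr(c_org, c_wat))
print("BSAF:", bsaf(c_org, c_sed))
print("IMC :", imc(c_org, c_bg))
```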
Statistical analysis was performed using Microsoft Excel and SPSS version 24. Analyzed metal concentrations are presented as concentrations normalized for standardized weight and length. The normality of the data distribution was tested by the Kolmogorov-Smirnov test; the data were then normalized and analyzed using the multivariate statistical tools principal component analysis (PCA) and hierarchical cluster analysis (HCA), applied to both cases and variables, to develop groups and identify links between elements and sampling sites via dendrograms, as described by Jahan and Strezov22. The PCA and HCA were used to identify the possible sources of trace elements in the sediments and group them based on their similarities. For PCA, only PCs with eigenvalue >1 were retained and variables were mean-centered48. Varimax rotation was applied to component loadings greater than 0.5 to facilitate the interpretation of the outcomes49.
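The following is a hypothetical sketch of this PCA/HCA workflow on fabricated, standardized data (rows = sampling sites, columns = elements). Note that scikit-learn's PCA does not perform varimax rotation, so that step of the published workflow is omitted here.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, dendrogram

# Fabricated z-scored concentrations: rows = sampling sites, columns = elements.
rng = np.random.default_rng(0)
elements = ["Al", "As", "Cu", "Fe", "Mn", "Pb", "Zn"]
X = rng.normal(size=(12, len(elements)))

# PCA: retain components with eigenvalue > 1, as in the text.
pca = PCA().fit(X)
keep = pca.explained_variance_ > 1
print("retained PCs:", int(keep.sum()),
      "explained variance:", pca.explained_variance_ratio_[keep].sum())

# HCA on the elements (columns), analogous to the dendrogram in Fig. 4a.
Z = linkage(X.T, method="ward")
dendrogram(Z, labels=elements, no_plot=True)  # set no_plot=False to draw it
```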
Bioaccumulation patterns and normalized concentrations of the trace elements whose concentrations are significantly elevated in the soft tissue of the oysters (30–40 g, 5–7 cm) (Saccostrea glomerata) are shown in Table 2. The concentrations of As at Ports Jackson, Botany, Kembla and Eden range from 5 to 9 mg/kg, which is significantly higher than the background concentration (1.88 mg/kg) in oysters of the NSW coast given by Scanes and Roach50, and also higher than the standard quality guidelines for bivalve mollusks (4 mg/kg) given by FAO51. However, in Australia and New Zealand, the regulation applied to seafood relates to inorganic arsenic. This is because marine organisms and plants, such as shellfish, molluscs and seaweed, can contain high levels of arsenic, but mostly in organic arsenosugar forms52.
Table 2 Comparison of the studied trace elements concentrations (normalized concentration mean ± SD. for 30–40 g standardized weight and 5–7 cm length) in S. glomerata with that of the maximum permissible limits set forth by various organizations.
The Cu concentrations in the oysters of all ports were found to be higher than the USEPA (provisional tolerable intake)53 standard (0.05 mg/kg) and the FAO standard quality guidelines for bivalve mollusks (20 mg/kg)51. The highest concentration of Cu was detected at Port Yamba (61 mg/kg), which is significantly above the standards, identifying the oysters as unsafe food54. Cu enrichment in the oysters at Ports Botany, Kembla, Newcastle and Yamba compared to their background sites points to the impacts of port activities associated with the trade of coal, steel products, crude oil, fossil fuels, chemicals, fertilizers, mineral sands and preservative chemicals from wood chips, and with the storage of hazardous products in the port vicinity37. However, the Cu concentrations found in oysters are also associated with higher assimilation efficiencies and bioavailability in the port environment. The normalized Pb concentrations in the studied S. glomerata at Port Kembla exhibited a six-fold increase compared to the maximum permissible limits recommended by the Food and Drug Administration55, rendering the oysters unsafe for human consumption. The Pb concentration (2–7 mg/kg) at this port is also higher than at its background site and above all other standards (0.2, 0.025 and 1.7 mg/kg by FAO, USEPA and USFDA, respectively) shown in Table 2. The industrial complexes, including the major metal smelting operations adjoining this port, are likely the major sources of Pb56. The Zn concentrations (13–51 mg/kg) in the oysters at all ports also demonstrate that the oysters are unsafe as food, as the concentrations are considerably higher than the corresponding USEPA (provisional tolerable intake) standard (0.3–1 mg/kg). It is known that mollusks possess a high affinity for accumulation of Zn57, and it is generally agreed that the highest concentrations of zinc in marine biota are found in the tissues of filter-feeding mollusks, especially oysters58. Unlike at Port Eden, the oysters at all other ports contained higher concentrations of Mn than the background values given by Scanes and Roach49. The results also portray remarkable Mn bioaccumulation inside Ports Jackson, Botany, Kembla and Yamba compared to their background sites. Notable amounts of Al, Fe, Br, Si and Sr were detected at almost all ports, although no standard values are available for comparison. A significant amount of Ti was also found in the oysters at Port Kembla. In addition, the concentrations of some elements (Hg, Cd, Ag, Ni, Co, Ba, Sn) were measured but found to be below the detection limit at all study points and are therefore not reported further.
The BCR values in S. glomerata shown in Fig. 2 follow the order Fe > Cu > Zn > As > Mn > Al > Pb > Cr, although they vary among ports. The BCR values of As, Cr and Pb at some of the ports are less than 1, which indicates almost similar concentrations of those elements in the oysters and the water. Only eight metals were calculated, because the others were below the detection limit in water. BCR values greater than 1000 indicate significant and slow accumulation59. High BCR values also demonstrate the uptake of free metal ions from solution, which occurs most effectively via dermal organs60. The BCR values of Fe and Cu at almost all ports are >1000, with the highest values (Fe = 46,470, Cu = 10,588) at Port Kembla, which demonstrates considerable Fe and Cu concentrations in the port environment. Except at Ports Kembla and Yamba, significant Zn concentrations (BCR > 1000) were found at all other ports.
Bioconcentration Ratio (BCR) in oysters (S. glomerata) from the seaports of NSW, Australia.
Biota sediment accumulation factor (BSAF)
The average concentrations of the trace elements were then applied to determine the biota sediment accumulation factors, as presented in Fig. 3. Significant bioaccumulation of Cu at Ports Botany (7), Newcastle (2.63) and Yamba (40), and of Zn (2.43 and 4.33) and Mn (4.70 and 1.33) at Ports Botany and Yamba, indicates the availability of these metals in the port environment as well as a high absorbing capacity in the soft tissues of the oysters. Bioaccumulation of As and Sr was observed at Port Jackson, whereas Si bioaccumulation was found in the oysters at Port Kembla. Based on the results, S. glomerata is considered a strong accumulator of Cu and a moderate accumulator of Zn and Mn.
Biota sediment accumulation factors (BSAFs) for oysters in the study ports of NSW, Australia.
Integrated metal contamination
The severity of metal pollution assessed by the integrated metal contamination (IMC) is presented in Table 3. The results suggest that the oysters at Ports Eden and Botany are comparatively less contaminated than the oyster samples from the other port sites. The results also imply that the oysters at Port Kembla are severely contaminated, followed by Ports Jackson and Yamba, with notable enrichment of Fe, Zn, Cu and Al. The calculation of the IMC requires reference site values; if these are affected by non-point pollution sources, the IMC values may be biased through the undue influence of one of the measurements used in the final composite values61. Therefore, no threshold for maximum pollution is given for this index.
Table 3 Integrated metal contamination (mg/kg) in the oysters of NSW seaports.
The variations of trace element concentrations in the oysters between the background and study port areas, tested by means of ANOVA, are shown in Table 4. The results revealed that the variations were not significant (P > 0.05).
Table 4 Significance analysis (variations significant at P < 0.05) of trace element concentrations between background and port oysters.
Correlation analysis was also performed on the normalized data set to test the relationships between the environmental parameters, and significant correlations among metals are presented in Table 5. According to the Pearson statistical analysis (significant at P < 0.05), a strong positive relationship exists between the weight and length of the oysters (r² = 0.98). Al shows strong positive correlations with Cu, Mn and Si, whereas Cr shows strong positive correlations with Pb and Ti. Cu shows strong positive correlations with Mn and Si, while Mn is strongly correlated with Si and I. The results also reveal a strong positive correlation between Pb and Sr.
Table 5 Correlation analysis of trace elements in the oyster of the seaports of NSW, Australia.
Principal component analysis (PCA) of the oyster data summarizes four groups of pollutants and the contamination levels of each group in the oysters of the studied ports. Four significant principal component groups were determined by deriving the eigenvalues and eigenvectors from the correlation matrix. The percentage of the total variance of each principal component (PC) group is shown in Table 6. The four component groups together explain about 95.8% of the total variance. The first component group accounts for 37.75% of the variation, with the greatest weights (>0.70) for Cu, Mn, U, Cd and Br, and moderate weights for I and Si. PC2 accounted for 27.64% of the variation, with the important components comprising Fe, Pb, Cr and Al. PC3 and PC4 explained 21.03% and 9.37% of the variation, respectively, and had moderate weights for U, Br and I.
Table 6 Component matrix of the oysters of NSW seaports.
The cluster analysis (HCA) results for the sampling sites based on the trace element concentrations are presented as a dendrogram in Fig. 4. Two main clusters were identified from the trace element enrichment dendrogram (Fig. 4a). The first cluster comprises Se, Cd, U, Cr, V, Pb and Ti, with Se, Cd and U forming one sub-group and Cr, V, Pb and Ti the second. The second cluster also consists of two sub-groups, one comprising Cu, Mn and I and the other Fe, Si, Zn, Al, B, Sr, Br and As.
Hierarchical dendrogram showing the clustering of (a) trace elements and (b) study sites of the sea ports of NSW, Australia.
The dendrogram can also help to explain and group the impact of port activities on trace element enrichment, as presented in Fig. 4b. The analysis demonstrates that the fishing fleet activities and the trade of woodchips, break bulk and machinery for the oil and gas industry at Port Eden contribute significantly to trace element contamination in oysters, followed by the container, crude oil and bulk liquid operations (fossil fuels, chemicals and bio-fuels) at Port Botany, and the bulk liquids, oil, fertiliser, pulp, steel products and various ore-related activities at Port Kembla.
The present study showed the distribution patterns of trace elements in the sea port environments using the oyster (S. glomerata) as a bioindicator. S. glomerata is known as an effective ecological tool for tracing heavy metals and toxic elements (for example, Cu, Zn, As, Pb, Fe, Mn and Sr) as it is widely grown in Pacific coastal areas. The results illustrate that the levels of trace elements in the oysters varied and that their concentrations were highly dependent on the nature of the ports and the human activities in the vicinity of the port areas. The BCR and BSAF analyses demonstrate significant accumulation of Fe, Cu, Mn, Zn, As and Sr, which reflects their availability in seawater and sediments. Likewise, the integrated metal contamination analysis determined severe contamination by Fe, Zn, Cu and Al in the oysters at all port areas. Overall, S. glomerata is an important bioindicator for detecting the distribution of contaminants in the port environment. Further measures are still required for suitable and effective management of the toxic trace elements in the NSW ports to alleviate the anthropogenic impacts on the sea environment.
The datasets generated and/or analyzed during the current study are available in the supplementary dataset.
Pan, K. & Wang, W.-X. Trace metal contamination in estuarine and coastal environments in China. Science of the Total Environment 421–422, 3–16 (2012).
Wang, S. L., Xu, X. R., Sun, Y. X., Liu, J. L. & Li, H. B. Heavy metal pollution in coastal areas of South China: a review. Marine Pollution Bulletin 76, 7–15 (2013).
Wang, W.-X., Pan, K., Tan, Q., Guo, L. & Simpson, S. L. Estuarine pollution of metals in China: science and mitigation. Environmental Science and Technology 48, 9975–9976 (2014).
Kumar, V. et al. Linking environmental heavy metal concentrations and salinity gradients with metal accumulation and their effects: a case study in 3 mussel species of Vitória estuary and Espírito Santo bay, southeast Brazil. Science of the Total Environment 523, 1–15 (2015).
Lee, T. T. Y., Zimmermann, S. & Sures, B. How does the metallothionein induction in bivalves meet the criteria for biomarkers of metal exposure? Environmental Pollution 212, 257–268 (2016).
Yin, Q. & Wang, W. Relating metals with major cations in oyster Crassostrea hongkongensis: A novel approach to calibrate metals against salinity. Science of the Total Environment 577, 299–307 (2017).
Weng, N. & Wang, W.-X. Dynamics of maternally transferred trace elements in oyster larvae and latent growth effects. Scientific Reports 7, 3580 (2017).
Bazzi, A. Heavy metals in seawater, sediments and marine organisms in the Gulf of Chabahar, Oman Sea. Journal of Oceanography and Marine Science 5, 20–29 (2014).
Topcuoglu, S., Ergül, H., Baysal, A., Ölmez, E. & Kut, D. Determination of radionuclide and heavy metal concentrations in biota and sediment samples from Pazar and Rize stations in the eastern Black Sea. Fresenius Environmental Bulletin 12, 695–699 (2003).
Jahan, S. & Strezov, V. Assessment of trace elements pollution in sea ports of New South Wales (NSW), Australia using macrophytobenthic plant Ecklonia radiata as a bio-indicator. Chemosphere 218, 643–651 (2019).
Spooner, D. R., Maher, W. & Otway, N. Trace Metal Concentrations in Sediments and Oysters of Botany Bay, NSW, Australia. Archives of Environmental Contamination and Toxicology 45, 92–101 (2003).
Goldberg, E. D. The mussel watch concept. Environmental Monitoring and Assessment 7, 91–103 (1986).
Meng, J., Wang, W., Li, L., Yin, Q. & Zhang, G. Cadmium effects on DNA and protein metabolism in oyster (Crassostrea gigas) revealed by proteomic analyses. Scientific Reports 7, 11716 (2017).
Goldberg, E. D. The mussel watch-a first step in global monitoring. Marine Pollution Bulletin 6, 111 (1975).
Watling, H. & Watling, R. Trace metals in oysters from Knysna Estuary. Marine Pollution Bulletin 7, 45–48 (1976).
Phillips, D. J. H. The use of biological indicator organisms to monitor trace metal pollution in marine and estuarine environments - A review. Environmental Pollution 13, 281–317 (1977).
Davies, I. M. & Pirie, J. M. Evaluation of a "mussel watch" project for heavy metals in Scottish coastal waters. Marine Biology 57, 87–93 (1980).
Oliver, L., Fisher, W., Winstead, J., Hemmer, B. & Long, E. Relationships between tissue contaminants and defense-related characteristics of oysters (Crassostrea virginica) from five Florida bays. Aquatic Toxicology 55, 203–222 (2001).
Valdez Domingos, F. Multibiomarker assessment of three Brazilian estuaries using oysters as bioindicators. Environmental Research 105, 350–363 (2007).
Nasci, C. et al. Clam transplantation and stress-related biomarkers as useful tools for assessing Water quality in coastal environments. Marine Pollution Bulletin 39, 255–260 (1999).
Séguin, A. et al. Metal bioaccumulation and physiological condition of the Pacific oyster (Crassostrea gigas) reared in two shellfish basins and a marina in Normandy (northwest France). Marine Pollution Bulletin 106, 202–214 (2016).
Jahan, S. & Strezov, V. Comparison of pollution indices for the assessment of heavy metals in the sediments of seaports of NSW, Australia. Marine Pollution Bulletin 128, 295–306 (2018).
Batley, G. Heavy metal speciation in waters, sediments and biota from Lake Macquarie, New South Wales. Australian Journal of Marine and Freshwater Research 38, 591–606 (1987).
Birch, G., Evenden, D. & Teutsch, M. Dominance of point source in heavy metal distributions in sediments of a major Sydney estuary (Australia). Environmental Geology 28, 169–174 (1996).
Birch, G. & Taylor, S. Source of heavy metals in sediments of the Port Jackson estuary, Australia. Science of the Total Environment 227, 123–138 (1999).
Roach, A. Assessment of metals in sediments from Lake Macquarie, New South Wales, Australia, using normalization models and sediment quality guidelines. Marine Environmental Research 59, 453–472 (2005).
Creighton, N. & Twining, J. Bioaccumulation from food and water of cadmium, selenium and zinc in an estuarine fish, Ambassis jacksoniensis. Marine Pollution Bulletin 60, 1815–1821 (2010).
Ellis, J. I. et al. Multiple stressor effects on marine infauna: Responses of estuarine taxa and functional traits to sedimentation, nutrient and metal loading. Scientific Reports 7(1), 12013 (2017).
Stark, I. Heavy metal pollution and microbenthic assemblages in soft sediment in two Sydney estuaries, Australia. Marine and Freshwater Research 49, 533–540 (1998).
Mccready, S., Birch, G., Long, E., Spyrakis, G. & Greely, C. Relationships between toxicity and concentrations of chemical contaminants in sediments from Sydney Harbour, Australia, and vicinity. Environmental Monitoring and Assessment 120, 187–220 (2006).
Twining, J., Creighton, N., Hollins, S. & Szymczak, R. Probabilistic risk assessment and risk mapping of sediment metals in Sydney Harbour embayments. Human and Ecological Risk Assessment 14, 1202–1225 (2008).
Ahdy, H., Abdallah, A. & Tayel, F. Assessment of heavy metals and nonessential content of some edible and soft tissues. Egyptian Journal of Aquatic Research 33, 85–97 (2007).
Luna-Acosta, A. Integrative biomarker assessment of the effects of chemically and mechanically dispersed crude oil in Pacific oysters, Crassostrea gigas. Science of the Total Environment 598, 713–721 (2017).
Goldberg, E. D., Koide, M., Hodge, V., Flegal, A. & Martin, J. United States mussel watch – 1977–1978 results on trace metals and radionuclides. Estuarine and Coastal Shelf Sciences 16, 69–93 (1983).
Thompson, E. et al. A proteomic analysis of the effects of metal contamination on Sydney rock oyster (Saccostrea glomerata) haemolymph. Aquatic Toxicology 103, 241–249 (2011).
Lanlan, X., Chenglong, J., Huifeng, W., Qiaoguo, T. & Wen-Xiong, W. A comparative proteomic study on the effects of metal pollution in oysters Crassostrea hongkongensis. Marine Pollution Bulletin 112, 436–442 (2016).
Jahan, S. & Strezov, V. Water quality assessment of Australian ports using water quality evaluation indices. Plos One 12, e0189284 (2017).
Harris, P. & O'Brien, P. Australian Ports In: DIVISION, P. A. M. (ed.) Environmental Data and Risk Analysis. Australian Geological Survey Organisation, Canberra, Australia (1998).
Baltas, H. et al. Experimental study on copper uptake capacity in the Mediterranean mussel (Mytilus galloprovincialis). Environmental Science and Pollution Research 23, 10983–10989 (2016).
Federal Register. Definition and procedure for determination of the method detection limit. EPA, 40 CFR Part 136, Appendix B, Revision 1.11 1(11), 198–199 (1984).
Jonathan, M. P. et al. Bioaccumulation of trace metals in farmed pacific oysters Crassostrea gigas from SW Gulf of California coast, Mexico. Chemosphere 187, 311–319 (2017).
Zalewska, T. & Suplińska, M. Reference organisms for assessing the impact of ionizing radiation on the environment of the southern Baltic Sea. Oceanological and Hydrobiological Studies 41(4), 1–7 (2012).
IAEA. Handbook of Parameter Values for the Prediction of Radionuclide Transfer to Wildlife. IAEA Technical Reports Series 479, Vienna, Austria (2014).
Arnot, J. A. & Gobas, F. A. P. C. A review of Bioconcentration Factor (BCF) and Bioaccumulation Factor (BAF) assessments for organic chemical in aquatic organisms. Environmental Reviews 14, 257–297 (2006).
Thomann, R. V., Mahony, J. D. & Mueller, R. Steady state model of biota-sediment accumulation factor for metals in two marine bivalves. Environmental Toxicology and Chemistry 4, 989–998 (1995).
Negri, A., Burns, K., Boyle, S., Brinkman, D. & Webster, N. Contamination in sediments, bivalves and sponges of McMurdo Sound, Antarctica. Environmental Pollution 143, 456–467 (2006).
Liu, F. & Wang, W.-X. Proteome pattern in oysters as a diagnostic tool for metal pollution. Journal of Hazardous Materials 239–240, 241–248 (2012).
Kaiser, H. F. The application of electronic computers to factor analysis. Educational and Psychological Measurement 20, 141–151 (1960).
Loska, K. & Wiechuła, D. Application of principal component analysis for the estimation of source of heavy metal contamination in surface sediments from the Rybnik reservoir. Chemosphere 51, 723–733 (2003).
Scanes, P. & Roach, A. Determining natural "background" concentrations of trace metals in oysters from New South Wales, Australia. Environmental Pollution 105, 437–446 (1999).
FAO. Report of the Workshop and Study Tour on Mollusk Sanitation and Marketing, Regional Sea Farming Development and Demonstration Project RAS/86/024, 15–28 October [Online]. http://www.fao.org/docrep/field/003/AB710E24.htm (1989).
Andrewes, P. et al. Do arsenosugars pose a risk to human health? The comparative toxicities of a trivalent and pentavalent arsenosugar. Environmental Science & Technology 38, 4140–4148 (2004).
USEPA. Human Health Risk Assessment. In: AGENCY, U. N. E. P. (ed.). USA (2013).
Pazi, I. et al. Potential risk assessment of metals in edible fish species for human consumption from the Eastern Aegean Sea. Marine Pollution Bulletin 120, 409–413 (2017).
FDA. Guidance Document for Arsenic, Cadmium, Chromium, Lead, Nickel in Shellfish. US Department of Health and Human Services. Public Health Service, Office of Seafood (HFS-416). Food and Drug Administration, Washington, D.C, 39–45 (1993).
Luoma, S. N. & Rainbow, P. Why is metal bioaccumulation so variable? Biodynamics as a unifying concept. Environmental Science and Technology 39, 1921–1931 (2005).
Paez-Osuna, F. & Osuna-Martínez, C. C. Biomonitors of coastal pollution with reference to the situation in the Mexican coasts: a review on the utilization of organisms. Hidrobiologica 21, 229–238 (2011).
Eisler, R. Handbook of Chemical Risk Assessment: Health Hazards to Humans, Plants, and Animals, vol. 1. Metals. 605–714 (Lewis Publishers, Boca Raton, FL, 2000).
Kwok, C. K. et al. Bioaccumulation of heavy metals in fish and Ardeid at Pearl River estuary, China. Ecotoxicology and Environmental Safety 106, 62–67 (2014).
Jayaprakash, M. et al. Bioaccumulation of metals in fish species from water and sediments in macro-tidal Ennore creek, Chennai, SE Coast of India: A Metropolitan City effect. Ecotoxicology and Environmental Safety 120, 243–255 (2015).
Delvalls, T. A., Forja, J. M. & GóMez-Parra, A. Integrated assessment of sediment quality in two littoral ecosystems from the Gulf of Cádiz, Spain. Environ. Toxicol. Chem. 17, 1073–1084 (1998).
The authors gratefully acknowledge Macquarie University for funding this research (iMQRES, grant no. 2016237).
Department of Environmental Sciences, Faculty of Science and Engineering, Macquarie University NSW, 2109, Sydney, Australia
Sayka Jahan & Vladimir Strezov
Study conception and design: Vladimir Strezov and Sayka Jahan. Acquisition of data: Vladimir Strezov and Sayka Jahan. Analysis and interpretation of data: Vladimir Strezov and Sayka Jahan. Drafting of manuscript: Sayka Jahan. Critical revisions: Vladimir Strezov.
Correspondence to Sayka Jahan.
Supplementary dataset 1
Jahan, S., Strezov, V. Assessment of trace elements pollution in the sea ports of New South Wales (NSW), Australia using oysters as bioindicators. Sci Rep 9, 1416 (2019). https://doi.org/10.1038/s41598-018-38196-w
Adding ReputationRank to member promotion using skyline operator in social networks
Jiping Zheng & Siman Zhang
To identify potential stars in social networks, the idea of combining member promotion with the skyline operator has attracted attention. Several algorithms have been proposed for this problem so far, such as the skyline boundary algorithms for unequal-weighted social networks.
We propose an improved member promotion algorithm that adds an eigenvector-based ReputationRank to the Influence and Activeness metrics, and we introduce the concept of skyline distance. Furthermore, we perform the skyline operator over the non-skyline set and choose the infra-skyline as our candidate set. The added ReputationRank helps to describe the importance of a member, while the skyline distance yields the necessary condition for not being dominated, so that meaningless plans can be pruned.
Experiments on the DBLP and WikiVote datasets verify the effectiveness and efficiency of our proposed algorithm.
Treating the infra-skyline set as the candidate set reduces the number of candidates, and the pruning strategies based on dominance and promotion cost decrease the searching space.
Nowadays, more and more social activities take place in social networks (SNs for short) as SNs become prevalent, such as sharing information, making friends or finishing team work with others online. Human behaviours in SNs attract more and more attention. Different members play different roles: some members may be "leaders" [1], while others who seem ordinary for the moment may become outstanding in the future.
To specify who is about to be important in the future, a standard of importance is crucial. There are multiple criteria for recognizing an important member. For example, in an online community such as Sina Weibo, we consider the one who owns lots of followers as important, or the one whose posts get many retweets as important [2]. In a word, different criteria make different "leaders"; one who does not match the criteria would fail to be important. Usually, a single attribute does not describe the importance of a member accurately. Thus, it is necessary to formulate a multi-criteria standard to measure importance. The skyline operator has thus been introduced to do this in SNs. The skyline operator is a well-known tool for multi-criteria decision making; it queries for those objects that are not worse than any other. When the skyline operator was first used for promotion in SNs, Peng et al. [3] proposed the definition of member promotion and provided a brute-force algorithm to realize it. However, this algorithm was inadvisable for its waste of time and space, so the authors introduced the skyline operator and proposed a dominance-based pruning strategy to optimize result validation. Afterwards, they carried out further research and put forward the concept of promotion boundary for limiting the promotion plans, which led to the boundary-based pruning strategy [4]. At the same time, they also proposed a cost-based pruning strategy, which greatly improved the efficiency of member promotion. Nevertheless, the final result was unsatisfactory on account of the simple metric of importance.
In this paper, we mainly study directed social graphs with the knowledge of graph theory [4], taking Influence, Activeness and ReputationRank as metrics of a member's importance. The attributes Influence and Activeness are easy to understand: they are the indegree and outdegree in a directed graph, respectively. We consider that if a person owns lots of followers, s/he is influential, and if a person follows lots of people, which indicates the ability to reach many other members, s/he is active. What is more, we learn from the idea of Google's PageRank algorithm, a way of measuring the importance of website pages, and put forward ReputationRank to measure the importance of a member in SNs. Our goal is to find those members who can become "stars" in the future accurately and efficiently. To ensure accuracy, we assume that if a person is followed by some important persons, s/he is important too. Further, we assume that any two members in a specific direction can be connected only once, and we employ edge addition as the promotion manner to simulate the process of relationship establishment. Usually, it takes cost to add new edges between two nodes. Therefore, the problem of member promotion in SNs is defined as excavating the most appropriate non-skyline member(s) who can be promoted to skyline member(s) by adding new edges with the minimum cost. However, the calculation of the added ReputationRank metric involves a series of mathematical operations, so it may incur enormous computational cost.
To ensure efficiency and tackle the challenge of the computation cost, we mainly consider the changes of Influence and Activeness after adding edges, because we only need to count the number of directed edges involved. Calculating a point's ReputationRank, however, involves complicated matrix operations with the total number of members as denominator. Apparently, given the great changes of the denominator (we assume the SN is dynamic), the subtle changes of the numerator can be ignored. We conduct a skyline query on the dimensions of Influence, Activeness and ReputationRank to get the non-skyline set, then we carry out a second skyline query on the non-skyline set. We treat the skyline set of the second skyline query as our candidate set, which helps to reduce the number of candidates greatly. The contributions of this paper are summarized as follows.
We learn from the PageRank algorithm and propose to add ReputationRank to measure the importance of a member, which helps to improve the accuracy of the prediction.
We carry out a second skyline query over the non-skyline set obtained from the skyline query on the three-dimensional dataset and regard the infra-skyline as our candidates, which remarkably reduces the number of candidates. Then we introduce the skyline distance and the cost-based as well as dominance-based strategies to prune meaningless promotion plans.
Experiments on DBLP and WikiVote datasets are conducted to show the effectiveness and efficiency of our approach.
The rest of this paper is organized as follows. "Related work" section reviews related work. In "Preliminaries" section, we introduce several preliminary concepts. Then we bring forward the problem and propose the algorithm with analysis in "Prediction of promoting members in SNs" section. The results of the experiments are presented to show the effectiveness and efficiency of our algorithm in "Experimental analysis" section. Finally, we conclude our work in "Conclusions" section.
Skyline queries
The skyline operator was first introduced by Börzsönyi et al. [5] as a tool for multi-criteria decision making. Representative algorithms for skyline computation were then proposed, such as Block-Nested-Loops (BNL) and Divide-and-Conquer (D&C) [5], Bitmap and Index [6], Nearest Neighbor (NN) [7], and the Branch and Bound Skyline (BBS) algorithm [8]. Both BNL and D&C have to traverse the entire dataset before returning skyline points. The bitmap-based method transforms each data point into a bit vector, where a value in each dimension is encoded by the corresponding number of 1s. However, it cannot guarantee a good initial response time, and the bitmaps can be very large for large values. Therefore, another method was raised which transforms multiple dimensions into a single-dimensional space where objects are clustered and indexed using a \(B^{+}\) tree. It helps to save processing time because skyline points can be determined without examining the rest of the objects not yet accessed. The NN algorithm was proposed by Kossmann et al. [7]. It can progressively report the skyline in an order according to the user's preferences. However, one data point may be accessed many times until being dominated. To remedy this drawback, Papadias et al. [8] proposed BBS, an R-tree based algorithm, which retrieves skyline points by traversing the R-tree with the Best-First strategy. There are also many studies on skyline variations for different applications, such as subspace skylines [9], k-dominant skylines [10], probabilistic skyline computation on uncertain data [11], weighted attributes skylines [12], skyline queries over data streams [13], skyline analysis on time series data [14], spatial skyline queries [15], skyline computation in partially ordered domains [16], and using skylines to mine user preferences, make recommendations [17] and search for star scientists [18].
Member promotion
Peng et al. [3] first proposed the concept of member promotion in SNs and provided a brute-force algorithm to solve it. Member promotion aims at promoting the unimportant member who is most potential to become an important one. "Most potential" is interpreted as the minimum promotion cost, meaning the member can be promoted at the minimum cost. The brute-force algorithm tries out all the available added edges to find the optimal promotion plans. However, some "meaningless" added edges would also be verified, leading to high time cost. Based on the characteristics of the promotion process, Peng et al. [3] proposed the IDP (Index-based Dynamic Pruning) algorithm, which generates prunable plans whenever a promotion plan fails. Later, Peng et al. [4] conducted further research on member promotion, mainly focusing on unequal-weighted SNs. They brought forward the promotion boundary to limit promotion plans and proposed cost-based and dominance-based pruning strategies to reduce the searching space. Furthermore, the authors expanded the algorithm and proposed the InfraSky algorithm for equal-weighted SNs. They optimized the cost model and put forward a new concept named "infra-skyline" to remarkably prune the candidate space [4]. However, the works of Peng et al. [3, 4] rely only on metrics such as indegree and outdegree, which cannot describe a member's importance entirely; thus the prediction results of member promotion were not very satisfying.
A major distinction between our approach and Peng et al.'s works is that we add ReputationRank as a metric attribute, which is more suitable for describing a member's characteristics besides the two metrics. With the upgraded metrics, our approach is more effective.
In this paper, an SN is modeled as a weighted directed graph G(V, E, W). The nodes in V represent the members in the SN. The elements of E are the existing directed edges between the members. Each \(w\in W\) denotes the cost of establishing the directed edge between two different members.
(Influence) Given a node v in an SN G(V, E, W), the Influence of v, marked as I(v), is the indegree of v.
(Activeness) Given a node v in an SN G(V, E, W), the Activeness of v, marked as A(v), is the outdegree of v.
(ReputationRank) Given a node v in an SN G(V, E, W), the ReputationRank of v, marked as P(v), is the value of the corresponding component in the eigenvector of the normalized social relationship matrix whose eigenvalue is 1.
Suppose that there are three nodes in an SN, let the nodes be \(v_{1}\), \(v_{2}\), \(v_{3}\). If the SN's normalized social relationship matrix has an eigenvalue 1 and its corresponding eigenvector is \(p=(p_{1}, p_{2}, p_{3})\) (we can obtain these values by the method introduced in "ReputationRank" section), then the ReputationRank of \(v_{1}\), \(v_{2}\), \(v_{3}\) is \(p_{1}\), \(p_{2}\) and \(p_{3}\), respectively.
(Social relationship matrix) Given an SN G(V, E, W), the social relationship matrix is an adjacency matrix which expresses the links between the members in the SN, denoted as M.
(Normalized social matrix) If a social relationship matrix is M, then its normalized social matrix is a matrix in which the sum of the elements in each column is 1. We denote the normalized matrix as \(M'\).
(Dominance) Given an SN G(V, E, W), \(\forall v_{1}, v_{2} \in V\), we say \(v_{1}\) dominates \(v_{2}\) if and only if \(v_{1}\) is not worse in Influence dimension, Activeness dimension and ReputationRank dimension, and is better in at least one dimension than \(v_{2}\).
(Dominator set) Given an SN G(V, E, W), if \(v_{1}\) dominates \(v_{2}\), we say \(v_{1}\) is a dominator of \(v_{2}\). Correspondingly, all dominators of a member v, marked as \(\delta (v)\), are denoted as the dominator set of v.
(Skyline) Given an SN G(V, E, W), the skyline of G, denoted as \(S_{G}\), is the set of members which are not dominated by any other member.
(Infra-skyline) Given an SN G(V, E, W), the infra-skyline of G is the skyline of the set of all non-skyline members of G, namely, if \(S_{G}\) is the skyline set of G, then the infra-skyline of G is \(S_{G-S_{G}}\).
Given an SN consisting of seven members, namely \(\{A, B, C, D, E, F, G\}\), suppose that the skyline set is \(\{A, B, D\}\) and that E is dominated by F; then the infra-skyline of the SN is \(\{C, F, G\}\).
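To make Definitions 6–9 concrete, the following Python sketch computes the skyline and the infra-skyline of a toy network by pairwise dominance checks over (Influence, Activeness, ReputationRank) triples. The member values are hypothetical, and the quadratic block-nested-loops scan is used only for clarity; it is not the algorithm proposed in this paper.

```python
# A minimal sketch of dominance, skyline and infra-skyline over
# (Influence, Activeness, ReputationRank) triples; larger is better
# on every dimension. Sample data are hypothetical.

def dominates(a, b):
    """True if member a dominates member b (Definition 6)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def skyline(members):
    """Members not dominated by any other member (Definition 8)."""
    return {m: v for m, v in members.items()
            if not any(dominates(w, v) for u, w in members.items() if u != m)}

members = {  # member -> (Influence, Activeness, ReputationRank)
    'A': (5, 2, 0.30), 'B': (4, 4, 0.17), 'C': (3, 1, 0.14),
    'D': (2, 5, 0.10), 'E': (1, 1, 0.05), 'F': (2, 2, 0.06),
}
sky = skyline(members)
rest = {m: v for m, v in members.items() if m not in sky}
infra = skyline(rest)          # infra-skyline = skyline of the non-skyline set
print(sorted(sky), sorted(infra))   # -> ['A', 'B', 'D'] ['C', 'F']
```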
Definition 10
(Promotion cost) Given an SN G(V, E, W), the promotion cost of a candidate c, is the sum of all the weights corresponding to the edges being added at c, denoted as \(cost(c, c')=\sum _{e\in E_{a}}\gamma (e)\), where \(c'\) is the point after the edges are added at point c, \(E_{a}\) is the set of added edges and \(\gamma (e)\) is the cost of adding edge e.
Assume I(v), A(v) and P(v) represent the Influence, Activeness and ReputationRank of node v in V, respectively. We consider the larger the values of I(v), A(v) and P(v) are, the better they are.
ReputationRank
ReputationRank is obtained by counting the number and quality of followers of a person to determine a rough estimate of how important the person is. The ReputationRank of a member is defined recursively and depends on the number and ReputationRank of all followers. A member that is followed by many members with high ReputationRank receives a high rank itself.
From the point of view of mathematics, a member's ReputationRank depends on the reputation of those members who follow her/him. The ReputationRank of a follower in turn depends on the persons who follow her/him, and the process continues in the same manner. Thus, to resolve this kind of "infinite regression", we define \(P(v_{i})\) as the ReputationRank of member i, and we notice that the ith column of the social relationship matrix shows those members who follow her/him. Therefore, we can get \(v_{i}\)'s ReputationRank by adding the products of the relation state and the ReputationRank over all other members, namely
$$\begin{aligned} P(v_{i})=x_{1i}P(v_{1})+x_{2i}P(v_{2})+\cdots +x_{gi}P(v_{g}), \end{aligned}$$
where the coefficient \(x_{ji}\) denotes the reciprocal of the outdegree of member j if member j follows member i, and 0 otherwise; g is the number of members.
If there are seven members in an SN, as shown in Fig. 1, and member \(v_{2}\) is followed by \(v_{1}\), \(v_{3}\) and \(v_{4}\), then the remaining entries of the second column of the social relationship matrix are all 0s. Furthermore, \(v_{1}\)'s outdegree is 5, \(v_{3}\)'s outdegree is 2 and \(v_{4}\)'s outdegree is 4. Thus, \(v_{2}\)'s ReputationRank is \(\frac{1}{5}P(v_{1})+\frac{1}{2}P(v_{3})+\frac{1}{4}P(v_{4})\).
A social network example
From Example 3, we know that if the members \(v_{1}\), \(v_{3}\) and \(v_{4}\) have a high ReputationRank, so does \(v_{2}\).
Therefore, we have g formulas such as Eq. (1), forming a system of g linear equations. If we compute the social relationship matrix M, put the ReputationRank values into a vector and adopt Katz's assumption [19] to normalize the social relationship matrix, the whole system can be expressed as
$$\begin{aligned} P={M^\text{T}}^{'}P, \end{aligned}$$
where P represents the vector consisting of the corresponding ReputationRank of each member in the limited state and \({M^\text{T}}^{'}\) denotes the normalized transposed social matrix.
By reorganizing these formulas, we obtain the formula \((I-{M^\text{T}}^{'})P=\mathbf {0}\), where I represents the g-dimensional identity matrix, and both P and \(\mathbf {0}\) represent vectors of length g. The components of the eigenvector P whose eigenvalue is 1 represent the ReputationRank of the members [12].
The property of ReputationRank
It should be noticed that a point's ReputationRank is partially consistent with its Influence. However, this property alone cannot show the difference between the top and the next. Actually, the Activeness also affects the ReputationRank.
Given seven members in the SN, as shown in Fig. 1, its corresponding social relationship matrix M and its normalized transposed matrix \({M^\text{T}}^{'}\) are as follows:
$$\begin{aligned} \begin{array}{cc} M=\left[ \begin{array}{ccccccc} 0 &{} 1 &{} 1 &{} 1 &{} 1 &{} 0 &{} 1\\ 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0\\ 1 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0\\ 1 &{} 1 &{} 1 &{} 0 &{} 1 &{} 0 &{} 0\\ 1 &{} 0 &{} 1 &{} 1 &{} 0 &{} 1 &{} 0\\ 1 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 \end{array} \right] , &{} {M^T}^{'}=\left[ \begin{array}{ccccccc} 0 &{} 1 &{} \frac{1}{2} &{} \frac{1}{4} &{} \frac{1}{4} &{} \frac{1}{2} &{} 0\\ \frac{1}{5} &{} 0 &{} \frac{1}{2} &{} \frac{1}{4} &{} 0 &{} 0 &{} 0\\ \frac{1}{5} &{} 0 &{} 0 &{} \frac{1}{4} &{} \frac{1}{4} &{} 0 &{} 0\\ \frac{1}{5} &{} 0 &{} 0 &{} 0 &{} \frac{1}{4} &{} 0 &{} 0\\ \frac{1}{5} &{} 0 &{} 0 &{} \frac{1}{4} &{} 0 &{} \frac{1}{2} &{} 1\\ 0 &{} 0 &{} 0 &{} 0 &{} \frac{1}{4} &{} 0 &{} 0\\ \frac{1}{5} &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \end{array} \right] \end{array}. \end{aligned}$$
Then we obtain the eigenvector \(\alpha = (0.304,0.166,0.141,0.105,0.179,0.045,0.061)^\text{T}\) of \({M^T}^{'}\) when the eigenvalue is 1. We can conclude that the ReputationRank of each member is almost consistent with their value of Influence. It is obvious that the one whose ID is 1 has the highest ReputationRank, almost one third of the total. We think this is because Member 1 gains all the reputation of Member 2, who has a high ReputationRank. What is more, Member 1 has the highest Influence and Activeness; thus we consider Member 1 the most popular one in the SN. On the other hand, we find that although Member 2 and Member 3 have the same Influence, Member 2's ReputationRank is larger than that of Member 3. The reason is that Member 2 owns one half of Member 3's ReputationRank, but Member 3 only owns one fourth of Member 5's ReputationRank. Therefore, we conclude that the ReputationRank of a member in an SN is related not only to the Influence but also to the ReputationRank of their followers and their followers' Activeness.
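The eigenvector above can be reproduced numerically. Below is a minimal numpy sketch that builds \({M^T}^{'}\) from the Fig. 1 adjacency matrix, extracts the eigenvector for eigenvalue 1 and rescales it to sum to 1; it is an illustration of the definition, not the implementation used in our experiments.

```python
# A sketch of the ReputationRank computation for the Fig. 1 network:
# normalize the transposed adjacency matrix column-wise, take the
# eigenvector for eigenvalue 1 and rescale it to sum to 1.
import numpy as np

M = np.array([  # M[i][j] = 1 iff member i+1 follows member j+1
    [0, 1, 1, 1, 1, 0, 1],
    [1, 0, 0, 0, 0, 0, 0],
    [1, 1, 0, 0, 0, 0, 0],
    [1, 1, 1, 0, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [1, 0, 0, 0, 1, 0, 0],
    [0, 0, 0, 0, 1, 0, 0],
])
# Column j of M^T holds member j's out-links; divide it by j's outdegree,
# which makes the matrix column-stochastic.
MT = M.T / M.sum(axis=1)
vals, vecs = np.linalg.eig(MT)
p = np.real(vecs[:, np.argmin(np.abs(vals - 1))])  # eigenvector for eigenvalue 1
p = p / p.sum()                                    # ReputationRank vector
print(np.round(p, 3))   # ~ [0.304 0.166 0.141 0.105 0.179 0.045 0.061]
```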
Prediction of promoting members in SNs
The problem we study in this paper is to locate the most "potential" member(s) for promotion by elevating it (them) into the skyline. Suppose we have two datasets \(D_{1}\) and \(D_{2}\), where \(D_{1}\) represents data from a few years ago and \(D_{2}\) represents that of the following years. If \(S_{1}=SKY(D_{1})\), \(S_{1}'=SKY(D_{1}-S_{1})\), \(S_{2}=SKY(D_{2})\), where SKY() denotes the skyline set of a dataset, then \(S_{1}'\) is the candidate set in our algorithm. After promoting each point in \(S_{1}'\), if some points in \(S_{1}'\) appear in \(S_{2}\), the prediction is successful; otherwise, it fails. Since the non-skyline members are candidates for promotion, when a non-skyline member is promoted, some edges are added to the network, and the cost of this promotion is the sum of all the costs of the added edges. In addition, added edges may affect the metrics of all members in the SN, which would need to be recalculated frequently, so the time cost of promotion is extremely high. Therefore, finding suitable non-skyline members to promote to skyline members with minimum cost is the goal of member promotion in SNs.
The sort-projection operation
We project all the members into a two-dimensional Cartesian coordinate system, because we only consider the changes of Influence and Activeness, where the x-axis represents the Influence and the y-axis the Activeness. Taking the candidate c as an example, suppose that c is dominated by t skyline points. It is worth noting that the candidate c is dominated in three dimensions (the Influence, Activeness and ReputationRank dimensions), but in the process of edge addition we only consider dominance on Influence and Activeness, because if a member is not strictly dominated on two dimensions, s/he will not be dominated on three dimensions either [10]. We simply sort the skyline points in ascending order on the x-axis. What is more, we assume the weights to be arbitrary positive integers from 1 to 10. Some terms mentioned above are defined as follows.
(Strictly dominate) Given an SN G(V, E, W), if \(p_{1} \prec p_{2}\) and \(p_{1}\) is larger than \(p_{2}\) on each dimension, we say \(p_{1}\) strictly dominates \(p_{2}\), denoted by \(p_{1}\prec \prec p_{2}\).
(Skyline distance) Given a set DS of points in a two-dimensional space, a candidate c, and a path Path(., .), the skyline distance of c is the minimum value of \(Path(c, c')\), where \(c'\) is a position in the two-dimensional space such that \(x.c' \ge x.c\), and \(y.c' \ge y.c\), and \(c'\) is not strictly dominated by any point in DS. We denote the skyline distance as SkyDist().
Suppose that c is strictly dominated by t skyline points in SKY(DS). Any position \(c'\) which is not strictly dominated by any point in DS and satisfies \(x.c' \ge x.c\) and \(y.c' \ge y.c\) can be reached by a path from c to \(c'\) which always goes up along the axes. We use a linear cost function \(cost(c, c')\): the sum of the weighted lengths of the segments on the path. We aim to find a path with the minimum value such that the end point \(c'\) is not strictly dominated by any skyline point, with \(x.c' \ge x.c\) and \(y.c' \ge y.c\).
(Skyline boundary) Given a set SKY of skyline points in DS, we say a point p is on the skyline boundary if there exists a point \(u \in SKY\) such that \(u\prec p\) and there does not exist a point \(u' \in SKY\) such that \(u'\prec \prec p\).
From the definition of skyline boundary, we conclude that the skyline distance of each point on the skyline boundary is 0 [20].
Given a candidate c and the t skyline points \(s_{1}\), \(s_{2}, \ldots , s_{t}\), we plot the lines \(x=x_{c}\), \(x=x_{s_{i}}\), \(y=y_{c}\) and \(y=y_{s_{i}}\), respectively. As shown in Fig. 2, there are some intersections, which we represent by triangles. We call the intersections on the skyline boundary local optimal points. In Fig. 2, \(p_{1}\), \(p_{2}\), \(p_{3}\) and \(p_{4}\) are the local optimal points.
A skyline distance example
Therefore, in the two-dimensional space, for the candidate c and the t skyline points \(s_{1}\), \(s_{2}, \ldots ,s_{t}\), if \(x.s_{1}< x.s_{2}<\cdots <x.s_{t}\), then without loss of generality \(y.s_{1}> y.s_{2}>\cdots >y.s_{t}\). We conclude that there are \(t+1\) local optimal points, and the ith one \(p_{i}\) is given by the following formula:
$$\begin{aligned} P_{i}={\left\{ \begin{array}{ll} (x.c, y.s_{1}), \qquad i=1; \\ (x.s_{i-1}, y.s_{i}), \quad 2\le i \le t;\\ (x.s_{t}, y.c), \qquad i=t+1. \end{array}\right. } \end{aligned}$$
Consider a candidate c dominated by t skyline points \(s_{1}\), \(s_{2},\ldots , s_{t}\). Let \(p_{1}, \ldots , p_{r}\) be the r local optimal points determined by c and \(s_{1}\), \(s_{2}, \ldots , s_{t}\), then the skyline distance of c is the minimum path from c to \(p_{i}\).
There is a candidate c and \(s_{1}, s_{2}, s_{3}\) are skyline points which dominate c, as shown in Fig. 2, we can obtain the four local optimal points \(p_{1}\), \(p_{2}\), \(p_{3}\) and \(p_{4}\) by Eq. (4), by comparing the path between c and \(p_{i}\), we can get the skyline distance of c. In Fig. 2, the path between c and \(p_{1}\), \(p_{2}\), \(p_{3}\), and \(p_{4}\) is 2, 2, 2.5 and 3, respectively. Therefore, the skyline distance of c is 2.
Algorithm 1 gives the pseudo-codes of the sort-projection operation. Assume that the number of input skyline points is m, it is easy to know that the cost of the sorting step is \(O(m\log m)\). Then the time cost of remaining step for obtaining the skyline distance mainly depends on the number of local optimal points. From Eq. (4), we know that the time complexity of calculating the local optimal points is O(1). Assume that the number of the local optimal points is k, then it is easy to know that the time complexity of obtaining the minimum path from candidate c to local optimal points is O(k). Therefore, the time complexity of Algorithm 1 is \(O(m \log m+1+k)=O(m \log m)\).
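Below is a minimal Python sketch of this skyline-distance computation. It assumes the input skyline points are mutually non-dominating and uses per-axis weights wx and wy as a stand-in for the weighted path cost; the sample coordinates are hypothetical rather than those of Fig. 2.

```python
# A sketch of the skyline distance via the local optimal points of Eq. (4),
# assuming a linear path cost with per-axis weights (wx, wy). Inputs are
# (Influence, Activeness) pairs in the projected two-dimensional space.

def skyline_distance(c, sky, wx=1.0, wy=1.0):
    # Keep only the skyline points that strictly dominate c, sorted by x
    # (by the skyline property their y values then descend).
    dom = sorted(p for p in sky if p[0] > c[0] and p[1] > c[1])
    if not dom:
        return 0.0               # c is already on the skyline boundary
    # Local optimal points of Eq. (4): the corners of the skyline boundary.
    corners = [(c[0], dom[0][1])]
    corners += [(dom[i - 1][0], dom[i][1]) for i in range(1, len(dom))]
    corners += [(dom[-1][0], c[1])]
    # Weighted Manhattan path from c up/right to each corner; keep the cheapest.
    return min(wx * (x - c[0]) + wy * (y - c[1]) for x, y in corners)

# Toy example in the spirit of Fig. 2: c = (1, 1) with three dominators.
print(skyline_distance((1, 1), [(2, 4), (3, 3), (4, 2)]))   # -> 3.0
```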
Pruning by cost and dominance
(Promotion plan) Given an SN G(V, E, W), for a candidate \(c \in\) candidate set, the promotion plan of c includes all the added edges in the process of a promotion attempt.
After obtaining the skyline distance of a candidate, we get the necessary condition for the candidate not being dominated by skyline points. Taking the candidate c as an example, assume that \(c'\) is the end point after promotion at the skyline distance of c; then there exist three different solutions for the different values of \(c'\):
If \(x_{c'}=x_{c}\), then \(x_{c''}=x_{c'}, y_{c''}=y_{c'}+1\);
If \(y_{c'}=y_{c}\), then \(x_{c''}=x_{c'}+1, y_{c''}=y_{c'}\);
If \(x_{c'} \ne x_{c}\) and \(y_{c'} \ne y_{c}\), then \(x_{c''}=x_{c'}+1, y_{c''}=y_{c'}+1\).
We denote the transformed \(c'\) as \(c''\). It is obvious that \(c''\) cannot be dominated by any point at all. If we call the positions where a candidate will not be dominated GoodPosition(), we say \(c''\in GoodPosition(c)\). Besides \(c''\), none of the points in the skyline set can be dominated either. Thus, the dominator set of c belongs to GoodPosition(c).
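The three cases can be written as a small helper; the function name good_position is ours and does not appear in the pseudo-codes.

```python
# A sketch of the transformation from c' (the end point at the skyline
# distance of c) to c'', the undominated target position (cases 1-3 above).
def good_position(c, c_prime):
    x, y = c_prime
    if x == c[0]:
        return (x, y + 1)       # case 1: x unchanged, step y once more
    if y == c[1]:
        return (x + 1, y)       # case 2: y unchanged, step x once more
    return (x + 1, y + 1)       # case 3: both changed, step both

print(good_position((1, 1), (1, 4)))   # -> (1, 5)
```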
In view of the unequal costs of establishing different edges, it probably takes different costs to promote c by different plans. Therefore, we organize all the edges which can be added into plans for each candidate c, denoted as \(E_{c}\), and sort the edges in ascending order of weight. Then we locate the promotion plans which satisfy the constraints of GoodPosition(c) from the head of \(E_{c}\) and treat them as our original plans. These original plans are put into a priority queue. When a plan is extracted from the priority queue to be verified, we first generate its successive plans and put them into the priority queue. The successive plans are generated by Observation 1. Once a plan is verified to successfully promote the candidate, the promotion process ends. However, if a plan cannot successfully promote the candidate, we can generate prunable plans based on the failed plan; the guidelines are shown in Observation 2. The idea is the same as the IDP algorithm [3].
The successive plans are generated by the following two rules, sketched in code after the list:
If the current plan does not contain the minimum-cost edge \(e_{0}\), add it to the current plan.
If the current plan does not contain any successive edge of \(e_{i}\), namely \(e_{i+1}\), replace \(e_{i}\) with \(e_{i+1}\).
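A sketch of these two rules in Python, where a plan is encoded as a sorted tuple of indices into the ascending-cost edge list \(E_{c}\) (index 0 is the cheapest edge \(e_{0}\)); this index encoding is our assumption.

```python
def successors(plan, n_edges):
    """Children of a plan per the two rules above; a plan is a sorted tuple
    of indices into the ascending-cost edge list E_c (0 = cheapest edge)."""
    succ = []
    if 0 not in plan:                                  # rule 1: add e0
        succ.append(tuple(sorted(plan + (0,))))
    for i in plan:                                     # rule 2: e_i -> e_{i+1}
        if i + 1 < n_edges and i + 1 not in plan:
            succ.append(tuple(sorted(j if j != i else i + 1 for j in plan)))
    return succ

print(successors((1, 3), n_edges=5))   # -> [(0, 1, 3), (2, 3), (1, 4)]
```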
The prunable plans are generated by the following rules:
Theorem 1
If the added edge e connecting node \(v_{i}\) and the candidate node c still cannot promote c to the skyline set, all the attempts of adding an edge \(e'\) connecting the node \(v_{j}\) and c with the same direction as e cannot promote c to the skyline set either, where \(v_{j} \in \delta (v_{i})\).
Assume that after adding an edge e, \(v_{i}(I(v), A(v))\) changes to \(v_{i}(I'(v), A'(v))\) and c(I(c), A(c)) changes to \(c(I'(c), A'(c))\), and assume there is a point p that still dominates c. If we add an edge \(e'\) connecting node \(v_{j}\) and c with the same direction as e, where \(v_{j}\) belongs to \(\delta (v_{i})\), there are two situations for \(v_{j}\):
\(v_{j} \ne p\). If \(v_{j}\) is a dominator of \(v_{i}\) but not be p, after adding an edge from \(v_{j}\) to c, \((I(v_{j}), A(v_{j}))\) will change to \((I'(v_{j}), A'(v_{j}))\), and (I(c), A(c)) will change to \((I'(c), A'(c))\), then p will still dominate c;
\(v_{j} = p\). If \(v_{j}\) is a dominator of \(v_{i}\) and dominates c when (I(c), A(c)) changes to \((I'(c), A'(c))\), then after adding an edge from p to c, (I(p), A(p)) changes to \((I'(p), A'(p))\) and (I(c), A(c)) changes to \((I'(c), A'(c))\). It is obvious that the changed p still dominates c, because it dominated c before one of the two values corresponding to its metrics increased.
In summary, all the attempts of adding an edge \(e'\) connecting the node \(v_{j}\) and c with the same direction as e cannot promote c to the skyline set either, where \(v_{j} \in \delta (v_{i})\). \(\square\)
Corollary 1
If a promotion plan \(p(e_{1}, \ldots , e_{w})\) cannot successfully promote its target candidate c to the skyline set, all the plans with w edges which belong to \(\prod _{i=1}^{w}{l_{i}}\) can be skipped in the subsequent verification process against c, where for each \(e_{i}\) connecting \(v_{i}\) and c, \(l_{i}\) is a list containing all the non-existing edges each of which links one member of \(\delta (v_{i})\) and c with the same direction as \(e_{i}\) (\(i=1, 2, \ldots , w)\), \(\prod _{i=1}^{w}{l_{i}}\) is the Cartesian product of \(l_{i}\).
According to Theorem 1, if each edge in \(l_{i}\) cannot successfully promote c, it means \(l_{i}\) cannot do it either. Thus, all the plans with w edges belonging to the Cartesian product of \(l_{i}\) will fail to promote the candidate.
The steps for pruning some plans are shown in Algorithm 2. Note that \(e_{ic}\) denotes the edge which connects from \(v_{i}\) to c. In Algorithm 2, Lines 3–6 and 7–9 are based on Theorem 1 and Corollary 1, respectively. Thus, we obtain the prunable plans of a given candidate.
Assume that for the candidate c, the number of available edges is k. For the worst case that all edges belong to available edge set fail to make c successfully promoted, suppose that the number of nodes which dominate c is h, then the time complexity of generating some prunable edges against each failed point is O(hk). Furthermore, the time complexity of generating the prunable plans is O(1). Thus, the total time complexity in the worst case is O(hk). \(\square\)
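The following sketch illustrates the prunable-plan generation of Corollary 1. For simplicity it identifies each added edge by its source node (all sketched edges point into the candidate c, an assumption; the corollary also covers the opposite direction), and the dominator sets are hypothetical.

```python
# A sketch of Theorem 1 / Corollary 1: if plan (e_1, ..., e_w) failed for
# candidate c, every plan in the Cartesian product of the lists l_i can be
# skipped. dominators maps each node to its dominator set delta(v).
from itertools import product

def prunable_plans(failed_plan, dominators):
    """failed_plan is a tuple of source nodes whose edges into c failed;
    l_i is the dominator set of the ith source node."""
    lists = [dominators.get(v, set()) for v in failed_plan]
    return {tuple(p) for p in product(*lists)}

doms = {'v1': {'v2', 'v3'}, 'v4': {'v5'}}
print(sorted(prunable_plans(('v1', 'v4'), doms)))
# -> [('v2', 'v5'), ('v3', 'v5')]: both combinations can be skipped
```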
Verification of the result
After pruning some meaningless plans based on promotion cost and dominance, the remaining plans will be carried out for promotion. It is well known that the skyline set may change after a promotion attempt, thus the candidate may still be dominated by other members. Therefore, the final verification must be executed to examine the results of the promotions.
It is time-consuming to recalculate the skyline set after each promotion. We notice that those points which do not dominate the candidate before promotion will not dominate it after promotion either, so we can ignore them in the verification process. Therefore, after pruning, we only need to consider the following situations when verifying:
The points which dominate the candidate before promotion.
The points which are contained in the promotion plans.
The PromSky algorithm
The whole process of member promotion in an SN is presented in Algorithm 3. Line 2 represents the generation of the candidate set. Line 4 represents a preprocessing phase that generates the sorted available edges. The skyline distance of each candidate is calculated in Line 5. Then GoodPosition() is generated in Lines 6–14. The point \(c'\) is the promoted point at the skyline distance of c. Line 16 shows that the corresponding promotion plans are generated and put into the priority queue Q. While the queue is not empty, we fetch the plan with minimum cost for further verification. Line 18 shows that before verifying a plan, we first generate its children plans by Observation 1, so that we can verify all possible plans in ascending order of cost. Lines 21–24 represent that, after checking based on the result verification strategy, the result will be output if the promotion succeeds. If not, some prunable plans will be generated; their generation is shown in Line 28. Lines 25–26 represent that if the plan is in the prunable list, there is no need for further verification. Lines 19–20 show that after a successful promotion, the process halts once we encounter a plan with a higher cost.
We estimate the time complexity of our PromSky algorithm in the worst case. Assume that the candidate set is M. It takes O(|M|) time to build its available edge set and \(O(|M|\log |M|)\) time to calculate the skyline distance. For the recursion on the basis of each plan, the worst-case time complexity of generating the children plans is O(|M|). It takes \(O(\log |M|)\) to build and search the min-heap. The generation process of the prunable list costs \(O(|M|^{2})\). We build an index such as a \(B^{+}\) tree to speed up the search in the prunable list, whose time cost stays around \(O(|M|\log |M|)\). The result checking phase takes O(|M|) at worst. Theoretically, the worst-case time complexity of Algorithm 3 is \(O(|M|^{3})\) (however, in experiments the algorithm usually reaches the result early).
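To make the control flow of Algorithm 3 concrete, here is a high-level Python sketch of the main loop. The helpers plan_cost, promotes, initial_plans, successors and prunable_plans are hypothetical stand-ins for the components described above, and details such as tie-breaking in the heap are omitted.

```python
# A high-level sketch of the PromSky loop (Algorithm 3).
import heapq

def prom_sky(candidates, initial_plans, plan_cost, promotes,
             successors, prunable_plans):
    best = None                          # (cost, candidate, plan) of cheapest success
    pruned = set()
    heap = [(plan_cost(p), c, p)
            for c in candidates for p in initial_plans(c)]
    heapq.heapify(heap)
    while heap:
        cost, c, plan = heapq.heappop(heap)
        if best is not None and cost > best[0]:
            break                        # dearer than a known success: halt (Lines 19-20)
        for q in successors(plan):       # expand children first (Line 18)
            heapq.heappush(heap, (plan_cost(q), c, q))
        if (c, plan) in pruned:
            continue                     # skipped via dominance pruning (Lines 25-26)
        if promotes(c, plan):            # result verification (Lines 21-24)
            best = (cost, c, plan)
        else:                            # record prunable plans (Line 28)
            pruned.update((c, q) for q in prunable_plans(c, plan))
    return best
```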
In the SkyBoundary algorithm, Peng et al. [4] only used Authoritativeness (indegree) and Hubness (outdegree) as the metrics, described the plan limitation for promotion by bringing forward a new concept called "promotion boundary", and then proposed an effective boundary-based pruning strategy to prune the searching space. In this paper, we propose the concept of ReputationRank based on Google's PageRank algorithm and add it as a measure attribute to describe the importance of a member, which helps to improve the accuracy of the prediction to some degree. We then present the definition of skyline distance to obtain the necessary condition for not being dominated; it also helps to cut down the number of promotion plans.
On the other hand, when comparing running time: in the SkyBoundary algorithm [4] the candidate set is the whole non-skyline set, whereas we carry out a skyline query over the non-skyline set under three dimensions and take the infra-skyline as the candidates, remarkably pruning the size of the candidate set and controlling the result set within a reliable range. Moreover, by calculating the skyline distance of a candidate, we obtain the minimum path from the candidate's position to a position where it is not strictly dominated. Then, after trying all positions belonging to GoodPosition(), we can get the promotion plans that succeed in promoting the candidate by verifying the plans one by one. In contrast, although the SkyBoundary algorithm [4] pruned some meaningless plans based on the promotion boundary and obtained the constraint on promotion plans, it merged all the possible good points with the skyline points which dominate the candidate and then verified them in sequence to get the minimum-cost one. Apparently, that method needs more time compared to our proposed algorithm.
Experimental analysis
The experiments are implemented in C++ with Visual Studio 2010 and conducted on an Intel Core CPU machine with 8 GB RAM and a 1 TB hard disk running Windows 7. We use two datasets for the experiments.
WikiVote dataset: Wikipedia is an encyclopedia that volunteers all over the world can write collaboratively. The dataset (footnote 1) contains all administrator elections and vote history data from 2004 to 2008: 2794 elections with 103663 total votes and 7066 users participating in the elections. Users are those who cast votes or are voted on. Each record includes 5 parts: E, T, U, N and V, which respectively represent whether the election is successful or not, the time the election closed, the user id (and username) of the editor being considered for promotion, the user id (and username) of the nominator, and each voter's voting result. Nodes in the network represent users, and a directed edge from node p to node q represents that user p votes on user q. We set all the weights to be random integers between 1 and 10 for simplicity.
DBLP dataset: DBLP (footnote 2) is a computer science bibliography website. Each record of the DBLP dataset consists of authors' names, paper title and publication year. We collected all the records from 1992 to 2016. For a paper accomplished by several authors, we consider that the first author generally makes the major contribution and the others minor contributions. Thus, we build a directed graph from the co-author network. Nodes in the graph represent the authors, and directed edges with the first author as the end node and each of the other authors as start nodes represent the relationships between authors. We set all the weights of edges to be random integers between 1 and 10 for simplicity.
RanSky algorithm: we pick a candidate from the candidate set and randomly choose added edges from the available edges until the candidate is successfully promoted. We denote this as the RanSky algorithm, an adaptive version of the random algorithm in [4].
Promotion cost comparisons
In this set of experiments, we make a comparison on promotion costs of our PromSky algorithm with the RanSky algorithm. We consider the sum of the added edges' weights as the promotion cost of the Random algorithm. Then we use the PromSky algorithm to find out the optimal promotion plans and calculate their promotion costs, respectively.
Figure 3 illustrates the promotion costs of the two algorithms on the WikiVote and DBLP datasets, respectively. The promotion costs of both algorithms grow with the increase of the network scale. It is obvious that the promotion cost of the RanSky algorithm is much higher than that of the PromSky algorithm, which means that our PromSky algorithm always provides the optimal plans. What is more, the difference between the two promotion costs on both datasets basically grows along with the scale of the network. We think the promotion cost on the WikiVote dataset is much higher than on the DBLP dataset because there are fewer existing edges on WikiVote than on DBLP.
Promotion cost comparison with the Random algorithm
Successful rate comparisons
We make a comparison of our PromSky algorithm with the SkyBoundary algorithm and the RanSky algorithm at various network scales. The target candidate is one who can be successfully promoted, randomly selected from the result of our PromSky algorithm, and its promotion cost is the optimal cost. We add e edges picked from the available edges for the candidate according to the PromSky and SkyBoundary algorithms, respectively, and add e edges randomly picked from the available edges, then we verify the result. We calculate the promotion successful rate by counting the number of successful promotions in ten promotion attempts. We conduct the experiments on both WikiVote and DBLP. From Fig. 4, we find that the SkyBoundary algorithm and the RanSky algorithm cannot guarantee the promotion's success even though we picked the optimal candidate and achieved the minimal promotion cost; the RanSky algorithm works especially poorly. On the contrary, our PromSky algorithm performs well at various network scales. This is because we add more attributes for a member in our PromSky algorithm, which increases the size of the skyline set; thus our successful promotion rate is higher at various network scales.
Successful rate comparison on various network scales
Prediction on DBLP
In this section, we record the predicted potential stars and the skyline authors detected by our algorithm from 1992 to 2016. For each year's data, we combine the current yearly data with its previous 4 years' data to generate a 5-year sub-network, because publications from too long ago have little impact on the contributions made by the authors of the time, and one year's publications alone cannot accurately reflect the contributions of the authors [4]. Then we run our PromSky algorithm on each sub-network (from 1996 to 2016) to verify the corresponding yearly potential stars and the skyline authors in the following couple of years. The skyline authors are obtained by conducting a skyline query over the Influence, Activeness and ReputationRank dimensions. The potential authors are the prediction results of our PromSky algorithm. We get the successful rate by dividing the number of potential stars promoted into the skyline in the next few years by the size of the whole potential star set, namely
$$\begin{aligned} r={\mathrm{PN}}/{\mathrm{CS}}, \end{aligned}$$
where "r" denotes the successful rate, and "PN" and "CS" are the number of successfully promoted members and the number of all the candidates, respectively.
The skyline authors and potential stars for each year are illustrated in Table 1, which lists each year's skyline authors and potential skyline authors from 1996 to 2016. We consider a promotion successful if the potential skyline author becomes a skyline author in the next few years; otherwise, it fails. After merging duplicated potential stars and removing the potential stars of the year 2016 (which cannot be verified), the number of potential candidates is 20, and the number of potential candidates who appear among later skyline authors is 13. The names in italics represent the successfully promoted candidates. Therefore, we conclude that the successful rate is 65%. However, in the previous research [4], when conducting the experiments on the dataset from 1971 to 2012, the successful rate was only 48%. This shows that our algorithm is more accurate than the previous one.
Table 1 Skyline authors and potential stars from 1996 to 2016
Time cost comparisons
We conduct experiments to compare the time costs of our PromSky algorithm with the SkyBoundary algorithm on the two datasets. Because of its intolerable time complexity, we do not include the RanSky algorithm in this comparison.
Figure 5 shows the average running time under different network scales. From Fig. 5, we can see that as the network scale grows, the running time also increases, and our PromSky algorithm is faster than the SkyBoundary algorithm whatever the network scale is. This is because the candidates in the SkyBoundary algorithm are the whole non-skyline set, whereas we carry out a skyline query over the non-skyline set and take the infra-skyline as the candidates, thus remarkably reducing the size of the candidate set and controlling the result within a reliable range. Besides, by bringing forward the skyline distance, we can remarkably reduce the searching space of promotion plans.
Time cost comparison on various network scales
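The candidate-set reduction described above can be sketched in a few lines of Python. This is a schematic illustration, not the paper's implementation; the attribute vectors are assumed to be precomputed per member.

```python
def dominates(a, b):
    """a dominates b: no worse in every dimension, strictly better in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def skyline(attrs):
    """Members dominated by no other member."""
    return {m for m in attrs
            if not any(dominates(attrs[u], attrs[m]) for u in attrs if u != m)}

def infra_skyline(attrs):
    """Skyline of the non-skyline members: PromSky's candidate set,
    typically far smaller than the full non-skyline set that
    SkyBoundary would examine."""
    sky = skyline(attrs)
    rest = {m: v for m, v in attrs.items() if m not in sky}
    return skyline(rest)

# Toy example with three dimensions per member:
attrs = {"a": (3, 5, 2), "b": (4, 4, 4), "c": (2, 2, 2), "d": (1, 1, 1)}
print(skyline(attrs))        # {'a', 'b'} (set order may vary)
print(infra_skyline(attrs))  # {'c'}
```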
In this paper, we propose an improved member promotion algorithm in social networks, which aims to discover the most promising stars that can be promoted into the skyline at minimum cost. By adding the ReputationRank attribute, we describe members' importance more precisely. We then introduce the skyline distance to prune unpromising data points, which also greatly reduces the number of promotion plans. Experimental results on the DBLP and WikiVote datasets illustrate the effectiveness and efficiency of our approach.
http://snap.stanford.edu/data/wiki-Vote.html.
http://dblp.org/.
Zhang C, Shou L, Chen K, Chen G, Bei Y. Evaluating geo-social influence in location-based social networks. In: 21st ACM international conference on information and knowledge management, CIKM'12, Maui, HI, USA, October 29–November 02, 2012. 2012. p. 1442–51.
Kempe D, Kleinberg JM, Tardos É. Maximizing the spread of influence through a social network. Theory Comput. 2015;11:105–47.
Peng Z, Wang C. Discovering the most potential stars in social networks. In: Proceedings of the 3rd international conference on emerging databases. 2011.
Peng Z, Wang C. Member promotion in social networks via skyline. World Wide Web. 2014;17(4):457–92.
Börzsönyi S, Kossmann D, Stocker K. The skyline operator. In: ICDE. 2001. p. 421–30.
Tan KL, Eng PK, Ooi BC. Efficient progressive skyline computation. In: VLDB. 2001. p. 301–10.
Kossmann D, Ramsak F, Rost S. Shooting stars in the sky: an online algorithm for skyline queries. In: VLDB. 2002. p. 275–86.
Papadias D, Tao Y, Fu G, Seeger B. Progressive skyline computation in database systems. ACM Trans Database Syst. 2005;30(1):41–82.
Pei J, Jiang B, Lin X, Yuan Y. Probabilistic skylines on uncertain data. In: VLDB. 2007. p. 15–26.
Chan CY, et al. Finding k-dominant skylines in high dimensional space. In: SIGMOD. 2006. p. 503–14.
Lian X, Chen L. Monochromatic and bichromatic reverse skyline search over uncertain databases. In: SIGMOD. 2008. p. 213–26.
Mindolin D, Chomicki J. Discovering relative importance of skyline attributes. Proc VLDB Endowment. 2009;2(1):610–21.
Sun S, Huang Z, Zhong H, Dai D, Liu H, Li J. Efficient monitoring of skyline queries over distributed data streams. Knowl Inf Syst. 2010;25:575–606.
Jiang B, Pei J. Online interval skyline queries on time series. In: ICDE. 2009. p. 1036–47.
Sharifzadeh M, Shahabi C. The spatial skyline queries. In: VLDB. 2006. p. 751–62.
Sacharidis D, Papadopoulos S, Papadias D. Topologically sorted skylines for partially ordered domains. In: ICDE. 2009. p. 1072–83.
Jiang B, Pei J, Lin X, Cheung DW, Han J. Mining preferences from superior and inferior examples. In: SIGKDD. 2008. p. 390–8.
Sidiropoulos A, Gogoglou A, Katsaros D, Manolopoulos Y. Gazing at the skyline for star scientists. J Informetr. 2016;10(3):789–813.
Katz L. A new status index derived from sociometric analysis. Psychometrika. 1953;18(1):39–43.
Huang J, Jiang B, Pei J, Chen J, Tang Y. Skyline distance: a measure of multidimensional competence. Knowl Inf Syst. 2013;34(2):373–96.
Zhang S, Zheng J. An efficient potential member promotion algorithm in social networks via skyline. In: The 6th international conference on computational social networks, CSoNet 2017. 2017. p. 678–90.
JZ designed the proposed member promotion model and the experiments, conceived of the study and performed the experimental analysis. SZ conducted the experiments and drafted the manuscript. Both authors read and approved the final manuscript.
Jiping Zheng received the BS degree from Nanjing University of Information Science & Technology, Nanjing, in 2001, the MS and the Ph.D. degrees from Computer Science Department, Nanjing University of Aeronautics & Astronautics in 2004 and 2007, respectively. From 2007 to 2009, he was a Postdoctoral Fellow at the Department of Computer Science of Tsinghua University, Beijing. From February 2016 to February 2017, he was a Visiting Fellow at the School of Computer Science and Engineering of the University of New South Wales, Sydney, Australia. He is now an associate professor of the College of Computer Science & Technology, Nanjing University of Aeronautics & Astronautics. His research interests include skyline computation, sensor data management and spatial indexes, with an emphasis on data management. He has published more than 30 technical papers in these areas. He is a member of IEEE and ACM, a senior member of China Computer Federation (CCF) and Chinese Institute of Electronics (CIE).
Siman Zhang received the BS degree from Nanjing University of Aeronautics & Astronautics, Nanjing, in 2015. She is a graduate student in the College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics. Her research interests include skyline computation in social networks.
This work is partially supported by the National Natural Science Foundation of China under Grant Nos. U1733112 and 61702260, the Natural Science Foundation of Jiangsu Province of China under Grant No. BK20140826, the Fundamental Research Funds for the Central Universities under Grant No. NS2015095, and the Funding of Graduate Innovation Center in NUAA under Grant No. KFJJ20171605. A short version of this manuscript appeared in CSoNet 2017 [21]. The authors are grateful for the invitation to submit the extended version to Computational Social Networks.
College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China
Jiping Zheng & Siman Zhang
Collaborative Innovation Center of Novel Software Technology and Industrialization, Nanjing, China
Jiping Zheng
Siman Zhang
Correspondence to Jiping Zheng.
Zheng, J., Zhang, S. Adding ReputationRank to member promotion using skyline operator in social networks. Comput Soc Netw 5, 7 (2018). https://doi.org/10.1186/s40649-018-0055-9
Skyline distance
Infra-skyline