Conference Paper (PDF available): Problems of Estimating Fractal Dimension by Higuchi and DFA Methods for Signals That Are a Combination of Fractal and Oscillations. DOI:10.23919/Measurement52780.2021.9446804 Conference: 2021 13th International Conference on Measurement Hana Krakovska Anna Krakovská Stochastic fractals of the 1/f noise type are an important manifestation of the brain's electrical activity and other real-world complex systems. Fractal complexity can be successfully estimated by methods such as the Higuchi method and detrended fluctuation analysis (DFA). In this study, we show that if, as with the EEG, the signal is a combination of fractal and oscillation, the estimates of fractal characteristics will be inaccurate. On our test data, DFA overestimated the fractal dimension, while the Higuchi method led to underestimation in the presence of high-amplitude, densely sampled oscillations. Fractal Dimension of Self-Affine Signals: Four Methods of Estimation Hana Krakovská This paper serves as complementary material to a poster presented at the XXXVI Dynamics Days Europe in Corfu, Greece, on June 6th-10th in 2016. In this study, the fractal dimension ($D$) of two types of self-affine signals was estimated with the help of four methods of fractal complexity analysis. The methods include the Higuchi method for the fractal dimension computation, the estimation of the spectral decay ($\beta$), the generalized Hurst exponent ($H$), and detrended fluctuation analysis. For self-affine processes, the following relation between the fractal dimension, Hurst exponent, and spectral decay holds: $D=2-H=\frac{5-\beta}{2}$. Therefore, the fractal dimension can be obtained from any of the listed characteristics. The goal of the study is to find out which of the four methods is the most reliable. For this purpose, two types of test data with exactly given fractal dimensions ($D = 1.2, 1.4, 1.5, 1.6, 1.8$) were generated: the graph of the self-affine Weierstrass function and the statistically self-affine fractional Brownian motion. The four methods were tested on both types of time series. The effects of noise added to the data and of the data length were also investigated. The most biased results were obtained by the spectral method. The Higuchi method and the generalized Hurst exponent were the most successful. Separating Fractal and Oscillatory Components in the Power Spectrum of Neurophysiological Signal Haiguang Wen Zhongming Liu Neurophysiological field-potential signals consist of both arrhythmic and rhythmic patterns indicative of the fractal and oscillatory dynamics arising from likely distinct mechanisms. Here, we present a new method, namely the irregular-resampling auto-spectral analysis (IRASA), to separate fractal and oscillatory components in the power spectrum of a neurophysiological signal according to their distinct temporal and spectral characteristics. In this method, we irregularly resampled the neural signal by a set of non-integer factors, and statistically summarized the auto-power spectra of the resampled signals to separate the fractal component from the oscillatory component in the frequency domain. We tested this method on simulated data and demonstrated that IRASA could robustly separate the fractal component from the oscillatory component.
In addition, applications of IRASA to macaque electrocorticography and human magnetoencephalography data revealed a greater power-law exponent of fractal dynamics during sleep compared to wakefulness. The temporal fluctuation in the broadband power of the fractal component revealed characteristic dynamics within and across the eyes-closed, eyes-open and sleep states. These results demonstrate the efficacy and potential applications of this method in analyzing electrophysiological signatures of large-scale neural circuit activity. We expect that the proposed method or its future variations would potentially allow for more specific characterization of the differential contributions of oscillatory and fractal dynamics to distributed neural processes underlying various brain functions. Discrimination ability of individual measures used in sleep stages classification Kristína Susmáková The paper reviews the basic knowledge about classification of sleep stages from polysomnographic recordings. The next goal was to review and compare a large number of measures to find suitable candidates for the study of sleep onset and sleep evolution. A large number of characteristics, including relevant simple measures in the time domain, characteristics of distribution, linear spectral measures, measures of complexity and interdependency measures, were computed for polysomnographic recordings of 20 healthy subjects. In total, all-night evolutions of 818 measures (73 characteristics for various channels and channel combinations) were analysed and compared with visual scorings of experts (hypnograms). Our tests involved classification of the data into five classes (waking and four sleep stages) and 10 classification tasks to distinguish between two specific sleep stages. To discover the measures with the best decision-making ability, discriminant analysis was done with a Fisher quadratic classifier for the one-dimensional case. The most difficult decision problem, between S1 and REM sleep, was best managed by measures computed from the electromyogram, led by the fractal exponent (classification error 23%). In the simplest task, distinction between wake and deep sleep, the power ratio between the delta and beta bands of the electroencephalogram was the most successful measure (classification error 1%). The delta/beta ratio, with a mean classification error of 42.6%, was also the best single-performing measure in discrimination between all five stages. However, this error level shows that the five sleep stages cannot be satisfactorily separated by a single measure; use of a few additional characteristics is necessary. Some novel measures, especially the fractal exponent and fractal dimension, turned out to be equally successful or even superior to the conventional scoring methods in discrimination between particular states of sleep. They seem to provide a very promising basis for automatic sleep analysis, particularly in conjunction with some of the successful spectral standards. Mosaic organization of DNA nucleotides Chung-Kang Peng Sergey V Buldyrev Shlomo Havlin Ary Goldberger Long-range power-law correlations have been reported recently for DNA sequences containing noncoding regions. We address the question of whether such correlations may be a trivial consequence of the known mosaic structure ("patchiness") of DNA. We analyze two classes of controls consisting of patchy nucleotide sequences generated by different algorithms--one without and one with long-range power-law correlations.
Although both types of sequences are highly heterogeneous, they are quantitatively distinguishable by an alternative fluctuation analysis method that differentiates local patchiness from long-range correlations. Application of this analysis to selected DNA sequences demonstrates that patchiness is not sufficient to account for long-range correlation properties. Eke A, Herman P, Kocsis L, Kozak LR. Fractal characterization of complexity in temporal physiological signal. Physiol Meas 23: R1-R38. Andras Eke Peter Herman Laszlo Kocsis Lajos Rudolf Kozák This review first gives an overview on the concept of fractal geometry with definitions and explanations of the most fundamental properties of fractal structures and processes like self-similarity, power law scaling relationship, scale invariance, scaling range and fractal dimensions. Having laid down the grounds of the basics in terminology and mathematical formalism, the authors systematically introduce the concept and methods of monofractal time series analysis. They argue that fractal time series analysis cannot be done in a conscious, reliable manner without having a model capable of capturing the essential features of physiological signals with regard to their fractal analysis. They advocate the use of a simple, yet adequate, dichotomous model of fractional Gaussian noise (fGn) and fractional Brownian motion (fBm). They demonstrate the importance of incorporating a step of signal classification according to the fGn/fBm model prior to fractal analysis by showing that missing out on signal class can result in completely meaningless fractal estimates. Limitations and precision of various fractal tools are thoroughly described and discussed using results of numerical experiments on ideal monofractal signals. Steps of a reliable fractal analysis are explained. Finally, the main applications of fractal time series analysis in biomedical research are reviewed and critically evaluated. Fractal dimension characterizes seizure onset in epileptic patients Rosana Esteller George Vachtsevanos Javier Echauz Brian Litt We present a quantitative method for identifying the onset of epileptic seizures in the intracranial electroencephalogram (IEEG), a process which is usually done by expert visual inspection, often with variable results. We performed a fractal dimension (FD) analysis on IEEG recordings obtained from implanted depth and strip electrodes in patients with refractory mesial temporal lobe epilepsy (MTLE) during evaluation for epilepsy surgery. Results demonstrate a reproducible and quantifiable pattern that clearly discriminates the ictal (seizure) period from the pre-ictal (pre-seizure) period. This technique provides an efficient method for IEEG complexity characterization, which may be implemented in real time. Additionally, large volumes of IEEG data can be analyzed through compact records of FD values, achieving data compression on the order of one hundred fold. This technique is promising as a computational tool for determination of electrographic seizure onset in clinical applications. Tests for Hurst effect R. B. Davies D. S. Harte Approach to an irregular time series on the basis of the fractal theory PHYSICA D T. Higuchi We present a technique to measure the fractal dimension of the set of points (t, f(t)) forming the graph of a function f defined on the unit interval. First we apply it to a fractional Brownian function [1] which has a property of self-similarity for all scales, and we can obtain a stable and precise fractal dimension.
This technique is also applied to the observational data of natural phenomena. It does not show self-similarity over all scales but has a different self-similarity across the characteristic time scale. The present method gives us a stable characteristic time scale as well as the fractal dimension. Effects of sleep disturbances on day-time neurocognitive performance in patients with stroke (SleepCog) Roman Rosipal Spectral and nonlinear EEG characteristics of audio-visual stimulation and relaxation Michal Teplan Svorad Štolc - Determination of instant, short-term, and long-term effects of audio-visual stimulation - Recognition of different levels of relaxation with EEG - Performance comparison of different spectral and nonlinear EEG measures - Synchronization and information flow in EEG Multifractal Analysis of Chaotic Flashing-Induced Instabilities in Boiling Channels in the Natural-C... February 2008 · Nuclear science and engineering: the journal of the American Nuclear Society Christophe Demazière Christian Marcel Martin Rohde Tim van der Hagen In this paper, two-phase-flow oscillations at the natural-circulation CIRCUS test facility are investigated in a two-riser configuration. These oscillations are driven by flashing (and to some extent by geysering). For a given range of operating conditions of the facility, the oscillations exhibit erratic behavior. This study demonstrates that this behavior can be attributed to deterministic chaos. This is proven by performing a continuous wavelet transform of the measured time series. Any hidden self-similarity in the measurement is seen in the corresponding scale-space plane. The novelty of the present investigation lies with the multifractal approach used for characterizing the chaos. Both nonlinear time series analysis and wavelet-based analysis methods show that the dynamics of the flow oscillations has a multifractal structure. For the former, both Higuchi's method and detrended fluctuation analysis (DFA) were used, whereas for the latter, the wavelet-transform modulus-maxima method was used. The strange attractor corresponding to the dynamics of the system can thus be described as a set of interwoven monofractal objects. The global singular properties of the measured time series are then fully characterized by a spectrum of singularities f(a), which is the Hausdorff dimension of the set of points where the multifractal object has singularities of strength (or Hölder exponents) a. Whereas Higuchi's method and DFA allow easily determining whether the deterministic chaos has a monofractal or multifractal hierarchy, the wavelet-transform modulus-maxima method has the advantage of giving a quantitative estimation of the fractal spectrum. The time-modeling of such behavior of the facility is therefore difficult since there is sensitive dependence on initial conditions. From a regulatory point of view, such behavior of natural-circulation systems in a multiple-riser configuration has thus to be avoided. Two Decades of Search for Chaos in Brain. A short review of applications of methods of chaos theory to the investigation of brain dynamics represented by EEG is given. Automatic sleep scoring: A search for an optimal combination of measures September 2011 · Artificial Intelligence in Medicine Kristína Mezeiová The objective of this study is to find the best set of characteristics of polysomnographic signals for the automatic classification of sleep stages. A selection was made from 74 measures, including linear spectral measures, interdependency measures, and nonlinear measures of complexity that were computed for the all-night polysomnographic recordings of 20 healthy subjects. The adopted multidimensional analysis involved quadratic discriminant analysis, a forward selection procedure, and selection by the best subset procedure. Two situations were considered: the use of four polysomnographic signals (EEG, EMG, EOG, and ECG) and the use of the EEG alone. For the given database, the best automatic sleep classifier achieved approximately an 81% agreement with the hypnograms of experts. The classifier was based on the following 14 features of polysomnographic signals: the ratio of powers in the beta and delta frequency range (EEG, channel C3), the fractal exponent (EMG), the variance (EOG), the absolute power in the sigma 1 band (EEG, C3), the relative power in the delta 2 band (EEG, O2), theta/gamma (EEG, C3), theta/alpha (EEG, O1), sigma/gamma (EEG, C4), the coherence in the delta 1 band (EEG, O1-O2), the entropy (EMG), the absolute theta 2 (EEG, Fp1), theta/alpha (EEG, Fp1), the sigma 2 coherence (EEG, O1-C3), and the zero-crossing rate (ECG); however, even with only four features, we could perform sleep scoring with a 74% accuracy, which is comparable to the inter-rater agreement between two independent specialists. We have shown that 4-14 carefully selected polysomnographic features were sufficient for successful sleep scoring. The efficiency of the corresponding automatic classifiers was verified and conclusively demonstrated on all-night recordings from healthy adults. Does the Complexity of Sleep EEG Increase or Decrease with Age? Radoslav Škoviera Georg Dorffner The goal of this study is to contribute to discussions about age-related changes in electroencephalogram (EEG) complexity. Eight characteristics of complexity were evaluated for sleep EEG of 175 healthy subjects. The complexity of the sleep EEG significantly increased up to the age of about 60 years. Over 60 years, the complexity stagnated or slightly decreased. The same tendencies were
manifested during all sleep stages and also during the episodes of wakefulness.
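Several of the works summarized above rest on Higuchi's curve-length construction and on the self-affine relation $D = 2 - H$. The following is a minimal sketch of that estimator, not the authors' code; the function name, the kmax default, and the Brownian-motion test signal are illustrative choices of this summary.

```python
import numpy as np

def higuchi_fd(x, kmax=10):
    """Estimate the fractal dimension of the graph of a 1-D signal (Higuchi's method).

    For each lag k the average normalized curve length <L(k)> is computed over the
    k possible starting offsets; D is minus the slope of log<L(k)> versus log k.
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    ks = np.arange(1, kmax + 1)
    L = np.empty(kmax)
    for j, k in enumerate(ks):
        lengths = []
        for m in range(k):
            idx = np.arange(m, N, k)               # indices m, m+k, m+2k, ...
            if len(idx) < 2:
                continue
            diffs = np.abs(np.diff(x[idx])).sum()
            norm = (N - 1) / ((len(idx) - 1) * k)  # Higuchi's normalization factor
            lengths.append(diffs * norm / k)
        L[j] = np.mean(lengths)
    slope, _ = np.polyfit(np.log(ks), np.log(L), 1)
    return -slope

# Sanity check on ordinary Brownian motion (H = 0.5), for which D = 2 - H = 1.5.
rng = np.random.default_rng(0)
bm = np.cumsum(rng.standard_normal(10_000))
print(higuchi_fd(bm))   # expected to come out near 1.5
```

A DFA estimate would be sketched analogously (cumulative sum, piecewise detrending, log-log fit of the fluctuation function); the conference paper's point is precisely that an added oscillation biases the two estimators in opposite directions.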
Isovector giant dipole resonances in proton-rich Ar and Ca isotopes Ling Liu 1,, , Shuai Liu 1 , Shi-Sheng Zhang 2 , Li-Gang Cao 3,4,, College of Physics Science and Technology, Shenyang Normal University, Shenyang 110034, China School of Physics, Beihang University, Beijing 100191, China School of Mathematics and Physics, North China Electric Power University, Beijing 102206, China Key Laboratory of Beam Technology of Ministry of Education, College of Nuclear Science and Technology, Beijing Normal University, Beijing 100875, China The isovector giant dipole resonances (IVGDR) in proton-rich Ar and Ca isotopes have been systematically investigated using the resonant continuum Hartree-Fock+BCS (HF+BCS) and quasiparticle random phase approximation (QRPA) methods. The Skyrme SLy5 and density-dependent contact pairing interactions are employed in the calculations. In addition to the giant dipole resonances at energy around 18 MeV, pygmy dipole resonances (PDR) are found to be located in the energy region below 12 MeV. The calculated energy-weighted moments of PDR in nuclei close to the proton drip-line exhaust about 4% of the TRK sum rule. The strengths decrease with increasing mass number in each isotopic chain. The transition densities of the PDR states show that motions of protons and neutrons are in phase in the interiors of nuclei, while the protons give the main contribution at the surface. By analyzing the QRPA amplitudes of proton and neutron 2-quasiparticle configurations for a given low-lying state, we find that only a few proton configurations give significant contributions. They contribute about 95% to the total QRPA amplitudes, which indicates that the collectivity of PDR states is not strong in proton-rich nuclei in the present study. pygmy dipole resonances , proton-rich nuclei , Skyrme energy density functional [1] I. Daoutidis and S. Goriely, Phys. Rev. C 86, 034328 (2012) [2] N. Tsoneva et al., Phys. Rev. C 91, 044318 (2015) [3] N. Paar et al., Rep. Prog. Phys. 70, 691 (2007) [4] A. Klimkiewicz et al., Phys. Rev. C, 76 76, 051603(R) (0516) [5] L. Trippa et al., Phys. Rev. C 77, 061304(R) (0613) [6] L. G. Cao and Z. Y. Ma, Chin. Phys. Lett. 25, 1625 (2008) [7] A. Carbone et al., Phys. Rev. C 81, 041301(R) (0413) [8] Z. Zhang and L. W. Chen, Phys. Rev. C 90, 064317 (2014) [9] K. Yoshida and N. V. Giai, Phys. Rev. C 78, 014305 (2008) [10] N. Paar et al., Phys. Rev. C 67, 034312 (2003) [11] E. Litvinova et al., Phys. Rev. C 79, 054312 (2009) [12] L. G. Cao et al., Comm. Theor. Phys. 36, 178 (2001) [13] L. G. Cao and Z. Y. Ma, Mod. Phys. Lett. A 19, 2845 (2004) [14] L. G. Cao and Z. Y. Ma, Phys. Rev. C 71, 034305 (2005) [15] J. Liang et al., Phys. Rev. C 75, 054320 (2007) [16] D. Yang et al., Chin. Phys. C 37, 124102 (2013) [17] X. W. Sun, J. Chen, and D. H. Lu, Chin. Phys. C 42, 014101 (2018) [18] H. L. Ma et al., Phys. Rev. C 93, 014317 (2016) [19] C. Tao et al., Phys. Rev.C 87, 014621 (2013) [20] G. Co' et al., Phys. Rev. C 87, 034305 (2013) [21] A. Leistenschneider et al., Phys. Rev. Lett. 86, 5442 (2001) [22] J. Gibelin et al., Phys. Rev. Lett. 101, 212503 (2008) [23] O. Wieland et al., Phys. Rev. Lett. 102, 092502 (2009) [24] P. Adrich et al., Phys. Rev. Lett. 95, 132501 (2005) [25] Z. Z. Ren et al., Phys. Rev. C 53, 572(R) (1996) [26] E. Ryberg et al., Phys. Rev. C 89, 014325 (2014) [27] S. S. Zhang et al., Eur. Phys. J. A 49, 77 (2013) [28] J. Meng and P. Ring, Phys. Rev. Lett. 77, 3963 (1996) [29] S. S. Zhang et al., Phys. Lett. B 730, 30 (2014) [31] X. Z. 
Cai et al., Phys. Rev. C 65, 024610 (2002) [32] M. Pfützner et al., Rev. Mod. Phys. 84, 567 (2012) [33] N. Paar et al., Phys. Rev. Lett. 94, 182501 (2005) [34] N. Paar et al., Phys. Lett. B 624, 195 (2005) [35] Z. Y. Ma and Y. Tian, Sci. China Phys. Mech. Astron. 54, 49 (2011) [36] C. Barbieri et al., Phys. Rev. C 77, 024304 (2008) [38] Y. Kim and P. Papakonstantinou, Eur. Phys. J. A 52, 176 (2016) [39] H. Lv et al., Chin. Phys. Lett. 34, 082101 (2017) [40] J. C. Yang et al., Nucl. Instrum. Methods B 317, 263 (2013) [41] E. Chabanat et al., Nucl. Phys. A 635, 231 (1998) [42] P. Ring and P. Schuck, The Nuclear Many-Body Problem (Springer-Verlag, New York, 1980) [43] J. Dobaczewski et al., Nucl. Phys. A 422, 103 (1984) [44] J. Dobaczewski et al., Phys. Rev. C 53, 2809 (1996) [45] H. Kucharek and P. Ring, Z. Phys. A 339, 23 (1991) [46] N. Sandulescu et al., Phys. Lett. B 394, 6 (1997) [47] N. Sandulescu et al., Phys. Rev. C 61, 061301(R) (2000) [48] A. T. Kruppa et al., Phys. Rev. C 63, 044324 (2001) [49] L. G. Cao and Z. Y. Ma, Eur. Phys. J. A 22, 189 (2004) [50] M. Grasso et al., Phys. Rev. C 64, 064321 (2001) [52] L. G. Cao et al., Phys. Rev. C 86, 054313 (2012) [53] M. Wang et al., Chin. Phys. C 41, 030003 (2017) [54] X. W. Xia et al., At. Data Nucl. Data Tables 121, 1 (2018) [55] G. Colò et al., Comput. Phys. Commun. 184, 142 (2013) [56] X. Roca-Maza et al., Phys. Rev. C 85, 024601 (2012) [57] D. Vretenar et al., Phys. Rev. C 85, 044317 (2012) [58] N. Paar et al., Phys. Rev. Lett. 103, 032502 (2009) [59] J. Endres, E. Litvinova et al., Phys. Rev. Lett. 105, 212503 (2010) [60] D. Vretenar et al., Nucl. Phys. A 692, 496 (2001) Corresponding author: Ling Liu, [email protected] Corresponding author: Li-Gang Cao, [email protected] Abstract: The isovector giant dipole resonances (IVGDR) in proton-rich Ar and Ca isotopes have been systematically investigated using the resonant continuum Hartree-Fock+BCS (HF+BCS) and quasiparticle random phase approximation (QRPA) methods. The Skyrme SLy5 and density-dependent contact pairing interactions are employed in the calculations. In addition to the giant dipole resonances at energy around 18 MeV, pygmy dipole resonances (PDR) are found to be located in the energy region below 12 MeV. The calculated energy-weighted moments of PDR in nuclei close to the proton drip-line exhaust about 4% of the TRK sum rule. The strengths decrease with increasing mass number in each isotopic chain. The transition densities of the PDR states show that motions of protons and neutrons are in phase in the interiors of nuclei, while the protons give the main contribution at the surface.
By analyzing the QRPA amplitudes of proton and neutron 2-quasiparticle configurations for a given low-lying state, we find that only a few proton configurations give significant contributions. They contribute about 95% to the total QRPA amplitudes, which indicates that the collectivity of PDR states is not strong in proton-rich nuclei in the present study. Recently, the study of giant dipole resonances has been extended to unstable nuclei, as radioactive ion beam facilities have become available around the world. A new dipole excitation in the low energy region has been observed experimentally, called the pygmy dipole resonance (PDR). The PDR in neutron-rich nuclei is explained as a vibration in which the excess neutrons oscillate against a proton–neutron saturated core. The existence of the PDR in unstable nuclei may play a very important role in nuclear astrophysics because the PDR can affect the neutron-capture reaction cross sections contributing to nucleosynthesis and the abundance distribution of elements in the stars [1-3]. Moreover, a strong linear correlation between the PDR sum rule and the neutron skin has been found theoretically, which encourages constraints on the neutron skin and the density dependence of symmetry energy by the measured PDR strengths [4-8]. The PDR in neutron-rich nuclei has been studied extensively using different theoretical approaches, such as the deformed quasiparticle random phase approximation (QRPA) method based on Skyrme energy density functional theory [9], the relativistic QRPA [10], the relativistic quasiparticle time blocking approximation [11], the relativistic linear response theory [12-16], the relativistic deformed RPA [17], the shell model [18], the isospin-dependent quantum molecular dynamics model [19], and the RPA method with Gogny interaction [20]. Experimentally, the PDR in $ ^{20,22} $O, $ ^{26} $Ne, $ ^{68} $Ni, and $ ^{130,132} $Sn has been discovered and confirmed by different groups [21-24]. In proton-rich nuclei, a proton halo or skin is predicted in some cases [25-27]. The orbitals in the continuum play an important role in forming the halo structure [28-30]. Some proton halo or skin nuclei have been observed experimentally [31]. The study of proton-rich nuclei is also very important because they provide complementary insights to strong interactions, exhibit new forms of radioactivity, and are key for nucleosynthesis processes in astrophysics [32]. However, proton-rich nuclei have been found only for Z $ \leqslant $ 50 experimentally because of the Coulomb repulsive interaction. The proton drip line is much closer to the $ \beta $-stability line, and a proton halo or skin is possible only for the lighter isotopes. These reasons seem to disfavor the existence of a proton PDR in nuclei. No experimental observation of a proton PDR has been reported yet, and only a few theoretical studies have paid attention to the proton PDR using different models. The evolution of the low-lying E1 strength in proton-rich nuclei has been analyzed in the framework of relativistic QRPA [33-35]. In Ref. [36], the authors explored the PDR in proton-rich Ar isotopes using the unitary correlator operator method. The shell model has also been used to study the PDR in proton-rich nuclei [37]. A continuum random-phase approximation (CRPA) was used to investigate the PDR states in N = 20 isotones [38]. Recently, we applied the Skyrme HF+BCS plus QRPA to study the properties of the PDR in proton-rich $ ^{17,18} $Ne [39]. 
Knowledge of dipole excitations in nuclei towards the proton drip-line is rather limited. Measurement of the proton PDR should be possible in the near future at new facilities currently under construction, such as HIAF [40] in China. In this work, we will explore the properties of dipole excitations of proton-rich Ar and Ca nuclei in a fully self-consistent approach. The ground states of these nuclei will be calculated within the Skyrme Hartree-Fock plus Bardeen-Cooper-Schrieffer (HF+BCS) approach, where the pairing correlations, including the contribution of resonant states in continuum, will be treated properly. The QRPA is then applied to obtain the excited dipole states of $ ^{32,34} $Ar and $ ^{34,36} $Ca. The properties of the PDR, including the excitation energies, energy (non-energy) weighted moments, transition densities, and collectivity, are analyzed in detail using our theoretical model. The paper is organized as follows. The Skyrme HF+BCS and QRPA methods used in this work are briefly introduced in Section II. The properties of ground states and excited dipole states are presented and discussed in Section III. Finally, a summary and some remarks are given in Section IV. II. THEORETICAL FRAMEWORK The standard form of the Skyrme interaction and its energy density functional can be found in Ref. [41]. Within the Skyrme HF+BCS approximation, the quasiparticle wave functions and their quasiparticle energies are obtained from the self-consistent equation $ \begin{array}{l} \left(- \nabla\dfrac{\hbar^2}{2m^*_b( r)} \cdot \nabla +U_b( r) \right)\psi_b( r) = \varepsilon_b\psi_b( r), \end{array} $ where $U_b( r) = V_c^b( r)+\delta_{b,\rm proton}V_{\rm coul}( r)-i V_{\rm so}^b( r)\cdot( \nabla\times \sigma)+ V_{\rm pair}^b( r)$, and $ V_c^b( r) $, $V_{\rm coul}( r)$, $V_{\rm so}^b( r)$ as well as $V_{\rm pair}^b( r)$ are the nuclear central, Coulomb, spin-orbit, and pairing fields, respectively. Based on the calculated ground states, one can build the 2-quasiparticle (2${\rm qp}$) configurations for the QRPA calculations. Here, we will briefly summarize the formulas for the QRPA calculations. The well-known QRPA method [42] in matrix form is given by $ \begin{array}{l} \left( \begin{array}{cc} A & B \\ B^* & A^* \end{array} \right) \left( \begin{array}{c} X^b \\ Y^b \end{array} \right) = E_b \left( \begin{array}{cc} 1 & 0\\ 0 & -1 \end{array} \right) \left( \begin{array}{c} X^b \\ Y^b \end{array} \right), \end{array} $ where $ E_b $ is the eigenvalue of the $ b $-th QRPA state and X$ ^b $, Y$ ^b $ are the corresponding forward and backward 2qp amplitudes, respectively. The dipole strength in QRPA can be calculated as follows: $ \begin{aligned}[b] B(EJ,E_b) =& \frac{1}{2J+1}\\&\times\left|\sum_{\mu\mu'}\left[X^b_{\mu\mu'} + Y^b_{\mu\mu'}\right] \langle \mu \|\hat{O}_J\| \mu' \rangle (u_\mu \nu_{\mu'}+ \nu_\mu u_{\mu'} )\right|^2, \end{aligned} $ where $ \nu $ and $ u $ are the occupation numbers of the quasiparticle levels. The external field for isovector electric dipole excitation is defined as $ \begin{aligned}[b] \hat{O}_{\mu}^{J = 1} = e\frac{N}{A}\sum_{i}^Z r_iY_{1\mu}(\hat{r}_i)-e\frac{Z}{A}\sum_{i}^N r_iY_{1\mu}(\hat{r}_i). \end{aligned} $ The discrete spectra are averaged with the Lorentzian distribution $ \begin{aligned}[b] R(E) = \sum_{i} B(EJ,E_i)\frac{1}{\pi}\frac{\Gamma/2}{(E-E_i)^2+\Gamma^2/4}, \end{aligned} $ where the width of the Lorentz distribution is taken to be 0.5 MeV in the present calculations. 
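As a side note, the Lorentzian averaging in the last expression is straightforward to reproduce once the discrete QRPA energies and strengths are in hand. The sketch below uses invented values for the $E_i$ and $B(E1,E_i)$, not the published spectrum, together with the 0.5 MeV width quoted above.

```python
import numpy as np

def smear_strength(E_grid, E_states, B_states, gamma=0.5):
    """Smear discrete strengths B(E1, E_i) with a Lorentzian of width gamma (MeV):
    R(E) = sum_i B_i * (1/pi) * (gamma/2) / ((E - E_i)^2 + gamma^2/4)."""
    E_grid = np.asarray(E_grid)[:, None]       # shape (nE, 1)
    E_states = np.asarray(E_states)[None, :]   # shape (1, n_states)
    lorentz = (gamma / (2.0 * np.pi)) / ((E_grid - E_states) ** 2 + gamma ** 2 / 4.0)
    return lorentz @ np.asarray(B_states)      # R(E) on the grid, e^2 fm^2 / MeV

# Illustrative (not the paper's) discrete spectrum: a PDR-like state near 10 MeV
# and a few GDR-like states near 18 MeV.
E_i = np.array([9.9, 17.0, 18.5, 19.2])        # MeV
B_i = np.array([0.3, 1.5, 2.5, 1.8])           # e^2 fm^2
E = np.linspace(0.0, 40.0, 2001)
R = smear_strength(E, E_i, B_i, gamma=0.5)
```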
After solving the QRPA equation, various moments are defined as $ \begin{aligned} m_k = \int E^kR(E) {\rm d} E. \end{aligned} $ III. RESULTS AND DISCUSSION A. Ground-state properties of proton-rich Ar and Ca isotopes First we will explore the ground state properties of proton-rich Ar and Ca isotopes. As pointed out in Refs. [43,44], the HF+BCS method is not well suited to describe nuclei close to the drip line because the continuum states in weakly bound nuclei are not correctly treated. Because of a nonzero occupation probability of quasibound states, there appears an unphysical gas of neutrons surrounding the nucleus [43,44]. The contribution of the coupling to the continuum would be prominent when the nucleus is close to the drip line, therefore a proper treatment of the continuum becomes more important. To do so, one can perform the calculations with the non-relativistic Hartree-Fock-Bogoliubov (HFB) [43,44] or relativistic Hartree-Bogoliubov (RHB) [45] method. On the other hand, it has also been pointed out that pairing correlations could be described well by the simple HF+BCS theory if single-particle states in the continuum are properly treated [46-49]. This method is called the resonant continuum HF+BCS (HF+BCSR) approximation. It has been shown [50] that the resonant continuum HF+BCS approximation could reproduce the pairing correlation energies predicted by the continuum HFB approach up to the drip line. To investigate the ground state properties of proton-rich Ar and Ca isotopes, we have extended the Skyrme HF+BCS method of Eq. (1) to the resonant continuum Skyrme HF+BCS by properly including the contribution of continuum resonant states. The equations are solved in coordinate space. We introduce single-particle resonant states into the pairing gap equations instead of the discretized continuous states. The wave functions of resonant states are obtained by imposing a scattering boundary condition [51]. More details of the resonant continuum HF+BCS method are given in Refs. [46-50]. In the present study, a spherical shape is assumed for proton-rich nuclei. The Skyrme interaction SLy5 is adopted [41]. For the pairing correlations, we adopt a mixed type density-dependent contact pairing interaction in our calculations [52]. The strength V0 is adjusted to reproduce the neutron or proton gaps in $ ^{34} $Ar and $ ^{36} $Ca. For the neutron and proton pairing in $ ^{34} $Ar, V0 is 560.9 MeVfm$ ^3 $ and 619.1 MeVfm$ ^3 $, respectively. It is 566.2 MeVfm$ ^3 $ for the neutron pairing in $ ^{36} $Ca. For the pairing window, we choose the states up to 1$ f_{7/2} $ for proton-rich Ar and Ca isotopes. The single-particle resonant states in the continuum are investigated in terms of the S-matrix method. The resonant state is characterized by the phase shift crossing $ \pi/2 $, where the scattering cross section of the corresponding partial wave reaches its maximum. The width of a resonant state is the full width at half maximum (FWHM) for the corresponding partial wave scattering cross section. The energies and widths of the calculated single-particle resonances for proton-rich Ar and Ca are listed in Table 1. Although the neutron resonant states 1$ f_{5/2} $ and 1$ g_{9/2} $ are not included in the pairing window, we also put the calculated results in Table 1. 
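The resonance criterion just described (phase shift rising through $\pi/2$, width taken as the FWHM of the partial-wave cross section) can be illustrated with a short numerical sketch. The Breit-Wigner phase shift used as input below is only a stand-in for an actual scattering calculation, and the numbers are not those of Table 1.

```python
import numpy as np

def resonance_from_phase_shift(E, delta):
    """Locate E_r where the phase shift crosses pi/2 and estimate the width as the
    FWHM of the partial-wave cross section, which is proportional to sin^2(delta)."""
    E = np.asarray(E)
    delta = np.asarray(delta)
    i = np.argmax(delta >= np.pi / 2)                        # first grid point past pi/2
    E_r = np.interp(np.pi / 2, delta[i - 1:i + 1], E[i - 1:i + 1])
    sigma = np.sin(delta) ** 2                               # peaks at 1 on resonance
    above = E[sigma >= 0.5]
    return E_r, above[-1] - above[0]                         # (E_r, Gamma)

# Test on a Breit-Wigner phase shift with hypothetical parameters:
E = np.linspace(0.01, 10.0, 5000)          # MeV
E_r_true, gamma_true = 5.2, 0.17           # MeV
delta = np.arctan2(gamma_true / 2, E_r_true - E)   # rises from ~0 through pi/2 at E_r
print(resonance_from_phase_shift(E, delta))        # approximately (5.2, 0.17)
```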
Due to the sufficiently high centrifugal barrier and Coulomb barrier, the widths of the proton resonant states 1$ d_{3/2} $ and 1$ f_{7/2} $ are rather narrow, especially for 1$ d_{3/2} $ in $ ^{30} $Ar and $ ^{32} $Ca, as well as 1$ f_{7/2} $ in $ ^{32,34} $Ar and $ ^{34,36} $Ca. These states are rather stable and have bound state characteristics. As an example, the single-particle levels for $ ^{30} $Ar and $ ^{32} $Ca are shown in Fig. 1 together with the central potentials. The states with positive energy in the continuum in Fig. 1 are the resonant states we have found. We plot the proton densities of $ ^{30} $Ar and $ ^{32} $Ca in Fig. 2, where the solid, short-dashed, and short-dotted curves are obtained in the HF+BCS approximation by choosing the box size as 16, 20, and 24 fm, respectively, and the short dash-dotted curves are produced in the HF+BCSR approximation. It is observed that the tail of the density depends on the box size in the HF+BCS approximation, and an unphysical particle gas may appear in exotic nuclei, whereas the behaviours of proton densities are rather stable when one performs the resonant continuum HF+BCS calculations. Nucleus proton neutron Nucleus proton neutron state E$_p$ state E$_n$ state E$_p$ state E$_n$ $^{30}$ Ar 1d$_{3/2}$ 0.202+i0.000 1f$_{5/2}$ 0.010+i0.000 $^{32}$ Ca 1d$_{3/2}$ 0.489+i0.000 1g$_{9/2}$ 2.455+i0.026 1f$_{7/2}$ 5.226+i0.166 1g$_{9/2}$ 4.544+i0.249 1f$_{7/2}$ 5.158+i0.123 $^{32}$ Ar 1f$_{7/2}$ 3.097+i0.009 1f$_{5/2}$ 0.448+i0.001 $^{34}$ Ca 1f$_{7/2}$ 3.088+i0.006 1g$_{9/2}$ 2.764+i0.040 1g$_{9/2}$ 4.825+i0.305 Table 1. Energies and widths of single-particle resonant states in proton-rich Ar and Ca isotopes. The results are calculated with the SLy5 parameter set. All energies are in MeV. Figure 1. Proton and neutron single-particle levels for $ ^{30} $Ar (a) and $ ^{32} $Ca (b). All results are obtained in the Skyrme HF+BCSR calculations with the SLy5 parameter set. Figure 2. (color online) Proton density distributions in proton-rich nuclei $ ^{30} $Ar (a) and $ ^{32} $Ca (b). The results are obtained in the Skyrme HF+BCS and HF+BCSR approximation with the SLy5 parameter set. In Table 2 we show the calculated ground state properties for proton-rich Ar and Ca isotopes with the Skyrme HF+BCSR approximation, including the total binding energies, one-proton and two-proton separation energies, neutron and proton Fermi energies, root-mean-square radii (rms radii), and charge radii. Values in parentheses are the corresponding experimental data from Refs. [53,54]. It is found that the predicted total binding energies for most of the nuclei are about 3-5 MeV larger than the experimental data. We have checked that similar results are obtained by using other Skyrme interactions. The one-proton and two-proton separation energies provide information on whether a nucleus is stable against one or two proton emissions, and thus define the proton drip lines. One can see from the table that the calculated separation energies of $ ^{30} $Ar and $ ^{32} $Ca become negative, which means that these two nuclei are unbound and stay at the proton drip line, so we will not include these two nuclei in the IVGDR calculations below. The Fermi energies of the proton-rich Ca nuclei in Table 2 are obtained simply by using the average value of the energies of the last hole state and first particle state. They are not calculated using the BCS approximation because proton number 20 is a magic number. 
The Fermi energies of protons change from negative to positive as the nuclei become more and more proton-rich. As we can see from Table 2, the calculated proton Fermi energies of $ ^{30} $Ar and $ ^{32} $Ca are positive, which again means these two nuclei are unbound and proton decay may occur. Although the proton Fermi energy obtained with the simple approach in $ ^{34} $Ca is positive, the energy of the last occupied proton level is -1.48 MeV, indicating that this nucleus is weakly bound. The neutron and proton density distributions in $ ^{32,34} $Ar and $ ^{34,36} $Ca are displayed in Fig. 3. It is clearly seen that the proton density distributions are much extended outside compared to those of neutrons. The predicted proton rms radii are much larger than those of neutrons, which suggests that a proton skin could be formed in these nuclei. The experimental charge radii of $ ^{32,34} $Ar are reproduced well by our theoretical model. $E_B$ S$_p$ S$_{2p}$ $\lambda_n$ $\lambda_p$ $r_n$ $r_p$ $r_c$ $^{30}$ Ar 213.2 −0.15 −2.10 −19.79 0.28 3.017 3.344 3.437 $^{32}$ Ar 249.9(246.4) 2.76 2.58 −17.39 −1.87 3.092 3.315 3.409(3.346) $^{32}$ Ca 209.9 −1.44 −3.32 −21.82 1.49 3.060 3.459 3.550 $^{34}$ Ca 250.1(244.9) 0.42 0.12 −19.53 0.51 3.134 3.410 3.502 $^{36}$ Ca 285.4(281.4) 2.86 4.06 −17.13 −1.30 3.222 3.401 3.493 Table 2. The calculated ground state properties of proton-rich Ar and Ca nuclei, including the total binding energies ($E_B$), one-proton separation energies (S$_p$) and two-proton separation energies (S$_{2p}$), neutron and proton Fermi energies ($\lambda_n$,$\lambda_p$), neutron and proton rms radii ($r_n$,$r_p$) and charge radii ($r_c$). Values in parentheses are the corresponding experimental data [53,54]. Energies are in MeV and radii are in fm. Figure 3. (color online) Neutron and proton density distributions in proton-rich $ ^{32,34} $Ar and $ ^{34,36} $Ca nuclei. All results are obtained in the Skyrme HF+BCSR approximation with the SLy5 parameter set. B. Properties of 1$ ^- $ excited states in proton-rich Ar and Ca isotopes To obtain the dipole excitations of proton-rich Ar and Ca nuclei, we have performed the fully self-consistent QRPA calculations by using the SLy5 Skyrme interaction. There is no approximation in the residual interaction, since all its terms are considered the same as that used in the ground state calculations. The details of the residual interaction can be found in Ref. [55]. After solving the HF+BCS equations in coordinate space, we build up a model space of two-quasiparticle configurations for dipole excitation, and then solve the QRPA matrix equation in that space. The $ \Delta_n = 12 $ shell cut-off is adopted to build up the QRPA model space, which is large enough that the isovector dipole energy-weighted moment exhausts practically 99.9% of the double-commutator (DC) value. Since we discretize the continuum by using the box approximation, the QRPA results may depend on the box size. In Table 3, we show the calculated total QRPA E1 strengths, energy-weighted moments and centroid energies in the energy region 0 $ < E \leqslant $40 MeV for $ ^{32} $Ar, calculated by setting the box size as 16, 20, 24 and 28 fm, respectively. The results in lower (0 $ \leqslant $ E $ \leqslant $ 12 MeV) and higher (12 $ < $ E $ \leqslant $ 40 MeV) energy regions are also presented. The strength distributions are plotted in Fig. 4. 
As we know, the properties of the ground state are calculated with the HF+BCSR method, and are independent of the box size since the widths of resonant states in the continuum are rather narrow, and their energies are stable when using different box sizes. For the other states in the continuum used to build the QRPA quasiparticle configurations, like the nonresonant states, these states are obtained by discretizing the continuum with the box approximation, and their eigenvalues and wavefunctions keep changing with increasing box size. The changes may affect the calculated properties of dipole states, as shown in Fig. 4; the distributions of the calculated strengths are slightly different, and the seesaw structure of the states around 12 MeV may affect the values of the lower and higher energy regions shown in Table 3. Anyway, better converged results are obtained if we use a box size larger than 20 fm. For example, the calculated energy-weighted moment in 0 $ < E \leqslant $40 MeV for R = 24 fm is 129.71 e$ ^2 $fm$ ^2 $MeV, and the DC value is 130.95 e$ ^2 $fm$ ^2 $MeV, which exhausts about 99.1% of the DC value. So all results are calculated by setting the box size as 24 fm in this study. R/fm 0 $< E \leqslant$ 40 MeV 0 $ < E \leqslant$ 12 MeV 12 $< E \leqslant$ 40 MeV $m_{0}$ $m_{1}$ $E_c$ $m_{0}$ $m_{1}$ $E_c$ $m_{0}$ $m_{1}$ $E_c$ 16 6.996 128.510 18.34 0.369 3.628 9.83 6.627 124.882 18.85 20 6.992 128.391 18.36 0.447 4.580 10.26 6.545 123.811 18.92 Table 3. Total QRPA E1 strengths $m_{0}$ (in e$^2$fm$^2$), energy-weighted moments $m_{1}$(in e$^2$fm$^2$MeV) and centroid energies $E_c$ (in MeV) in the energy region 0 $< E \leqslant$40 MeV for $^{32}$Ar, calculated by setting the box size as 16, 20, 24 and 28 fm, respectively. The values for lower (0 $ < E \leqslant$12 MeV) and higher (12 $ < E \leqslant$40 MeV) energy regions are also presented. Figure 4. (color online) The QRPA strength distributions of $ ^{32} $Ar calculated with different box size R. The low-lying strengths between 5 and 15 MeV are also displayed in the insert. The width of the Lorentz distribution is set to be 0.5 MeV. In Fig. 5 we show the calculated dipole strength distributions of proton-rich Ar and Ca nuclei, denoted by solid lines. The discrete QRPA peaks have been smeared out by using a Lorentzian function. Pronounced peaks located at energy around 18 MeV for proton-rich Ar and Ca nuclei are found, which correspond to the normal GDR strengths. In the energy region below 12 MeV, there are some low-lying strengths which appear for all selected proton-rich nuclei. For $ ^{32} $Ar and $ ^{34} $Ca nuclei, the low-lying strengths are more notable. They are the so-called PDR strengths which appear in unstable nuclei. Figure 5. The QRPA strength distributions of proton-rich Ar (a,b) and Ca (c,d) nuclei for isovector dipole excitation. The width of the Lorentz distribution is set to be 0.5 MeV. In Table 4 we show the energy (non-energy)-weighted moment $ m_1 $ ($ m_0 $) and the centroid energies of dipole strengths of Ar and Ca nuclei calculated in the QRPA. We separate the energy region into two parts: the lower energy part (0 MeV $ < E \leqslant $ 12 MeV) and the higher energy part (12 MeV $ < E \leqslant $ 40 MeV), where the PDR and GDR strengths are mainly distributed. The classical TRK dipole sum rules for those nuclei are also given in the table. 
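The quantities reported in Table 4 (moments $m_0$ and $m_1$, centroid energy $E_c = m_1/m_0$, and the fraction of the TRK sum rule) follow directly from the discrete strengths; summing over states, as in the sketch below, is equivalent to integrating the smeared $R(E)$ apart from the Lorentzian tails. The strengths are invented for illustration, and the prefactor in the TRK estimate ($\approx 14.9\,NZ/A$ e$^2$fm$^2$MeV) is the standard classical value, quoted here as an assumption rather than taken from the paper.

```python
import numpy as np

def moments(E_states, B_states, e_min, e_max):
    """m_k = sum_i E_i^k B_i over states with e_min < E_i <= e_max (k = 0, 1),
    plus the centroid energy E_c = m_1 / m_0."""
    E = np.asarray(E_states)
    B = np.asarray(B_states)
    sel = (E > e_min) & (E <= e_max)
    m0 = B[sel].sum()
    m1 = (E[sel] * B[sel]).sum()
    return m0, m1, m1 / m0

def trk_sum_rule(N, Z):
    # classical Thomas-Reiche-Kuhn estimate, roughly 14.9 * NZ/A in e^2 fm^2 MeV
    return 14.9 * N * Z / (N + Z)

# Illustrative discrete spectrum (not the published one) for a nucleus like 32Ar
E_i = np.array([9.6, 10.8, 16.9, 18.2, 19.0])   # MeV
B_i = np.array([0.20, 0.13, 2.1, 2.6, 2.0])     # e^2 fm^2

m0_pdr, m1_pdr, Ec_pdr = moments(E_i, B_i, 0.0, 12.0)    # PDR window
m0_gdr, m1_gdr, Ec_gdr = moments(E_i, B_i, 12.0, 40.0)   # GDR window
print("PDR fraction of TRK:", m1_pdr / trk_sum_rule(N=14, Z=18))
```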
It is clearly seen that the values of energy (non-energy)-weighted moment $ m_1 $ ($ m_0 $) of PDR states decrease in each isotopic chain as the mass numbers of nuclei increase. For example, the energy (non-energy)-weighted moment $ m_1 $ ($ m_0 $) of PDR states in $ ^{32} $Ar is 3.204 e$ ^2 $fm$ ^2 $MeV (0.333 e$ ^2 $fm$ ^2 $), while the value in $ ^{34} $Ar is about 2.971 e$ ^2 $fm$ ^2 $MeV (0.274 e$ ^2 $fm$ ^2 $). The energy-weighted moments $ m_1 $ of PDR states in these nuclei exhaust about 2 to 4 percent of the TRK sum rule. For the nucleus $ ^{32} $Ar ($ ^{34} $Ca), it is about 3% (4%). For the GDR states, the energy (non-energy)-weighted moment $ m_1 $ ($ m_0 $) increases in each isotopic chain as the mass numbers increase. The energy weighted moments for all selected nuclei exhaust about 107% of the TRK sum rule. The calculated centroid energies of the GDR are distributed at an energy around 18.5 MeV for all nuclei; the energy is smaller for the heavier nucleus in each isotopic chain. For the PDR, the energies are around 10.0 MeV; it is larger for the heavier nucleus in each isotopic chain. 0$ < E \leqslant$ 12 12$< E \leqslant$ 40 S$_{\text{TRK}}$ $m_{0}$ $m_{1}$ $E_c$ $m_{0}$ $m_{1}$ $E_c$ $^{32}$ Ar 0.333 3.204 9.63 6.705 126.51 18.87 117.33 $^{34}$ Ar 0.274 2.971 10.82 7.261 135.49 18.66 126.21 $^{34}$ Ca 0.548 4.783 8.73 6.976 128.60 18.43 122.71 Table 4. The energy (non-energy)-weighted moments $m_1$ ($m_0$) and the centroid energies of dipole strengths in lower and higher energy regions. The values in the last column are obtained from the classical TRK dipole sum rule (e$^2$fm$^2$MeV). The units are e$^2$fm$^2$ and e$^2$fm$^2$MeV for $m_0$ and $m_1$, respectively. Energies are in MeV. The calculated proton and neutron transition densities of the PDR states (marked with energies less or around 12 MeV) and GDR states (marked with energies larger than 15 MeV) are shown in Fig. 6 (for Ar isotopes) and Fig. 7 (for Ca isotopes). For the PDR states we can see from the figures that the protons and neutrons move in phase in the nuclear interior, while the contribution at the surface comes mainly from protons. This shows that the low-lying states in proton-rich nuclei are typical pygmy resonances, similar to what has been found in neutron-rich nuclei. The nature of the PDR states in neutron-rich nuclei has been discussed extensively in several publications [56-59]. For the nature of PDR states in proton-rich nuclei, one may need to analyze the properties of isoscalar dipole resonances as done in Refs. [56-59]. This needs more work and is not discussed further in the present study. The transition densities for the GDR states show that the motions of protons and neutrons are out of phase, and there is almost no contribution from either protons or neutrons in the exterior region, which is the typical isovector GDR mode. Figure 6. Calculated proton and neutron transition densities for the PDR states and GDR states in proton-rich $ ^{32,34} $Ar. The SLy5 effective interaction is employed in the calculations. Figure 7. Same as in Fig. 6. Calculated proton and neutron transition densities for the PDR states and GDR states in proton-rich $ ^{34,36} $Ca. The SLy5 effective interaction is employed in the calculations. The QRPA amplitudes of proton and neutron 2qp configuration for a given excited state b are expressed as $ \begin{array}{l} \xi^b_{2qp} = |X^b_{2qp}|^2-|Y^b_{2qp}|^2 \end{array} $ and the normalization condition $ \begin{array}{l} \sum_{2qp}\xi^b_{2qp} = 1. 
\end{array} $ In Table 5 we show the largest QRPA amplitudes of proton and neutron 2qp configurations for the given excited states in the proton-rich Ar and Ca nuclei, which can help us in understanding the collectivity of dipole states. For GDR states, as we can see, there are more than 10 2qp configurations with amplitude larger than 1%, which means a coherent superposition of many 2qp configurations and shows the collective excitation of the GDR states. For the PDR states, however, only a few configurations give a significant contribution. For proton-rich Ar nuclei, the proton configurations ($ 3p_{3/2}2s_{1/2}^{-1})^\pi $, ($ 4p_{1/2}1d_{3/2}^{-1})^\pi $ and ($ 3p_{1/2}1d_{3/2}^{-1})^\pi $ contribute more than 86% to the total QRPA amplitude for the first PDR state, as shown in Table 5. For the second PDR states in Table 5, the main contribution to the total QRPA amplitude is from protons in $ 2s_{1/2} $ and $ 1d_{3/2} $ orbitals, which contribute more than 90% to the total QRPA amplitude. For the PDR in proton-rich Ca nuclei, the proton 2qp configurations ($ 3p_{3/2}2s_{1/2}^{-1})^\pi $, ($ 3f_{5/2}1d_{3/2}^{-1})^\pi $ and $ (4p_{\frac{1}{2}}2s_{\frac{1}{2}}^{-1})^\pi $ dominate in the QRPA amplitudes. For example, in $ ^{34} $Ca and $ ^{36} $Ca, the proton 2qp configuration ($ 3p_{3/2}2s_{1/2}^{-1})^\pi $ contributes about 83% to the total QRPA amplitude for the first 1$ ^- $ state at energies E = 9.38 MeV and E = 10.55 MeV, respectively. For the second notable PDR states shown in Table 5, the main contribution also comes mainly from the protons in $ 2s_{1/2} $ and $ 1d_{3/2} $ orbitals, as for the Ar isotopes, and they contribute more than 99% to the total QRPA amplitude. All those results show that the PDR states in proton-rich Ar and Ca nuclei are more like a quasiparticle excitation. In Ref. [60], the authors studied the evolution of collectivity in the isovector dipole response in the low-energy region of neutron-rich isotopes of O, Ca, Ni, Zr, and Sn. They found that the onset of dipole strength in the low-energy region is due to single-particle excitations of the loosely bound neutrons in light nuclei. Our results are similar to what was found by the authors of Ref. [60]. 
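A quick way to make the collectivity argument quantitative is to evaluate $\xi^b_{2qp} = |X^b_{2qp}|^2 - |Y^b_{2qp}|^2$ for every configuration of a given state and count how many exceed, say, 1%. The amplitudes below are hypothetical and only illustrate the bookkeeping.

```python
import numpy as np

def qp_weights(X, Y):
    """xi = |X|^2 - |Y|^2 per 2qp configuration of one QRPA state, rescaled so that
    sum(xi) = 1, i.e. the QRPA normalization condition is enforced."""
    xi = np.abs(X) ** 2 - np.abs(Y) ** 2
    return xi / xi.sum()

# Hypothetical forward/backward amplitudes for a PDR-like state dominated by a
# single proton configuration (values invented for illustration).
X = np.array([0.93, 0.25, 0.15, 0.10, 0.08])
Y = np.array([0.05, 0.03, 0.02, 0.02, 0.01])
xi = qp_weights(X, Y)
n_important = np.count_nonzero(xi > 0.01)   # configurations carrying more than 1%
print(xi.round(3), n_important)             # one weight near 0.9 -> weak collectivity
```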
$ ^{32} $ Ar $ ^{34} $ Ar E = 9.89 MeV $ \xi^b_{2qp} $ E = 12.32 MeV $ \xi^b_{2qp} $ E = 17.00 MeV $ \xi^b_{2qp} $ E = 10.49 MeV $ \xi^b_{2qp} $ E = 11.20 MeV $ \xi^b_{2qp} $ E = 17.45 MeV $ \xi^b_{2qp} $ $ (1f_{\frac{7}{2}}1d_{\frac{5}{2}}^{-1})^\pi $ 3% $ (4p_{\frac{1}{2}}2s_{\frac{1}{2}}^{-1})^\pi $ 5% $ (1d_{\frac{3}{2}}1p_{\frac{1}{2}}^{-1})^\pi $ 4% $ (1f_{\frac{7}{2}}1d_{\frac{5}{2}}^{-1})^\pi $ 2% $ (1f_{\frac{7}{2}}1d_{\frac{5}{2}}^{-1})^\pi $ 6% $ (1f_{\frac{7}{2}}1d_{\frac{5}{2}}^{-1})^\pi $ 13% $ (3p_{\frac{3}{2}}2s_{\frac{1}{2}}^{-1})^\pi $ 28% $ (5p_{\frac{1}{2}}1d_{\frac{3}{2}}^{-1})^\pi $ 7% $ (1f_{\frac{7}{2}}1d_{\frac{5}{2}}^{-1})^\pi $ 20% $ (2p_{\frac{3}{2}}2s_{\frac{1}{2}}^{-1})^\pi $ 2% $ (3p_{\frac{1}{2}}2s_{\frac{1}{2}}^{-1})^\pi $ 1% $ (4p_{\frac{3}{2}}1d_{\frac{5}{2}}^{-1})^\pi $ 9% $ (4p_{\frac{1}{2}}1d_{\frac{3}{2}}^{-1})^\pi $ 58% $ (5p_{\frac{3}{2}}1d_{\frac{3}{2}}^{-1})^\pi $ 21% $ (5p_{\frac{3}{2}}1d_{\frac{5}{2}}^{-1})^\pi $ 9% $ (3p_{\frac{3}{2}}2s_{\frac{1}{2}}^{-1})^\pi $ 3% $ (3p_{\frac{3}{2}}2s_{\frac{1}{2}}^{-1})^\pi $ 68% $ (3f_{\frac{7}{2}}1d_{\frac{5}{2}}^{-1})^\pi $ 12% $ (4p_{\frac{3}{2}}1d_{\frac{3}{2}}^{-1})^\pi $ 6% $ (3f_{\frac{5}{2}}1d_{\frac{3}{2}}^{-1})^\pi $ 62% $ (4f_{\frac{7}{2}}1d_{\frac{5}{2}}^{-1})^\pi $ 3% $ (3p_{\frac{1}{2}}1d_{\frac{3}{2}}^{-1})^\pi $ 86% $ (3p_{\frac{1}{2}}1d_{\frac{3}{2}}^{-1})^\pi $ 7% $ (5p_{\frac{3}{2}}2s_{\frac{1}{2}}^{-1})^\pi $ 3% $ (2d_{\frac{5}{2}}1f_{\frac{7}{2}}^{-1})^\pi $ 1% $ (1f_{\frac{7}{2}}1d_{\frac{5}{2}}^{-1})^\pi $ 1% $ (6p_{\frac{3}{2}}2s_{\frac{1}{2}}^{-1})^\pi $ 7% $ (3p_{\frac{3}{2}}1d_{\frac{3}{2}}^{-1})^\pi $ 5% $ (4p_{\frac{1}{2}}1d_{\frac{3}{2}}^{-1})^\pi $ 5% $ (6p_{\frac{3}{2}}2s_{\frac{1}{2}}^{-1})^\pi $ 2% $ (6p_{\frac{1}{2}}1d_{\frac{3}{2}}^{-1})^\pi $ 18% $ (4p_{\frac{3}{2}}1d_{\frac{3}{2}}^{-1})^\pi $ 9% $ (6p_{\frac{1}{2}}1d_{\frac{3}{2}}^{-1})^\pi $ 2% $ (5f_{\frac{5}{2}}1d_{\frac{3}{2}}^{-1})^\pi $ 5% $ (6p_{\frac{3}{2}}1d_{\frac{3}{2}}^{-1})^\pi $ 2% $ (1f_{\frac{7}{2}}1d_{\frac{5}{2}}^{-1})^\nu $ 4% $ (2s_{\frac{1}{2}}1p_{\frac{1}{2}}^{-1})^\nu $ 23% $ (2p_{\frac{3}{2}}1d_{\frac{5}{2}}^{-1})^\nu $ 6% $ (1f_{\frac{7}{2}}1d_{\frac{5}{2}}^{-1})^\nu $ 14% $ (2p_{\frac{1}{2}}2s_{\frac{1}{2}}^{-1})^\nu $ 3% $ (2p_{\frac{3}{2}}1d_{\frac{5}{2}}^{-1})^\nu $ 3% $ ^{34} $ Ca $ ^{36} $ Ca $ (1f_{\frac{7}{2}}1d_{\frac{5}{2}}^{-1})^\pi $ 2% $ (4p_{\frac{1}{2}}2s_{\frac{1}{2}}^{-1})^\pi $ 2% $ (1f_{\frac{7}{2}}1d_{\frac{5}{2}}^{-1})^\pi $ 7% $ (1f_{\frac{7}{2}}1d_{\frac{5}{2}}^{-1})^\pi $ 4% $ (4p_{\frac{1}{2}}2s_{\frac{1}{2}}^{-1})^\pi $ 70% $ (4p_{\frac{3}{2}}1d_{\frac{5}{2}}^{-1})^\pi $ 2% $ (3p_{\frac{1}{2}}2s_{\frac{1}{2}}^{-1})^\pi $ 5% $ (4p_{\frac{3}{2}}2s_{\frac{1}{2}}^{-1})^\pi $ 2% $ (6p_{\frac{1}{2}}2s_{\frac{1}{2}}^{-1})^\pi $ 4% $ (3p_{\frac{1}{2}}2s_{\frac{1}{2}}^{-1})^\pi $ 2% $ (3f_{\frac{5}{2}}1d_{\frac{3}{2}}^{-1})^\pi $ 15% $ (5p_{\frac{3}{2}}1d_{\frac{5}{2}}^{-1})^\pi $ 3% $ (3p_{\frac{3}{2}}2s_{\frac{1}{2}}^{-1})^\pi $ 87% $ (3f_{\frac{5}{2}}1d_{\frac{3}{2}}^{-1})^\pi $ 90% $ (6p_{\frac{3}{2}}2s_{\frac{1}{2}}^{-1})^\pi $ 10% $ (3p_{\frac{3}{2}}2s_{\frac{1}{2}}^{-1})^\pi $ 83% $ (4f_{\frac{5}{2}}1d_{\frac{3}{2}}^{-1})^\pi $ 8% $ (1f_{\frac{7}{2}}1d_{\frac{5}{2}}^{-1})^\pi $ 3% $ (5p_{\frac{3}{2}}1d_{\frac{3}{2}}^{-1})^\pi $ 1% $ (4f_{\frac{5}{2}}1d_{\frac{3}{2}}^{-1})^\pi $ 2% $ (5f_{\frac{5}{2}}1d_{\frac{3}{2}}^{-1})^\pi $ 5% $ (4p_{\frac{1}{2}}1d_{\frac{3}{2}}^{-1})^\pi $ 4% $ (3p_{\frac{3}{2}}1d_{\frac{5}{2}}^{-1})^\pi $ 1% $ 
(6p_{\frac{1}{2}}2s_{\frac{1}{2}}^{-1})^\pi $ 3% $ (1d_{\frac{5}{2}}1p_{\frac{3}{2}}^{-1})^\nu $ 7% $ (5p_{\frac{1}{2}}1d_{\frac{3}{2}}^{-1})^\pi $ 1% $ (2p_{\frac{3}{2}}2s_{\frac{1}{2}}^{-1})^\nu $ 2% $ (6p_{\frac{3}{2}}2s_{\frac{1}{2}}^{-1})^\pi $ 3% $ (2s_{\frac{1}{2}}1p_{\frac{1}{2}}^{-1})^\nu $ 18% $ (5f_{\frac{5}{2}}1d_{\frac{3}{2}}^{-1})^\pi $ 30% $ (1d_{\frac{3}{2}}1p_{\frac{1}{2}}^{-1})^\nu $ 2% $ (2s_{\frac{1}{2}}1p_{\frac{1}{2}}^{-1})^\nu $ 4% $ (1f_{\frac{7}{2}}1d_{\frac{5}{2}}^{-1})^\nu $ 24% $ (1d_{\frac{3}{2}}1p_{\frac{1}{2}}^{-1})^\nu $ 18% $ (2p_{\frac{1}{2}}2s_{\frac{1}{2}}^{-1})^\nu $ 4% $ (2p_{\frac{3}{2}}1d_{\frac{5}{2}}^{-1})^\nu $ 15% $ (2p_{\frac{3}{2}}2s_{\frac{1}{2}}^{-1})^\nu $ 3% $ (2p_{\frac{1}{2}}2s_{\frac{1}{2}}^{-1})^\nu $ 2% Table 5. The largest contributions of the proton and neutron 2qp excitations to the isovector reduced dipole QRPA amplitudes for the given states for proton-rich Ar and Ca nuclei. IV. SUMMARY In conclusion, we have systematically studied the properties of isovector giant dipole resonances in proton-rich Ar and Ca nuclei in a fully self-consistent microscopic approach. The ground state properties were calculated in a resonant continuum Skyrme HF+BCS approach, where the contribution of resonant states in the continuum to pairing correlations is properly considered. The SLy5 Skyrme interaction and a density-dependent contact pairing interaction were adopted in the calculations. The proton separation energies of $ ^{30} $Ar and $ ^{32} $Ca are negative, which means these two nuclei stay at the proton drip line in the present study. It is shown that a proton skin structure has been found in the proton-rich nuclei $ ^{32,34} $Ar and $ ^{34,36} $Ca. The experimental charge radii of some nuclei can be reproduced well by our theoretical model. The QRPA has been applied to explore the properties of dipole states in the selected nuclei. Around 18 MeV, one can find a pronounced GDR in all the nuclei studied. Besides the GDR states, some low-lying strengths are distributed in the energy region below 12 MeV. The strengths are weaker than those of the GDR, and are the so-called PDR states. The energy-weighted moments of PDR states for nuclei close to the proton drip-line exhaust about 4% of the TRK sum rule. The values decrease as the mass number increases in each isotopic chain. The transition densities of the PDR states show that the motions of protons and neutrons are in phase in the interiors of the nuclei, while the protons give the main contribution at the surface. By analyzing the QRPA amplitudes of proton and neutron 2-quasiparticle configurations for a given low-lying state, we find that the main contribution comes from a few proton 2-quasiparticle configurations which contribute at least 83% to the total QRPA amplitudes. Our conclusion is that the PDR excitation in these nuclei is more like a quasiparticle excitation, and the collectivity is not strong. The authors acknowledge Gianluca Colo for valuable comments on the manuscript.
Does a straight, current-carrying wire always repel a permanent magnet? Regardless of the permanent magnet's orientation? Several people have told me that a current-carrying wire usually (or always) repels a nearby permanent magnet. Most recently, I saw this Veritasium video on YouTube: 'How Special Relativity Makes Magnets Work'. At 2:40 to 2:45 of the video, the host says that 'A wire with current in it deflects nearby magnets'. But why? Why would a wire with current always, or even usually, deflect another magnet, rather than attract it?

electromagnetism magnetic-fields electromagnetic-induction magnetic-moment — asked by Kurt Hikes

"Deflects" here is not the same as "repels" and is not the opposite of "attracts". The orientation of a magnetic needle is what may change in the presence of a current. This is what Oersted observed in 1820. If there is a net force, it will depend on the gradient of the field. So I think you misinterpret what the video intends to say when you take it to mean repulsion only. – nasu

It's the Lorentz force at work, $\vec{F} = q\vec{v} \times \vec{B}$, not magnetic attraction or repulsion per se as you might imagine between permanent magnets or electromagnets, where there is a clearly defined north and south pole. It's not so much that it either attracts or repels but that the force is always in one direction, either continuously to the left or to the right depending on the magnet's orientation. It's not really trying to move away (repel) or towards (attract) anything. It's just trying to move to the left or to the right, and whether this constitutes "towards" or "away" depends on where their starting positions are relative to each other. Taken from: https://www.feynmanlectures.caltech.edu/II_01.html#Ch1-S2 – DKNguyen
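Since both answers hinge on the direction of $\vec{F} = q\vec{v} \times \vec{B}$, here is a minimal numerical sketch of that cross product (the charge, speed, and field values below are made-up illustration numbers, not anything taken from the question or the video):

```python
import numpy as np

# Made-up example: a positive charge moving along +x, with the wire's field
# at the charge's location pointing along +z.
q = 1.6e-19                        # charge in coulombs
v = np.array([1.0e5, 0.0, 0.0])    # velocity in m/s
B = np.array([0.0, 0.0, 2.0e-3])   # magnetic field in tesla

F = q * np.cross(v, B)             # Lorentz force F = q v x B
print(F)                           # points along -y: a sideways push

# Reversing the field (e.g., the other side of the wire, or reversed current)
# flips the push to +y -- "towards" or "away" depends entirely on orientation.
print(q * np.cross(v, -B))
```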
Why do we not have spin greater than 2?

It is commonly asserted that no consistent, interacting quantum field theory can be constructed with fields that have spin greater than 2 (possibly with some allusion to renormalization). I've also seen (see Bailin and Love, Supersymmetry) that we cannot have helicity greater than 1, absenting gravity. I have yet to see an explanation as to why this is the case, so can anyone help?

quantum-field-theory quantum-spin unitarity s-matrix-theory higher-spin — asked by James

Higher spin particles have to be coupled to conserved currents, and there are no conserved currents of high spin in quantum field theories. The only conserved currents are vector currents associated with internal symmetries, the stress-energy tensor current, the angular momentum tensor current, and the spin-3/2 supercurrent, for a supersymmetric theory. This restriction on the currents constrains the spins to 0, 1/2 (which do not need to be coupled to currents), spin 1 (which must be coupled to the vector currents), spin 3/2 (which must be coupled to a supercurrent), and spin 2 (which must be coupled to the stress-energy tensor). The argument is heuristic, and I do not think it rises to the level of a mathematical proof, but it is plausible enough to be a good guide.

Preliminaries: All possible symmetries of the S-matrix

You should accept the following result of O'Raifeartaigh, Coleman and Mandula--- the continuous symmetries of the particle S-matrix, assuming a mass gap and Lorentz invariance, are a Lie group of internal symmetries, plus the Lorentz group. This theorem is true, given its assumptions, but these assumptions leave out a lot of interesting physics:

Coleman and Mandula assume that the symmetry is a symmetry of the S-matrix, meaning that it acts nontrivially on some particle state. This seems innocuous, until you realize that you can have a symmetry which doesn't touch particle states, but only acts nontrivially on objects like strings and membranes. Such symmetries would only be relevant for the scattering of infinitely extended, infinite-energy objects, so they don't show up in the S-matrix. The transformations would become trivial whenever these sheets close in on themselves to make a localized particle. If you look at Coleman and Mandula's argument (a simple version is presented in Argyres' supersymmetry notes, which gives the flavor; there is an excellent complete presentation in Weinberg's quantum field theory book, and the original article is accessible and clear), it almost begs for the objects which are charged under the higher symmetry to be spatially extended. When you have extended fundamental objects, it is not clear that you are doing field theory anymore. If the extended objects are solitons in a renormalizable field theory, you can zoom in on ultra-short-distance scattering, and consider the ultra-violet fixed point theory as the field theory you are studying, and this is sufficient to understand most examples. But the extended-object exception is the most important one, and must always be kept in the back of the mind.

Coleman and Mandula assume a mass gap. The standard extension of this theorem to the massless case just extends the maximal symmetry from the Poincare group to the conformal group, to allow the space-time part to be bigger.
But Coleman and Madula use analyticity properties which I am not sure can be used in a conformal theory with all the branch-cuts which are not controlled by mass-gaps. The result is extremely plausible, but I am not sure if it is still rigorously true. This is an exercise in Weinberg, which unfortunately I haven't done. Coleman and Mandula ignore supersymmetries. This is fixed by Haag–Lopuszanski–Sohnius, who use the Coleman mandula theorem to argue that the maximal symmetry structure of a quantum field theory is a superconformal group plus internal symmetries, and that the supersymmetry must close on the stress-energy tensor. What the Coleman Mandula theorem means in practice is that whenever you have a conserved current in a quantum field theory, and this current acts nontrivially on particles, then it must not carry any space-time indices other than the vector index, with the only exceptions being the geometric currents: a spinor supersymmetry current, $J^{\alpha\mu}$, the (Belinfante symmetric) stress-energy tensor $T^{\mu\nu}$, the (Belinfante) angular momentum tensor $S^{\mu\nu\lambda} = x^{\mu} T^{\nu\lambda} - x^\nu T^{\mu\lambda}$, and sometimes the dilation current $D^\mu = x^\mu T^\alpha_\alpha$ and conformal and superconformal currents too. The spin of the conserved currents is found by representation theory--- antisymmetric indices are spin 1, whether there are 1 or 2, so the spin of the internal symmetry currents is 1, and of the stress energy tensor is 2. The other geometric tensors derived from the stress energy tensor are also restricted to spin less then 2, with the supercurrent having spin 3/2. What is a QFT? Here this is a practical question--- for this discussion, a quantum field theory is a finite collection of local fields, each corresponding to a representation of the Poincare group, with a local interaction Lagrangian which couples them together. Further, it is assumed that there is an ultra-violet regime where all the masses are irrelevant, and where all the couplings are still relatively small, so that perturbative particle exchange is ok. I say pseudo-limit, because this isn't a real ultra-violet fixed point, which might not exist, and it does not require renormalizability, only unitarity in the regime where the theory is still perturbative. Every particle must interact with something to be part of the theory. If you have a noninteracting sector, you throw it away as unobservable. The theory does not have to be renormalizable, but it must be unitary, so that the amplitudes must unitarize perturbatively. The couplings are assumed to be weak at some short distance scale, so that you don't make a big mess at short distances, but you can still analyze particle emission order by order The Froissart bound for a mass-gap theory states that the scattering amplitude cannot grow faster than the logarithm of the energy. This means that any faster than constant growth in the scattering amplitude must be cancelled by something. Propagators for any spin The propagators for massive/massless particles of any spin follow from group theory considerations. These propagators have the schematic form $$ s^J\over s-m^2$$ And the all-important s scaling, with its J-dependence can be extracted from the physically obvious angular dependence of the scattering amplitude. 
If you exchange a spin-J particle with a short propagation distance (so that the mass is unimportant) between two long plane waves (so that their angular momentum is zero), you expect the scattering amplitude to go like $\cos(\theta)^J$, just because rotations act on the helicity of the exchanged particle with this factor. For example, when you exchange an electron between an electron and a positron, forming two photons, and the internal electron has an average momentum k and a helicity +, then if you rotate the contribution to the scattering amplitude from this exchange around the k-axis by an angle $\theta$ counterclockwise, you should get a phase of $\theta/2$ in the outgoing photon phases. In terms of Mandelstam variables, the angular amplitude goes like $(1-t)^J$, since t is the cosine of the scattering variable, up to some scaling in s. For large t, this grows as t^J, but "t" is the "s" of a crossed channel (up to a little bit of shifting), and so crossing t and s, you expect the growth to go with the power of the angular dependence. The denominator is fixed at $J=0$, and this law is determined by Regge theory. So that for $J=0,1/2$, the propagators shrink at large momentum, for $J=1$, the scattering amplitudes are constant in some directions, and for $J>1$ they grow. This schematic structure is of course complicated by the actual helicity states you attach on the ends of the propagator, but the schematic form is what you use in Weinberg's argument. Spin 0, 1/2 are OK That spin 0 and 1/2 are ok with no special treatment, and this argument shows you why: the propagator for spin 0 is $$ 1\over k^2 + m^2$$ Which falls off in k-space at large k. This means that when you scatter by exchanging scalars, your tree diagrams are shrinking, so that they don't require new states to make the theory unitary. Spinors have a propagator $$ 1\over \gamma\cdot k + m $$ This also falls off at large k, but only linearly. The exchange of spinors does not make things worse, because spinor loops tend to cancel the linear divergence by symmetry in k-space, leaving log divergences which are symptomatic of a renormalizable theory. So spinors and scalars can interact without revealing substructure, because their propagators do not require new things for unitarization. This is reflected in the fact that they can make renormalizable theories all by themselves. Introducing spin 1, you get a propagator that doesn't fall off. The massive propagator for spin 1 is $$ { g_{\mu\nu} - {k_\mu k_\nu\over m^2} \over k^2 + m^2 }$$ The numerator projects the helicity to be perpendicular to k, and the second term is problematic. There are directions in k-space where the propagator does not fall off at all! This means that when you scatter by spin-1 exchange, these directions can lead to a blow-up in the scattering amplitude at high energies which has to be cancelled somehow. If you cancel the divergence with higher spin, you get a divergence there, and you need to cancel that, and then higher spin, and so on, and you get infinitely many particle types. So the assumption is that you must get rid of this divergence intrinsically. The way to do this is to assume that the $k_\mu k_\nu$ term is always hitting a conserved current. Then it's contribution vanishes. This is what happens in massive electrodynamics. In this situation, the massive propagator is still ok for renormalizability, as noted by Schwinger and Feynman, and explained by Stueckelberg. 
The $k_\mu k_\nu$ is always hitting a $J^\mu$, and in x-space, it is proportional to the divergence of the current, which is zero because the current is conserved even with a massive photon (because the photon isn't charged). The same argument works to kill the k-k part of the propagator in Yang-Mills fields, but it is much more complicated, because the Yang-Mills field itself is charged, so the local conservation law is usually expressed in a different way, etc,etc. The heuristic lesson is that spin-1 is only ok if you have a conservation law which cancels the non-shrinking part of the numerator. This requires Yang-Mills theory, and the result is also compatible with renormalizability. If you have a spin-1 particle which is not a Yang-Mills field, you will need to reveal new structure to unitarize its longitudinal component, whose propagator is not properly shrinking at high energies. Spin 3/2 In this case, you have a Rarita Schwinger field, and the propagator is going to grow like $\sqrt{s}$ at large energies, just from the Mandelstam argument presented before. The propagator growth leads to unphysical growth in scattering exchanging this particle, unless the spin-3/2 field is coupled to a conserved current. The conserved current is the Supersymmetry current, by the Haag–Lopuszanski–Sohnius theorem, because it is a spinor of conserved currents. This means that the spin-3/2 particle should interact with a spin 3/2 conserved supercurrent in order to be consistent, and the number of gravitinos is (less then or equal to) the number of supercharges. The gravitinos are always introduced in a supermultiplet with the graviton, but I don't know if it is definitely impossible to introduce them with a spin-1 partner, and couple them to the supercurrent anyway. These spin-3/2/spin-1 multiplets will probably not be renormalizable barring some supersymmetry miracle. I haven't worked it out, but it might be possible. In this case, you have a perturbative graviton-like field $h_{\mu\nu}$, and the propagator contains terms growing linearly with s. In order to cancel the growth in the numerator, you need the tensor particle to be coupled to a conserved current to kill the parts with too-rapid growth, and produce a theory which does not require new particles for unitarity. The conserved quantity must be a tensor $T_{\mu\nu}$. Now one can appeal to the Coleman Mandula theorem and conclude that the conserved tensor current must be the stress energy tensor, and this gives general relativity, since the stress-tensor includes the stress of the h field too. There is a second tensor conserved quantity, the angular momentum tensor $S_{\mu\nu\sigma}$, which is also spin-2 (it might look like its spin 3, but its antisymmetric on two of its indices). You can try to couple a spin-2 field to the angular momentum tensor. To see if this works requires a detailed analysis, which I haven't done, but I would guess that the result will just be a non-dynamical torsion coupled to the local spin, as required by the Einstein-Cartan theory. Witten mentions yet another possiblity for spin 2 in chapter 1 of Green Schwarz and Witten, but I don't remember what it is, and I don't know whether it is viable. I believe that these arguments are due to Weinberg, but I personally only read the sketchy summary of them in the first chapters of Green Schwarz and Witten. 
They do not seem to me to have the status of a theorem, because the argument is particle by particle, it requires independent exchange in a given regime, and it discounts the possibility that unitarity can be restored by some family of particles. Of course, in string theory, there are fields of arbitrarily high spin, and unitarity is restored by propagating all of them together. For field theories with bound states which lie on Regge trajectories, you can have arbitrarily high spins too, so long as you consider all the trajectory contributions together, to restore unitarity (this was one of the original motivations for Regge theory--- unitarizing higher spin theories). For example, in QCD, we have nuclei of high ground-state spin. So there are stable S-matrix states of high spin, but they come in families with other excited states of the same nuclei. The conclusion here is that if you have higher spin particles, you can be pretty sure that you will have new particles of even higher spin at higher energies, and this chain of particles will not stop until you reveal new structure at some point. So the tensor mesons observed in the strong interaction mean that you should expect an infinite family of strongly interacting particles, petering out only when the quantum field substructure is revealed.

James said: "It seems higher spin fields must be massless so that they have a gauge symmetry and thus a current to couple to. A massless spin-2 particle can only be a graviton."

These statements are as true as the arguments above are convincing. From the cancellation required for the propagator to become sensible, higher spin fields are fundamentally massless at short distances. The spin-1 fields become massive by the Higgs mechanism, the spin-3/2 gravitinos become massive through spontaneous SUSY breaking, and this gets rid of the Goldstone bosons/Goldstinos. But all this stuff is, at best, only at the "mildly plausible" level of argument--- the argument is over propagator unitarization with each propagator separately having no cancellations. It's actually remarkable that it works as a guideline, and that there aren't a slew of supersymmetric exceptions of higher spin theories with supersymmetry enforcing propagator cancellations and unitarization. Maybe there are, and they just haven't been discovered yet. Maybe there's a better way to state the argument which shows that unitarity can't be restored by using positive spectral-weight particles.

Big Rift in 1960s

James asks: Why wasn't this pointed out earlier in the history of string theory?

The history of physics cannot be well understood without appreciating the unbelievable antagonism between the Chew/Mandelstam/Gribov S-matrix camp, and the Weinberg/Glashow/Polyakov field theory camp. The two sides hated each other, did not hire each other, and did not read each other, at least not in the West. The only people that straddled both camps were older folks and Russians--- Gell-Mann more than Landau (who believed the Landau pole implied S-matrix), Gribov and Migdal more than anyone else in the West other than Gell-Mann and Wilson. Wilson did his PhD in S-matrix theory, for example, as did David Gross (under Chew). In the 1970s, S-matrix theory just plain died. All practitioners jumped ship rapidly in 1974, with the triple whammy of Wilsonian field theory, the discovery of the charm quark, and asymptotic freedom. These results killed S-matrix theory for thirty years.
Those that jumped ship include all the original string theorists who stayed employed: notably Veneziano, who was convinced that gauge theory was right when 't Hooft showed that large-N gauge fields give the string topological expansion, and Susskind, who didn't mention Regge theory after the early 1970s. Everybody stopped studying string theory except Scherk and Schwarz, and Schwarz was protected by Gell-Mann, or else he would never have been tenured and funded.

This sorry history means that not a single S-matrix theory course is taught in the curriculum today, nobody studies it except a few theorists of advanced age hidden away in particle accelerators, and the main S-matrix theory, string theory, is not properly explained and remains completely enigmatic even to most physicists. There were some good reasons for this--- some S-matrix people said silly things about the consistency of quantum field theory--- but to be fair, quantum field theory people said equally silly things about S-matrix theory.

Weinberg came up with these heuristic arguments in the 1960s, which convinced him that S-matrix theory was a dead end, or rather, that it was a tautological synonym for quantum field theory. Weinberg was motivated by models of pion-nucleon interactions, which was a hot S-matrix topic in the early 1960s. The solution to the problem is the chiral symmetry breaking models of the pion condensate, and these are effective field theories. Building on this result, Weinberg became convinced that the only real solution to the S-matrix was a field theory of some particles with spin. He still says this every once in a while, but it is dead wrong. The most charitable interpretation is that every S-matrix has a field theory limit, where all but a finite number of particles decouple, but this is not true either (consider little string theory). String theory exists, and there are non-field-theoretic S-matrices, namely all the ones in string theory, including little string theory in (5+1)d, which is non-gravitational.

Lorentz indices

James comments: regarding spin, I tried doing the group theoretic approach to an antisymmetric tensor but got a little lost - doesn't an antisymmetric 2-form (for example) contain two spin-1 fields?

The group theory for an antisymmetric tensor is simple: it consists of an "E" and "B" field which can be turned into the pure chiral representations E+iB, E-iB. This was also called a "six-vector" sometimes, meaning E,B making an antisymmetric four-tensor. You can do this using dotted and undotted indices more easily, if you realize that the representation theory of SU(2) is best done in indices--- see the "warm up" problem in this answer: Mathematically, what is color charge? — Ron Maimon

"Every particle must interact with something to be part of the theory. If you have a noninteracting sector, you throw it away as unobservable." Doesn't everything interact with gravity, and shouldn't a "noninteracting" sector be retained as a dark matter candidate? – Hugh Allen

"Higher spin particles have to be coupled to conserved currents, and there are no conserved currents of high spin in quantum field theories." You lost me here :) – wha7ever

There is a fabulous explanation in Schwartz, QFT and the Standard Model, p. 153. The absence of massless particles with spin > 2 is a consequence of little group invariance and charge conservation.
For massless particles you can take the soft limit in the scattering matrix elements. Lorentz invariance implies the matrix elements should be the same in different frames, but the polarizations certainly do not have to be the same. Schwartz also winds up showing that massless spin-2 particles imply gravity is universal. For massless spin 3 we end up with the condition that the sum of "charge times energy squared" (for the zero component of the 4-momentum) of the incoming particles equals the same thing going out. This is sort of like conservation of charge, only we also multiply by the squared energy. This condition is too constraining to get anywhere unless the charges equal 0. It should be noted that spin > 2 MASSIVE particles exist. Basically, for massless particles:

Spin 1 => conservation of charge
Spin 2 => gravity is universal (the incoming and outgoing charges are equal for all particles in the interaction)
Spin 3 => charges = 0

This argument was discovered by Weinberg back in the 60s and it is just incredible.

Massless higher-spin particles may exist in flat space-time as long as their interactions die off at very large distances (deep IR). – apt45
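To make the "charge times energy" bookkeeping above concrete, here is a schematic version of the soft-emission constraints (my own paraphrase of the standard Weinberg-type argument, not a quotation from Schwartz; $q_i$ and $p_i^\mu$ denote the coupling and 4-momentum of the $i$-th hard particle):

$$ \text{spin 1:}\quad \sum_{\rm in} q_i = \sum_{\rm out} q_i \quad\Rightarrow\quad \text{charge conservation,} $$

$$ \text{spin 2:}\quad \sum_{\rm in} q_i\, p_i^{\mu} = \sum_{\rm out} q_i\, p_i^{\mu} \quad\Rightarrow\quad q_i = \kappa \ \text{for every particle (universal coupling), since } \sum p_i^{\mu} \text{ is already conserved,} $$

$$ \text{spin 3:}\quad \sum_{\rm in} q_i\, p_i^{\mu} p_i^{\nu} = \sum_{\rm out} q_i\, p_i^{\mu} p_i^{\nu} \quad\Rightarrow\quad q_i = 0 \ \text{for generic momenta.} $$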
The Journal of Symbolic Logic. Published on behalf of the Association of Symbolic Logic. Volume 87 - Issue 3 - September 2022

FORCING CONSTRUCTIONS AND COUNTABLE BOREL EQUIVALENCE RELATIONS. Connections with other structures, applications; Topological dynamics. SU GAO, STEVE JACKSON, EDWARD KROHNE, BRANDON SEWARD. We prove a number of results about countable Borel equivalence relations with forcing constructions and arguments. These results reveal hidden regularity properties of Borel complete sections on certain orbits. As consequences they imply the nonexistence of Borel complete sections with certain features.

COMPLEXITY OF INDEX SETS OF DESCRIPTIVE SET-THEORETIC NOTIONS. Computability and recursion theory. REESE JOHNSTON, DILIP RAGHAVAN. Descriptive set theory and computability theory are closely-related fields of logic; both are oriented around a notion of descriptive complexity. However, the two fields typically consider objects of very different sizes; computability theory is principally concerned with subsets of the naturals, while descriptive set theory is interested primarily in subsets of the reals. In this paper, we apply a generalization of computability theory, admissible recursion theory, to consider the relative complexity of notions that are of interest in descriptive set theory. In particular, we examine the perfect set property, determinacy, the Baire property, and Lebesgue measurability. We demonstrate that there is a separation of descriptive complexity between the perfect set property and determinacy for analytic sets of reals; we also show that the Baire property and Lebesgue measurability are both equivalent in complexity to the property of simply being a Borel set, for $\boldsymbol {\Sigma ^{1}_{2}}$ sets of reals.

NEW RELATIONS AND SEPARATIONS OF CONJECTURES ABOUT INCOMPLETENESS IN THE FINITE DOMAIN. Proof theory and constructive mathematics. ERFAN KHANIKI. In [20] Krajíček and Pudlák discovered connections between problems in computational complexity and the lengths of first-order proofs of finite consistency statements. Later Pudlák [25] studied more statements that connect provability with computational complexity and conjectured that they are true. All these conjectures are at least as strong as $\mathsf {P}\neq \mathsf {NP}$ [23–25]. One of the problems concerning these conjectures is to find out how tightly they are connected with statements about computational complexity classes. Results of this kind had been proved in [20, 22]. In this paper, we generalize and strengthen these results. Another question that we address concerns the dependence between these conjectures. We construct two oracles that enable us to answer questions about relativized separations asked in [19, 25] (i.e., for the pairs of conjectures mentioned in the questions, we construct oracles such that one conjecture from the pair is true in the relativized world and the other is false and vice versa). We also show several new connections between the studied conjectures. In particular, we show that the relation between the finite reflection principle and proof systems for existentially quantified Boolean formulas is similar to the one for finite consistency statements and proof systems for non-quantified propositional tautologies.
WAYS OF DESTRUCTION BARNABÁS FARKAS, LYUBOMYR ZDOMSKYY We study the following natural strong variant of destroying Borel ideals: $\mathbb {P}$ $+$ -destroys $\mathcal {I}$ if $\mathbb {P}$ adds an $\mathcal {I}$ -positive set which has finite intersection with every $A\in \mathcal {I}\cap V$ . Also, we discuss the associated variants $$ \begin{align*} \mathrm{non}^*(\mathcal{I},+)=&\min\big\{|\mathcal{Y}|:\mathcal{Y}\subseteq\mathcal{I}^+,\; \forall\;A\in\mathcal{I}\;\exists\;Y\in\mathcal{Y}\;|A\cap Y|<\omega\big\},\\ \mathrm{cov}^*(\mathcal{I},+)=&\min\big\{|\mathcal{C}|:\mathcal{C}\subseteq\mathcal{I},\; \forall\;Y\in\mathcal{I}^+\;\exists\;C\in\mathcal{C}\;|Y\cap C|=\omega\big\} \end{align*} $$ of the star-uniformity and the star-covering numbers of these ideals. Among other results, (1) we give a simple combinatorial characterisation when a real forcing $\mathbb {P}_I$ can $+$ -destroy a Borel ideal $\mathcal {J}$ ; (2) we discuss many classical examples of Borel ideals, their $+$ -destructibility, and cardinal invariants; (3) we show that the Mathias–Prikry, $\mathbb {M}(\mathcal {I}^*)$ -generic real $+$ -destroys $\mathcal {I}$ iff $\mathbb {M}(\mathcal {I}^*)\ +$ -destroys $\mathcal {I}$ iff $\mathcal {I}$ can be $+$ -destroyed iff $\mathrm {cov}^*(\mathcal {I},+)>\omega $ ; (4) we characterise when the Laver–Prikry, $\mathbb {L}(\mathcal {I}^*)$ -generic real $+$ -destroys $\mathcal {I}$ , and in the case of P-ideals, when exactly $\mathbb {L}(\mathcal {I}^*)$ $+$ -destroys $\mathcal {I}$ ; and (5) we briefly discuss an even stronger form of destroying ideals closely related to the additivity of the null ideal. APPLICATIONS OF PCF THEORY TO THE STUDY OF IDEALS ON PIERRE MATET Let $\kappa $ be a regular uncountable cardinal, and a cardinal greater than or equal to $\kappa $ . Revisiting a celebrated result of Shelah, we show that if is close to $\kappa $ and (= the least size of a cofinal subset of ) is greater than , then can be represented (in the sense of pcf theory) as a pseudopower. This can be used to obtain optimal results concerning the splitting problem. For example we show that if and , then no $\kappa $ -complete ideal on is weakly -saturated. COMPLETE INTUITIONISTIC TEMPORAL LOGICS FOR TOPOLOGICAL DYNAMICS General logic JOSEPH BOUDOU, MARTÍN DIÉGUEZ, DAVID FERNÁNDEZ-DUQUE Published online by Cambridge University Press: 04 February 2022, pp. 995-1022 The language of linear temporal logic can be interpreted on the class of dynamic topological systems, giving rise to the intuitionistic temporal logic ${\sf ITL}^{\sf c}_{\Diamond \forall }$ , recently shown to be decidable by Fernández-Duque. In this article we axiomatize this logic, some fragments, and prove completeness for several familiar spaces. DESCRIPTIVE COMPLEXITY IN CANTOR SERIES Number theory: Connections with logic Probabilistic theory: distribution modulo $1$; metric theory of algorithms DYLAN AIREY, STEVE JACKSON, BILL MANCE Published online by Cambridge University Press: 27 September 2021, pp. 1023-1045 A Cantor series expansion for a real number x with respect to a basic sequence $Q=(q_1,q_2,\dots )$ , where $q_i \geq 2$ , is a generalization of the base b expansion to an infinite sequence of bases. Ki and Linton in 1994 showed that for ordinary base b expansions the set of normal numbers is a $\boldsymbol {\Pi }^0_3$ -complete set, establishing the exact complexity of this set. In the case of Cantor series there are three natural notions of normality: normality, ratio normality, and distribution normality. 
These notions are equivalent for base b expansions, but not for more general Cantor series expansions. We show that for any basic sequence the set of distribution normal numbers is $\boldsymbol {\Pi }^0_3$ -complete, and if Q is $1$ -divergent then the sets of normal and ratio normal numbers are $\boldsymbol {\Pi }^0_3$ -complete. We further show that all five non-trivial differences of these sets are $D_2(\boldsymbol {\Pi }^0_3)$ -complete if $\lim _i q_i=\infty $ and Q is $1$ -divergent. This shows that except for the trivial containment that every normal number is ratio normal, these three notions are as independent as possible. MEAGER-ADDITIVE SETS IN TOPOLOGICAL GROUPS Locally compact abelian groups Topological and differentiable algebraic systems ONDŘEJ ZINDULKA By the Galvin–Mycielski–Solovay theorem, a subset X of the line has Borel's strong measure zero if and only if $M+X\neq \mathbb {R}$ for each meager set M. A set $X\subseteq \mathbb {R}$ is meager-additive if $M+X$ is meager for each meager set M. Recently a theorem on meager-additive sets that perfectly parallels the Galvin–Mycielski–Solovay theorem was proven: A set $X\subseteq \mathbb {R}$ is meager-additive if and only if it has sharp measure zero, a notion akin to strong measure zero. We investigate the validity of this result in Polish groups. We prove, e.g., that a set in a locally compact Polish group admitting an invariant metric is meager-additive if and only if it has sharp measure zero. We derive some consequences and calculate some cardinal invariants. YET ANOTHER IDEAL VERSION OF THE BOUNDING NUMBER RAFAŁ FILIPÓW, ADAM KWELA Let $\mathcal {I}$ be an ideal on $\omega $ . For $f,\,g\in \omega ^{\omega }$ we write $f \leq _{\mathcal {I}} g$ if $f(n) \leq g(n)$ for all $n\in \omega \setminus A$ with some $A\in \mathcal {I}$ . Moreover, we denote $\mathcal {D}_{\mathcal {I}}=\{f\in \omega ^{\omega }: f^{-1}[\{n\}]\in \mathcal {I} \text { for every } n\in \omega \}$ (in particular, $\mathcal {D}_{\mathrm {Fin}}$ denotes the family of all finite-to-one functions). We examine cardinal numbers $\mathfrak {b}(\geq _{\mathcal {I}}\cap (\mathcal {D}_{\mathcal {I}} \times \mathcal {D}_{\mathcal {I}}))$ and $\mathfrak {b}(\geq _{\mathcal {I}}\cap (\mathcal {D}_{\mathrm {Fin}}\times \mathcal {D}_{\mathrm {Fin}}))$ describing the smallest sizes of unbounded from below with respect to the order $\leq _{\mathcal {I}}$ sets in $\mathcal {D}_{\mathrm {Fin}}$ and $\mathcal {D}_{\mathcal {I}}$ , respectively. For a maximal ideal $\mathcal {I}$ , these cardinals were investigated by M. Canjar in connection with coinitial and cofinal subsets of the ultrapowers. We show that $\mathfrak {b}(\geq _{\mathcal {I}}\cap (\mathcal {D}_{\mathrm {Fin}} \times \mathcal {D}_{\mathrm {Fin}})) =\mathfrak {b}$ for all ideals $\mathcal {I}$ with the Baire property and that $\aleph _1 \leq \mathfrak {b}(\geq _{\mathcal {I}}\cap (\mathcal {D}_{\mathcal {I}} \times \mathcal {D}_{\mathcal {I}})) \leq \mathfrak {b}$ for all coanalytic weak P-ideals (this class contains all $\bf {\Pi ^0_4}$ ideals). What is more, we give examples of Borel (even $\bf {\Sigma ^0_2}$ ) ideals $\mathcal {I}$ with $\mathfrak {b}(\geq _{\mathcal {I}}\cap (\mathcal {D}_{\mathcal {I}} \times \mathcal {D}_{\mathcal {I}}))=\mathfrak {b}$ as well as with $\mathfrak {b}(\geq _{\mathcal {I}}\cap (\mathcal {D}_{\mathcal {I}} \times \mathcal {D}_{\mathcal {I}})) =\aleph _1$ . 
We also study cardinals $\mathfrak {b}(\geq _{\mathcal {I}}\cap (\mathcal {D}_{\mathcal {J}} \times \mathcal {D}_{\mathcal {K}}))$ describing the smallest sizes of sets in $\mathcal {D}_{\mathcal {K}}$ not bounded from below with respect to the preorder $\leq _{\mathcal {I}}$ by any member of $\mathcal {D}_{\mathcal {J}}\!$ . Our research is partially motivated by the study of ideal-QN-spaces: those cardinals describe the smallest size of a space which is not ideal-QN. ALMOST DISJOINT AND MAD FAMILIES IN VECTOR SPACES AND CHOICE PRINCIPLES Basic linear algebra ELEFTHERIOS TACHTSIS Published online by Cambridge University Press: 29 October 2021, pp. 1093-1110 In set theory without the Axiom of Choice ( $\mathsf {AC}$ ), we investigate the open problem of the deductive strength of statements which concern the existence of almost disjoint and maximal almost disjoint (MAD) families of infinite-dimensional subspaces of a given infinite-dimensional vector space, as well as the extension of almost disjoint families in infinite-dimensional vector spaces to MAD families. SUBCOMPACT CARDINALS, TYPE OMISSION, AND LADDER SYSTEMS YAIR HAYUT, MENACHEM MAGIDOR Published online by Cambridge University Press: 04 February 2022, pp. 1111-1129 We provide a model theoretical and tree property-like characterization of $\lambda $ - $\Pi ^1_1$ -subcompactness and supercompactness. We explore the behavior of these combinatorial principles at accessible cardinals. COUNTING SIBLINGS IN UNIVERSAL THEORIES SAMUEL BRAUNFELD, MICHAEL C. LASKOWSKI Published online by Cambridge University Press: 10 January 2022, pp. 1130-1155 We show that if a countable structure M in a finite relational language is not cellular, then there is an age-preserving $N \supseteq M$ such that $2^{\aleph _0}$ many structures are bi-embeddable with N. The proof proceeds by a case division based on mutual algebraicity. MOST SIMPLE EXTENSIONS OF $\textbf{FL}_{\textbf{e}}$ ARE UNDECIDABLE Ordered structures Algebraic logic NIKOLAOS GALATOS, GAVIN ST. JOHN Published online by Cambridge University Press: 10 June 2021, pp. 1156-1200 All known structural extensions of the substructural logic $\textbf{FL}_{\textbf{e}}$ , the Full Lambek calculus with exchange/commutativity (corresponding to subvarieties of commutative residuated lattices axiomatized by $\{\vee , \cdot , 1\}$ -equations), have decidable theoremhood; in particular all the ones defined by knotted axioms enjoy strong decidability properties (such as the finite embeddability property). We provide infinitely many such extensions that have undecidable theoremhood, by encoding machines with undecidable halting problem. An even bigger class of extensions is shown to have undecidable deducibility problem (the corresponding varieties of residuated lattices have undecidable word problem); actually with very few exceptions, such as the knotted axioms and the other prespinal axioms, we prove that undecidability is ubiquitous. Known undecidability results for non-commutative extensions use an encoding that fails in the presence of commutativity, so and-branching counter machines are employed. Even these machines provide encodings that fail to capture proper extensions of commutativity, therefore we introduce a new variant that works on an exponential scale. The correctness of the encoding is established by employing the theory of residuated frames. COPYING ONE OF A PAIR OF STRUCTURES RACHAEL ALVIR, HANNAH BURCHFIELD, JULIA F. 
KNIGHT We ask when, for a pair of structures $\mathcal {A}_1,\mathcal {A}_2$ , there is a uniform effective procedure that, given copies of the two structures, unlabeled, always produces a copy of $\mathcal {A}_1$ . We give some conditions guaranteeing that there is such a procedure. The conditions might suggest that for the pair of orderings $\mathcal {A}_1$ of type $\omega _1^{CK}$ and $\mathcal {A}_2$ of Harrison type, there should not be any such procedure, but, in fact, there is one. We construct an example for which there is no such procedure. The construction involves forcing. On the way to constructing our example, we prove a general result on modifying Cohen generics. INTERPRETING A FIELD IN ITS HEISENBERG GROUP Other groups of matrices Connections with logic RACHAEL ALVIR, WESLEY CALVERT, GRANT GOODMAN, VALENTINA HARIZANOV, JULIA KNIGHT, RUSSELL MILLER, ANDREY MOROZOV, ALEXANDRA SOSKOVA, ROSE WEISSHAAR Published online by Cambridge University Press: 23 December 2021, pp. 1215-1230 We improve on and generalize a 1960 result of Maltsev. For a field F, we denote by $H(F)$ the Heisenberg group with entries in F. Maltsev showed that there is a copy of F defined in $H(F)$ , using existential formulas with an arbitrary non-commuting pair of elements as parameters. We show that F is interpreted in $H(F)$ using computable $\Sigma _1$ formulas with no parameters. We give two proofs. The first is an existence proof, relying on a result of Harrison-Trainor, Melnikov, R. Miller, and Montalbán. This proof allows the possibility that the elements of F are represented by tuples in $H(F)$ of no fixed arity. The second proof is direct, giving explicit finitary existential formulas that define the interpretation, with elements of F represented by triples in $H(F)$ . Looking at what was used to arrive at this parameter-free interpretation of F in $H(F)$ , we give general conditions sufficient to eliminate parameters from interpretations. NULL SETS AND COMBINATORIAL COVERING PROPERTIES Fairly general properties PIOTR SZEWCZAK, TOMASZ WEISS A subset of the Cantor cube is null-additive if its algebraic sum with any null set is null. We construct a set of cardinality continuum such that: all continuous images of the set into the Cantor cube are null-additive, it contains a homeomorphic copy of a set that is not null-additive, and it has the property $\unicode{x3b3} $ , a strong combinatorial covering property. We also construct a nontrivial subset of the Cantor cube with the property $\unicode{x3b3} $ that is not null additive. Set-theoretic assumptions used in our constructions are far milder than used earlier by Galvin–Miller and Bartoszyński–Recław, to obtain sets with analogous properties. We also consider products of Sierpiński sets in the context of combinatorial covering properties. CONNECTEDNESS IN STRUCTURES ON THE REAL NUMBERS: O-MINIMALITY AND UNDECIDABILITY ALFRED DOLICH, CHRIS MILLER, ALEX SAVATOVSKY, ATHIPAT THAMRONGTHANYALAK We initiate an investigation of structures on the set of real numbers having the property that path components of definable sets are definable. All o-minimal structures on $(\mathbb {R},<)$ have the property, as do all expansions of $(\mathbb {R},+,\cdot ,\mathbb {N})$ . Our main analytic-geometric result is that any such expansion of $(\mathbb {R},<,+)$ by Boolean combinations of open sets (of any arities) either is o-minimal or defines an isomorph of $(\mathbb N,+,\cdot )$ . 
We also show that any given expansion of $(\mathbb {R}, <, +,\mathbb {N})$ by subsets of $\mathbb {N}^n$ (n allowed to vary) has the property if and only if it defines all arithmetic sets. Variations arise by considering connected components or quasicomponents instead of path components. INITIAL SEGMENTS OF THE DEGREES OF CEERS URI ANDREWS, ANDREA SORBI It is known that every non-universal self-full degree in the structure of the degrees of computably enumerable equivalence relations (ceers) under computable reducibility has exactly one strong minimal cover. This leaves little room for embedding wide partial orders as initial segments using self-full degrees. We show that considerably more can be done by staying entirely inside the collection of non-self-full degrees. We show that the poset can be embedded as an initial segment of the degrees of ceers with infinitely many classes. A further refinement of the proof shows that one can also embed the free distributive lattice generated by the lower semilattice as an initial segment of the degrees of ceers with infinitely many classes. FIRST-ORDER AXIOMATISATIONS OF REPRESENTABLE RELATION ALGEBRAS NEED FORMULAS OF UNBOUNDED QUANTIFIER DEPTH ROB EGROT, ROBIN HIRSCH Using a variation of the rainbow construction and various pebble and colouring games, we prove that RRA, the class of all representable relation algebras, cannot be axiomatised by any first-order relation algebra theory of bounded quantifier depth. We also prove that the class At(RRA) of atom structures of representable, atomic relation algebras cannot be defined by any set of sentences in the language of RA atom structures that uses only a finite number of variables. Front Cover (OFC, IFC) and matter JSL volume 87 issue 3 Cover and Front matter Published online by Cambridge University Press: 25 August 2022, pp. f1-f3
Derive an expression for (i) the induced emf and (ii) the induced current when a conductor of length $$l$$ is moved with a uniform velocity, normal to a uniform magnetic field $$B$$. Assume the resistance of the conductor to be $$R$$.

Expression for induced emf: We know that if a charge $$q$$ moves with velocity $$\overrightarrow { v } $$ in a magnetic field of strength $$\overrightarrow { B } $$, making an angle $$\theta$$, then the magnetic Lorentz force is $$F=q\ vB \sin \theta$$ If $$\overrightarrow { v } $$ and $$\overrightarrow { B } $$ are mutually perpendicular, then $$\theta =90^o$$ and $$F=q\ vB \sin 90^o=qvB$$ The direction of this force is perpendicular to both $$\overrightarrow { v } $$ and $$\overrightarrow { B } $$ and is given by Fleming's left hand rule.

Suppose a thin conducting rod $$PQ$$ is placed on two parallel metallic rails $$CD$$ and $$MN$$ in a magnetic field of strength $$\overrightarrow { B } $$. The direction of the magnetic field $$\overrightarrow { B } $$ is perpendicular to the plane of the paper, into the page. In the figure, $$\overrightarrow { B } $$ is represented by cross $$(\times )$$ marks. Suppose the rod is moving with velocity $$\overrightarrow { v } $$, perpendicular to its own length, towards the right. We know that metallic conductors contain free electrons, which can move within the metal. As the charge on an electron is $$q=-e$$, each electron experiences a magnetic Lorentz force $$F_m=evB$$, whose direction, according to Fleming's left hand rule, will be from $$P$$ to $$Q$$. Thus the electrons are displaced from end $$P$$ toward end $$Q$$. Consequently the end $$P$$ of the rod becomes positively charged and end $$Q$$ negatively charged. Thus a potential difference is produced between the ends of the conductor. This is the induced emf.

Due to the induced emf, an electric field is produced in the conducting rod. If $$V$$ is the potential difference between the ends, the strength of this electric field is $$E=\dfrac{V}{l}$$ ...$$(i)$$ and its direction is from the $$(+)$$ to the $$(-)$$ charge, i.e., from $$P$$ to $$Q$$. The force on a free electron due to this electric field is $$F_e=eE$$ ...$$(ii)$$ The direction of this force is from $$Q$$ to $$P$$, which is opposite to that of the electric field. Thus the emf produced opposes the motion of electrons caused by the Lorentz force. This is in accordance with Lenz's law.

As the number of electrons at end $$Q$$ becomes larger and larger, the magnitude of the electric force $$F_e$$ goes on increasing, and a stage comes when the electric force $$\overrightarrow {F_e}$$ and the magnetic force $$\overrightarrow {F_m}$$ become equal and opposite. In this situation the potential difference produced across the ends of the rod becomes constant. In this condition $$F_e=F_m$$ $$eE=evB$$ or $$E=vB$$ ...$$(iii)$$ $$\therefore$$ The potential difference produced is $$V=El=B\ v\ l$$ volt. Also the induced current is $$I=\dfrac{V}{R}=\dfrac{Bvl}{R}$$ ampere.
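As a quick numerical illustration of the final expressions $$V=Bvl$$ and $$I=Bvl/R$$ (the numbers below are sample values chosen for illustration, not part of the original problem):

```python
# Induced emf and current for a rod sliding perpendicular to a uniform field.
# Sample values are illustrative assumptions, not from the problem statement.
B = 0.5   # magnetic field strength in tesla
v = 2.0   # speed of the rod in m/s
l = 0.1   # length of the rod in metres
R = 4.0   # resistance of the circuit in ohms

emf = B * v * l       # induced emf, V = B v l   (volts)
current = emf / R     # induced current, I = B v l / R   (amperes)

print(f"emf = {emf:.3f} V")          # 0.100 V
print(f"current = {current:.3f} A")  # 0.025 A
```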
Collective Motion of Mosh pits
By Matt Bierbaum
A look into patterns formed in extreme dancing

Curious collective behaviors are all around us, from the atomic to the astronomical scale. For example, how do defects in crystals (Plasticity project) organize themselves into sharp, wall-like structures when left to their own devices? How galaxies form their neat spiral shapes (Spiral galaxy) still appears to be an open question (says the wiki). On smaller scales, flocks of birds create very cool patterns such as those found in starlings (movie below). How do they decide which direction to fly? How is information transmitted from bird to bird?

On the human scale, how do marching bands work? What is the nature of the intricate patterns that a marching band makes as they perform a halftime show? Are they only moving relative to one another and memorized separation vectors, or have they memorized specific positions on the field and when to move between them? Is there a set of measurements that you could perform to determine which of these methods or combination of methods they use? I presume that these positions are determined prior to the performance (otherwise super kudos to them), meaning that there could be no interactions between the performers and they could still make these impressive patterns. What would halftime look like if they had no prior knowledge and were simply told to make the shape of a pterodactyl? I bet that would not go over very well.

It turns out that beautiful collective motions also occur in a very different scenario: in the crowds at heavy metal concerts. When these energetic crowds get together, a whole zoo of collective motions can be seen, including:

- Mosh pits - members of the crowd run around bumping into each other chaotically
- Circle pits - a portion of the crowd runs in a circle
- Fist pumping - throwing fists in the air to the beat / influenced by people around you
- Synchronized jumping - jumping to the beat, with local coupling and global forcing
- Wall of death - the crowd separates into two halves which then run at each other Braveheart style
- Meat grinder - N concentric circle pits, each moving in the opposite direction (very rare)

Many of these collective behaviors are highlighted in this compilation video, which I highly encourage you to watch. Of these behaviors, Jesse Silverberg and I thought that the mosh pit, circle pit, and the relationship between them seemed like an interesting and tractable problem. Since 1987, starting with Craig Reynolds, a type of model called a flocking model has been successfully used to describe many collective motions in various systems including birds, bison, and humans. Given this success, we adapted the flocking model to the situation of extreme collective behaviors at heavy metal concerts, attempting to describe the mosh pit and circle pit. After some reasoning, reading, and testing we discovered that there are 4 aspects that are important to replicating the behaviors we were after.
They are:

- People are solid bodies; they should not pass through one another
- At the events, people are self-propelled - they run around
- Individuals don't have perfect information about their surroundings or control of themselves
- A notion from the flocking science community that individuals like to move in the direction of the people around them

For these four aspects, we wrote down a model with four forces on each individual, in the same order that they are listed above. During a simulation, we calculate the total force on each individual $i$ using the forces below and then integrate these forces to see how the crowd as a whole behaves (a minimal code sketch of one such integration step appears at the end of this post).

$$ \vec{F}_{i}^{\rm repulsion} = \epsilon \left(1-\frac{r_{ij}}{2r_0}\right)^{5/2} \hat{r}_{ij} $$

$$ \vec{F}_{i}^{\rm propulsion} = \beta (v_0 - v_i) \hat{v}_i $$

$$ \vec{F}_{i}^{\rm noise} = \vec{\eta}_i $$

$$ \vec{F}_{i}^{\rm flocking} = \alpha \sum_{j=0}^{N_i} \vec{v}_j \Big/ \left|\sum_{j=0}^{N_i} \vec{v}_j \right| $$

These forces are not novel; each has been used in many situations before. However, if we split the parameters for these particles into two groups, we find that the behaviors that are accessible are quite surprising. In particular, we can make two groups called active and passive moshers which are distinguished by $\alpha$ and $\beta$. Active moshers flock and run around ($\alpha \ne 0$ and $\beta \ne 0$) whereas passive ones don't ($\alpha = \beta = 0$).

To mimic a concert, we first began with a circle of active moshers surrounded by a crowd of passive ones (what you often see at a concert). Doing this and tuning the parameters, we find that we can produce both a mosh pit and a circle pit from the same model. When the flocking strength is low, mosh pits form. As this strength is increased, circle pits begin to form instead. These two behaviors can be seen in the videos below:

You can explore the various behaviors of these equations of motion by visiting our interactive simulation built for the web at: Moshpits.js

If you thought that the initial conditions of starting off in a circle were a bit contrived, then you'd be right. We did too. But it turned out that starting with the populations mixed led to a spontaneous self-segregation! Once the circle formed, a mosh pit or circle pit would then form anyway. This hints that these dynamical structures are actually stable, which was supported by the fact that even extremely large pits did not dissolve after a very long time. Below is the largest circle pit we simulated (~100k participants). The red particles are active moshers while the black are passive. The black particles are shaded gray according to the force that they feel, thus labeling grain boundaries in the crowd. In the second movie, you can watch the segregation take place in a system of 100k participants. It's a rather long movie so feel free to fast forward and look at several different states.

For more information, you can visit the Cohen lab's page on moshpits: Cohen Group Page

Or read the original paper on the ArXiv: ArXiv paper
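To make the model above concrete, here is a minimal sketch of how one integration step could be written (this is my own illustrative Python, not the actual moshpits.js or paper code; the parameter values, the neighbour cutoff used for flocking, and the plain Euler update are all assumptions):

```python
import numpy as np

# Illustrative parameters (assumed values, not the ones used in the paper or moshpits.js)
N, r0, epsilon, v0 = 200, 1.0, 25.0, 1.0
dt, eta_strength = 0.01, 0.5
alpha = np.where(np.arange(N) < N // 3, 1.0, 0.0)  # active moshers flock...
beta  = np.where(np.arange(N) < N // 3, 1.0, 0.0)  # ...and self-propel; passive ones don't

pos = np.random.uniform(0, 30, size=(N, 2))
vel = np.random.normal(0, 0.1, size=(N, 2))

def step(pos, vel):
    F = np.zeros_like(pos)
    for i in range(N):
        d = pos - pos[i]                       # displacement from person i to everyone else
        r = np.linalg.norm(d, axis=1)
        # soft-sphere repulsion for overlapping neighbours (r < 2 r0), pushing i away
        touching = (r > 0) & (r < 2 * r0)
        rhat = d[touching] / r[touching][:, None]
        F[i] -= np.sum(epsilon * (1 - r[touching] / (2 * r0))[:, None]**2.5 * rhat, axis=0)
        # self-propulsion toward the preferred speed v0
        speed = np.linalg.norm(vel[i])
        if speed > 0:
            F[i] += beta[i] * (v0 - speed) * vel[i] / speed
        # flocking: align with the average velocity of nearby people (cutoff assumed = 4 r0)
        near = (r > 0) & (r < 4 * r0)
        if near.any():
            vsum = vel[near].sum(axis=0)
            norm = np.linalg.norm(vsum)
            if norm > 0:
                F[i] += alpha[i] * vsum / norm
        # random noise
        F[i] += eta_strength * np.random.normal(size=2)
    vel_new = vel + dt * F                     # simple Euler update (unit mass)
    pos_new = pos + dt * vel_new
    return pos_new, vel_new

pos, vel = step(pos, vel)
```

Repeating `step` many times and plotting `pos`, colour-coded by which particles are active, is enough to see qualitatively different behaviour as the flocking strength is varied.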
An ant on an infinite chessboard

There is an infinite chessboard, and an ant $A$ is in the middle of one of the squares. The ant can move in any of the eight directions, from the center of one square to another. If it moves 1 square north, south, east, or west, it requires $1$ unit of energy. If it moves to one of its diagonal neighbors (NE, NW, SE, SW), it requires $\sqrt 2$ units of energy. It is equally likely to move in any of the eight directions. If it initially has $20$ units of energy, find the probability that, after using the maximum possible energy, the ant will be $2$ units away from its initial position. If it doesn't have enough energy to move in a particular set of directions, it will move in any of the other directions with equal probability.

I approached this problem by considering the case in which it finally ends up $2$ units to the east (we can multiply by four to get all the cases). If it ends up $2$ units to the east, then $\text{Total steps to right}=2+\text{Total steps to left}$. We will somehow balance these steps, considering that the ant has a total of $20$ units of energy at the start. I don't know how to effectively calculate the sample space either. If the ant takes a total of $n$ steps, such that while taking all $n$ steps it is equally likely to move in any of the eight directions, then the sample space would be $8^n$. But here we do not know $n$. Further, if the energy left after the second-last step is less than $\sqrt 2$ but at least $1$, then the ant will not be able to move diagonally. I wasn't able to think of much after this. Help is appreciated.

Nice problem. What is your source? – Did Oct 7 '14 at 8:34
@Did I had a similar problem in a book. I thought of this myself though. I don't know if it is even solvable. – pkwssis Oct 7 '14 at 8:39
If the ant is supposed to use all its energy it can never make a diagonal move: the only way that $x\cdot 1+y\sqrt 2$ equals 20 is when $y=0$. Should the problem read: "find the probability that, at the moment the ant is no longer able to move, the ant will be 2 units away from its initial position"? – Leen Droogendijk Oct 7 '14 at 9:53
By quasi brute-force, the probability I get is $26872167014433/2^{49} \approx 0.047734557665594$. It is sort of hard to describe what I have done and I've no other way to validate whether this number makes sense or not. Let's wait and see whether other people can make another independent estimate/calculation. – achille hui Oct 7 '14 at 13:41
By a completely different way of counting (similar in spirit to Dale M's answer), I can reproduce the numbers in the above comment exactly. So the probability is indeed about 4.77%. I'll post an answer describing the counting later (maybe tomorrow?) – achille hui Oct 7 '14 at 20:02

Here is a solution in terms of formulas. My ant starts at $(0,0)$ and visits lattice points. Energy-wise the history of the ant can be encoded as a word $AADADAAAD\ldots$ where the letter $A$ denotes a move parallel to one of the axes and $D$ a diagonal move. The individual letters are obtained by a coin toss, and the word ends when the ant's energy is used up. Denote by $a$ and $d$ the number of $A$- resp. $D$-steps.
Then $$d\leq d_a:=\left\lfloor{20-a\over\sqrt{2}}\right\rfloor\ .$$ Each word corresponds to a staircase path in the first quadrant of the $(a,d)$-plane, as shown in the following figure:

The path ends at a red point, where there is no more energy for an additional step. At a blue point the following rule takes place: If a $D$ is thrown (with probability ${1\over2}$) at that point the ant makes an $A$-move nevertheless. It follows that all paths end at a red point $(a,d_a)$. The probability $p(a)$ that the number of $A$-moves is exactly $a$ is then given by the following formulas: $$p(a)=0\qquad{\rm if}\qquad d_a=d_{a+1}\ ;$$ $$p(a)={a+d_a\choose d_a}2^{-(a+d_a)} \qquad{\rm if}\qquad d_{a-1}>d_a>d_{a+1}\ ;$$ $$p(a)= {a+d_a\choose d_a}2^{-(a+d_a)}+{1\over2}{a-1+d_a\choose d_a}2^{-(a-1+d_a)}\qquad{\rm if}\qquad d_{a-1}=d_a>d_{a+1}\ .$$

We now compute the probability $p_{20}(a)$ that the actual grid path of the ant ends at $(2,0)$, given that it makes $a$ type $A$ moves. It is easy to see that $a$ has to be even in such a case. Given $a$, there are $h$ horizontal moves and $v=a-h$ vertical moves of the ant, where $0\leq h\leq a$. In reality we have $h+d_a$ independent horizontal $\pm1$-steps, which should add up to $2$, and $a-h+d_a$ vertical $\pm1$-steps, which should add up to $0$. Therefore $h+d_a$ has to be even as well. On account of the Bernoulli distribution of the $\pm1$ signs we obtain in this way $$p_{20}(a)=\sum_{0\leq h\leq a, \ h+d_a\ {\rm even}}{a\choose h} 2^{-a}{h+d_a\choose (h+d_a)/2-1}\cdot{a-h+d_a\choose(a-h+d_a)/2} 2^{-(a+2d_a)}\ .$$ The requested probability $p$ therefore comes to $$p=4\sum_{0\leq a\leq 20, \ a\ {\rm even}} p(a)\>p_{20}(a)\ .$$ The computation gave $$p={26872167014433\over562949953421312}\doteq0.0477346\ .$$

Christian Blatter

Thanks for the interesting and creative problem. It made me curious as to how large the path could get for Energy = 20, so I mapped it: The origin is the green cell and the possible end cells are yellow. The cells that your question asks for percentages on are the orange squares. With problems like these with a gajillion possibilities, I lean heavily toward using a simulator. I ran 4 simulations, each for 100 million trials, and since all 4 of them gave similar results, I concluded that the number of trials for each run was large enough. Here are the results:

------------------------ Original answer end ------------------------

Edit: This may be overkill, but I was curious how many distinct stopping points there were, and I wanted to see how the probabilities decreased as the endpoint got further from the origin. So, I altered the sim again and re-ran it for 2,147,483,646 runs, and here are those results. The percentages are out of all possible paths, not just for the white slice shown. I am showing only the slice because it considers all 161 distinct points (and makes it small enough to read, but you'll probably need to save it to your machine to view it). All other possible ending points are a reflection of these points. Quite a few possible ending points far from the origin were never hit once, and some were hit only once (15,10; 16,7; and 17,0).

JLee

@Pkwssis When I looked it over, I realized that there are only 4 valid ending points that are considered "within 2 units", correct? If so, I can modify my answer.
– JLee Oct 7 '14 at 20:54
I understand the word "within" as "less or equal," thus we are interested in when the ant ends up back in the exact center, or anywhere in the 3×3 square centered over the initial position (that's distances 1 and $\sqrt 2$), or in any of the four squares labelled "2" on your diagram. Anywhere else you would be farther than 2 grid units. – user22961 Oct 7 '14 at 21:31
In the question, the comment "considering that the case that it finally ends up 2 units to the east (we can multiply by four to get all the cases)." makes me think that he meant just those 4 squares. – JLee Oct 7 '14 at 21:35
@JLee Yes, that's what I considered. – pkwssis Oct 8 '14 at 0:51
@Pkwssis Would you like me to run the sim for those 4 squares only? I don't mind at all. – JLee Oct 8 '14 at 4:29

There's probably an elegant approach, but I can't think of one. Computers to the rescue! We can exactly solve $20$ generations of the Markov chain on states (x, y, energy) with just $37932$ multiply-add operations on $64$-bit integers. I got the same answer as achille hui: $55034198045558784/ 8^{20} \approx 4.8\% $. Here's a breakdown of paths that stopped at a distance of $2$ by the number of diagonal moves:

d[0] = 225684492800
d[1] = 4280403359232
d[2] = 0
d[4] = 2261233261281280
d[7] = 23843942578520064
d[10] = 0
d[11] = 3931022473297920
d[14] = 10806934634496

For example, $2261233261281280 / 8^{20}$ is the probability of stopping at a distance of $2$ after performing a total of $4$ diagonal moves (and $14$ non-diagonal moves). To explain the zeros, $\left\lceil d\sqrt{2}\right\rceil$ must be even in order to execute an even number of non-diagonal moves and end on a square of the same color as the starting square.

Chris Culter

+1 Cool, it saves me the trouble of writing an answer! Looking back at what I got, I have the same breakdown of paths by number of diagonal moves. – achille hui Oct 7 '14 at 21:57
@achillehui Aww, I was looking forward to your answer. :) Was it also going to be a direct solution of the Markov chain state, or did you figure out a simpler approach? – Chris Culter Oct 7 '14 at 22:34
The two ways I derived the answer are both computer-assisted solutions of the Markov chain state. Though one of them is closer to Dale M's answer in enumerating the possible combinations of where the diagonal moves can go. It is sort of incomplete and I needed a computer to help me fill in a lot of gaps. As of this moment, neither of them would be any simpler than what you have done. The fun is figuring out the answer, not writing them down ;-p – achille hui Oct 7 '14 at 22:49
"The fun is figuring out the answer, not writing them down" Amen to that! – Chris Culter Oct 7 '14 at 23:32
@LeenDroogendijk That's already accounted for. In principle, at every time step, I applied a transition matrix for which every column sums to $1$. In practice, I wanted to use integer arithmetic, so I multiplied the transition matrix by $8$, and every column sums to $8$. A state with $\geq\sqrt2$ energy gets a column with $8$ entries equal to $1$; a state with energy $\in[1,\sqrt2)$ gets a column with $4$ entries equal to $2$; and a state with $<1$ energy left gets a column with $1$ entry equal to $8$. After $n$ steps, dividing by $8^n$ is the right normalization.
– Chris Culter Oct 8 '14 at 0:59

This is small enough that you can enumerate the possibilities. To solve the sub-problem, at some point in the sequence you need either 2 lefts ($p=\frac{1}{64}$ and $c=2$) or 2 diagonals ($p=\frac{2}{64}$ and $c=2\sqrt2$). There also needs to be an even number of other moves that all cancel out ($p=\frac{1}{8}$ and $c=2$ or $c=2\sqrt2$), and diagonals are as likely as orthogonals. If all moves are orthogonal there can be a maximum of 10 pairs (counting the 2 lefts); if all are diagonal, then the maximum is 7. What are the combinations?

Dale M
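As a complement to the answers above, here is a minimal Python sketch (my own reconstruction, not Chris Culter's code) of the exact state-space computation he describes: integer path weights are propagated over states (x, y, moves made so far), with each step scaled by 8 so that dividing by $8^{20}$ at the end yields an exact probability. Remaining energy is compared exactly using squared integer inequalities rather than floating point.

```python
from collections import defaultdict

def can_diag(a, d):
    # Remaining energy 20 - a - d*sqrt(2) >= sqrt(2)?  i.e. 20 - a >= (d + 1)*sqrt(2)
    return 20 - a >= 0 and (20 - a) ** 2 >= 2 * (d + 1) ** 2

def can_orth(a, d):
    # Remaining energy 20 - a - d*sqrt(2) >= 1?  i.e. 19 - a >= d*sqrt(2)
    return 19 - a >= 0 and (19 - a) ** 2 >= 2 * d ** 2

ORTH = [(1, 0), (-1, 0), (0, 1), (0, -1)]
DIAG = [(1, 1), (1, -1), (-1, 1), (-1, -1)]

# weight[(x, y, a, d)]: integer path weight, scaled so the total is 8**n after n steps;
# a and d count the orthogonal and diagonal moves made so far.
weight = defaultdict(int)
weight[(0, 0, 0, 0)] = 1

STEPS = 20                               # at most 20 moves are ever possible
for _ in range(STEPS):
    nxt = defaultdict(int)
    for (x, y, a, d), w in weight.items():
        if can_diag(a, d):               # all 8 directions allowed, weight 1 each
            for dx, dy in ORTH:
                nxt[(x + dx, y + dy, a + 1, d)] += w
            for dx, dy in DIAG:
                nxt[(x + dx, y + dy, a, d + 1)] += w
        elif can_orth(a, d):             # only the 4 orthogonal moves, weight 2 each
            for dx, dy in ORTH:
                nxt[(x + dx, y + dy, a + 1, d)] += 2 * w
        else:                            # stuck: absorbing state, weight 8
            nxt[(x, y, a, d)] += 8 * w
    weight = nxt

total = 8 ** STEPS
hit = sum(w for (x, y, a, d), w in weight.items() if x * x + y * y == 4)
print(hit, "/", total, "=", hit / total)  # should be close to the 4.77% found above
```

The three branches mirror the transition-matrix columns described in the last comment (8 entries of 1, 4 entries of 2, or a single absorbing entry of 8), and the final sum collects the weight of the four squares at Euclidean distance 2.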
Problems in Mathematics
Tagged: system of linear equations

by Yu · Published 02/12/2018

Find the Vector Form Solution to the Matrix Equation $A\mathbf{x}=\mathbf{0}$
Find the vector form solution $\mathbf{x}$ of the equation $A\mathbf{x}=\mathbf{0}$, where $A=\begin{bmatrix} 1 & 1 & 1 & 1 & 2 \\ 1 & 2 & 4 & 0 & 5 \end{bmatrix}$. Also, find two linearly independent vectors $\mathbf{x}$ satisfying $A\mathbf{x}=\mathbf{0}$.

If the Augmented Matrix is Row-Equivalent to the Identity Matrix, is the System Consistent?
Consider the following system of linear equations:
\begin{align*}
ax_1+bx_2 &=c\\
dx_1+ex_2 &=f\\
gx_1+hx_2 &=i.
\end{align*}
(a) Write down the augmented matrix.
(b) Suppose that the augmented matrix is row equivalent to the identity matrix. Is the system consistent? Justify your answer.

Are Coefficient Matrices of the Systems of Linear Equations Nonsingular?
(a) Suppose that a $3\times 3$ system of linear equations is inconsistent. Is the coefficient matrix of the system nonsingular?
(b) Suppose that a $3\times 3$ homogeneous system of linear equations has a solution $x_1=0, x_2=-3, x_3=5$. Is the coefficient matrix of the system nonsingular?
(c) Let $A$ be a $4\times 4$ matrix and let \[\mathbf{v}=\begin{bmatrix} 1 \\ \end{bmatrix} \text{ and } \mathbf{w}=\begin{bmatrix} \end{bmatrix}.\] Suppose that we have $A\mathbf{v}=A\mathbf{w}$. Is the matrix $A$ nonsingular?

Column Vectors of an Upper Triangular Matrix with Nonzero Diagonal Entries are Linearly Independent
Suppose $M$ is an $n \times n$ upper-triangular matrix. If the diagonal entries of $M$ are all non-zero, then prove that the column vectors are linearly independent. Does the conclusion hold if we do not assume that $M$ has non-zero diagonal entries?

Write a Vector as a Linear Combination of Three Vectors
Write the vector $\begin{bmatrix} 1 \\ 3 \\ -1 \end{bmatrix}$ as a linear combination of the vectors \[\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} , \, \begin{bmatrix} 2 \\ -2 \\ 1 \end{bmatrix} , \, \begin{bmatrix} 2 \\ 0 \\ 4 \end{bmatrix}.\]

Determine Trigonometric Functions with Given Conditions
(a) Find a function \[g(\theta) = a \cos(\theta) + b \cos(2 \theta) + c \cos(3 \theta)\] such that $g(0) = g(\pi/2) = g(\pi) = 0$, where $a, b, c$ are constants.
(b) Find real numbers $a, b, c$ such that the function \[g(\theta) = a \cos(\theta) + b \cos(2 \theta) + c \cos(3 \theta)\] satisfies $g(0) = 3$, $g(\pi/2) = 1$, and $g(\pi) = -5$.

Find a Quadratic Function Satisfying Conditions on Derivatives
Find a quadratic function $f(x) = ax^2 + bx + c$ such that $f(1) = 3$, $f'(1) = 3$, and $f^{\prime\prime}(1) = 2$. Here, $f'(x)$ and $f^{\prime\prime}(x)$ denote the first and second derivatives, respectively.

Determine a 2-Digit Number Satisfying Two Conditions
A 2-digit number has two properties: the digits sum to 11, and if the number is written with digits reversed and subtracted from the original number, the result is 45. Find the number.
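For the first problem above, the vector form of the solution of $A\mathbf{x}=\mathbf{0}$ can be checked quickly with SymPy; the snippet below is a verification sketch added here for illustration, not part of the original site.

```python
from sympy import Matrix

A = Matrix([[1, 1, 1, 1, 2],
            [1, 2, 4, 0, 5]])

# Basis of the null space: every solution of A x = 0 is a linear combination
# of these vectors, which is exactly the "vector form" of the solution.
basis = A.nullspace()
for v in basis:
    print(v.T)

# Any two of these basis vectors are linearly independent, answering the
# second part of the problem.
```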
Determine Whether Matrices are in Reduced Row Echelon Form, and Find Solutions of Systems
Determine whether the following augmented matrices are in reduced row echelon form, and calculate the solution sets of their associated systems of linear equations.
(a) $\left[\begin{array}{rrr|r} 1 & 0 & 0 & 2 \\ 0 & 1 & 0 & -3 \\ 0 & 0 & 1 & 6 \end{array} \right]$.
(b) $\left[\begin{array}{rrr|r} 1 & 0 & 3 & -4 \\ 0 & 1 & 2 & 0 \end{array} \right]$.
(c) $\left[\begin{array}{rr|r} 1 & 2 & 0 \\ 1 & 1 & -1 \end{array} \right]$.

by Yu · Published 09/23/2017 · Last modified 09/25/2017

Linear Algebra Midterm 1 at the Ohio State University (1/3)
The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017. There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold). The time limit was 55 minutes. This post is Part 1 and contains the first three problems. Check out Part 2 and Part 3 for the rest of the exam problems.

Problem 1. Determine all possibilities for the number of solutions of each of the systems of linear equations described below.
(a) A consistent system of $5$ equations in $3$ unknowns and the rank of the system is $1$.
(b) A homogeneous system of $5$ equations in $4$ unknowns and it has a solution $x_1=1$, $x_2=2$, $x_3=3$, $x_4=4$.

Problem 2. Consider the homogeneous system of linear equations whose coefficient matrix is given by the following matrix $A$. Find the vector form for the general solution of the system.
\[A=\begin{bmatrix} 1 & 0 & -1 & -2 \\ 2 & 1 & -2 & -7 \\ 0 & 1 & 0 & -3 \end{bmatrix}.\]

Problem 3. Let $A$ be the following invertible matrix.
\[\begin{bmatrix} -1 & 2 & 3 & 4 & 5\\ 6 & -7 & 8 & 9 & 10\\ 11 & 12 & -13 & 14 & 15\\ 16 & 17 & 18 & -19 & 20\\ 21 & 22 & 23 & 24 & -25 \end{bmatrix} \]
Let $I$ be the $5\times 5$ identity matrix and let $B$ be a $5\times 5$ matrix. Suppose that $ABA^{-1}=I$. Then determine the matrix $B$.
(Linear Algebra Midterm Exam 1, the Ohio State University)

Solve the System of Linear Equations Using the Inverse Matrix of the Coefficient Matrix
Consider the following system of linear equations
\begin{align*}
2x+3y+z&=-1\\
3x+3y+z&=1\\
2x+4y+z&=-2.
\end{align*}
(a) Find the coefficient matrix $A$ for this system.
(b) Find the inverse matrix of the coefficient matrix found in (a).
(c) Solve the system using the inverse matrix $A^{-1}$.

If a Matrix is the Product of Two Matrices, is it Invertible?
(a) Let $A$ be a $6\times 6$ matrix and suppose that $A$ can be written as \[A=BC,\] where $B$ is a $6\times 5$ matrix and $C$ is a $5\times 6$ matrix. Prove that the matrix $A$ cannot be invertible.
(b) Let $A$ be a $2\times 2$ matrix and suppose that $A$ can be written as \[A=BC,\] where $B$ is a $2\times 3$ matrix and $C$ is a $3\times 2$ matrix. Can the matrix $A$ be invertible?

Solve a System by the Inverse Matrix and Compute $A^{2017}\mathbf{x}$
Let $A$ be the coefficient matrix of the system of linear equations
\begin{align*}
-x_1-2x_2&=1\\
2x_1+3x_2&=-1.
\end{align*}
(a) Solve the system by finding the inverse matrix $A^{-1}$.
(b) Let $\mathbf{x}=\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$ be the solution of the system obtained in part (a). Calculate and simplify \[A^{2017}\mathbf{x}.\]
(The Ohio State University, Linear Algebra Midterm Exam Problem)

Solve the System of Linear Equations and Give the Vector Form for the General Solution
Solve the following system of linear equations and give the vector form for the general solution.
\begin{align*}
x_1 -x_3 -2x_5&=1 \\
x_2+3x_3-x_5 &=2 \\
2x_1 -2x_3 +x_4 -3x_5 &= 0
\end{align*}

The Possibilities For the Number of Solutions of Systems of Linear Equations that Have More Equations than Unknowns
Determine all possibilities for the number of solutions of each of the systems of linear equations described below.
(a) A system of $5$ equations in $3$ unknowns and it has $x_1=0, x_2=-3, x_3=1$ as a solution.
(b) A homogeneous system of $5$ equations in $4$ unknowns and the rank of the system is $4$.

Summary: Possibilities for the Solution Set of a System of Linear Equations
In this post, we summarize theorems about the possibilities for the solution set of a system of linear equations and solve the following problems.
Determine all possibilities for the solution set of the system of linear equations described below.
(a) A homogeneous system of $3$ equations in $5$ unknowns.
(b) A homogeneous system of $5$ equations in $4$ unknowns.
(c) A system of $5$ equations in $4$ unknowns.
(d) A system of $2$ equations in $3$ unknowns that has $x_1=1, x_2=-5, x_3=0$ as a solution.
(e) A homogeneous system of $4$ equations in $4$ unknowns.
(f) A homogeneous system of $3$ equations in $4$ unknowns.
(g) A homogeneous system that has $x_1=3, x_2=-2, x_3=1$ as a solution.
(h) A homogeneous system of $5$ equations in $3$ unknowns and the rank of the system is $3$.
(i) A system of $3$ equations in $2$ unknowns and the rank of the system is $2$.
(j) A homogeneous system of $4$ equations in $3$ unknowns and the rank of the system is $2$.

Determine Conditions on Scalars so that the Set of Vectors is Linearly Dependent
Determine conditions on the scalars $a, b$ so that the following set $S$ of vectors is linearly dependent.
S=\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}, \[\mathbf{v}_1=\begin{bmatrix} \end{bmatrix}, \mathbf{v}_2=\begin{bmatrix} a \\

Quiz 2. The Vector Form For the General Solution / Transpose Matrices. Math 2568 Spring 2017.
(a) The given matrix is the augmented matrix for a system of linear equations. Give the vector form for the general solution.
\[ \left[\begin{array}{rrrrr|r} 1 & 0 & -1 & 0 &-2 & 0 \\ 0 & 1 & 2 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 & 1 & 0 \\ \end{array} \right].\]
(b) Let \[A=\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}, B=\begin{bmatrix} \end{bmatrix}, C=\begin{bmatrix} 1 & 2\\ 0& 6 \end{bmatrix}, \mathbf{v}=\begin{bmatrix} \end{bmatrix}.\] Then compute and simplify the following expression. \[\mathbf{v}^{\trans}\left( A^{\trans}-(A-B)^{\trans}\right)C.\]

Find All Matrices $B$ that Commute With a Given Matrix $A$: $AB=BA$
Then (a) Find all matrices \[B=\begin{bmatrix} x & y\\ z& w \end{bmatrix}\] such that $AB=BA$.
(b) Use the results of part (a) to exhibit $2\times 2$ matrices $B$ and $C$ such that \[AB=BA \text{ and } AC \neq CA.\]

Vector Form for the General Solution of a System of Linear Equations
Solve the following system of linear equations by transforming its augmented matrix to reduced echelon form (Gauss-Jordan elimination). Find the vector form for the general solution.
\begin{align*}
x_1-x_3-3x_5&=1\\
3x_1+x_2-x_3+x_4-9x_5&=3\\
x_1-x_3+x_4-2x_5&=1.
\end{align*}
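For problems of this "vector form for the general solution" type, the last system above can be checked symbolically; the snippet below is an illustrative verification added here, not part of the original site.

```python
from sympy import Matrix, linsolve, symbols

x1, x2, x3, x4, x5 = symbols('x1 x2 x3 x4 x5')

# Augmented matrix of the last system above
aug = Matrix([[1, 0, -1, 0, -3, 1],
              [3, 1, -1, 1, -9, 3],
              [1, 0, -1, 1, -2, 1]])

A, b = aug[:, :-1], aug[:, -1]

# Reduced row echelon form (what Gauss-Jordan elimination produces) and the
# general solution; the free variables give the "vector form".
print(A.rref())
print(linsolve((A, b), x1, x2, x3, x4, x5))
```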
Works by Moshe Vardi ( view other items matching `Moshe Vardi`, view all matches ) Disambiguations Disambiguations: Moshe Y. Vardi [13] Moshe Vardi [4] Reasoning About Knowledge.Ronald Fagin, Joseph Y. Halpern, Yoram Moses & Moshe Vardi - 1995 - MIT Press.details Reasoning About Knowledge is the first book to provide a general discussion of approaches to reasoning about knowledge and its applications to distributed ... Doxastic and Epistemic Logic in Logic and Philosophy of Logic Philosophy of Artificial Intelligence in Philosophy of Cognitive Science Reasoning in Epistemology Bookmark 318 citations On the Decision Problem for Two-Variable First-Order Logic.Erich Grädel, Phokion G. Kolaitis & Moshe Y. Vardi - 1997 - Bulletin of Symbolic Logic 3 (1):53-69.details We identify the computational complexity of the satisfiability problem for FO 2 , the fragment of first-order logic consisting of all relational first-order sentences with at most two distinct variables. Although this fragment was shown to be decidable a long time ago, the computational complexity of its decision problem has not been pinpointed so far. In 1975 Mortimer proved that FO 2 has the finite-model property, which means that if an FO 2 -sentence is satisfiable, then it has a finite (...) model. Moreover, Mortimer showed that every satisfiable FO 2 -sentence has a model whose size is at most doubly exponential in the size of the sentence. In this paper, we improve Mortimer's bound by one exponential and show that every satisfiable FO 2 -sentence has a model whose size is at most exponential in the size of the sentence. As a consequence, we establish that the satisfiability problem for FO 2 is NEXPTIME-complete. (shrink) Computational Complexity in Philosophy of Computing and Information Mathematical Logic in Formal Sciences Predicate Logic in Logic and Philosophy of Logic On Epistemic Logic and Logical Omniscience.William J. Rapaport & Moshe Y. Vardi - 1988 - Journal of Symbolic Logic 53 (2):668.details Review of Joseph Y. Halpern (ed.), Theoretical Aspects of Reasoning About Knowledge: Proceedings of the 1986 Conference (Los Altos, CA: Morgan Kaufmann, 1986),. Epistemic Logic in Logic and Philosophy of Logic On the Unusual Effectiveness of Logic in Computer Science.Joseph Y. Halpern, Robert Harper, Neil Immerman, Phokion G. Kolaitis, Moshe Y. Vardi & Victor Vianu - 2001 - Bulletin of Symbolic Logic 7 (2):213-236.details Logics in Logic and Philosophy of Logic On The Decision Problem For Two-Variable First-Order Logic, By, Pages 53 -- 69.Erich Gr\"Adel, Phokion Kolaitis & Moshe Vardi - 1997 - Bulletin of Symbolic Logic 3 (1):53-69.details What is an Inference Rule?Ronald Fagin, Joseph Y. Halpern & Moshe Y. Vardi - 1992 - Journal of Symbolic Logic 57 (3):1018-1045.details What is an inference rule? This question does not have a unique answer. One usually finds two distinct standard answers in the literature; validity inference $(\sigma \vdash_\mathrm{v} \varphi$ if for every substitution $\tau$, the validity of $\tau \lbrack\sigma\rbrack$ entails the validity of $\tau\lbrack\varphi\rbrack)$, and truth inference $(\sigma \vdash_\mathrm{t} \varphi$ if for every substitution $\tau$, the truth of $\tau\lbrack\sigma\rbrack$ entails the truth of $\tau\lbrack\varphi\rbrack)$. In this paper we introduce a general semantic framework that allows us to investigate the notion of inference (...) more carefully. Validity inference and truth inference are in some sense the extremal points in our framework. 
We investigate the relationship between various types of inference in our general framework, and consider the complexity of deciding if an inference rule is sound, in the context of a number of logics of interest: classical propositional logic, a nonstandard propositional logic, various propositional modal logics, and first-order logic. (shrink) Logic and Philosophy of Logic, Miscellaneous in Logic and Philosophy of Logic Common Knowledge Revisited.Ronald Fagin, Joseph Y. Halpern, Yoram Moses & Moshe Y. Vardi - 1999 - Annals of Pure and Applied Logic 96 (1-3):89-105.details Verification of Concurrent Programs: The Automata-Theoretic Framework.Moshe Y. Vardi - 1991 - Annals of Pure and Applied Logic 51 (1-2):79-98.details Vardi, M.Y., Verification of concurrent programs: the automata-theoretic framework, Annals of Pure and Applied Logic 51 79–98. We present an automata-theoretic framework to the verification of concurrent and nondeterministic programs. The basic idea is that to verify that a program P is correct one writes a program A that receives the computation of P as input and diverges only on incorrect computations of P. Now P is correct if and only if a program PA, obtained by combining P and A, (...) terminates. We formalize this idea in a framework of ω-automata with a recursive set of states. This unifies previous works on verification of fair termination and verification of temporal properties. (shrink) Reasoning About Knowledge: A Response by the Authors. [REVIEW]Ronald Fagin, Joseph Y. Halpern, Yoram Moses & Moshe Y. Vardi - 1997 - Minds and Machines 7 (1):113-113.details BDD-Based Decision Procedures for the Modal Logic K ★.Guoqiang Pan, Ulrike Sattler & Moshe Y. Vardi - 2006 - Journal of Applied Non-Classical Logics 16 (1-2):169-207.details We describe BDD-based decision procedures for the modal logic K. Our approach is inspired by the automata-theoretic approach, but we avoid explicit automata construction. Instead, we compute certain fixpoints of a set of types — which can be viewed as an on-the-fly emptiness of the automaton. We use BDDs to represent and manipulate such type sets, and investigate different kinds of representations as well as a "level-based" representation scheme. The latter turns out to speed up construction and reduce memory consumption (...) considerably. We also study the effect of formula simplification on our decision procedures. To prove the viability of our approach, we compare our approach with a representative selection of other approaches, including a translation of K to QBF. Our results indicate that the BDD-based approach dominates for modally heavy formulae, while search-based approaches dominate for propositionally heavy formulae. (shrink) Church's Problem Revisited.Orna Kupferman & Moshe Y. Vardi - 1999 - Bulletin of Symbolic Logic 5 (2):245-263.details In program synthesis, we transform a specification into a system that is guaranteed to satisfy the specification. When the system is open, then at each moment it reads input signals and writes output signals, which depend on the input signals and the history of the computation so far. The specification considers all possible input sequences. Thus, if the specification is linear, it should hold in every computation generated by the interaction, and if the specification is branching, it should hold in (...) the tree that embodies all possible input sequences. Often, the system cannot read all the input signals generated by its environment. 
For example, in a distributed setting, it might be that each process can read input signals of only part of the underlying processes. Then, we should transform a specification into a system whose output depends only on the readable parts of the input signals and the history of the computation. This is called synthesis with incomplete information. In this work we solve the problem of synthesis with incomplete information in its full generality. We consider linear and branching settings with complete and incomplete information. We claim that alternation is a suitable and helpful mechanism for coping with incomplete information. Using alternating tree automata, we show that incomplete information does not make the synthesis problem more complex, in both the linear and the branching paradigm. In particular, we prove that independently of the presence of incomplete information, the synthesis problems for CTL and CTL * are complete for EXPTIME and 2EXPTIME, respectively. (shrink) Review: Ronald Fagin, Moshe Y. Vardi, Knowledge and Implicit Knowledge in a Distributed Environment: Preliminary Report.William J. Rapaport, Ronald Fagin & Moshe Y. Vardi - 1988 - Journal of Symbolic Logic 53 (2):667.details Special Selection in Logic in Computer Science.Moshe Y. Vardi - 1997 - Journal of Symbolic Logic 62 (2):608.details Logic and Philosophy of Logic, General Works in Logic and Philosophy of Logic Madison, WI, USA March 31–April 3, 2012.Alan Dow, Isaac Goldbring, Warren Goldfarb, Joseph Miller, Toniann Pitassi, Antonio Montalbán, Grigor Sargsyan, Sergei Starchenko & Moshe Vardi - 2013 - Bulletin of Symbolic Logic 19 (2).details Climate Change in Applied Ethics Relating Word and Tree Automata.Orna Kupferman, Shmuel Safra & Moshe Y. Vardi - 2006 - Annals of Pure and Applied Logic 138 (1):126-146.details In the automata-theoretic approach to verification, we translate specifications to automata. Complexity considerations motivate the distinction between different types of automata. Already in the 60s, it was known that deterministic Büchi word automata are less expressive than nondeterministic Büchi word automata. The proof is easy and can be stated in a few lines. In the late 60s, Rabin proved that Büchi tree automata are less expressive than Rabin tree automata. This proof is much harder. In this work we relate the (...) expressiveness gap between deterministic and nondeterministic Büchi word automata and the expressiveness gap between Büchi and Rabin tree automata. We consider tree automata that recognize derived languages. For a word language L, the derived language of L, denoted L, is the set of all trees all of whose paths are in L. Since often we want to specify that all the computations of the program satisfy some property, the interest in derived languages is clear. Our main result shows that L is recognizable by a nondeterministic Büchi word automaton but not by a deterministic Büchi word automaton iff L is recognizable by a Rabin tree automaton and not by a Büchi tree automaton. Our result provides a simple explanation for the expressiveness gap between Büchi and Rabin tree automata. Since the gap between deterministic and nondeterministic Büchi word automata is well understood, our result also provides a characterization of derived languages that can be recognized by Büchi tree automata. Finally, it also provides an exponential determinization of Büchi tree automata that recognize derived languages. 
(shrink) Special Selection in Logic in Computer Science.Moshe Vardi - 1997 - Journal of Symbolic Logic 62 (2):608-608.details Logic for Programming Artificial Intelligence and Reasoning 10th International Conference, Lpar 2003, Almaty, Kazakhstan, September 22-26, 2003 : Proceedings. [REVIEW]Moshe Y. Vardi & A. Voronkov - 2003details Areas of Mathematics in Philosophy of Mathematics
What Is Net Interest Margin? Overview, Formula, Example
By Andrew Bloomenthal; reviewed by David Kindness; fact checked by Katrina Munichiello

Net interest margin (NIM) is a measurement comparing the net interest income a financial firm generates from credit products like loans and mortgages, with the outgoing interest it pays holders of savings accounts and certificates of deposit (CDs). Expressed as a percentage, the NIM is a profitability indicator that approximates the likelihood of a bank or investment firm thriving over the long haul. This metric helps prospective investors determine whether or not to invest in a given financial services firm by providing visibility into the profitability of their interest income versus their interest expenses. Simply put: a positive net interest margin suggests that an entity operates profitably, while a negative figure implies investment inefficiency. In the latter scenario, a firm may take corrective action by applying funds toward outstanding debt or shifting those assets towards more profitable investments.

Calculating Net Interest Margin
Net interest margin may be calculated by the following formula:
\begin{aligned} &\text{Net Interest Margin} = \frac { \text{IR} - \text{IE} }{ \text{Average Earning Assets} } \\ &\textbf{where:} \\ &\text{IR} = \text{Investment returns} \\ &\text{IE} = \text{Interest expenses} \\ \end{aligned}

Consider the following fictitious example: Assume Company ABC boasts a return on investment of $1,000,000, an interest expense of $2,000,000, and average earning assets of $10,000,000. In this scenario, ABC's net interest margin totals -10%, indicating that it lost more money due to interest expenses than it earned from its investments. This firm would likely fare better if it used its investment funds to pay off debts rather than making this investment.

What Affects Net Interest Margin
Multiple factors may affect a financial institution's net interest margin—chief among them: supply and demand. If there's a large demand for savings accounts compared to loans, net interest margin decreases, as the bank is required to pay out more interest than it receives. Conversely, if there's a higher demand for loans versus savings accounts, where more consumers are borrowing than saving, a bank's net interest margin increases. Monetary policy and fiscal regulation can impact a bank's net interest margin as the direction of interest rates dictates whether consumers borrow or save.
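The formula and the two worked examples in this article (the hypothetical Company ABC above and the retail-bank case below) can be sketched in a few lines of code; the function name and structure are illustrative only.

```python
def net_interest_margin(investment_returns, interest_expenses, avg_earning_assets):
    """Net interest margin as a fraction: (IR - IE) / average earning assets."""
    return (investment_returns - interest_expenses) / avg_earning_assets

# Company ABC example: more interest paid out than earned
abc = net_interest_margin(1_000_000, 2_000_000, 10_000_000)
print(f"ABC NIM: {abc:.2%}")        # -10.00%

# Retail bank example: $900k loaned at 5%, $1M of deposits paying 1%,
# measured over $1.2M of earning assets
bank = net_interest_margin(0.05 * 900_000, 0.01 * 1_000_000, 1_200_000)
print(f"Bank NIM: {bank:.2%}")      # about 2.92%
```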
Monetary policies set by central banks also heavily influence a bank's net interest margins because these edicts play a pivotal role in governing the demand for savings and credit. When interest rates are low, consumers are more likely to borrow money and less likely to save it. Over time, this generally results in higher net interest margins. Conversely, if interest rates rise, loans become costlier, thus making savings a more attractive option, which consequently decreases net interest margins.

Net Interest Margin and Retail Banking
Most retail banks offer interest on customer deposits, which generally hovers around 1% annually. If such a bank marshaled together the deposits of five customers and used those proceeds to issue a loan to a small business, with an annual interest rate of 5%, the 4% margin between these two amounts is considered the net interest spread. Looking one step further, the net interest margin calculates that ratio over the bank's entire asset base. Let's assume a bank has earning assets of $1.2 million, $1 million in deposits paying 1% annual interest to depositors, and loans out $900,000 at an interest rate of 5%. This means its investment returns total $45,000, and its interest expenses are $10,000. Using the aforementioned formula, the bank's net interest margin is 2.92%. With its NIM squarely in positive territory, investors may wish to strongly consider investing in this firm.

Historical Net Interest Margins
The Federal Financial Institutions Examination Council (FFIEC) releases an average net interest margin figure for all U.S. banks on a quarterly basis. Historically, this figure has trended downward while averaging about 3.8% since first being recorded in 1984. Recessionary periods coincide with dips in average net interest margins, while periods of economic expansion have witnessed sharp initial increases in the figure, followed by gradual declines. The overall movement of the average net interest margin has tracked the movement of the federal funds rate over time. Case in point: following the financial crisis of 2008, U.S. banks operated under decreasing net interest margins due to a falling rate that reached near-zero levels from 2008 to 2016. During this recession, the average net interest margin for banks in the U.S. shed nearly a quarter of its value before finally picking up again in 2015.

FRED Economic Data. "Net Interest Margin for all U.S. Banks." Accessed Aug. 26, 2020.
Results for 'Joe T. Massey' (try it on Scholar) Mental Rotation of the Neuronal Population Vector.Apostólos P. Georgopoulos, Joseph T. Lurito, Michael Petrides, Andrew B. Schwartz & Joe T. Massey - 1994 - In H. Gutfreund & G. Toulouse (eds.), Biology and Computation: A Physicist's Choice. World Scientific. pp. 183.details Philosophy of Consciousness in Philosophy of Mind $143.90 new $145.72 direct from Amazon $186.01 used (collection) Amazon page Centers Don't Have to Be Points, Political Influence of US Republican Party Overseas.Ash Amin, H. Baker, D. Massey & N. Thrift - 2005 - In Bruno Latour & Peter Weibel (eds.), Making Things Public. MIT Press.details Republicanism in Social and Political Philosophy Justification for a Home-Based Education Programme for Kidney Patients and Their Social Network Prior to Initiation of Renal Replacement Therapy.E. K. Massey, M. T. Hilhorst, R. W. Nette, P. J. H. Smak Gregoor, M. A. van den Dorpel, A. C. van Kooij, W. C. Zuidema, R. Zietse, J. J. V. Busschbach & W. Weimar - 2011 - Journal of Medical Ethics 37 (11):677-681.details In this article, an ethical analysis of an educational programme on renal replacement therapy options for patients and their social network is presented. The two main spearheads of this approach are: (1) offering an educational programme on all renal replacement therapy options ahead of treatment requirement and (2) a home-based approach involving the family and friends of the patient. Arguments are offered for the ethical justification of this approach by considering the viewpoint of the various stakeholders involved. Finally, reflecting on (...) these ethical considerations, essential conditions for carrying out such a programme are outlined. The goal is to develop an ethically justified and responsible educational programme. (shrink) Biomedical Ethics in Applied Ethics Justification for a Home-Based Education Programme for Kidney Patients and Their Social Network Prior to Initiation of Renal Replacement Therapy.Emma K. Massey, Medard T. Hilhorst, Robert W. Nette, Peter Jh Smak Gregoor, Marinus A. van den Dorpel, Anthony C. van Kooij, Willij C. Zuidema, Robert Zietse, Jan Jv Busschbach & Willem Weimar - 2011 - Journal of Medical Ethics 37 (11):677-681.details Stimulus Generalization According to Palatability in Lithium-Chloride-Induced Taste Aversions.Oliver T. Massey & William H. Calhoun - 1977 - Bulletin of the Psychonomic Society 10 (2):92-94.details Conscious and Unconscious Learning in Philosophy of Cognitive Science J. Alberto Coffa.W. C. Salmon, G. Massey, N. D. Belnap Jr & T. M. Simpson - 1993 - In David-Hillel Ruben (ed.), Explanation. Oxford University Press.details $5.84 used $19.99 new (collection) Amazon page A Short Critical History of Architecture. By H. Heathcote Statham. London: B. T. Batsford.F. T. - 1914 - Journal of Hellenic Studies 34:160-161.details Greek Art: A Commemorative Catalogue of an Exhibition Held in 1946 at the Royal Academy, Burlington House, London. By J. Chittenden and C. T. Seltman. Pp. 72; Pl. 128. London: Faber and Faber, 1947. 30s. [REVIEW]B. L. W. T. - 1946 - Journal of Hellenic Studies 66:136-136.details The Orders of Architecture: Greek, Roman, and Renaissance. By Arthur Stratton. Pp. 49; 80 Plates, 26 Illustrations in the Text. London: B. T. Batsford, 1931. 21s. [REVIEW]F. T. - 1932 - Journal of Hellenic Studies 52 (1):133-133.details Profiles of Greek Mouldings. By Lucy T. Shoe. Text, Pp. 
Xvi + 187; Plates, Loose, in Folding Flap Case: A to F, Photographic, and I to LXXIX, in Line Block. Cambridge, Mass.: Harvard University Press, 1936. [REVIEW]F. T. - 1937 - Journal of Hellenic Studies 57 (2):260-261.details Greek Comic Costume: Its History and Diffusion . By T. B. L. Webster. Pp. 26, with 2 Plates. Manchester: Rylands Library. 1954. 3s. [REVIEW]J. D. T. - 1955 - Journal of Hellenic Studies 75:208-209.details Stephen J. Massey 1948-1992.Kathleen Nicholson Massey - 1994 - Proceedings and Addresses of the American Philosophical Association 67 (4):144 - 145.details The Art of Knowing One-Self: Or, an Enquiry Into the Sources of Morality [Tr. By T.W.].Jacques Abbadie & W. T. - 1695details LAGUNA, T. DE.-Introduction to the Study of Ethics. [REVIEW]A. E. T. - 1915 - Mind 24:421.details An Account of the Life and Writings of Mr. John Locke [by J. Le Clerc, Tr. By T.F.P.].Jean Le Clerc & F. P. T. - 1713details Locke: Life and Times in 17th/18th Century Philosophy An Account of the Life and Writings of Mr. John Locke [by J. Le Clerc, Tr. By T.F.P.]. [Followed by] the Last Will and Testament of John Locke. [REVIEW]Jean Le Clerc & F. P. T. - 1714details The Life and Character of Mr. John Locke. Done Into Engl. By T.F.P.Jean Le Clerc & F. P. T. - 1706details Doreen Massey and Richard Meegan.Doreen Massey - 1989 - In Derek Gregory & Rex Walford (eds.), Horizons in Human Geography. Barnes & Noble. pp. 244.details $0.62 used $50.96 direct from Amazon $84.12 new (collection) Amazon page A Dialogue Between Mr. Merriman, and Dr. Chymist: Concerning John Sergents Paradoxes, in His New Method to Science, and His Solid Philosophy. By T.W. [REVIEW]W. T. - 1698 - [S.N.].details British Philosophy in European Philosophy NUNN, T. P. -The Aim and Achievements of Scientific Method. [REVIEW]L. T. L. T. - 1908 - Mind 17:274.details Atran's Evolutionary Psychology: "Say It Ain't Just-so, Joe".James Maffie - 1998 - Behavioral and Brain Sciences 21 (4):583-584.details Atran advances three theses: our folk-biological taxonomy is (1) universal, (2) innate, and (3) the product of natural selection. I argue that Atran offers insufficient support for theses (2) and (3) and that his evolutionary psychology thus amounts to nothing more than a just-so story. Evolutionary Psychology in Philosophy of Cognitive Science Why Joe Schmoe Doesn't Buy Evolution.William Dembski - manuscriptdetails Evolutionary Biology in Philosophy of Biology Public Conversation: Joe Sacco and W.J.T. Mitchell.Jim Chandler - 2014 - Critical Inquiry 40 (3):53-70.details Reflections on Medicine: Essays by Robert U. Massey, M.D.Jerome Lowenstein - 2011 - Perspectives in Biology and Medicine 54 (4):595-598.details Reflections on Medicine is a rich sampling of 70 essays from a collection of more than 300 essays Robert Massey wrote for Connecticut Medicine: The Journal of the Connecticut State Medical Society, between 1973 and 2005. It is an elegant buffet of the thoughts and observations of a remarkable man. In his foreword to the book, Sherwin Nuland writes: "he applied his massive erudition to so many [other] themes, universal and specific—he accepted the uncertainty of human wisdom and even (...) knowledge, recognizing that it is for each generation to look anew at dilemmas both modern and ancient" (p. ix). In fact, there are countless references to the writings of Albert Schweitzer, T. S. Eliot, Mark Twain, and Lewis Thomas .. 
(shrink) Philosophy of Medicine, Misc in Philosophy of Science, Misc Don't Stop Believin'.Joe Smith - 1999details Philosophy of Love in Philosophy of Gender, Race, and Sexuality Satisficing Consequentialism Still Doesn't Satisfy.Joe Slater - forthcoming - Utilitas:1-10.details Satisficing consequentialism is an unpopular theory. Because it permits gratuitous sub-optimal behaviour, it strikes many as wildly implausible. It has been widely rejected as a tenable moral theory for more than twenty years. In this article, I rehearse the arguments behind this unpopularity, before examining an attempt to redeem satisficing. Richard Yetter Chappell has recently defended a form of 'effort satisficing consequentialism'. By incorporating an 'effort ceiling' – a limit on the amount of willpower a situation requires – and requiring (...) that agents produce at least as much good as they could given how much effort they are exerting, Chappell avoids the obvious objections. However, I demonstrate that the revised theory is susceptible to a different objection, and that the resulting view requires that any supererogatory behaviour must be efficient, which fails to match typical moral verdicts. (shrink) Aristotle's Physics: A Guided Study. Joe Sachs.Paul T. Keyser - 1996 - Isis 87 (4):716-717.details Aristotle in Ancient Greek and Roman Philosophy The Politics of Conscience: T. H. Green and His Age.Joe E. Barnhart - 1967 - Journal of the History of Philosophy 5 (1):96-98.details Melvin Richter, "The Politics of Conscience: T. H. Green and His Age". [REVIEW]Joe Edward Barnhart - 1967 - Journal of the History of Philosophy 5 (1):96.details The Economics of Collective Choice, by Joe B. Stevens.T. P. Abeles - 1994 - Agriculture and Human Values 11:57-57.details Theory in Economics in Philosophy of Social Science Epistemic Modals Are Assessment-Sensitive.John MacFarlane - 2011 - In Andy Egan & B. Weatherson (eds.), Epistemic Modality. Oxford University Press.details By "epistemic modals," I mean epistemic uses of modal words: adverbs like "necessarily," "possibly," and "probably," adjectives like "necessary," "possible," and "probable," and auxiliaries like "might," "may," "must," and "could." It is hard to say exactly what makes a word modal, or what makes a use of a modal epistemic, without begging the questions that will be our concern below, but some examples should get the idea across. If I say "Goldbach's conjecture might be true, and it might be false," (...) I am not endorsing the Cartesian view that God could have made the truths of arithmetic come out differently. I make the claim not because I believe in the metaphysical contingency of mathematics, but because I know that Goldbach's conjecture has not yet been proved or refuted. Similarly, if I say "Joe can't be running," I am not saying that Joe's constitution prohibits him from running, or that Joe is essentially a non-runner, or that Joe isn't allowed to run. My basis for making the claim may be nothing more than that I see Joe's running shoes hanging on a hook. (shrink) Epistemic Contextualism and Relativism in Epistemology Epistemic Modals in Philosophy of Language Epistemic Possibility in Epistemology Relativism about Truth in Philosophy of Language $35.19 used $70.95 new $110.00 direct from Amazon (collection) Amazon page Exploring Ethical Issues Related to Patient Engagement in Healthcare: Patient, Clinician and Researcher's Perspectives.Marjorie Montreuil, Joé T. 
Martineau & Eric Racine - 2019 - Journal of Bioethical Inquiry 16 (2):237-248.details Patient engagement in healthcare is increasingly discussed in the literature, and initiatives engaging patients in quality improvement activities, organizational design, governance, and research are becoming more and more common and have even become mandatory for certain health institutions. Here we discuss a number of ethical challenges raised by this engagement from patients from the perspectives of research, organizational/quality improvement practices, and patient experiences, while offering preliminary recommendations as to how to address them. We identified three broad categories of ethical issues (...) that intersect between the different types of patient engagement: establishing a shared vision about goals of patient engagement and respective roles; the process and method of engaging with patients; and practical aspects of patient engagement. To explain these issues, we build from our personal, professional, and academic experiences, as well as traditions such as pragmatism and hermeneutics that stress the importance of participation, empowerment, and engagement. Patient engagement can be highly valuable at numerous levels, but particular attention should be paid to the process of engaging with patients and related ethical issues. Some lessons from the literature on the ethics of participatory research can be translated to organizational and quality improvement practices. (shrink) On the Unethical Use of Privileged Information in Strategic Decision-Making: The Effects of Peers' Ethicality, Perceived Cohesion, and Team Performance.Kevin J. Johnson, Joé T. Martineau, Saouré Kouamé, Gokhan Turgut & Serge Poisson-de-Haro - 2018 - Journal of Business Ethics 152 (4):917-929.details In order to make strategic decisions and improve their firm's performance, top management teams must have information on the competitive context in general, and the firm's competitors in particular. During the decision-making process, top managers can have access to "privileged information"—i.e., information of a confidential and potentially strategic nature that could ultimately confer a decisional advantage over competing parties. However, obtaining and using privileged information in a business context is often illegal—and if not, is usually deemed unethical or "against the (...) rules." Using a quasi-experimental design, this study explores the reasons why an individual might engage in such unethical behavior. We assess the extent to which managers use privileged information with respect to perceived team cohesion and peers' ethicality. More specifically, our results show that the use of privileged information is predicted by the decision-maker's perceptions of their team cohesion and their peers' ethicality. Moreover, we find that team performance, as a group-level nonself-reported factor, moderates the relationship between cohesion and the use of privileged information. The relationship between cohesion, ethical behavior, and team performance is also discussed. We draw on these findings to make some practical suggestions on how to incorporate practices that could better prevent the unethical use of privileged information in strategic decision-making processes. (shrink) La gestion de l'éthique dans les organisations québécoises : déploiement, portrait et pistes de développement souhaitables.Joé T. Martineau & Pauchant - 2017 - Éthique Publique. 
Revue Internationale D'Éthique Sociétale Et Gouvernementale 19 (1).details Cet article brosse un portrait de l'évolution du domaine de l'éthique organisationnelle au Québec et propose une classification permettant de qualifier les approches réellement utilisées en entreprise, et de discuter de la composition des programmes d'éthique organisationnelle en fonction des besoins et des contextes organisationnels. Les résultats présentés sont issus d'une étude quantitative sur la présence et la perception des pratiques et programmes d'éthique mis en place dans les organisations québécoises. Cette étude a aussi permis de démontrer qu'il existe six (...) types d'orientations des programmes d'éthique organisationnelle qui regroupent chacune diverses pratiques de gestion de l'éthique. L'approche proposée dans cet article souhaite aller au-delà de l'approche classique opposant les approches de conformité et d'intégrité en éthique organisationnelle, en misant plutôt sur la complémentarité, la synergie et la variété des différentes approches. (shrink) A Study of Perennial Philosophy and Psychedelic Experience, with a Proposal to Revise W. T. Stace's Core Characteristics of Mystical Experience.Ed D'Angelo - manuscriptdetails A Study of Perennial Philosophy and Psychedelic Experience, with a Proposal to Revise W. T. Stace's Core Characteristics of Mystical Experience ©Ed D'Angelo 2018 -/- Abstract -/- According to the prevailing paradigm in psychedelic research today, when used within an appropriate set and setting, psychedelics can reliably produce an authentic mystical experience. According to the prevailing paradigm, an authentic mystical experience is one that possesses the common or universal characteristics of mystical experience as identified by the philosopher W. T. Stace (...) in his 1960 work Mysticism and Philosophy. Stace's common characteristics of mystical experience are the basis for the Hood Mysticism Questionnaire, which is the most widely used quantitative measure of mystical experience in experimental studies of psychedelic experience. In this paper, I trace the historical roots of Stace's common characteristics of mystical experience back to Christian Neoplatonism and apophatic theology, and I trace those, in turn, back to Plato's concept of the Good and to Aristotle's concept of God as active intellect. I argue that Stace's common characteristics of mystical experience are not universal or culturally invariant but are the product of a specifically Christian religious and moral tradition that has its roots in ancient Greek metaphysics. My paper concludes with a revised list of common characteristics of psychedelic experience that is a better candidate for a list of invariant structures of psychedelic experience than Stace's common characteristics of Christian mystical experience. (shrink) Philosophy of Religion, Misc in Philosophy of Religion States of Consciousness in Philosophy of Cognitive Science T.S. Eliot and Others: The (More or Less) Definitive History and Origin of the Term "Objective Correlative".Dominic Griffiths - 2018 - English Studies 6 (99):642-660.details This paper draws together as many as possible of the clues and pieces of the puzzle surrounding T. S. Eliot's "infamous" literary term "objective correlative". Many different scholars have claimed many different sources for the term, in Pound, Whitman, Baudelaire, Washington Allston, Santayana, Husserl, Nietzsche, Newman, Walter Pater, Coleridge, Russell, Bradley, Bergson, Bosanquet, Schopenhauer and Arnold. 
This paper aims to rewrite this list by surveying those individuals who, in different ways, either offer the truest claim to being the source of (...) the term, or contributed the most to Eliot's development of it: Allston, Husserl, Bradley and Bergson. What the paper will argue is that Eliot's possible inspiration for the term is more indebted to the idealist tradition, and Bergson's aesthetic development of it, than to the phenomenology of Husserl. (shrink) Henri Bergson in 20th Century Philosophy Husserl: Logical Investigations in Continental Philosophy Literature in Arts and Humanities Martin Heidegger in Continental Philosophy Acceptable Contradictions: Pragmatics or Semantics? A Reply to Cobreros Et Al. [REVIEW]Sam Alxatib, Peter Pagin & Uli Sauerland - 2013 - Journal of Philosophical Logic 42 (4):619-634.details Naive speakers find some logical contradictions acceptable, specifically borderline contradictions involving vague predicates such as Joe is and isn't tall. In a recent paper, Cobreros et al. (J Philos Logic, 2012) suggest a pragmatic account of the acceptability of borderline contradictions. We show, however, that the pragmatic account predicts the wrong truth conditions for some examples with disjunction. As a remedy, we propose a semantic analysis instead. The analysis is close to a variant of fuzzy logic, but conjunction and disjunction (...) are interpreted as intensional operators. (shrink) Dialetheism in Logic and Philosophy of Logic Fuzzy Logic in Logic and Philosophy of Logic Semantics-Pragmatics Distinction in Philosophy of Language The Cognitive Based Approach of Capacity Assessment in Psychiatry: A Philosophical Critique of the MacCAT-T. [REVIEW]Torsten Marcus Breden & Jochen Vollmann - 2004 - Health Care Analysis 12 (4):273-283.details This article gives a brief introduction to the MacArthur Competence Assessment Tool-Treatment (MacCAT-T) and critically examines its theoretical presuppositions. On the basis of empirical, methodological and ethical critique it is emphasised that the cognitive bias that underlies the MacCAT-T assessment needs to be modified. On the one hand it has to be admitted that the operationalisation of competence in terms of value-free categories, e.g. rational decision abilities, guarantees objectivity to a great extent; but on the other hand it bears severe (...) problems. Firstly, the cognitive focus is in itself a normative convention in the process of anthropological value-attribution. Secondly, it misses the complexity of the decision process in real life. It is therefore suggested that values, emotions and other biographic and context specific aspects should be considered when interpreting the cognitive standards according to the MacArthur model. To fill the gap between cognitive and non-cognitive approaches the phenomenological theory of personal constructs is briefly introduced. In conclusion some main demands for further research to develop a multi-step model of competence assessment are outlined. (shrink) Looking Into the Heart of Light: Considering the Poetic Event in the Work of T.S. Eliot and Martin Heidegger.Dominic Griffiths - 2014 - Philosophy and Literature 38 (2):350-367.details No one is quite sure what happened to T.S. Eliot in that rose-garden. What we do know is that it formed the basis for Four Quartets, arguably the greatest English poem written in the twentieth century. 
Luckily it turns out that Martin Heidegger, when not pondering the meaning of being, spent a great deal of time thinking and writing about the kind of event that Eliot experienced. This essay explores how Heidegger developed the concept of Ereignis, "event" which, in the (...) context of Eliot's poetry, helps us understand an encounter with the "heart of light" a little better. (shrink) Poetry in Aesthetics A Failed Encounter in Mathematics and Chemistry: The Folded Models of van 'T Hoff and Sachse.Michael Friedman - 2016 - Teorie Vědy / Theory of Science 38 (3):359-386.details Three-dimensional material models of molecules were used throughout the 19th century, either functioning as a mere representation or opening new epistemic horizons. In this paper, two case studies are examined: the 1875 models of van 't Hoff and the 1890 models of Sachse. What is unique in these two case studies is that both models were not only folded, but were also conceptualized mathematically. When viewed in light of the chemical research of that period not only were both of these (...) aspects, considered in their singularity, exceptional, but also taken together may be thought of as a subversion of the way molecules were chemically investigated in the 19th century. Concentrating on this unique shared characteristic in the models of van 't Hoff and the models of Sachse, this paper deals with the shifts and displacements between their operational methods and existence: between their technical and epistemological aspects and the fact that they were folded, which was forgotten or simply ignored in the subsequent development of chemistry. (shrink) Are We Conditionally Obligated to Be Effective Altruists?Thomas Sinclair - 2018 - Philosophy and Public Affairs 46 (1):36-59.details It seems that you can be in a position to rescue people in mortal danger and yet have no obligation to do so, because of the sacrifice to you that this would involve. At the same time, if you do save anyone, then you must not leave anyone to die whom it would cost you no additional sacrifice to save. On the basis of these claims, Theron Pummer and Joe Horton have recently defended a 'conditional obligation of effective altruism', which (...) requires one to give to the most cost-effective charity if one is going to make a charitable donation at all, all other things equal. Appealing to a distinction between 'thoroughgoing' and 'half-hearted' non-consequentialism, I argue that their inferences don't go through, and moreover that this sort of argument in general is unlikely to work as a way to defend effective altruism. (shrink) Charitable Giving, Misc in Applied Ethics Consequentialism and Deontology in Normative Ethics Effective Altruism in Applied Ethics Objections to Consequentialism, Misc in Normative Ethics Social Ethics in Applied Ethics Supererogation in Normative Ethics Dignity, Character and Self-Respect.Robin S. Dillon (ed.) - 1994 - Routledge.details This is the first anthology to bring together a selection of the most important contemporary philosophical essays on the nature and moral significance of self-respect. Representing a diversity of views, the essays illustrate the complexity of self-respect and explore its connections to such topics as personhood, dignity, rights, character, autonomy, integrity, identity, shame, justice, oppression and empowerment. The book demonstrates that self-respect is a formidable concern which goes to the very heart of both moral theory and moral life. Contributors: Bernard (...) 
Boxill, Stephen L. Darwall, John Deigh, Robin S. Dillon, Thomas E. Hill, Jr., Aurel Kolnai, Stephen J. Massey, Diana T. Meyers, Michelle M. Moody-Adams, John Rawls, Gabriele Taylor, Elizabeth Telfer, Laurence L. Thomas. Jump Liars and Jourdain's Card Via the Relativized T-Scheme. Ming Hsiung - 2009 - Studia Logica 91 (2):239-271. A relativized version of Tarski's T-scheme is introduced as a new principle of the truth predicate. Under the relativized T-scheme, the paradoxical objects, such as the Liar sentence and Jourdain's card sequence, are found to have certain relative contradictoriness. That is, they are contradictory only in some frames in the sense that any valuation admissible for them in these frames will lead to a contradiction. It is proved that for any positive integer n, the n-jump liar sentence is contradictory in and only in those frames containing at least an n-jump odd cycle. In particular, the Liar sentence is contradictory in and only in those frames containing at least an odd cycle. The Liar sentence is also proved to be less contradictory than Jourdain's card sequence: the latter must be contradictory in those frames where the former is so, but not vice versa. Generally, the relative contradictoriness is the common characteristic of the paradoxical objects, but different paradoxical objects may have different relative contradictoriness. The Poet as 'Worldmaker': T.S. Eliot and the Religious Imagination. Dominic Griffiths - 2015 - In Francesca Knox & David Lonsdale (eds.), The Power of the Word: Poetry and the Religious Imagination. Ashgate. pp. 161-175. Martin Heidegger defines the world as 'the ever non-objective to which we are subject as long as the paths of birth and death . . . keep us transported into Being'. He writes that the world is 'not the mere collection of the countable or uncountable, familiar and unfamiliar things that are at hand . . . The world worlds'. Being able to fully and richly express how the world worlds is the task of the artist, whose artwork is the crystallization of this 'worlding'. For Heidegger it is especially the poet who is attuned to this 'worlding', for the poet's work is focussed on and happens within language itself. The poet is a 'world-maker', but this 'worlding' is not directed at creating a fictional world; rather it is aimed at revealing the world itself, drawing to the foreground the 'ever non-objective' nature of the world, the world happening and unfolding through and with us. This paper, using T.S. Eliot's poetry, particularly Four Quartets as an example, will delve into how the language of the poet is able to articulate this seemingly invisible boundary between the everyday world and that same world revealed as a mysterious potential that 'worlds'. The 'religious imagination' is central in how this transformation of reality, through poetic language, can manifest. Paul Ricoeur's work on the poetic and religious dimensions of imagination, particularly the notion of 'hope', provides the theoretical underpinnings to explain this transformation.
The Metaphysics of Free Will: A Critique of Free Won't as Double Prevention. Matteo Grasso - 2015 - Rivista Internazionale di Filosofia e Psicologia 6 (1):120-129. The problem of free will is deeply linked with the causal relevance of mental events. The causal exclusion argument claims that, in order to be causally relevant, mental events must be identical to physical events. However, Gibb has recently criticized it, suggesting that mental events are causally relevant as double preventers. For Gibb, mental events enable physical effects to take place by preventing other mental events from preventing a behaviour from taking place. The role of mental double preventers is hence similar to what Libet names free won't, namely the ability to veto an action initiated unconsciously by the brain. In this paper I will propose an argument against Gibb's account, the causal irrelevance argument, showing that Gibb's proposal does not overcome the objection of systematic overdetermination of causal relevance, because mental double preventers systematically overdetermine physical double preventers, and therefore mental events are causally irrelevant. Filosofía, modernidad, tradición e historia: analogías entre Martin Heidegger y T. S Eliot. David Sánchez Usanos - 2019 - Revista de Filosofía 44 (2):263-278. This article shows the relationship between some of Martin Heidegger's ideas about philosophy, history, the past and tradition and analogous proposals in the literary field, principally those of the poet and critic T. S. Eliot. We focus on four aspects: the reinvention of the canon, the methodological reconsideration of the past, attention to the form of presentation, and the questioning of the discipline itself. T-Convexity and Tame Extensions. Lou van den Dries & Adam H. Lewenberg - 1995 - Journal of Symbolic Logic 60 (1):74 - 102. Let T be a complete o-minimal extension of the theory of real closed fields. We characterize the convex hulls of elementary substructures of models of T and show that the residue field of such a convex hull has a natural expansion to a model of T. We give a quantifier elimination relative to T for the theory of pairs (R, V) where $\mathscr{R} \models T$ and V ≠ R is the convex hull of an elementary substructure of R. We deduce that the theory of such pairs is complete and weakly o-minimal. We also give a quantifier elimination relative to T for the theory of pairs (R, N) with R a model of T and N a proper elementary substructure that is Dedekind complete in R. We deduce that the theory of such "tame" pairs is complete. La ciencia y el mundo físico de acuerdo a W. T. Stace. Vicente Aboites & Gilberto Aboites - 2018 - Valenciana 21:187-206. W. T. Stace's argument about realism is presented, pointing out not that realism is false but only that there is absolutely no reason to consider it true, and therefore no reason to believe it. This is applied to the discussion of the question: How do we know that atoms exist? Reference is made to some of the most important known scientific answers, which are, in chronological order: i) the law of definite proportions, or Proust's law, ii)
the kinetic theory of gases, iii) Brownian motion and iv) scanning tunnelling microscope images. A Problem with Societal Desirability as a Component of Responsible Research and Innovation: The "If We Don't Somebody Else Will" Argument. John Weckert, Hector Rodriguez Valdes & Sadjad Soltanzadeh - 2016 - NanoEthics 10 (2):215-225. The implementation of Responsible Research and Innovation is not without its challenges, and one of these is raised when societal desirability is included amongst the RRI principles. We will argue that societal desirability is problematic even though it appears to fit well with the overall ideal. This discord occurs partly because the idea of societal desirability is inherently ambiguous, but more importantly because its scope is unclear. This paper asks: is societal desirability in the spirit of RRI? On von Schomberg's account, it seems clear that it is, but societal desirability can easily clash with what is ethically permissible; for example, when what is desirable in a particular society is bad for the global community. If that society chose not to do what was desirable for it, the world would be better off than if they did it. Yet our concern here is with a more complex situation, where there is a clash with ethical acceptability, but where the world would not be better off if the society chose not to do what was societally desirable for itself. This is the situation where it is argued that someone else will do it if we do not. The first section of the paper gives an outline of what we take technology to be, and the second is a discussion of which criteria should be the basis for choosing research and innovation projects. This will draw on the account of technology outlined in the first section. This will be followed by an examination of a common argument, "If we don't do it, others will". This argument is important because it appears to justify acting in morally dubious ways. Finally, it will be argued that societal desirability gives support to the "If we don't…" argument and that this raises some difficulties for RRI. On Joseph Ransdell. Ransdell - 2013 - Transactions of the Charles S. Peirce Society 49 (4):449. My father would have loved the idea of me writing this introduction on behalf of my family, a task which is, to be frank, a little intimidating, given this audience that he held in such high esteem. My father's mind could take him anywhere, to many places where—especially in the last year of his life—his body could not. Anyone lucky enough to have conversed with him knows that with Dr. Joseph Ransdell (Joe to many, and Dad to his daughters), you started off a conversation in one place, and for the next hour at least, you followed his mind around subjects like perception, belief, the nature of reality, until, as in the T.S. Eliot quote he loved, you arrived where you started and knew the place for the first time. As ..
CommonCrawl
Dr Aneta Neumann Aneta Neumann is a researcher at the School of Computer Science and Mathematical Sciences, at the University of Adelaide. She graduated in Computer Science from the Christian-Albrechts-University of Kiel, Germany and received her PhD from the University of Adelaide, Australia. She was a participant in the SALA 2016 -2018 exhibitions in Adelaide and has presented invited talks at UCL London, Goldsmiths, University of London, the University of Nottingham, the University of Sheffield, Hasso Plattner Institut University Potsdam, Sorbonne University and University of Melbourne in 2016-2022. Aneta is a co-designer and co-lecturer for the EdX Big Data Fundamentals course in the Big Data MicroMasters® program. She received an ACM-W scholarship, sponsored by Google, Microsoft, and Oracle, a Hans-Juergen and Marianna Ohff Research Grant, and the Best Paper Nomination at GECCO 2019, GECCO 2021 and GECCO 2022. Her main research interests include bio-inspired computation, particularly dynamic and stochastic optimisation, submodular functions, evolutionary diversity optimisation, and optimisation under uncertainty in the mining industry. Moreover, her work contributes to understanding the fundamental link between bio-inspired computation, machine learning, and computational creativity. She investigates evolutionary image transition and animation in the area of Artificial Intelligence and examines how to develop designs and applications of artificial intelligent methods based on complex agent-based models. News: Research Summer Projects, Honours, Masters and PhD Student Applications 2023 Project title: Artificial Intelligence - Innovative approaches for increasing the productivity of South Australia's copper and gold production. Project description: Artificial Intelligence is currently used in various ways to solve significant industry challenges. The students will develop advanced technologies to help boost South Australia's copper and gold production. The topic spans from experimental investigations of algorithms to data analysis using machine learning methods. The projects can be carried out dependent on the background and interest of the students. Project title: Advanced Ore Mine Optimisation under Uncertainty. Mining processes involve a lot of uncertainties due to the lack of information about ore grades within the ore body. Using an average model or an average gold/copper price leads to weakness/limitation of models that does not take into account the effect of potentially large losses due to deviations from the expected value. In order to improve the estimate of a mining project's value you will model the uncertainty and establish confidence intervals for optimised solutions. The topic spans from experimental investigations of algorithms for ore grade estimation and optimisation of mine design and scheduling to data analysis based on machine learning (e.g. deep neural networks). The project is suitable for a student interested both in a career in research or in industry. Project title: Evolutionary Diversity Optimisation. Project description: Diversity optimisation is beneficial to many industrial application areas as it provides a large variety of high quality and innovative design choices. Diversity can drive innovation and deliver promising results in complex problems and optimisation. You will design and analyse algorithms for computing a diverse set of solutions that all meet given quality criteria, and explore the impact of different diversity strategies. 
A background in algorithms and programming knowledge is beneficial. The project is suitable for a student interested in a career in either research or industry. We are able to offer financial assistance for undergraduate students at the UofA. Project title: AI-based Time Use Optimization for Improving Health Outcomes. Project description: How you allocate your time is important to your health and well-being. In this project, we will design AI-based methods that can be used to promote health and well-being by optimizing time usage. Based on data from a large population-based cohort, we will optimize health outcomes for viable time plans with different day structures. As the different health outcomes compete for time allocations, you will study how to optimize multiple health outcomes simultaneously in the form of a multi-objective optimization problem and point out the trade-offs achievable with respect to different health outcomes. The project is suitable for a student interested in a career in either research or industry. The projects can be carried out depending on the background and interests of the students. Project title: Towards Solving Real-World Optimization Problems: AI-based Methods for the Traveling Thief Problem. Project description: In real-world optimization, it is common to face several sub-problems interacting and forming the main problem. There is an inter-dependency between the sub-problems, making it impossible to solve such a problem by focusing on only one component. The traveling thief problem (TTP) belongs to this category and is formed by the integration of the traveling salesperson problem (TSP) and the knapsack problem (KP); a minimal evaluation sketch is given at the end of this section. In this project, you will investigate a prominent multi-component optimisation problem, namely the TTP, in the context of AI-based optimization. Moreover, we will examine the inter-dependency among the components of the problem and empirically determine the best method to solve this real-world problem. You will conduct an experimental investigation to examine the novel algorithms and compare the results to another recently introduced framework. The project is suitable for a student interested in a career in either research or industry. The projects can be carried out depending on the background and interests of the students. Project title: AI-based Computational Creativity. Project description: Can computers be capable of human-level creativity? Is artificial intelligence set to become the next great art/music movement? Artificial intelligence is substantially changing the nature of creative processes. The students will explore the interface between art, music and artificial intelligence. For example, evolutionary image transition can be utilised as inspiration for creating original digital art and videos. The focus will be on developing new tools using machine learning, neural networks and optimisation. The projects can be carried out depending on the background and interests of the students. Students who are interested in these research topics are very welcome to contact me: [email protected]. CSC PhD Applications The China Scholarship Council (CSC) and the University of Adelaide are jointly offering postgraduate research scholarships to applicants from the People's Republic of China who intend to undertake a Doctor of Philosophy at the University of Adelaide. Please email your expression of interest to [email protected].
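To make the inter-dependency described in the Traveling Thief Problem project above concrete, here is a minimal evaluation sketch in Python. It follows the standard TTP formulation (tour plus packing plan, with the thief slowing down as the knapsack fills up), but all function and parameter names are illustrative and not taken from any existing codebase.

```python
import math

def ttp_objective(tour, coords, items, packing, capacity,
                  v_min=0.1, v_max=1.0, renting_ratio=1.0):
    """Objective value of one (tour, packing plan) pair for the TTP.

    tour    -- permutation of city indices
    coords  -- dict mapping city index -> (x, y) position
    items   -- list of (city, profit, weight) triples
    packing -- list of 0/1 decisions, one entry per item
    """
    # Total profit and per-city weight of the selected items.
    profit = sum(p for (c, p, w), take in zip(items, packing) if take)
    weight_at = {}
    for (c, p, w), take in zip(items, packing):
        if take:
            weight_at[c] = weight_at.get(c, 0.0) + w
    if sum(weight_at.values()) > capacity:
        return float("-inf")  # infeasible packing plan

    # Travel time: speed drops linearly with the current knapsack load,
    # so the value of a tour depends on the packing plan and vice versa.
    time, load = 0.0, 0.0
    n = len(tour)
    for i in range(n):
        a, b = tour[i], tour[(i + 1) % n]
        load += weight_at.get(a, 0.0)
        speed = v_max - load / capacity * (v_max - v_min)
        time += math.dist(coords[a], coords[b]) / speed

    return profit - renting_ratio * time
```

A typical project would plug such an evaluation into an evolutionary loop that mutates the tour (e.g. 2-opt moves) and the packing plan (bit flips), either jointly or in alternating phases, which is exactly where the inter-dependency between the two components becomes visible.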
Paper accepted at AAAI 2023: Mingyu Guo, Max Ward, Aneta Neumann, Frank Neumann, Hung Nguyen: Scalable Edge Blocking Algorithms for Defending Active Directory Style Attack Graphs. I serve as a Chair of the Genetic Algorithms (GA) track at the Genetic and Evolutionary Computation Conference, GECCO 2023, with John Woodward. Article accepted at the journal ACM Transactions on Evolutionary Learning and Optimization 2022: Analysis of Evolutionary Diversity Optimization for Permutation Problems. Authors: A.V. Do, M. Guo, A. Neumann, F. Neumann. 4 Papers accepted at PPSN 2022: Aneta Neumann, Yue Xie and Frank Neumann: Evolutionary Algorithms for Limiting the Effect of Uncertainty for the Knapsack Problem with Stochastic Profits [arXiv] Adel Nikfarjam, Aneta Neumann, Jakob Bossek and Frank Neumann: Co-Evolutionary Diversity Optimisation for the Traveling Thief Problem Adel Nikfarjam, Amirhossein Moosavi, Aneta Neumann and Frank Neumann: Computing High-Quality Solutions for the Patient Admission Scheduling Problem using Evolutionary Diversity Optimisation Yue Xie, Aneta Neumann, Ty Stanford, Charlotte Lund Rasmussen, Dorothea Dumuid and Frank Neumann: Evolutionary Time Use Optimization for Improving Children's Health Outcomes [arXiv] Article accepted at the journal Theoretical Computer Science 2022: Single- and Multi-Objective Evolutionary Algorithms for the Knapsack Problem with Dynamically Changing Constraints. Authors: V. Roostapour, A. Neumann, F. Neumann. [arXiv] 5 Papers accepted at GECCO 2022: Aneta Neumann, Denis Antipov, Frank Neumann: Coevolutionary Pareto Diversity Optimization [arXiv] Adel Nikfarjam, Aneta Neumann, Frank Neumann: Evolutionary Diversity Optimisation for The Traveling Thief Problem [arXiv] Anh Viet Do, Mingyu Guo, Aneta Neumann, Frank Neumann: Niching-based Evolutionary Diversity Optimization for the Traveling Salesperson Problem [arXiv] Diksha Goel, Max Hector Ward-Graham, Aneta Neumann, Frank Neumann, Hung Nguyen, Mingyu Guo: Defending Active Directory by Combining Neural Network based Dynamic Program and Evolutionary Diversity Optimisation [arXiv] Adel Nikfarjam, Aneta Neumann, Frank Neumann: On the Use of Quality Diversity Algorithms for The Traveling Thief Problem [arXiv] I serve as a Chair of the Real World Applications (RWA) track at the Genetic and Evolutionary Computation Conference, GECCO 2022, with Richard Allmendinger. Mingyu Guo, Jialiang Li, Aneta Neumann, Frank Neumann, Hung Nguyen: Practical Fixed-Parameter Algorithms for Defending Active Directory Style Attack Graphs [arXiv] Competition "Evolutionary Submodular Optimisation" at GECCO 2022. Tutorial "Evolutionary Submodular Optimisation" at GECCO 2021 and GECCO 2022. Tutorial "Evolutionary Diversity Optimisation for Combinatorial Optimisation" at GECCO 2022. Tutorials "Evolutionary Computation for Digital Art" and "Evolutionary Diversity Optimisation" at IEEE CEC 2021. The 2022 Lorentz Center Workshop (by-invitation-only event): Benchmarked: Optimization Meets Machine Learning 2022, hybrid, 30 May – 3 Jun 2022, Leiden, The Netherlands. Article accepted at the Artificial Intelligence journal (AIJ): Pareto optimization for subset selection with dynamic cost constraints. Authors: V. Roostapour, A. Neumann, F. Neumann, T. Friedrich, [arXiv] Best Paper Nomination (GECCO 2021) for the work "Analysis of Evolutionary Diversity Optimisation for Permutation Problems", A.V. Do, M. Guo, A. Neumann, F. Neumann, at the Genetic and Evolutionary Computation Conference 2021 (Track Genetic Algorithms).
Co-Chair of the Real World Applications (RWA) track at the Genetic and Evolutionary Computation Conference, GECCO 2021 Tutorial on Evolutionary Submodular Optimisation at GECCO 2021 with Frank Neumann and Chao Qian accepted at Genetic and Evolutionary Computation Conference (GECCO) 2021. Tutorials on Evolutionary Computation for Digital Art and Evolutionary Diversity Optimisation at IEEE CEC 2021. Co-organiser of Special Issue of Theoretical Computer Science titled "Theoretical Foundations of Evolutionary Computation'' with Per Kristian Lehre and Chao Qian, Submission Deadline: December 31, 2021. Paper accepted at FOGA 2021: Computing Diverse Sets of High Quality TSP Tours by EAX-Based Evolutionary Diversity Optimisation, Authors: A. Nikfarjam, J. Bossek, A. Neumann, F. Neumann, [arXiv], [download] Paper with industry partner: Maptek accepted at GECCO 2021 Companion, Workshop on Industrial Applications of Metaheuristics: Advanced Mine Optimisation under Uncertainty Using Evolution, [download] Diversifying Greedy Sampling and Evolutionary Diversity Optimisation for Constrained Monotone Submodular Functions. Authors: Aneta Neumann, J. Bossek, F. Neumann, [arXiv] Heuristic Strategies for Solving Complex Interacting Stockpile Blending Problem with Chance Constraints. Authors: Y. Xie, Aneta Neumann, F. Neumann, [arXiv] Runtime Analysis of RLS and the (1+1) EA for the Chance-constrained Knapsack Problem with Correlated Uniform Weights. Authors: Y. Xie, Aneta Neumann, F. Neumann, A. M. Sutton, [arXiv] Analysis of Evolutionary Diversity Optimisation for Permutation Problems. Authors: A. V. Do, M. Guo, Aneta Neumann, F. Neumann, [arXiv] Breeding Diverse Packings for the Knapsack Problem by Means of Diversity-Tailored Evolutionary Algorithms. Authors: J. Bossek, Aneta Neumann, F. Neumann, [arXiv] Entropy-Based Evolutionary Diversity Optimisation for the Traveling Salesperson Problem. Authors: A. Nikfarjam, J. Bossek, Aneta Neumann, F. Neumann, [arXiv] Paper accepted at CEC 2021: Heuristic Strategies for Solving Complex Interacting Large-Scale Stockpile Blending Problems. Authors: Y. Xie, Aneta Neumann, F. Neumann, [arXiv] Paper accepted at LION 2021: Exact Counting and Sampling of Optima for the Knapsack Problem. Authors: J. Bossek, A. Neumann, F. Neumann, [arxiv] The 2020 Lorentz Center Workshop (by-invitation only event): Benchmarked: Optimization meets Machine Learning 9--13 November 2020, Leiden, The Netherlands. Special Session accepted for WCCI 2020 - Theoretical Foundations of Bio-inspired Computation Organizators: Per Kristian Lehre, Aneta Neumann, Chao Qian. Tutorial on Evolutionary Diversity Optimisation at the International Conference on Parallel Problem Solving from Nature (PPSN XVI) in 2020 with Jakob Bossek and Frank Neumann. Tutorial on Evolutionary Computation for Digital Art at GECCO 2020 Tutorial on Evolutionary Computation for Digital Art with Frank Neumann accepted at Genetic and Evolutionary Computation Conference (GECCO) 2020. The tutorial slides Evolutionary Computation for Digital Art , Vimeo . 
Papers accepted at PPSN 2020: Optimising Chance-Constrained Submodular Functions Using Evolutionary Multi-Objective Algorithms, [arXiv] Jun 2020 Authors: Aneta Neumann and Frank Neumann Optimising tours for the weighted traveling salesperson problem and the traveling thief problem: A structural comparison of solutions, [arXiv] Jun 2020 Authors: Jakob Bossek, Aneta Neumann and Frank Neumann Evolving Sampling Strategies for One-Shot Optimization Tasks, [arXiv] Jun 2020 Authors: Jakob Bossek, Carola Doerr, Pascal Kerschke, Aneta Neumann and Frank Neumann Papers accepted at GECCO 2020: Specific Single- and Multi-Objective Evolutionary Algorithms for the Chance-Constrained Knapsack Problem, [arXiv] Apr 2020 Authors: Yue Xie, Aneta Neumann, Frank Neumann Evolving Diverse Sets of Tours for the Travelling Salesperson Problem, [arXiv] Apr 2020 Authors: Viet Anh Do, Jakob Bossek, Aneta Neumann, Frank Neumann Paper accepted at Evolutionary Computation Journal, MIT Press: Evolutionary Image Transition and Painting Using Random Walks, [download] [arXiv], Mar 2020 Authors: Aneta Neumann, Bradley Alexander, Frank Neumann Papers accepted at ECAI 2020: Evolutionary Bi-objective Optimization for the Dynamic Chance-Constrained Knapsack Problem Based on Tail Bound Objectives, [arXiv] 2020 Authors: Hirad Assimi, Oscar Harper, Yue Xie, Aneta Neumann and Frank Neumann Non-Monotone Submodular Maximization with Multiple Knapsacks in Static and Dynamic Settings, Authors: Vanja Doskoc, Tobias Friedrich, Andreas Göbel, Aneta Neumann, Frank Neumann and Francesco Quinzan, [arXiv] 2020 Paper accepted at AAAI 2020: Optimization of Chance-Constrained Submodular Functions, Authors: B. Doerr, C. Doerr, A. Neumann, F. Neumann, A. M. Sutton [download], 2020 The 2019 Workshop on AI-based Optimisation (AI-OPT 2019) Artificial Intelligence based optimisation techniques such as constraint programming, evolutionary computation, heuristic search, mixed integer programming, and swarm intelligence have found many applications in solving highly complex and challenging optimisation problems. Application domains include important areas such as cybersecurity, economics, engineering, renewable energy, health and supply chain management. Tutorial on Evolutionary Computation for Digital Art with Frank Neumann accepted at Genetic and Evolutionary Computation Conference (GECCO) 2019. The tutorial slides Evolutionary Computation for Digital Art, Vimeo. Paper accepted at FOGA 2019, Code: Evolving Diverse TSP Instances by Means of Novel and Creative Mutation Operators, Authors: J. Bossek, P. Kerschke, A. Neumann, M. Wagner, F. Neumann, H. Trautmann STEM WORKSHOPS in German with GOETHE INSTITUT Papers accepted at GECCO 2019, Code: Evolutionary Diversity Optimization Using Multi-Objective Indicators - Nominated for Best Paper Award in the track "Genetic Algorithms" Authors: Aneta Neumann, Wanru Gao, Markus Wagner, Frank Neumann, [download] [researchgate] July 2019 Evolutionary Algorithms for the Chance-Constrained Knapsack Problem Authors: Yue Xie, Oscar Harper, Hirad Assimi, Aneta Neumann, Frank Neumann, [download] [researchgate] July 2019 Tutorial on Evolutionary Computation for Digital Art with Frank Neumann accepted at Genetic and Evolutionary Computation Conference (GECCO 2019). 
GMG Adelaide Forum: Secure and Integrated Energy and Mining Systems, 2019 Bringing the mining and energy industries together to achieve secure integration across the sector with Prof Stephen Grano, Executive Director of the University of Adelaide's Institute for Mineral and Energy Resources (IMER). V. Roostapour, A. Neumann, F. Neumann, T. Friedrich (2019): Pareto optimization for subset selection with dynamic cost constraints. In: Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, download, [arxiv], Jul 2019 Tutorial on Evolutionary Computation for Digital Art at AI 2018 Tutorial on Evolutionary Computation for Digital Art with Frank Neumann accepted at the Australasian Joint Conference on Artificial Intelligence (AI 2018), LNCS 11320, Springer. The tutorial slides Evolutionary Computation for Digital Art, last updated: 11 December 2018. Aneta has been awarded a Hans-Juergen and Marianna Ohff Research Grant for 2018 This grant will support a research visit at the Algorithm Engineering group lead by Prof. Dr. Tobias Friedrich, the Hasso Plattner Institute Potsdam, Germany. Media: Research grant for Aneta Neumann Aneta has been awarded an ACM-W scholarship 2018, sponsored by Google, Microsoft, Oracle. Media: Aneta Neumann Recipient of ACM-W Scholarship [09/2018], GECCO 2018, ACM-W Awards Tutorial on Evolutionary Computation for Digital Art at GECCO 2018, Tutorial on Evolutionary Computation for Digital Art with Frank Neumann accepted at Genetic and Evolutionary Computation Conference (GECCO) 2018. The tutorial slides Evolutionary Computation for Digital Art, last updated: 20 July 2018. Big Data Fundamentals MOOC's course Aneta is a co-designer and co-lecturer for Big Data Fundamentals (Start Date: Mar 1, 2019) course in the Big Data MicroMasters® program, an open online graduate level series of courses https://blogs.adelaide.edu.au/adelaidex/2017/05/24/adelaidex-launches-big-data-micromasters/. Cover Page on SIGEVOlution SIGEVOlution newsletter of the ACM Special Interest Group on Genetic and Evolutionary Computation, Volume 10, Issue 3, http://www.sigevolution.org/issues/SIGEVOlution1003.pdf CS Researcher in SALA Art Exhibition Aneta Neumann, a researcher from the School of Computer Science is exhibiting two mixed media artworks in Hub Central. The artworks are inspired by images produced by new methods in evolutionary image transition, pioneered by Aneta in her research toward her PhD. This research carried out within the Optimisation and Logistic Research Group in the School explores the fundamental link between evolutionary processes and generative art. The images will be on display in the Hub until the 18th of August. http://blogs.adelaide.edu.au/cs/2017/08/15/cs-researcher-in-sala-art-exhibition/ artificial intelligence, machine learning, evolutionary computation, multi-objective optimisation, quality diversity, generative art, computational creativity Human Interactive EEG-Based Evolutionary Image Animation Accepted as a full paper for publication at the 2020 IEEE Symposium Series on Computational Intelligence ABSTRACT: Evolutionary algorithms are adaptive algorithms that can alter their behaviour under changing circumstances.This makes them well-suited to act in interactive environments.We introduce a framework for human interactive animations based on EEG signals. EEG signals received through a brain computer interface headset are used as the fitness function and influence the animation based on the signals received. 
For our experimental investigation, we consider the recently introduced quasi-random animation process and alter the process through mutation operators that depend on the EEG signals received. Furthermore, the EEG signals trigger mutations and therefore alterations to the image animation process if their values are not in a predefined range. Our experimental study shows that a large variety of animations and intermediate images are obtainable in this way. Evolutionary Bi-objective Optimization for the Dynamic Chance-Constrained Knapsack Problem Based on Tail Bound Objectives Accepted as a full paper for publication at ECAI 2020, [arxiv], 2020 ABSTRACT: Real-world optimization problems are often stochastic and dynamic, and it is important to tackle stochastic and dynamic environments in a common approach. In this paper, we consider the stochastic chance-constrained knapsack problem where the constraint bound dynamically changes over time. We introduce a Pareto optimization approach for this problem that makes use of important tail inequalities such as Chebyshev's inequality and the Chernoff bound to estimate the probability of exceeding a given constraint bound. The key part of our approach is the introduction of an additional objective which calculates the minimal constraint bound for which a given solution for the stochastic component would still meet the chance constraint. This objective helps to cater for dynamic changes to the stochastic problem. Our experimental investigations show that the Pareto optimization is highly effective and outperforms its corresponding single-objective approach. Non-Monotone Submodular Maximization with Multiple Knapsacks in Static and Dynamic Settings Authors: Vanja Doskoc, Tobias Friedrich, Andreas Göbel, Aneta Neumann, Frank Neumann and Francesco Quinzan ABSTRACT: We study the problem of maximizing a non-monotone submodular function under multiple knapsack constraints. We propose a simple discrete greedy algorithm to approach this problem, and prove that it yields strong approximation guarantees for functions with bounded curvature. In contrast to other heuristics, this requires no problem relaxation to continuous domains and it maintains a constant-factor approximation guarantee in the problem size. In the case of a single knapsack, our analysis suggests that the standard greedy can be used in non-monotone settings. Additionally, we study this problem in a dynamic setting, in which knapsacks change during the optimization process. We modify our greedy algorithm to avoid a complete restart at each constraint update. This modification retains the approximation guarantees of the static case. We evaluate our results experimentally on a video summarization and sensor placement task. We show that our proposed algorithm competes with the state-of-the-art in static settings. Furthermore, we show that in dynamic settings with a tight computational time budget, our modified greedy yields significant improvements over starting the greedy from scratch, in terms of the solution quality achieved. Optimization of Chance-Constrained Submodular Functions Accepted as a full paper for publication at AAAI 2020 ABSTRACT: Submodular optimization plays a key role in many real-world problems. In many real-world scenarios, it is also necessary to handle uncertainty, and potentially disruptive events that violate constraints in stochastic settings need to be avoided. In this paper, we investigate submodular optimization problems with chance constraints.
We provide a first analysis on the approximation behavior of popular greedy algorithms for submodular problems with chance constraints. Our results show that these algorithms are highly effective when using surrogate functions that estimate constraint violations based on Chernoff bounds. Furthermore, we investigate the behavior of the algorithms on popular social network problems and show that high quality solutions can still be obtained even if there are strong restrictions imposed by the chance constraint. Evolving diverse TSP instances by means of novel and creative mutation operators Authors: J. Bossek, P. Kerschke, A. Neumann, M. Wagner, F. Neumann, H. Trautmann Accepted as a full paper for publication at FOGA 2019, ABSTRACT: Evolutionary algorithms have successfully been applied to evolve problem instances that exhibit a significant difference in performance for a given algorithm or a pair of algorithms, inter alia for the Traveling Salesperson Problem (TSP). Creating a large variety of instances is crucial for successful applications in the blooming field of algorithm selection. In this paper, we introduce new and creative mutation operators for evolving instances of the TSP. We show that adopting those operators in an evolutionary algorithm allows for the generation of benchmark sets with highly desirable properties: (1) novelty by clear visual distinction to established benchmark sets in the field, (2) visual and quantitative diversity in the space of TSP problem characteristics, and (3) significant performance differences with respect to the restart versions of heuristic state-of-the-art TSP solvers EAX and LKH. The important aspect of diversity is addressed and achieved solely by the proposed mutation operators and not enforced by explicit diversity preservation. Quasi-random Agents for Image Transition and Animation Accepted for journal publication in the Australian Journal of Intelligent Information Processing Systems, download [arxiv], Dec 2019 Authors: Aneta Neumann, Frank Neumann, Tobias Friedrich ABSTRACT: Quasi-random walks show similar features as standard random walks, but with much less randomness. We utilize this established model from discrete mathematics and show how agents carrying out quasi-random walks can be used for image transition and animation. The key idea is to generalize the notion of quasi-random walks and let a set of autonomous agents perform quasi-random walks painting an image. Each agent has one particular target image that they paint when following a sequence of directions for their quasi-random walk. The sequence can easily be chosen by an artist and allows them to produce a wide range of different transition patterns and animations. Evolving Pictures in Image Transition Space Authors: B. Alexander, D. Hin, A. Neumann, S. Ull-Karim Accepted as a full paper for publication at ICONIP 2019 ABSTRACT: Evolutionary art creates novel images through a process inspired by natural selection. Images are high dimensional objects, which can present challenges for evolutionary processes. Work to date has handled this problem by evolving compressed or encoded forms of images or by starting with prior images and evolving constrained variations of these. In this work we extend the prior-image concept by evolving interesting images in the transition-space between two bounding images.
We define new feature metrics based on proximity to the two bounding images and show how these metrics, combined with other aesthetic features, can be used to drive the creation of new images incorporating features of both starting images. We extend this work further to evolve sets of images that are diverse in one and two feature dimensions. Finally, we accelerate this evolutionary process using an autoencoder to capture the transition space and reduce the dimensionality of the search space. Evolutionary Diversity Optimization Using Multi-Objective Indicators Authors: Aneta Neumann, Wanru Gao, Markus Wagner, Frank Neumann Accepted as a full paper for publication at the Genetic and Evolutionary Computation Conference, GECCO 2019; nominated for the Best Paper Award in the track "Genetic Algorithms" [download] [paper] [arxiv], 2019 ABSTRACT: Evolutionary diversity optimization aims to compute a diverse set of solutions where all solutions meet a given quality criterion. With this paper, we bridge the areas of evolutionary diversity optimization and evolutionary multi-objective optimization. We show how popular indicators frequently used in the area of multi-objective optimization can be used for evolutionary diversity optimization. Our experimental investigations for evolving diverse sets of TSP instances and images according to various features show that two of the most prominent multi-objective indicators, namely the hypervolume indicator and the inverted generational distance, provide excellent results in terms of visualization and various diversity indicators. Evolutionary Algorithms for the Chance-Constrained Knapsack Problem Authors: Yue Xie, Oscar Harper, Hirad Assimi, Aneta Neumann, Frank Neumann Accepted as a full paper for publication at the Genetic and Evolutionary Computation Conference, GECCO 2019. ABSTRACT: Evolutionary algorithms have been widely used for a range of stochastic optimization problems. In most studies, the goal is to optimize the expected quality of the solution. Motivated by real-world problems where constraint violations have extremely disruptive effects, we consider a variant of the knapsack problem where the profit is maximized under the constraint that the knapsack capacity bound is violated with a small probability of at most $\alpha$. This problem is known as the chance-constrained knapsack problem, and chance-constrained optimization problems have so far gained little attention in the evolutionary computation literature. We show how to use popular deviation inequalities such as Chebyshev's inequality and Chernoff bounds as part of the solution evaluation when tackling these problems by evolutionary algorithms and compare the effectiveness of our algorithms on a wide range of chance-constrained knapsack instances. Pareto optimization for subset selection with dynamic cost constraints Authors: V. Roostapour, A. Neumann, F. Neumann, T. Friedrich Accepted as a full paper for publication at the Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, [download] [arxiv], Nov 2018 ABSTRACT: In this paper, we consider the subset selection problem for function f with constraint bound B which changes over time. We point out that adaptive variants of greedy approaches commonly used in the area of submodular optimization are not able to maintain their approximation quality.
Investigating the recently introduced POMC Pareto optimization approach, we show that this algorithm efficiently computes a $\phi= (\alpha_f/2)(1-\frac{1}{e^{\alpha_f}})$-approximation, where αf is the submodularity ratio of f, for each possible constraint bound b ≤ B. Furthermore, we show that POMC is able to adapt its set of solutions quickly in the case that B increases. Our experimental investigations for the influence maximization in social networks show the advantage of POMC over generalized greedy algorithms. On the Performance of Baseline Evolutionary Algorithms on the Dynamic Knapsack Problem Accepted as a full paper for publication at Parallel Problem Solving from Nature (PPSN 2018), [download], BENCHMARK [download], Authors: Vahid Roostapour, Aneta Neumann, Frank Neumann ABSTRACT: Evolutionary algorithms are bio-inspired algorithms that can easily adapt to changing environments. In this paper, we study single- and multi-objective baseline evolutionary algorithms for the classical knapsack problem where the capacity of the knapsack varies over time. We establish different benchmark scenarios where the capacity changes every tau iterations according to a uniform or Normal distribution. Our experimental investigations analyze the behavior of our algorithms in terms of the magnitude of changes determined by parameters of the chosen distribution, the frequency determined by tau and the class of knapsack instance under consideration. Our results show that the multi-objective approaches using a population that caters for dynamic changes have a clear advantage on many benchmarks scenarios when the frequency of changes is not too high. Discrepancy-based Evolutionary Diversity Optimisation Accepted as a full paper for publication at GECCO 2018, [bibtex], [arxiv], [download], Authors: Aneta Neumann, Wanru Gao, Carola Doerr, Frank Neumann, Markus Wagner ABSTRACT: Diversity plays a crucial role in evolutionary computation. While diversity has been mainly used to prevent the population of an evolutionary algorithm from premature convergence, the use of evolutionary algorithms to obtain a diverse set of solutions has gained increasing attention in recent years. Diversity optimization in terms of features on the underlying problem allows to obtain a better understanding of possible solutions to the problem at hand and can be used for algorithm selection when dealing with combinatorial optimization problems such as the Traveling Salesperson Problem. We explore the use of the star-discrepancy measure to guide the diversity optimization process of an evolutionary algorithm. In our experimental investigations, we consider our discrepancy-based diversity optimization approaches for evolving diverse sets of images as well as instances of the Traveling Salesperson problem where a local search is not able to find near optimal solutions. Our experimental investigations comparing three diversity optimization approaches show that a discrepancy-based diversity optimization approach using a tie-breaking rule based on weighted differences to surrounding feature points provides the best results in terms of the star discrepancy measure. On the Use of Colour-based Segmentation in Evolutionary Image Composition Accepted as a full paper for publication at IEEE CEC 2018 [download], Authors: Aneta Neumann, Frank Neumann ABSTRACT: Evolutionary algorithms have been widely used in the area of creativity in order to help create art and music. 
We consider the recently introduced evolutionary image composition approach based on feature covariance matrices [1] which allows composing two images into a new one based on their feature characteristics. When using evolutionary image composition it is important to obtain a good weighting of interesting regions of the two images. We use colour-based segmentation based on K-Means clustering to come up with such a weighting of the images. Our results show that this preserves the chosen colour regions of the images and leads to composed images that preserve colours better than the previous approach based on saliency masks [1]. Furthermore, we evaluate our composed images in terms of aesthetic features and show that our approach based on colour-based segmentation leads to higher feature values for most of the investigated features. Evolution of Images with Diversity and Constraints Using a Generator Network Accepted as a full paper for publication at the International Conference on Neural Information Processing (ICONIP 2018), [bibtex], [arxiv], [download], Authors: Aneta Neumann, Christo Pyromallis, Bradley Alexander ABSTRACT: Evolutionary search has been extensively used to generate artistic images. Raw images have high dimensionality, which makes a direct search for an image challenging. In previous work, this problem has been addressed by using compact symbolic encodings or by constraining images with priors. Recent developments in deep learning have enabled the generation of compelling artistic images using generative networks that encode images with lower-dimensional latent spaces. To date, this work has focused on the generation of images concordant with one or more classes and transfer of artistic styles. There is currently no work which uses search in this latent space to generate images scoring high or low aesthetic measures. In this paper, we use evolutionary methods to search for images in two datasets, faces and butterflies, and demonstrate the effect of optimising aesthetic feature scores in one or two dimensions. The work gives a preliminary indication of which feature measures promote the most interesting images and how some of these measures interact. Evolutionary Image Composition Using Feature Covariance Matrices The Genetic and Evolutionary Computation Conference (GECCO 2017) [bibtex] [download] [arxiv], Authors: Aneta Neumann, Zygmunt L Szpak, Wojciech Chojnacki, Frank Neumann ABSTRACT: Evolutionary algorithms have recently been used to create a wide range of artistic work. In this paper, we propose a new approach for the composition of new images from existing ones that retain some salient features of the original images. We introduce evolutionary algorithms that create new images based on a fitness function that incorporates feature covariance matrices associated with different parts of the images. This approach is very flexible in that it can work with a wide range of features and enables targeting specific regions in the images. For the creation of the new images, we propose a population-based evolutionary algorithm with mutation and crossover operators based on random walks. Our experimental results reveal a spectrum of aesthetically pleasing images that can be obtained with the aid of our evolutionary process.
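As a rough illustration of the feature-covariance idea in the composition papers above, the sketch below scores a composed image by comparing region-wise covariance matrices of simple per-pixel features against the two source images. It is a simplified stand-in, not the method of the papers: the feature set is deliberately small, the Frobenius norm replaces whichever covariance distance the papers actually use, and all function names are hypothetical.

```python
import numpy as np

def region_feature_covariance(img, top, left, h, w):
    """Covariance matrix of simple per-pixel features over a rectangular region.

    img is an H x W x 3 float array; the feature vector of a pixel is
    (x, y, R, G, B), a small stand-in for richer feature sets.
    """
    ys, xs = np.mgrid[top:top + h, left:left + w]
    patch = img[top:top + h, left:left + w].reshape(-1, 3)
    feats = np.column_stack([xs.ravel(), ys.ravel(), patch])
    return np.cov(feats, rowvar=False)

def composition_fitness(composed, source_a, source_b, regions):
    """Score a composed image by how closely each region matches one source.

    regions is a list of (top, left, h, w) rectangles; for each region we take
    the smaller covariance discrepancy to either source image and sum the
    negated discrepancies, so a larger score means a better match.
    """
    score = 0.0
    for r in regions:
        c = region_feature_covariance(composed, *r)
        a = region_feature_covariance(source_a, *r)
        b = region_feature_covariance(source_b, *r)
        score -= min(np.linalg.norm(c - a), np.linalg.norm(c - b))
    return score
```

In an evolutionary image composition run, a score of this kind would serve as (part of) the fitness function, with random-walk-based mutation and crossover proposing new assignments of pixels to one source image or the other.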
Evolution of Artistic Image Variants Through Feature Based Diversity Optimisation The Genetic and Evolutionary Computation Conference (GECCO 2017) [bibtex] [download], Authors: Bradley Alexander, James Kortman, Aneta Neumann ABSTRACT: Measures aimed to improve the diversity of images and image features in evolutionary art help to direct search toward more novel and creative parts of the artistic search domain. To date, such measures have focused on relatively indirect means of ensuring diversity in the context of search to maximise an aesthetic or similarity metric. In recent work on TSP problem instance classification, selection based on a direct measure of each individual's contribution to diversity was successfully used to generate hard and easy TSP instances. In this work, we use an analogous search framework to evolve diverse variants of a source image in one and two feature dimensions. The resulting images show the spectrum of effects from transforming images to score across the range of each feature. The evolutionary process also reveals interesting correlations between feature values in both one and two dimensions. Multi-objectiveness in the Single-objective Traveling Thief Problem The Genetic and Evolutionary Computation Conference (GECCO 2017), poster, [bibtex] [download], Authors: Mohamed El Yafrani, Shelvin Chand, Aneta Neumann, Belaid Ahiod, Markus Wagner ABSTRACT: Multi-component problems are optimization problems that are composed of multiple interacting sub-problems. The motivation of this work is to investigate whether it can be better to consider multiple objectives when dealing with multiple interdependent components. Therefore, the Travelling Thief Problem, a relatively new benchmark problem, is investigated as a bi-objective problem. An NSGA-II adaptation for the bi-objective model is compared to three of the best known algorithms for the original single-objective problem. The results show that our approach generates diverse sets of solutions while being competitive with the state-of-the-art single-objective algorithms. Evolutionary Image Transition Using Random Walks Evolutionary and Biologically Inspired Music, Sound, Art and Design (EvoMUSART'17). Springer, Cham. [bibtex] [download], [download Springer] ABSTRACT: We present a study demonstrating how random walk algorithms can be used for evolutionary image transition. We design different mutation operators based on uniform and biased random walks and study how their combination with a baseline mutation operator can lead to interesting image transition processes in terms of visual effects and artistic features. Using feature-based analysis we investigate the evolutionary image transition behaviour with respect to different features and evaluate the images constructed during the image transition process. A Modified Indicator-based Evolutionary Algorithm (mIBEA) IEEE Congress on Evolutionary Computation 2017, San Sebastián, [bibtex] [download], Authors: Li,W, Ozcan, E, John, R, Drake, JH, Neumann, A, Wagner, M ABSTRACT: Multi-objective evolutionary algorithms (MOEAs) based on the concept of Pareto-dominance have been successfully applied to many real-world optimisation problems. Recently, research interest has shifted towards indicator-based methods to guide the search process towards a good set of trade-off solutions. One commonly used approach of this nature is the indicator-based evolutionary algorithm (IBEA). 
In this study, we highlight the solution distribution issues within IBEA and propose a modification of the original approach by embedding an additional Pareto-dominance based component for selection. The improved performance of the proposed modified IBEA (mIBEA) is empirically demonstrated on the well-known DTLZ set of benchmark functions. Our results show that mIBEA achieves comparable or better hypervolume indicator values and epsilon approximation values in the vast majority of our cases (13 out of 14 under the same default settings) on DTLZ1-7. The modification also results in an over 8-fold speed-up for larger populations. The Evolutionary Process of Image Transition in Conjunction with Box and Strip Mutation International Conference on Neural Information Processing (ICONIP2016) [bibtex] [url] [download], ABSTRACT: Evolutionary algorithms have been used in many ways to generate digital art. We study how the evolutionary processes can be used for evolutionary art and present a new approach to the transition of images. Our main idea is to define evolutionary processes for digital image transition, combining different variants of mutation and evolutionary mechanisms. We introduce box and strip mutation operators which are specifically designed for image transition. Our experimental results show that the process of an evolutionary algorithm in combination with these mutation operators can be used as a valuable way to produce unique generative art. Evolutionary Image Transition Based on Theoretical Insights of Random Processes [download] ABSTRACT: Evolutionary algorithms have been widely studied from a theoretical perspective. In particular, the area of runtime analysis has contributed significantly to a theoretical understanding and provided insights into the working behaviour of these algorithms. We study how these insights into evolutionary processes can be used for evolutionary art. We introduce the notion of evolutionary image transition which transfers a given starting image into a target image through an evolutionary process. Combining standard mutation effects known from the optimization of the classical benchmark function OneMax and different variants of random walks, we present ways of performing evolutionary image transition with different artistic effects. 
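The image-transition abstracts above combine OneMax-style mutation with uniform and biased random walks. A minimal Python sketch of that general idea (a simplification under assumed parameters, not the published operators) is the following, where a boolean state array records, per pixel, whether the source or the target image is currently shown.

# Simplified sketch of evolutionary image transition: asymmetric bit-flip mutation
# plus a random-walk mutation; parameters and operator details are assumptions.
import numpy as np

def asymmetric_mutation(state, p_to_target=0.02, p_back=0.002, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    r = rng.random(state.shape)
    flip_up = (~state) & (r < p_to_target)   # source pixels switching to the target
    flip_down = state & (r < p_back)         # occasional flips back, for visual variation
    return state ^ flip_up ^ flip_down

def random_walk_mutation(state, steps=500, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    h, w = state.shape
    y, x = int(rng.integers(h)), int(rng.integers(w))
    out = state.copy()
    for _ in range(steps):
        out[y, x] = True                      # pixels visited by the walk adopt the target
        y = int(np.clip(y + rng.choice([-1, 0, 1]), 0, h - 1))
        x = int(np.clip(x + rng.choice([-1, 0, 1]), 0, w - 1))
    return out

def transition(source, target, generations=200):
    """source, target: H x W x 3 arrays of equal shape; returns intermediate frames."""
    state = np.zeros(source.shape[:2], dtype=bool)
    frames = []
    for _ in range(generations):
        state = random_walk_mutation(asymmetric_mutation(state))
        frames.append(np.where(state[..., None], target, source))
    return frames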
Invited lectures/talks/scientific visit: The University of Sydney, Australia, Dec 2019 The 2019 Workshop on AI-based Optimisation (AI-OPT 2019), Melbourne, Australia, Oct 2019 The scientific visit to the Sorbonne University, Paris, France, May 2019 The scientific visit to Algorithm Engineering Group at the Hasso Plattner Institute, Potsdam, Germany, Apr/May 2019 AAAI Conference on Artificial Intelligence, AAAI 2019, USA, Jan/Feb 2019 The scientific visit to Algorithm Engineering Group at the Hasso Plattner Institute, Potsdam, Germany, Oct 2018 University of Münster, Germany, May 2018 The scientific visit to Algorithm Engineering Group at the Hasso Plattner Institute, Potsdam, Germany, Apr 2018 Goldsmiths, University of London, Nov 2017 The scientific visit to Mixed Reality Laboratory at the University of Nottingham, UK, Oct/Nov 2017 Invited talk, University of Sheffield, Department of Computer Science, UK, Oct 2017 Invited talk, Goldsmiths, University of London, UK, Dec 2016 Invited talk, University College London, Department of Computer Science, London, UK, Dec 2016 The scientific visit to University of Sheffield, Department of Computer Science, UK, Dec 2016 The scientific visit to University of Nottingham, School of Computer Science, UK, Nov 2016 Conference Programme Committee/Member: Association for the Advancement of Artificial Intelligence (AAAI), 2019, 2020, 2021, 2022 The International Conference on Learning Representations (ICLR), 2020, 2021 The International Conference on Machine Learning (ICML), 2020 The International Conference on Computational Intelligence in Music, Sound, Art and Design, 2021 The International Joint Conference on Artificial Intelligence (IJCAI), 2020, 2021 The International Joint Conference on Neural Networks (IJCNN) 2021, 2022 The Genetic and Evolutionary Computation Conference (GECCO), 2020, 2021, 2022 The International Conference on Parallel Problem Solving from Nature (PPSN), 2020 IEEE Congress on Evolutionary Computation (CEC), 2020, 2021, 2022 The European Conference on Artificial Intelligence (ECAI), 2020 International Conference on Neural Information Processing, (ICONIP), 2019, 2020, 2021, 2022 The Evolutionary Computation Journal (ECJ), 2017, 2018, 2019, 2020, 2021 Australasian Conference on Artificial Life and Computational Intelligence, 2016, 2017, 2018, 2019 Association for Computing Machinery Membership (ACM), 2018, 2019 IEEE Computational Intelligence Society Membership, 2017, 2018, 2019 IEEE Theoretical Foundations of Bio-inspired Computation Task Force, 2017, 2018, 2019, 2020, 2021 Association for Computing Machinery Membership, SIGEVO 2017-2019 ECMS Volunteer & Ambassador Program, 2017, 2018, 2019 Presentations and exhibitions: SALA, South Australia Living Artists Festival, August 2016, 2017, 2018, 2020 Media Article, CS Researcher in SALA Art Exhibition, the University of Adelaide If you have any questions about my work or suggestions for this webpage, please send me an email. 
Entry last updated: 3 Jan 2023 Artificial Intelligence Neural, Evolutionary and Fuzzy Computation Optimisation 2022 The Department of Defence Grant, Artificial Intelligence for Decision-making, the Office of National Intelligence (ONI) and the Defence Science and Technology Group (DSTG), "Applying machine learning techniques to games on graphs for the detection and concealment of spatially defined communication networks", Lead CI, 2022 The Department of Defence Grant, Artificial Intelligence for Decision-making, the Office of National Intelligence (ONI) and the Defence Science and Technology Group (DSTG), "Abstract Game Prototype for Cyber Attack/Defence, CI, 2022 The Department of Defence Grant, Artificial Intelligence for Decision-making, the Office of National Intelligence (ONI) and the Defence Science and Technology Group (DSTG), "Tackling the TTCP CAGE challenge using Monte-Carlo planning for large-scale POMDPs", CI, 2022 Lorentz Center Grant, Benchmarked: Optimization Meets Machine Learning, Leiden, The Netherlands, 2022 Travel support within the Premier's Research and Industry Fund (PRIF) Research Consortium 'Unlocking Complex Resources through Lean Processing', University of Adelaide, Australia, 2018 Hans-Juergen and Marianna Ohff Research Grant, 2018 ACM-W scholarship sponsored by Google, Microsoft, Oracle, 2018 Association for Computing Machinery (ACM) Travel Grant, 2017 EvoStar Travel Bursaries Award, 2016 School of Computer Science Postgraduate Scholarship, University of Adelaide, Australia 2022, Lecturer and Course Coordinator, Evolutionary Computation, 3-year and master students of Computer Science, Sem 2, 3 Units 2021, Invited Lecturer, Evolutionary Computation, 3-year and master students of Computer Science, Sem 2, 3 Units 2020, Pearson Online Education, co-designer and co-lecturer Master of Data Science (online), Working with Big Data 2018, Lecturer, Mining Big Data, 3-year and master students of Computer Science, Sem 1, 3 Units 2017, EdX MOOC, co-designer and co-lecturer Big Data Fundamentals, MicroMasters 2017, Lecturer, Foundations of Computer Science, Master of Computing and Innovation, Sem 1, 6 Units 2017, University of Adelaide, School of Computer Science, Australia, Areas: Supervisor: Introduction to Programming Processing, EdX Course: Think. Create. Code, Sem 1 2012, University of Adelaide, School of Computer Science, Australia, Areas: Supervisor: Object-oriented programming in Java 2012, University of Adelaide, School of Computer Science, Australia, Areas: Supervisor: Internet Computing 2011, University of Adelaide, School of Computer Science, Australia, Areas: Supervisor: Introduction to programming for engineers (Matlab/C) Position: Researcher Email: [email protected] Last updated: Sat, 01/28/2023 - 06:10
Mikhailov, Roman Valerevich Total publications: 51 (51) in MathSciNet: 46 (46) in zbMATH: 38 (38) in Web of Science: 42 (42) in Scopus: 42 (42) Cited articles: 38 Citations in Math-Net.Ru: 39 Citations in Web of Science: 123 Citations in Scopus: 141 Presentations: 24 This page: 20161 Abstract pages: 5773 Full texts: 2444 Doctor of physico-mathematical sciences (2010) Speciality: 01.01.06 (Mathematical logic, algebra, and number theory) Main publications: Roman Mikhailov and Inder Bir S. Passi, "The quasivariety of groups with the trivial forth dimension subgroup", J. Group Theory, 9 (2006), 369–381 Roman Mikhailov and Inder Bir S. Passi, Lower central and dimension series of groups, Lecture Notes in Mathematics, 1952, Springer, 2009 Roman Mikhailov and Inder Bir S. Passi, "Augmentation powers and group homology", Journal of Pure Appl. Algebra, 192 (2004), 225–238 Ioannis Emmanouil and Roman Mikhailov, "A limit approach to group homology", Journal of Algebra, 319 (2008), 1450–1461 Graham Ellis and Roman Mikhailov, "A colimit of classifying spaces,", Advances in Math., 223 (2010), 2097-2113; arXiv: 0804.3581 http://www.mathnet.ru/eng/person9204 https://scholar.google.com/citations?user=rRrAqIEAAAAJ&hl=en https://zbmath.org/authors/?q=ai:mikhailov.roman|mikhailov.r-v|mikhajlov.r-v https://elibrary.ru/author_items.asp?authorid=122971 Full list of publications: | scientific publications | by years | by types | by times cited in WoS | by times cited in Scopus | common list | 1. S. O. Ivanov, R. V. Mikhailov, F. Yu. Pavutnitskiy, "Limits, standard complexes and $\mathbf{fr}$-codes", Sb. Math., 211:11 (2020), 1568–1591 2. S. O. Ivanov, R. V. Mikhailov, V. A. Sosnilo, "Higher colimits, derived functors and homology", Sb. Math., 210:9 (2019), 1222–1258 (cited: 1) 3. Roman V. Mikhailov, "An Example of a Fractal Finitely Generated Solvable Group", Proc. Steklov Inst. Math., 307 (2019), 125–129 4. Roman Mikhailov, Inder Bir S. Passi, "Generalized dimension subgroups and derived functors", J. Pure Appl. Algebra, 220:6 (2016), 2143–2163 (cited: 2) (cited: 4) (cited: 4) 5. Sergei O. Ivanov, Roman Mikhailov, "On a problem of Bousfield for metabelian groups", Adv. Math., 290 (2016), 552–589 (cited: 4) (cited: 4) 6. V. G. Bardakov, R. Mikhailov, V. V. Vershinin, J. Wu, "On the pure virtual braid group $PV_3$", Comm. Algebra, 44:3 (2016), 1350–1378 (cited: 9) (cited: 10) 7. Roman Mikhailov, "On transfinite nilpotence of the Vogel–Levine localization", Forum Math., 28:2 (2016), 333–338 (cited: 1) (cited: 1) 8. Roman Mikhailov, "A one-relator group with long lower central series", Forum Math., 28:2 (2016), 327–331 (cited: 2) (cited: 2) 9. Roman Mikhailov, Inder Bir S. Passi, "The subgroup determined by a certain ideal in a free group ring", J. Algebra, 449 (2016), 400–407 (cited: 1) (cited: 1) 10. Roman Mikhailov, Jie Wu, "On the metastable homotopy of mod 2 Moore spaces", Algebr. Geom. Topol., 16:3 (2016), 1773–1797 (cited: 1) (cited: 1) 11. S. O. Ivanov, R. Mikhailov, "A higher limit approach to homology theories", J. Pure Appl. Algebra, 219:6 (2015), 1915–1939 (cited: 3) (cited: 5) (cited: 1) (cited: 5) 12. R. Mikhailov, K. E. Orr, "Group localization and two problems of Levine", Math. Z., 280:1 (2015), 355–366 13. S. V. Ivanov, R. Mikhailov, "On zero-divisors in group rings of groups with torsion", Canad. Math. Bull., 57:2 (2014), 326–334 , arXiv: 1209.1443 (cited: 1) (cited: 1) (cited: 1) 14. G. Baumslag, R. Mikhailov, "Residual properties of groups defined by basic commutators", Groups Geom. 
Dyn., 8:3 (2014), 621–642 15. R. Mikhailov, "Homotopical and Combinatorial Aspects of the Theory of Normal Series in Groups", Proc. Steklov Inst. Math., 286, suppl. 1 (2014), S1–S135 16. R. Mikhailov, J. Wu, "Combinatorial group theory and the homotopy groups of finite complexes", Geom. Topol., 17:1 (2013), 235–272 , arXiv: 1108.3055 (cited: 1) (cited: 1) (cited: 1) 17. J. Wu, R. V. Mikhailov, "Homotopy groups as centres of finitely presented groups", Izv. Math., 77:3 (2013), 581–593 18. G. Baumslag, R. Mikhailov, K. E. Orr, "A new look at finitely generated metabelian groups", Combinatorial and computational group theory with cryptography, Contemp. Math., 582, Amer. Math. Soc., Providence, RI, 2012, 21–37 , arXiv: 1203.5431 19. V. Bardakov, R. Mikhailov, J. Wu, V. Vershinin, "Brunnian braids on surfaces", Alg. Geom. Top., 12 (2012), 1607–1648 , arXiv: 0909.3387 (cited: 9) (cited: 6) (cited: 10) 20. R. Mikhailov, On the splitting of polynomial functors, 2012 , arXiv: 1202.0586 21. L. Breen, R. Mikhailov, "Derived functors of nonadditive functors and homotopy theory", Algebr. Geom. Topol., 11:1 (2011), 327–415 (cited: 7) (cited: 4) (cited: 8) 22. R. Mikhailov, I. B. S. Passi, J. Wu, "Symmetric ideals in group rings and simplicial homotopy", J. Pure Appl. Algebra, 215:5 (2011), 1085–1092 (cited: 5) (cited: 6) (cited: 5) 23. R. Mikhailov, J. Wu, A combinatorial description of homotopy groups of spheres, 2011 , 27 pp., arXiv: 1108.3055 24. A. Belov, R. Mikhailov, "Free subalgebras of Lie algebras close to nilpotent", Groups Geom. Dyn., 4:1 (2010), 15–29 (cited: 1) 25. R. Mikhailov, J. Wu, "On homotopy groups of the suspended classifying spaces", Algebr. Geom. Topol., 10:1 (2010), 565–625 (cited: 9) (cited: 8) (cited: 10) 26. R. Mikhailov, I. B. S. Passi, "Limits over categories of extensions", Indian J. Pure Appl. Math., 41:1 (2010), 113–131 (cited: 2) (cited: 1) (cited: 3) 27. G. Ellis, R. Mikhailov, "A colimit of classifying spaces", Adv. Math., 223:6 (2010), 2097–2113 (cited: 11) (cited: 9) (cited: 10) 28. R. Mikhailov, On the homology of the dual de Rham complex, 2010 , 11 pp., arXiv: 1001.2824 29. H.-J. Baues, R. Mikhailov, "Homotopy types of reduced 2-nilpotent simplicial groups", Indian J. Pure Appl. Math., 40:1 (2009), 35–80 , arXiv: 0804.2000 30. R. Mikhailov, I. B. S. Passi, Lower central and dimension series of groups, Lecture Notes in Math., 1952, Springer-Verlag, Berlin, 2009 , xxii+346 pp. 31. V. Bardakov, R. Mikhailov, V. Vershinin, J. Wu, On the pure virtual braid group $PV_3$, 2009 , 19 pp., arXiv: 0906.1743 32. H.-J. Baues, R. Mikhailov, "Intersection of subgroups in free groups and homotopy groups", Internat. J. Algebra Comput., 18:5 (2008), 803–823 (cited: 2) (cited: 2) (cited: 3) 33. V. G. Bardakov, R. Mikhailov, "On certain questions of the free group automorphisms theory", Comm. Algebra, 36:4 (2008), 1489–1499 (cited: 4) (cited: 1) (cited: 4) 34. I. Emmanouil, R. Mikhailov, "A limit approach to group homology", J. Algebra, 319:4 (2008), 1450–1461 (cited: 6) (cited: 5) (cited: 8) 35. M. Hartl, R. Mikhailov, I. B. S. Passi, "Dimension quotients", J. Indian Math. Soc. (N.S.), 2007, Special volume on the occasion of the centenary year of IMS (1907–2007) (2008), 63–107 , arXiv: 0803.3290 36. R. Mikhailov, I. B. S. Passi, "Residually nilpotent groups", L'Enseignement Mathematique, 54 (2008), 145–146 37. R. Mikhailov, I. B. S. Passi, "Homology of centralizers", Comm. Algebra, 35:7 (2007), 2191–2207 38. R. V. Mikhailov, "Baer invariants and residual nilpotence of groups", Izv. 
Math., 71:2 (2007), 371–390 (cited: 2) (cited: 1) (cited: 1) (cited: 2) 39. R. V. Mikhailov, "Asphericity and approximation properties of crossed modules", Sb. Math., 198:4 (2007), 521–535 40. V. G. Bardakov, R. V. Mikhailov, "On the residual properties of link groups", Siberian Math. J., 48:3 (2007), 387–394 (cited: 7) (cited: 4) (cited: 4) (cited: 7) 41. R. Mikhailov, I. B. S. Passi, "Faithfulness of certain modules and residual nilpotence of groups", Internat. J. Algebra Comput., 16:3 (2006), 525–539 (cited: 5) (cited: 4) (cited: 7) 42. R. Mikhailov, I. B. S. Passi, "The quasi-variety of groups with trivial fourth dimension subgroup", J. Group Theory, 9:3 (2006), 369–381 (cited: 2) (cited: 2) (cited: 3) 43. R. Mikhailov, "On residual nilpotence of projective crossed modules", Comm. Algebra, 34:4 (2006), 1451–1458 (cited: 4) (cited: 6) (cited: 7) 44. R. V. Mikhailov, "Faithful Group Actions and Aspherical Complexes", Proc. Steklov Inst. Math., 252 (2006), 172–181 (cited: 1) (cited: 3) 45. R. Mikhailov, I. B. S. Passi, "A transfinite filtration of Schur multiplicator", Internat. J. Algebra Comput., 15:5-6 (2005), 1061–1073 (cited: 1) (cited: 1) (cited: 2) 46. R. Mikhailov, I. B. S. Passi, "Higher traces on group rings", Comm. Algebra, 33:4 (2005), 987–997 (cited: 2) (cited: 3) (cited: 3) 47. R. V. Mikhailov, "Residual nilpotence and residual solubility of groups", Sb. Math., 196:11 (2005), 1659–1675 (cited: 8) (cited: 4) (cited: 4) (cited: 7) 48. R. Mikhailov, I. B. S. Passi, "Augmentation powers and group homology", J. Pure Appl. Algebra, 192:1-3 (2004), 225–238 (cited: 6) (cited: 7) (cited: 7) 49. R. V. Mikhailov, "On invisible subgroups", Russian Math. Surveys, 57:6 (2002), 1232–1233 50. R. V. Mikhailov, "Transfinite Lower Central Series of Groups: Parafree Properties and Topological Applications", Proc. Steklov Inst. Math., 239 (2002), 236–252 51. S. A. Melikhov, R. V. Mikhailov, "Links modulo knots and the isotopic realization problem", Russian Math. Surveys, 56:2 (2001), 414–415 (cited: 1) (cited: 1)
Presentations in Math-Net.Ru 1. Homotopy groups of spheres and algebra R. V. Mikhailov Shafarevich Seminar 2. A solution of Bousfield's problem R. Mikhailov One-day conference dedicated to the memory of Vladimir A. Voevodsky 3. Life beyond omega Matsbornik-150: algebra, geometry, analysis November 7, 2016 16:20 4. fr-constructors in homological algebra II Summer mathematical school "Algebra and Geometry", 2015 5. fr-constructors in homological algebra I 6. Localization, completions and metabelian groups International conference "Arithmetic as Geometry: Parshin Fest" 7. Homotopy groups of spheres, functors and braids October 2, 2012 15:00 8. Functorial and predatory spectral sequences Seminar on Arithmetic Algebraic Geometry 9. On the intersection of conjugate subgroups of finite index General Mathematics Seminar of the St. Petersburg Division of Steklov Institute of Mathematics, Russian Academy of Sciences 10. Derived functors in unstable homotopy theory International conference "ALGEBRAIC GEOMETRY: Methods, Relations, and Applications" dedicated to the 70th birthday anniversary of Andrei Nikolaevich Tyurin 11. Some tricks in classical homotopy theory and K-theory Seminar by Algebra Department 12. Pictures in K-theory Seminar on Geometry of Algebraic Varieties 13. Derived functors in the sense of Dold and Puppe The second annual conference-meeting MIAN–POMI "Algebra and Algebraic Geometry" 14. A description of some homotopy groups of Moore spaces as functors on the category of abelian groups 15. Homotopy types of nilpotent simplicial groups 16. Algebraic aspects of the theory of homotopy groups of spheres 17. Lower central and dimension series in groups 18. Intersections of subgroups in free groups and homotopy theory 19. On the relationship between group theory and homotopy topology Meetings of the Moscow Mathematical Society 20. Links and prime numbers (continued) 21. Links and prime numbers 22. Asphericity and algebraic models of homotopy 2-types 23. From link concordance to the theory of "wild" groups and derived functors A. Bondal Seminar 24. Faithful modules and aspherical complexes
Books in Math-Net.Ru R. V. Mikhailov, Homotopical and Combinatorial Aspects of the Theory of Normal Series in Groups, Sovrem. Probl. Mat., 18, 2014, 146 pp. http://mi.mathnet.ru/book1512 St. Petersburg Department of Steklov Mathematical Institute of Russian Academy of Sciences Tata Institute of Fundamental Research, Mumbai, India Laboratory of Modern Algebra and Applications, St. Petersburg State University Chebyshev Laboratory, St. Petersburg State University, Department of Mathematics and Mechanics Steklov Mathematical Institute of Russian Academy of Sciences, Moscow Institute for Advanced Study, Princeton, NJ University of Latvia, Institute of Chemical Physics
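Many of the titles above concern lower central and dimension series; for context (these standard definitions are supplied here and are not taken from the page itself), the lower central series of a group $G$ is given by $\gamma_1(G)=G$ and $\gamma_{n+1}(G)=[\gamma_n(G),G]$, while the $n$-th dimension subgroup is $D_n(G)=G\cap(1+\Delta^n(G))$, where $\Delta(G)$ denotes the augmentation ideal of the integral group ring $\mathbb{Z}[G]$; one always has $\gamma_n(G)\subseteq D_n(G)$, and the dimension subgroup problem asks when the two coincide.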
The process and delivery of CBT for depression in adults: a systematic review and network meta-analysis José A. López-López, Sarah R. Davies, Deborah M. Caldwell, Rachel Churchill, Tim J. Peters, Deborah Tallon, Sarah Dawson, Qi Wu, Jinshuo Li, Abigail Taylor, Glyn Lewis, David S. Kessler, Nicola Wiles, Nicky J. Welton Journal: Psychological Medicine, First View Cognitive-behavioural therapy (CBT) is an effective treatment for depressed adults. CBT interventions are complex, as they include multiple content components and can be delivered in different ways. We compared the effectiveness of different types of therapy, different components and combinations of components and aspects of delivery used in CBT interventions for adult depression. We conducted a systematic review of randomised controlled trials in adults with a primary diagnosis of depression, which included a CBT intervention. Outcomes were pooled using a component-level network meta-analysis. Our primary analysis classified interventions according to the type of therapy and delivery mode. We also fitted more advanced models to examine the effectiveness of each content component or combination of components. We included 91 studies and found strong evidence that CBT interventions yielded a larger short-term decrease in depression scores compared to treatment-as-usual, with a standardised difference in mean change of −1.11 (95% credible interval −1.62 to −0.60) for face-to-face CBT, −1.06 (−2.05 to −0.08) for hybrid CBT, and −0.59 (−1.20 to 0.02) for multimedia CBT, whereas wait list control showed a detrimental effect of 0.72 (0.09 to 1.35). We found no evidence of specific effects of any content components or combinations of components. Technology is increasingly used in the context of CBT interventions for depression. Multimedia and hybrid CBT might be as effective as face-to-face CBT, although results need to be interpreted cautiously. The effectiveness of specific combinations of content components and delivery formats remains unclear. Wait list controls should be avoided if possible. The volume ratio effect on flow patterns and transition processes of thermocapillary convection Qi Kang, Jia Wang, Li Duan, Yinyin Su, Jianwu He, Di Wu, Wenrui Hu Journal: Journal of Fluid Mechanics / Volume 868 / 10 June 2019 Thermocapillary convection has always been one of the most important research topics in microgravity fluid physics. A space experimental study on the thermocapillary convection in an open annular liquid pool – a typical thermocapillary flow system – has been conducted on the SJ-10 satellite of China.
This space experiment has observed the spatial temperature distribution of the liquid free surface using an infrared thermal imager, obtained the flow pattern transition process, analysed the oscillation characteristics and revealed the instability mechanism of themocapillary convection. The shape effects on the flow instability are researched by changing the volume ratio, Vr, which denotes the ratio of the liquid volume to the volume of the cylindrical gap between the walls. The volume ratio effect has been focused on for the first time. For a certain volume ratio, the flow pattern would transform from the steady state to the oscillation state accompanied by directional propagating hydrothermal waves with increasing temperature difference. In addition, the significant influences of the volume ratio on the critical conditions and wavenumber selection have been analysed in detail. Cardiovascular diseases and related risk factors accelerated cognitive deterioration in patients with late-life depression: a one-year prospective study Xiaomei Zhong, Zhangying Wu, Cong Ouyang, Wanyuan Liang, Ben Chen, Qi Peng, Naikeng Mai, Yuejie Wu, Xinru Chen, Min Zhang, Yuping Ning Journal: International Psychogeriatrics , First View Published online by Cambridge University Press: 30 January 2019, pp. 1-7 Cognitive impairment in late-life depression is common and associated with a higher risk of all-cause dementia. Late-life depression patients with comorbid cardiovascular diseases (CVDs) or related risk factors may experience higher risks of cognitive deterioration in the short term. We aim to investigate the effect of CVDs and their related risk factors on the cognitive function of patients with late-life depression. A total of 148 participants were recruited (67 individuals with late-life depression and 81 normal controls). The presence of hypertension, coronary heart disease, diabetes mellitus, or hyperlipidemia was defined as the presence of comorbid CVDs or related risk factors. Global cognitive functions were assessed at baseline and after a one-year follow-up by the Mini-Mental State Examination (MMSE). Global cognitive deterioration was defined by the reliable change index (RCI) of the MMSE. Late-life depression patients with CVDs or related risk factors were associated with 6.8 times higher risk of global cognitive deterioration than those without any of these comorbidities at a one-year follow-up. This result remained robust after adjusting for age, gender, and changes in the Hamilton Depression Rating Scale (HAMD) scores. This study suggests that late-life depression patients with comorbid CVDs or their related risk factors showed a higher risk of cognitive deterioration in the short-term (one-year follow up). Given that CVDs and their related risk factors are currently modifiable, active treatment of these comorbidities may delay rapid cognitive deterioration in patients with late-life depression. 
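The late-life depression abstract above defines global cognitive deterioration through the reliable change index (RCI) of the MMSE. As a rough illustration, the Jacobson and Truax form of such an index can be computed as below; the reliability value, the example scores and the 1.96 cutoff are assumptions for demonstration, not figures reported in the study.

# Illustrative Jacobson-Truax style reliable change index (RCI); all numbers are
# demonstration values, not parameters from the study summarised above.
import math

def reliable_change_index(score_baseline, score_followup, sd_baseline, reliability):
    """RCI = (follow-up score - baseline score) / standard error of the difference."""
    sem = sd_baseline * math.sqrt(1.0 - reliability)  # standard error of measurement
    se_diff = math.sqrt(2.0) * sem                    # SE of a difference of two scores
    return (score_followup - score_baseline) / se_diff

if __name__ == "__main__":
    rci = reliable_change_index(score_baseline=27, score_followup=23,
                                sd_baseline=2.5, reliability=0.9)
    print(round(rci, 2), "reliable decline" if rci <= -1.96 else "no reliable change")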
Dispersion effects on performance of free-electron laser based on laser wakefield accelerator Ke Feng, Changhai Yu, Jiansheng Liu, Wentao Wang, Zhijun Zhang, Rong Qi, Ming Fang, Jiaqi Liu, Zhiyong Qin, Ying Wu, Yu Chen, Lintong Ke, Cheng Wang, Ruxin Li Journal: High Power Laser Science and Engineering / Volume 6 / 2018 Published online by Cambridge University Press: 19 December 2018, e64 In this study, we investigate a new simple scheme using a planar undulator (PU) together with a properly dispersed electron beam ( $e$ beam) with a large energy spread ( ${\sim}1\%$ ) to enhance the free-electron laser (FEL) gain. For a dispersed $e$ beam in a PU, the resonant condition is satisfied for the center electrons, while the frequency detuning increases for the off-center electrons, inhibiting the growth of the radiation. The PU can act as a filter for selecting the electrons near the beam center to achieve the radiation. Although only the center electrons contribute, the radiation can be enhanced significantly owing to the high-peak current of the beam. Theoretical analysis and simulation results indicate that this method can be used for the improvement of the radiation performance, which has great significance for short-wavelength FEL applications. Si-doped high-energy Li1.2Mn0.54Ni0.13Co0.13O2 cathode with improved capacity for lithium-ion batteries Leah Nation, Yan Wu, Christine James, Yue Qi, Bob R. Powell, Brian W. Sheldon Journal: Journal of Materials Research / Volume 33 / Issue 24 / 28 December 2018 Published online by Cambridge University Press: 07 December 2018, pp. 4182-4191 Li[Lix/3Mn2x/3M1−x]O2 (M = Ni, Mn, Co) (HE-NMC) materials, which can be expressed as a combination of trigonal LiTMO2 (TM = transition metal) and monoclinic Li2MnO3 phases, are of great interest as high capacity cathodes for lithium-ion batteries. However, structural stability prevents their commercial adoption. To address this, Si doping was applied, resulting in improved stability. Raman and differential capacity analyses suggest that silicon doping improves the structural stability during electrochemical cycling. Furthermore, the doped material exhibits a 10% higher capacity relative to the control. The superior capacity likely results from the increased lattice parameters as determined by X-ray diffraction (XRD) and the lower resistance during the first cycle found by impedance and direct current resistance (DCR) measurements. Density functional theory (DFT) predictions suggest that the observed lattice expansion is an indication of increased oxygen vacancy concentration and may be due to the Si doping. An 8-year point-prevalence surveillance of healthcare-associated infections and antimicrobial use in a tertiary care teaching hospital in China Yi-Le Wu, Xi-Yao Yang, Meng-Shu Pan, Ruo-Jie Li, Xiao-Qian Hu, Jing-Jing Zhang, Li-Qi Yang Journal: Epidemiology & Infection / Volume 147 / 2019 Published online by Cambridge University Press: 25 October 2018, e31 Healthcare-associated infections (HAIs) are a major worldwide public-health problem, but less data are available on the long-term trends of HAIs and antimicrobial use in Eastern China. This study describes the prevalence and long-term trends of HAIs and antimicrobial use in a tertiary care teaching hospital in Hefei, Anhui, China from 2010 to 2017 based on annual point-prevalence surveys. A total of 12 505 inpatients were included; 600 HAIs were recorded in 533 patients, with an overall prevalence of 4.26% and a frequency of 4.80%. 
No evidence was found for an increasing or decreasing trend in prevalence of HAI over 8 years (trend χ2 = 2.15, P = 0.143). However, significant differences in prevalence of HAI were evident between the surveys (χ2 = 21.14, P < 0.001). The intensive care unit had the highest frequency of HAIs (24.36%) and respiratory tract infections accounted for 62.50% of all cases; Escherichia coli was the most common pathogen (16.67%). A 44.13% prevalence of antimicrobial use with a gradually decreasing trend over time was recorded. More attention should be paid to potential high-risk clinical departments and HAI types with further enhancement of rational antimicrobial use. A dietary pattern rich in animal organ, seafood and processed meat products is associated with newly diagnosed hyperuricaemia in Chinese adults: a propensity score-matched case–control study Yang Xia, Qi Xiang, Yeqing Gu, Suwei Jia, Qing Zhang, Li Liu, Ge Meng, Hongmei Wu, Xue Bao, Bin Yu, Shaomei Sun, Xing Wang, Ming Zhou, Qiyu Jia, Yuntang Wu, Kun Song, Kaijun Niu Journal: British Journal of Nutrition / Volume 119 / Issue 10 / 28 May 2018 Previous studies have indicated that some food items and nutrients are associated with uric acid metabolism in humans. However, little is known about the role of dietary patterns in hyperuricaemia. We designed this case–control study to evaluate the associations between dietary patterns and newly diagnosed hyperuricaemia in Chinese adults. A total of 1422 cases and 1422 controls were generated from 14 538 participants using the 1:1 ratio propensity score matching methods. Dietary intake was assessed using a validated self-administered FFQ. Dietary patterns were derived by factor analysis. Hyperuricaemia was defined as concentrations of serum uric acid higher than 7 mg/dl (416·5 μmol/l) for men and 6 mg/dl (357 μmol/l) for women. Three dietary patterns were derived by factor analysis: sweet pattern; vegetable pattern; animal foods pattern. The animal foods pattern characterised by higher intake of an animal organ, seafood and processed meat products was associated with higher prevalence of newly diagnosed hyperuricaemia (P for trend<0·01) after adjustment. Compared with the participants in the lowest quartile of the animal foods pattern, the OR of newly diagnosed hyperuricaemia in the highest quartile was 1·50 (95 % CI 1·20, 1·87). The other two dietary patterns were not associated with the prevalence of newly diagnosed hyperuricaemia after adjustment. In conclusion, a diet rich in animal organ, seafood and processed meat products is associated with higher prevalence of newly diagnosed hyperuricaemia in a Chinese population. Further cohort studies and randomised controlled trials are required to clarify these findings. Carbohydrate, dietary glycaemic index and glycaemic load, and colorectal cancer risk: a case–control study in China Jing Huang, Yu-Jing Fang, Ming Xu, Hong Luo, Nai-Qi Zhang, Wu-Qing Huang, Zhi-Zhong Pan, Yu-Ming Chen, Cai-Xia Zhang Journal: British Journal of Nutrition / Volume 119 / Issue 8 / 28 April 2018 Print publication: 28 April 2018 A carbohydrate-rich diet results in hyperglycaemia and hyperinsulinaemia; it may further induce the carcinogenesis of colorectal cancer. However, epidemiological evidence among Chinese population is quite limited. The aim of this study was to investigate total carbohydrate, non-fibre carbohydrate, total fibre, starch, dietary glycaemic index (GI) and glycaemic load (GL) in relation to colorectal cancer risk in Chinese population. 
A case–control study was conducted from July 2010 to April 2017, recruiting 1944 eligible colorectal cancer cases and 2027 age (5-year interval) and sex frequency-matched controls. Dietary information was collected by using a validated FFQ. The OR and 95 % CI of colorectal cancer risk were assessed by multivariable logistic regression models. There was no clear association between total carbohydrate intake and colorectal cancer risk. The adjusted OR was 0·85 (95 % CI 0·70, 1·03, P trend=0·08) comparing the highest with the lowest quartile. Total fibre was related to a 53 % reduction in colorectal cancer risk (adjusted ORquartile 4 v. 1 0·47; 95 % CI 0·39, 0·58). However, dietary GI was positively associated with colorectal cancer risk, with an adjusted ORquartile 4 v. 1 of 3·10 (95 % CI 2·51, 3·85). No significant association was found between the intakes of non-fibre carbohydrate, starch and dietary GL and colorectal cancer risk. This study indicated that dietary GI was positively associated with colorectal cancer risk, but no evidence supported that total carbohydrate, non-fibre carbohydrate, starch or high dietary GL intake were related to an increased risk of colorectal cancer in a Chinese population. Glucosinolate and isothiocyanate intakes are inversely associated with breast cancer risk: a case–control study in China Nai-Qi Zhang, Suzanne C. Ho, Xiong-Fei Mo, Fang-Yu Lin, Wu-Qing Huang, Hong Luo, Jing Huang, Cai-Xia Zhang Although previous studies have investigated the association of cruciferous vegetable consumption with breast cancer risk, few studies focused on the association between bioactive components in cruciferous vegetables, glucosinolates (GSL) and isothiocyanates (ITC), and breast cancer risk. This study aimed to examine the association between consumption of cruciferous vegetables and breast cancer risk according to GSL and ITC contents in a Chinese population. A total of 1485 cases and 1506 controls were recruited into this case–control study from June 2007 to March 2017. Consumption of cruciferous vegetables was assessed using a validated FFQ. Dietary GSL and ITC were computed by using two food composition databases linking GSL and ITC contents in cruciferous vegetables with responses to the FFQ. The OR and 95 % CI were assessed by unconditional logistic regression after adjusting for the potential confounders. Significant inverse associations were found between consumption of cruciferous vegetables, GSL and ITC and breast cancer risk. The adjusted OR comparing the highest with the lowest quartile were 0·51 (95 % CI 0·41, 0·63) for cruciferous vegetables, 0·54 (95 % CI 0·44, 0·67) for GSL and 0·62 (95 % CI 0·50, 0·76) for ITC, respectively. These inverse associations were also observed in both premenopausal and postmenopausal women. Subgroup analysis by hormone receptor status found inverse associations between cruciferous vegetables, GSL and ITC and both hormone-receptor-positive or hormone-receptor-negative breast cancer. This study indicated that consumption of cruciferous vegetables, GSL and ITC was inversely associated with breast cancer risk among Chinese women. Intakes of magnesium, calcium and risk of fatty liver disease and prediabetes Wenshuai Li, Xiangzhu Zhu, Yiqing Song, Lei Fan, Lijun Wu, Edmond K Kabagambe, Lifang Hou, Martha J Shrubsole, Jie Liu, Qi Dai Journal: Public Health Nutrition / Volume 21 / Issue 11 / August 2018 Published online by Cambridge University Press: 02 April 2018, pp. 
2088-2095 Obesity and insulin resistance play important roles in the pathogenesis of non-alcoholic fatty liver disease (NAFLD). Mg intake is linked to a reduced risk of metabolic syndrome and insulin resistance; people with NAFLD or alcoholic liver disease are at high risk of Mg deficiency. The present study aimed to investigate whether Mg and Ca intakes were associated with risk of fatty liver disease and prediabetes by alcohol drinking status. We analysed the association between Ca or Mg intake and fatty liver disease, prediabetes or both prediabetes and fatty liver disease in cross-sectional analyses. Third National Health and Nutrition Examination Survey (NHANES III) follow-up cohort of US adults. Nationally representative sample of US adults in NHANES (n 13 489). After adjusting for potential confounders, Mg intake was associated with approximately 30 % reduced odds of fatty liver disease and prediabetes, comparing the highest intake quartile v. the lowest. Mg intake may only be related to reduced odds of fatty liver disease and prediabetes in those whose Ca intake is less than 1200 mg/d. Mg intake may also only be associated with reduced odds of fatty liver disease among alcohol drinkers. The study suggests that high intake of Mg may be associated with reduced risks of fatty liver disease and prediabetes. Further large studies, particularly prospective cohort studies, are warranted to confirm the findings. DNA methylation is not involved in dietary restriction induced lifespan extension in adult Drosophila TING LIAN, UMA GAUR, QI WU, JIANBO TU, BOYUAN SUN, DEYING YANG, XIAOLAN FAN, XUEPING MAO, MINGYAO YANG Journal: Genetics Research / Volume 100 / 2018 Published online by Cambridge University Press: 01 February 2018, e1 Dietary restriction (DR) is widely regarded as a viable intervention to extend lifespan and healthspan in diverse organisms. The precise molecular regulatory mechanisms are largely unknown. Epigenetic modifications are not stable upon DR and also keep changing with age. Here, we employed whole genome bisulfite sequencing to determine the DNA methylation changes upon DR in adult Drosophila. Our results indicate that although a low level of DNA methylation exists in the adult Drosophila genome, there is no significant difference in DNA methylation levels upon DR when compared to unrestricted flies. This suggests that other epigenetic components such as histone modifications might be altered by DR. Nonlinear phase-resolved reconstruction of irregular water waves Yusheng Qi, Guangyu Wu, Yuming Liu, Moo-Hyun Kim, Dick K. P. Yue Journal: Journal of Fluid Mechanics / Volume 838 / 10 March 2018 We develop and validate a high-order reconstruction (HOR) method for the phase-resolved reconstruction of a nonlinear wave field given a set of wave measurements. HOR optimizes the amplitude and phase of $L$ free wave components of the wave field, accounting for nonlinear wave interactions up to order $M$ in the evolution, to obtain a wave field that minimizes the reconstruction error between the reconstructed wave field and the given measurements. For a given reconstruction tolerance, $L$ and $M$ are provided in the HOR scheme itself. To demonstrate the validity and efficacy of HOR, we perform extensive tests of general two- and three-dimensional wave fields specified by theoretical Stokes waves, nonlinear simulations and physical wave fields in tank experiments which we conduct. 
The necessary $L$ , for general broad-banded wave fields, is shown to be substantially less than the free and locked modes needed for the nonlinear evolution. We find that, even for relatively small wave steepness, the inclusion of high-order effects in HOR is important for prediction of wave kinematics not in the measurements. For all the cases we consider, HOR converges to the underlying wave field within a nonlinear spatial-temporal predictable zone ${\mathcal{P}}_{NL}$ which depends on the measurements and wave nonlinearity. For infinitesimal waves, ${\mathcal{P}}_{NL}$ matches the linear predictable zone ${\mathcal{P}}_{L}$ , verifying the analytic solution presented in Qi et al. (Wave Motion, vol. 77, 2018, pp. 195–213). With increasing wave nonlinearity, we find that ${\mathcal{P}}_{NL}$ contains and is generally greater than ${\mathcal{P}}_{L}$ . Thus ${\mathcal{P}}_{L}$ provides a (conservative) estimate of ${\mathcal{P}}_{NL}$ when the underlying wave field is not known. Effects of dietary grape proanthocyanidins on the growth performance, jejunum morphology and plasma biochemical indices of broiler chicks J. Y. Yang, H. J. Zhang, J. Wang, S. G. Wu, H. Y. Yue, X. R. Jiang, G. H. Qi Journal: animal / Volume 11 / Issue 5 / May 2017 Published online by Cambridge University Press: 24 October 2016, pp. 762-770 Grape proanthocyanidins (GPCs) are a family of naturally derived polyphenols that have aroused interest in the poultry industry due to their versatile role in animal health. This study was conducted to investigate the potential benefits and appropriate dosages of GPCs on growth performance, jejunum morphology, plasma antioxidant capacity and the biochemical indices of broiler chicks. A total of 280 newly hatched male Cobb 500 broiler chicks were randomly allocated into four treatments of seven replicates each, and were fed a wheat–soybean meal-type diet with or without (control group), 7.5, 15 or 30 mg/kg of GPCs. Results show that dietary GPCs decrease the feed conversion ratio and average daily gain from day 21 to day 42, increase breast muscle yield by day 42 and improve jejunum morphology between day 21 and day 42. Chicks fed 7.5 and 15 mg/kg of GPCs show increased breast muscle yield and exhibit improved jejunum morphologies than birds in the control group. Dietary GPCs fed at a level of 15 mg/kg markedly increased total superoxide dismutase (T-SOD) activity between day 21 and day 42, whereas a supplement of GPCs at 7.5 mg/kg significantly increased T-SOD activity and decreased lipid peroxidation malondialdehyde content by day 42. A supplement of 30 mg/kg of GPCs has no effect on antioxidant status but adversely affects the blood biochemical indices, as evidenced by increased creatinine content, increased alkaline phosphatase by day 21 and increased alanine aminotransferase by day 42 in plasma. GPC levels caused quadratic effect on growth, jejunum morphology and plasma antioxidant capacity. The predicted optimal GPC levels for best plasma antioxidant capacity at 42 days was 13 to 15 mg/kg, for best feed efficiency during grower phase was 16 mg/kg, for best jejunum morphology at 42 days was 17 mg/kg. In conclusion, GPCs (fed at a level of 13 to 17 mg/kg) have the potential to be a promising feed additive for broiler chicks. 
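The broiler abstract above reports quadratic dose effects of GPCs and predicted optimal supplementation levels. A generic way to obtain such an optimum is to fit a second-order polynomial to dose-response data and take the vertex of the fitted parabola; the data points below are invented for illustration, and only the fit-then-locate-the-vertex procedure mirrors the kind of analysis described.

# Illustrative quadratic dose-response fit; the response values are invented.
import numpy as np

dose = np.array([0.0, 7.5, 15.0, 30.0])             # GPC supplementation, mg/kg
response = np.array([100.0, 118.0, 122.0, 104.0])   # hypothetical antioxidant measure

a, b, c = np.polyfit(dose, response, deg=2)          # response ~ a*dose**2 + b*dose + c
optimal_dose = -b / (2.0 * a)                        # vertex of the downward parabola
print(f"fitted optimum near {optimal_dose:.1f} mg/kg")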
Role of CD46 Polymorphisms in the Occurrence of Disease in Young Chinese Men With Human Adenovirus Type 55 Infection Qi Lv, Hui Ding, Zi-quan Liu, Hong-wei Gao, Bao-guo Yu, Zhou-wei Wu, Hao-jun Fan, Shi-ke Hou Journal: Disaster Medicine and Public Health Preparedness / Volume 12 / Issue 4 / August 2018 Human adenovirus type 55 (HAdV-55) has recently caused multiple outbreaks. This study examined polymorphisms in CD46 to determine their involvement in HAdV-55 infection. A total of 214 study subjects infected with HAdV-55 were included in our study. The study subjects were divided into those with silent infections (n=91), minor infections (n=85), and severe infections (n=38). Ten single nucleotide polymorphisms (SNPs) from CD46 were examined. Compared with the AA genotype, the TT genotype at rs2724385 (CD46, A/T) was associated with a protective effect against disease occurrence, with an odds ratio (95% confidence interval) of 0.20 (0.04-0.97) (P=0.038). There were no significant differences between the patients with minor and severe infection and those who had silent HAdV-55 infection in the other CD46 SNPs. We next compared the polymorphisms of these genes according to disease severity in HAdV-55-infected patients with clinical symptoms. The results showed that there were no significant differences between minor infections and severe infections. Our results suggested that the CD46 SNP at rs2724385 is associated with the occurrence of disease in HAdV-55-infected patients. A much larger number of samples is required to understand the role of CD46 polymorphisms in the occurrence and progression of infection by HAdV-55. (Disaster Med Public Health Preparedness. 2018;12:427–430) Analysis of the Three-Tiered Treatment Model for Emergency Medical Rescue Services After the Lushan Earthquake ZiQuan Liu, Zhen Yang, Qi Lv, Hui Ding, XinJun Suo, HongWei Gao, LiMin Xin, WenLong Dong, RuiChang Wu, HaoJun Fan, ShiKe Hou Journal: Disaster Medicine and Public Health Preparedness / Volume 12 / Issue 3 / June 2018 To explore the 3-tiered treatment model for medical treatment after an earthquake. Based on the practices of the national emergency medical rescue services in the Lushan earthquake zone, the 3-tiered treatment classification approach was retrospectively reviewed. Medical rescue teams assembled and reported quickly to the disaster areas after the earthquake. The number of injured people had reached 25,176 as of April 30; of these, 18,611 people were treated as outpatients, 6565 were hospitalized, and 977 were seriously or severely injured. The 3-tiered treatment model was the main approach used by rescue services after the Lushan earthquake. Primary and secondary treatments were of the highest importance and formed the basis of the Lushan model of earthquake rescue and treatment. (Disaster Med Public Health Preparedness. 2018; 12: 301–304) High-Level $^{14}$C Contamination and Recovery at Xi'an AMS Center Weijian Zhou, Shugang Wu, Todd E Lange, Xuefeng Lu, Peng Cheng, Xiaohu Xiong, Richard J Cruz, Qi Liu, Yunchong Fu, Wennian Zhao Journal: Radiocarbon / Volume 54 / Issue 2 / 2012 A sample with a radiocarbon concentration estimated to be greater than $10^5$ times Modern was inadvertently graphitized and measured in the Xi'an AMS system last year. Both the sample preparation lines and the ion source system were seriously contaminated and a series of cleaning procedures were carried out to remove the contamination from them.
After repeated and careful cleaning as well as continuous flushing with dead CO$_2$ gas, both systems have recovered from the contamination event. The machine background is back to $2.0 \times 10^{-16}$ and the chemical blank is beyond 50 kyr. Visible-light responsive plasmonic Ag$_2$O/Ag/g-C$_3$N$_4$ nanosheets with enhanced photocatalytic degradation of Rhodamine B Shurong Fu, Yiming He, Qi Wu, Ying Wu, Tinghua Wu Journal: Journal of Materials Research / Volume 31 / Issue 15 / 15 August 2016 Print publication: 15 August 2016 Visible-light responsive plasmonic Ag$_2$O/Ag/g-C$_3$N$_4$ nanosheets (NS) were successfully prepared by a simple and green photodeposition method. The obtained composites were characterized by XRD, Fourier transform infrared, transmission electron microscopy, UV-vis, and the photoluminescence (PL) results indicated that the Ag$_2$O/Ag/g-C$_3$N$_4$ NS composites showed better photoabsorption performance than g-C$_3$N$_4$ due to the surface plasmon resonance effect of Ag nanoparticles. Meanwhile, the composite exhibited excellent photocatalytic activities, which were ∼3.8 and ∼3.0 times higher than those of bulk g-C$_3$N$_4$ and pure g-C$_3$N$_4$ NS, respectively. Moreover, the as-prepared composites showed a high structural stability in the photodegradation of Rhodamine B. A possible photocatalytic and charge separation mechanism was suggested based on the PL spectra and the active species trapping experiment. Mapping Global Shipping Density from AIS Data Lin Wu, Yongjun Xu, Qi Wang, Fei Wang, Zhiwei Xu Journal: The Journal of Navigation / Volume 70 / Issue 1 / January 2017 Mapping global shipping density, including vessel density and traffic density, is important to reveal the distribution of ships and traffic. The Automatic Identification System (AIS) is an automatic reporting system widely installed on ships initially for collision avoidance by reporting their kinematic and identity information continuously. An algorithm was created to account for errors in the data when ship tracks seem to 'jump' large distances, an artefact resulting from the use of duplicate identities. The shipping density maps, including the vessel and traffic density maps, as well as AIS receiving frequency maps, were derived based on around 20 billion distinct records during the period from August 2012 to April 2015. Map outputs were created in three different spatial resolutions: 1° latitude by 1° longitude, 10 minutes latitude by 10 minutes longitude, and 1 minute latitude by 1 minute longitude. The results show that it takes only 56 hours to process these records to derive the density maps, 1·7 hours per month on average, including data retrieval, computation and updating of the database. Hearing restoration for adults with vestibular schwannoma in the only-hearing ear: ipsilateral, contralateral or bilateral cochlear implantation?: Presenting Author: Zhihua Zhang Zhihua Zhang, Zirong Huo, Qi Huang, Zhaoyan Wang, Jun Yang, Hao Wu Journal: The Journal of Laryngology & Otology / Volume 130 / Issue S3 / May 2016 Published online by Cambridge University Press: 03 June 2016, pp. S253-S254 Influence of cystic tumor degeneration on management strategy in vestibular schwannoma: Presenting Author: Zhihua Zhang Published online by Cambridge University Press: 03 June 2016, p. S253
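The AIS abstract above mentions an algorithm that corrects for ship tracks that appear to 'jump' large distances because two vessels share an identity. One plausible way to flag such jumps (an assumed illustration, not the authors' algorithm) is to compute the speed implied by consecutive position reports and split the track whenever that speed is physically implausible.

# Assumed illustration: split AIS tracks wherever the implied speed between two
# consecutive reports exceeds a physical limit, a symptom of duplicate identities.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def split_on_jumps(track, max_speed_knots=50.0):
    """track: list of (timestamp_seconds, lat, lon) sorted by time; returns sub-tracks."""
    segments, current = [], [track[0]]
    for prev, cur in zip(track, track[1:]):
        dt_hours = max((cur[0] - prev[0]) / 3600.0, 1e-6)
        dist_nm = haversine_km(prev[1], prev[2], cur[1], cur[2]) / 1.852
        if dist_nm / dt_hours > max_speed_knots:
            segments.append(current)   # close the segment before the implausible jump
            current = [cur]
        else:
            current.append(cur)
    segments.append(current)
    return segments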
Robert F. Tichy's Homepage After 5 days in the mountains (2008) Mont Blanc (2017) Office: +43/316/873/7120 Fax: +43/316/873/7126 email: [email protected] my address: Institut für Analysis und Zahlentheorie Uniform Distribution and Discrepancy Analytic Combinatorics Algorithmic Number Theory Fractal Structures Asymptotic and Stochastic Analysis Information Based Complexity Quasi-Monte Carlo Methods Mathematics in Finance and Insurance Administrative Positions Head of the Department of Mathematics (TU Graz, 1994-2000) Co-Speaker of the research area "Number-Theoretic Algorithms and their Applications" (funded by FWF) (2000-2005) Member of the Senate of the TU Graz (1998-2006) Member of the Convent of the TU Graz (2003-2004) Vice-President of the Austrian Mathematical Society (OeMG) (2002-2005) President of the Austrian Mathematical Society (OeMG) (2006-2009) Dean (Faculty of Science, TU Graz, 2003), Dean (2004-2009) and Vice-Dean (2010-2017) of the Faculty of Mathematical and Physical Sciences Member of the Board (Kuratorium) of the FWF (Austrian Science Foundation) (2006-2014) Appointment to the ESF Standing Committee for Physical and Engineering Sciences (PESC, 2011-2012) Austrian delegate in the General Assembly of ICM 2014 in Korea Dean (Faculty of Mathematics, Physics and Geodesy, 2018-2019) Editorial Duties Journal of Number Theory (1991 - 2000) Journal de Théorie des Nombres, Bordeaux Monatshefte für Mathematik (advisory board) Mathematica Slovaca Fibonacci Quarterly Uniform Distribution Theory Grazer Mathematische Berichte "Number Theoretic Analysis", Lecture Notes in Mathematics volume 1452, Springer 1990, with E. Hlawka "Algebraic Number Theory and Diophantine Analysis", de Gruyter Proceedings in Mathematics, de Gruyter 2000, with F. Halter-Koch Integers, Electronic Journal of Combinatorial Number Theory (Member of the Editorial Board) "Diophantine Approximation: Festschrift for Wolfgang Schmidt", Springer 2008, with H.P. Schlickewei and K. Schmidt "Dependence in Probability, Analysis and Number Theory" (In Memory of Walter Philipp), Kendrick Press 2010, with I. Berkes, R.C. Bradley, H Dehling and M. Peligrad International Mathematical News (IMN, Austrian Mathematical Society) Professional Positions and Awards Ph.D. University of Vienna (1979) Life-Insurance consultant (1979-1981) Assistant (Vienna, 1980-1983) Lecturer of Actuarial Sciences (Linz, 1980) Award of the Austrian Mathematical Society (1985) Dozent (Habilitation TU Vienna, 1983-1990) Full Professor (TU Graz, since 1990) Visiting Positions: Salzburg (1986), Tata Institute Bombay (1992), Marseille (1993, 1995, 2010, 2011), Debrecen (1997), University of Illinois (Urbana-Champaign, 2000), University of the Witwatersrand (Johannesburg, 2003), University of Vienna (2010), Macquarie University Sydney (2010), University Paris 7 (2018) Member of the New York Academy of Sciences (1997-2002) Corresponding Member of the Austrian Academy of Sciences (2004- ) Faculty Member of the doctoral school DK Discrete Mathematics (2010-) Faculty Member of the SFB "Quasi-Monte Carlo Methods: Theory and Applications" SFB Quasi-Monte Carlo Methods (2014-) Honorary Doctor, University of Debrecen (2017) Senior Member of the London Mathematical Society Life Member of the American Mathematical Society Jean-Morlet Chair (CIRM and University Aix-Marseille, 2020) Each semester, I do teaching for undergraduate engineering students as well as undergraduate and graduate math students. For details see "List of current courses and lectures". 
Furthermore, each year, I give a lecture in insurance mathematics. PhD Students (Mathematics Genealogy Project) Peter Grabner, Peter Kirschenhofer and Robert Tichy: Combinatorial and arithmetical properties of linear numeration systems. Combinatorica, 22 (2002), 245-267 Hansjörg Albrecher, Jozef Teugels and Robert Tichy: On a gamma series expansion for the time-dependent probability of collective ruin. Insurance / Mathematics & economics, 29 (2001), 345-355 Yuri F. Bilu and Robert Tichy: The diophantine equation f(x)=g(y). Acta arithmetica, 95/3 (2000), 261-288 Michael Drmota and Robert Tichy: Sequences, discrepancies and applications, Springer-Verlag Berlin, 1997 Peter Grabner, Pierre Liardet and Robert Tichy: Odometers and systems of numeration. Acta arithmetica, 70 (1995), 103-123 Philippe Flajolet, Peter Grabner, Peter Kirschenhofer, Helmut Prodinger and Robert Tichy: Mellin transforms and asymptotics: digital sums. Theoretical computer science, 123/2 (1994), 291-314 Peter Grabner and Robert Tichy: alpha-expansions, linear recurrences and the sum-of-digits function. Manuscripta mathematica, 70 (1991), 311-324 Martin Blümlinger, Michael Drmota and Robert Tichy: A uniform law of the iterated logarithm for Brownian motion on compact Riemannian manifolds. Mathematische Zeitschrift, 201/4 (1989), 495-507 Philippe Flajolet, Peter Kirschenhofer and Robert Tichy: Deviations from uniformity in random strings. Probability theory and related fields, 80/1 (1988), 139-150 Robert Tichy: Ein metrischer Satz über vollständig gleichverteilte Folgen. Acta arithmetica, 48/2 (1987), 197-207 Norbert Kopecek, Gerhard Larcher, Robert Tichy and Gerhard Turnwald: On the discrepancy of sequences associated with the sum-of-digits function. Annales de l'Institut Fourier, 37/3 (1987), 1-17 Christian Buchta, Josef S. Müller and Robert Tichy: Stochastical approximation of convex bodies. Mathematische Annalen, 271/2 (1985), 225-235 Harald Niederreiter and Robert Tichy: Solution of a problem of Knuth on complete uniform distribution of sequences. Mathematika, 32/1 (1985), 26-32 Viktor Losert, Werner Georg Nowak and Robert Tichy: On the Asymptotic Distribution of the Powers of $(s\times s)$-Matrices. Compositio mathematica, 45/2 (1982), 273-291 Helmut Prodinger and Robert Tichy: Fibonacci numbers of graphs. The Fibonacci quarterly, 20/1 (1982), 16-21 Publications since 2000 Mahadi Ddamulira, Florian Luca and Robert Tichy, On the Shorey–Tijdeman Diophantine equation involving terms of Lucas sequences, Indagationes Mathematicae, Elsevier B.V., (2021). [doi] Robert Tichy, Ingrid Vukusic, Daodao Yang and Volker Ziegler, Integers representable as differences of linear recurrence sequences, Research in Number Theory, Springer, 7(2), (2021). [doi] Manfred Madritsch and Robert Tichy, MULTIDIMENSIONAL VAN DER CORPUT SETS AND SMALL FRACTIONAL PARTS OF POLYNOMIALS, Mathematika, University College London, 65(2), (2019), 400–435. [doi] Milan Paštéka and Robert Tichy, Measurable sequences, Rivista di matematica della Università di Parma, Università degli Studi di Parma, 10(1), (2019), 63–84. Dijana Kreso and Robert Tichy, Diophantine equations in separated variables, Periodica mathematica Hungarica, Springer Netherlands, (2018), 47–67. [doi] Christoph Aistleitner, Gerhard Larcher, Friedrich Pillichshammer, Eddin, Sumaia Saad and Tichy, Robert F., On Weyl products and uniform distribution modulo one, Monatshefte für Mathematik, Springer Wien, 185(3), (2018), 365–391. 
[doi] Thonhauser, Stefan Michael, Robert Tichy and Preischl, Michael Julius, Integral Equations, Quasi-Monte Carlo Methods and Risk Modelling, Chapter in (Dick, J., F. Kuo, Wo\'zniakowski, H., eds.), Springer Verlag, (2018), 1051–1074. [doi] Peter Grabner, Robert Tichy and Gregory Derfel, On the asymptotic behaviour of the zeros of solutions of one functional-differential equation with rescaling, Chapter in , Springer, 263, (2018), 281–293. Ernst Stadlober and Robert Tichy, Ulrich Dieter 1932-2018, Internationale Mathematische Nachrichten, Österreichische Mathematische Gesellschaft, ÖMG, 238, (2018), 33–43. Manfred Madritsch, A. M. Scheerer and R. Tichy, Computable absolutely Pisot normal numbers, Acta Arithmetica, Instytut Matematyczny, 184, (2018), 7–29. [doi] Gregory Derfel, Grabner, Peter J. and Tichy, Robert F., On the asymptotic behaviour of the zeros of the solutions of a functional-differential equation with rescaling, Chapter in , Springer International Publishing AG, (2018), 281–295. [doi] Christoph Aistleitner, Robert Tichy, Florian Pausinger and Svane, Anne Marie, On functions of bounded variation, Mathematical Proceedings of the Cambridge Philosophical Society, Cambridge University Press, 162(3), (2017), 405–418. [doi] Christian Elsholtz, Niclas Technau and Robert Tichy, On the regularity of primes in arithmetic progressions, International Journal of Number Theory, World Scientific Publishing, 13(5), (2017). [doi] Michael Kerber, Robert Tichy and Weitzer, Mario Franz, Constrained Triangulations, Volumes of Polytopes, and Unit Equations, In 33rd International Symposium on Computational Geometry (SoCG 2017), Schloss Dagstuhl - Leibniz-Zentrum für Informatik GmbH, (2017), 46:1—-46:15. I. Berkes and R. Tichy, The Kadec-Pelczynski theorem in L^p, Proceedings of the American Mathematical Society, American Mathematical Society, 144(5), (2016), 2053–2066. [doi] Iacò, Maria Rita, Wolfgang Steiner and Tichy, Robert F., Linear recursive odometers and beta-expansions, Uniform Distribution Theory, Slovak Academy of Sciences, 11(1), (2016), 175–186. Thonhauser, Stefan Michael, Robert Tichy, Iaco, Maria Rita, Oto Strauch and Vladimir Balaz, An extremal problem in uniform distribution theory, Uniform Distribution Theory, Slovak Academy of Sciences, (2016). Manfred Madritsch and Robert Tichy, Dynamical systems and uniform distribution of sequences, Chapter in , Springer Verlag, (2015). Iaco, Maria Rita, Milan Pasteka and Robert Tichy, Measure density for set decompositions and uniform distribution, Rendiconti del Circolo Matematico di Palermo, Circolo Matematico di Palermo, 64, (2015), 323–339. [doi] Dijana Kreso and Tichy, Robert F., Functional composition of polynomials: indecomposability, Diophantine equations and lacunary polynomials, Grazer mathematische Berichte, 363, (2015), 143–170. Iaco, Maria Rita, Thonhauser, Stefan Michael and Robert Tichy, Distribution functions, extremal limits and optimal transport, Indagationes Mathematicae, Elsevier B.V., 26(5), (2015), 823–841. [doi] Markus Hofer, Iaco, Maria Rita and Robert Tichy, Ergodic properties of $\beta$-adic Halton sequences, Ergodic theory and dynamical systems, Cambridge University Press, 35(3), (2015), 895–909. [doi] Christopher Frei, Robert Tichy and Volker Ziegler, On sums of s-integers of bounded norm, Monatshefte für Mathematik, Springer Wien, 175(2), (2014), 241–247. 
[doi] Bergelson V., [No Value], Grigori Kolesnik and Manfred Madritsch, Younghwan Son and Robert Tichy, Uniform distribution of prime powers and sets of recurrence and van der Corput sets in Z^k, Israel journal of mathematics, Springer New York, 201(2), (2014), 729–760. Vitaly Bergelson, Grigori Kolesnik, Manfred Madritsch, Younghwan Son and Robert Tichy, Uniform distribution of prime powers and sets of recurrence and van der Corput sets in ℤ k, Israel journal of mathematics, Springer New York, 201(2), (2014), 729–760. [doi] István Berkes and Robert Tichy, On permutation-invariance of limit theorems, Journal of Complexity, Elsevier USA, 31(3), (2014), 372–379. [doi] Manfred Madritsch and Robert Tichy, Construction of normal numbers via generalized prime power sequences, Journal of Integer Sequences, AT & T, 16(2), (2013), 17 –30. Robert Tichy and István Berkes, Lacunary series and stable distributions, Chapter in , ., (2013), 135–143. Christoph Aistleitner, István Berkes and Robert Tichy, On the system $f(nx)$ and probabilistic number theory, In Analytic and Probabilistic Number Theory, ., (2012), 1–18. Christoph Aistleitner, István Berkes and Robert Tichy, On the law of the iterated logarithm for permuted lacunary sequences, Proceedings of the Steklov Institute of Mathematics, Springer Science+Business Media, 276, (2012), 3–20. Christoph Aistleitner, Markus Hofer and Robert Tichy, A central limit theorem for Latin hypercube sampling with dependence and application to exotic basket option pricing, International journal of theoretical and applied finance, World Scientific, 15, (2012), 1–20. Christoph Aistleitner, István Berkes and Robert Tichy, On permutations of lacunary series, RIMS Kôkyûroku / Bessatsu, B34, (2012), 1–25. Fabrizio Barroero, Christopher Frei and Robert Tichy, Additive unit representations in rings over global fields - A survey, Publicationes Mathematicae, Kossuth Lajos Tudomanyegyetem, 79(3-4), (2011), 291–307. [doi] Christoph Aistleitner, István Berkes and Robert Tichy, On the asymptotic behavior of weakly lacunary sequences, Proceedings of the American Mathematical Society, American Mathematical Society, 139(7), (2011), 2505–2517. C. Aistleitner, I. Berkes and R. Tichy, On the asymptotic behavior of weakly lacunary series, Proceedings of the American Mathematical Society, American Mathematical Society, 139(7), (2011), 2505–2517. [doi] Christoph Aistleitner, István Berkes and Robert Tichy, On permutations of Hardy- Littlewood-Pólya sequences, Transactions of the American Mathematical Society, American Mathematical Society, 363, (2011), 6219–6244. Robert Tichy, Christoph Aistleitner and István Berkes, Lacunary sequences and permutations, Chapter in , Kendrick Press, (2010), 35–49. Dependence in Probability, Analysis and Number Theory, (István Berkes, Bradley, Richard C., Herold Dehling, Magda Peligrad, Robert Tichy, eds.), Kendrick Press, (2010). Robert Tichy and Martin Zeiner, Baire results of multi-sequences, Uniform Distribution Theory, Slovak Academy of Sciences, 5(1), (2010), 13–44. Robert Tichy and Johannes Wallner, Johannes Frischauf - eine schillernde Persönlichkeit in Mathematik und Alpinismus, Internationale Mathematische Nachrichten, Österreichische Mathematische Gesellschaft, ÖMG, 210, (2009), 21–32. Robert Tichy, Nachruf auf Edmund Hlawka, Monatshefte für Mathematik, Springer Wien, 158, (2009), 107–120. 
Fuchs, Clemens Josef, Robert Tichy and Volker Ziegler, On quantitative aspects of the unit sum number problem, Archiv der Mathematik, Springer International Publishing AG, 93, (2009), 259–268. István Berkes, Walter Philipp and Robert Tichy, Entropy conditons for subsequences of random variables with applications to empirical processes, Monatshefte für Mathematik, Springer Wien, 153(3), (2008), 183–204. [doi] Alan Filipin, Robert Tichy and Volker Ziegler, The additive unit structure of pure quartic complex fields, Functiones et approximatio, Adam Mickiewicz University Press, 39(1), (2008), 113–131. [doi] Robert Tichy and Stephan Wagner, Algorithmic generation of molecular graphs with large Merrifield-Simmons index, Match - communications in mathematical and in computer chemistry, University of Kragujevac, Faculty of Science, 59(1), (2008), 239–252. Robert Tichy, Nachruf auf Walter Philipp, Monatshefte für Mathematik, Springer Wien, 153, (2008), 177–182. Attila Pethö, Fuchs, Clemens Josef and Robert Tichy, On the diophantine equation $G_n(x)=G_m(P(x))$ with $Q(x,y)=0$, Chapter in , Springer, 16, (2008), 199–209. [doi] Manfred Madritsch, Jörg Thuswaldner and Robert Tichy, Normality of numbers generated by the values of entire functions, Journal of Number Theory, Academic Press, 128(5), (2008), 1127–1145. [doi] István Berkes, Walter Philipp and Robert Tichy, Metric discrepancy results for sequences $\{n_kx\}$ and diophantine equations, Chapter in , Springer, 16, (2008), 95–105. [doi] Robert Tichy, Volker Ziegler and Alan Filipin, On the quantitative unit sum number problem - an application of the subspace theorem, Acta Arithmetica, Instytut Matematyczny, 133, (2008), 297–308. Volker Ziegler, Robert Tichy and Alan Filipin, The additive unit structure of purely quartic complex fields, Functiones et approximatio, Adam Mickiewicz University Press, 39(1), (2008), 113–131. Thomas Stoll and Robert Tichy, Diophantine equations for Morgan-Voyce and other modified orthogonal polynomials, Mathematica Slovaca, deGruyter, 58(1), (2008), 11–18. Robert Tichy, István Berkes and Walter Philipp, Pseudorandom numbers and entropy conditions, Journal of Complexity, Elsevier USA, 23(4-6), (2007), 516–527. Volker Ziegler, Robert Tichy and Stephan Wagner, Graphs, Partitions and Fibonacci Numbers, Discrete Applied Mathematics, Elsevier B.V., 155(10), (2007), 1175–1187. Volker Ziegler and Robert Tichy, Units generating the ring of integers of complex cubic fields, Colloquium Mathematicum, Institute of Mathematics, Polish Academy of Sciences, 109(1), (2007), 71–83. István Berkes, Walter Philipp and Robert Tichy, Empirical processes in probabilistic number theory: the LIL for the discrepancy of $(n_kw)$ mod 1, Illinois journal of mathematics, University of Illinois at Urbana-Champaign, 50(1), (2006), 107–145. István Berkes, Walter Philipp and Tichy, Robert F., Empirical processes in probabilistic number theory: The LIL for the discrepancy of (nkω) MOD 1, Illinois journal of mathematics, University of Illinois at Urbana-Champaign, 50(1), (2006), 107–145. [doi] Thomas Stoll and Tichy, Robert F., Octahedrons with equally many lattice points and generalizations, (2006), 724–729. 
Clemens Heuberger, Attila Pethö and Robert Tichy, Thomas' Family of Thue Equations over Imaginary Quadratic Fields, II, Sitzungsberichte und Anzeiger / Österreichische Akademie der Wissenschaften, Mathematisch-Naturwissenschaftliche Klasse : Abteilung I, Biologische Wissenschaften und Erdwissenschaften ; Abteilung II, Mathematische, Physikalische und Technische Wissenschaften, Verlag der Österreichischen Akademie der Wissenschaften, 142, (2006), 3–7. Peter Grabner, Pierre Liardet and Robert Tichy, Spectral Disjointness of Dynamical Systems Related to Some Arithmetic Functions, Publicationes Mathematicae, Kossuth Lajos Tudomanyegyetem, 66, (2005), 213–244. Andrej Dujella, Ivica Gusic and Robert Tichy, On the indecomposability of polynomials, Sitzungsberichte und Anzeiger / Österreichische Akademie der Wissenschaften, Mathematisch-Naturwissenschaftliche Klasse : Abteilung I, Biologische Wissenschaften und Erdwissenschaften ; Abteilung II, Mathematische, Physikalische und Technische Wissenschaften, Verlag der Österreichischen Akademie der Wissenschaften, 214, (2005), 81–88. Jörg Thuswaldner and Robert Tichy, Waring's problem with digital restrictions, Israel journal of mathematics, Springer New York, 149, (2005), 317–344. Hansjörg Albrecher, Jürgen Hartinger and Robert Tichy, On the distribution of dividend payments and the discounted penalty function in a risk model with linear dividend barrier, Scandinavian Actuarial Journal, Taylor and Francis Ltd., no. No. 2, (2005), 103–126. Robert Tichy and Stephan Wagner, Extremal Problems for Topological Indices in Combinatorial Chemistry, Journal of computational biology, Mary Ann Liebert Inc., 12(7), (2005), 1004–1013. Ladislav Misik and Robert Tichy, Large null sets in metric spaces, Journal of Mathematical Analysis and Applications, Elsevier B.V., 305, (2005), 424–437. G. Dorfer and R. F. Tichy, Quadratic algebraic numbers with finite $b$-adic expansion on the unit circle and their distribution, Mathematische Nachrichten, Wiley-VCH, 273, (2004), 58–74. Th Stoll and Tichy, R. F., The Diophantine equation α( m x) + β( n y) = γ, Publicationes Mathematicae, Kossuth Lajos Tudomanyegyetem, 64(1-2), (2004), 155–165. T. Stoll and R. F. Tichy, The Diophantine equation alpha(x choose m)+beta(y choose n)=gamma, Publicationes Mathematicae, Kossuth Lajos Tudomanyegyetem, 64, (2004), 155–165. Hansjörg Albrecher, Jürgen Hartinger and Robert Tichy, Quasi-Monte Carlo techniques for CAT bond pricing, Monte Carlo methods and applications, de Gruyter, 10(3-4), (2004), 197–212. Hansjörg Albrecher, Jürgen Hartinger and Tichy, Robert F., QMC techniques for CAT bond pricing, Monte Carlo methods and applications, de Gruyter, 10(3-4), (2004), 197–211. [doi] T. Stoll and R. F. Tichy, Diophantine equations involving general Meixner and Krawtchouk polynomials, Quaestiones mathematicae, Taylor and Francis Ltd., 27, (2004), 105–115. Jürgen Hartinger, Reinhold Kainhofer and Robert Tichy, Quasi-Monte Carlo Algorithms for unbounded, weighted integration problems, Journal of Complexity, Elsevier USA, 20(5), (2004), 654–668. Gerhard Larcher, Martin Predota and Robert Tichy, Arithmetic average options in the hyperbolic model, Monte Carlo methods and applications, de Gruyter, 9, (2003), 227–239. T. Stoll and R. F. Tichy, Diophantine equations for classical continuous orthogonal polynomials, Indagationes Mathematicae, Elsevier B.V., 14, (2003), 263–274. 
Clemens Fuchs, Attila Petho and Tichy, Robert F., On the Diophantine equation Gn(x) = Gm(P(x)): Higher-order recurrences, Transactions of the American Mathematical Society, American Mathematical Society, 355(11), (2003), 4657–4681. [doi] Fuchs, Clemens Josef, Robert Tichy and Attila Pethö, On the Diophantine equation G_n(x)=G_m(P(x)): higher-order recurrences, Transactions of the American Mathematical Society, American Mathematical Society, 355, (2003), 4657–4681. Hansjörg Albrecher, Jürgen Hartinger and Robert Tichy, Multivariate approximation methods for the pricing of catastrophe-linked bonds, International Series of Numerical Mathematics, Springer Nature Switzerland AG, 145, (2003), 21–39. L. L. Cristea and R. F. Tichy, Discrepancies of point sequences on the Sierpinski carpet, Mathematica Slovaca, deGruyter, 53, (2003), 351–367. M. Pasteka and R. F. Tichy, A note on the correlation coeffcient of arithmetic functions, Acta Academiae Paedagogicae Agriensis / Sectio Mathematicae, Eszterhazy Karoly College, 30, (2003), 109–114. Fuchs, Clemens Josef and Robert Tichy, Perfect powers in linear recurring sequences, Acta Arithmetica, Instytut Matematyczny, 107.1, (2003), 9–25. Hansjörg Albrecher, Reinhold Kainhofer and Robert Tichy, Simulation Methods in Ruin Models with Non-linear Dividend Barriers, Mathematics and Computers in Simulation, Elsevier B.V., 62, (2003), 277–287. Clemens Fuchs, Attila Petho and Tichy, Robert F., On the diophantine equation Gn(x) = Gm(P(x)), Monatshefte für Mathematik, Springer Wien, 137(3), (2002), 173–196. [doi] T. Siegl and R. F. Tichy, A model in ruin theory using derivative securities, Schweizerische Aktuarvereinigung: Mitteilungen, no. 1, (2002), 13–30. Peter Grabner, Peter Kirschenhofer and Robert Tichy, Combinatorial and arithmetical properties of linear numeration systems, Combinatorica, Springer, 22, (2002), 245–267. Clemens Heuberger, Attila Pethö and Robert Tichy, Thomas' family of Thue equations over imaginary quadratic fields, Journal of Symbolic Computation, Elsevier B.V., 34, (2002), 437–449. [doi] Fuchs, Clemens Josef, Robert Tichy and Attila Pethö, On the Diophantine equation G_n(x)=G_m(P(x)), Monatshefte für Mathematik, Springer Wien, 137(3), (2002), 173–196. W. Philipp and R. F. Tichy, Metric theorems for distribution measures of pseudorandom sequences, Monatshefte für Mathematik, Springer Wien, 135, (2002), 321–326. Fuchs, Clemens Josef, Robert Tichy and Andrej Dujella, Diophantine m-tuples for linear polynomials, Periodica mathematica Hungarica, Springer Netherlands, 45(1-2), (2002), 21–33. Y. Bilu, B. Brindza, P. Kirschenhofer, A. Pinter and R. F. Tichy, Diophantine equations and Bernoulli polynomials, Compositio mathematica, Cambridge University Press, 131, (2002), 173–188. Hansjörg Albrecher, Reinhold Kainhofer and Robert Tichy, Efficient Simulation Techniques for a Generalized Ruin Model, Grazer mathematische Berichte, 245, (2002), 79–110. Reinhold Kainhofer and Robert Tichy, QMC methods for the solution of differential equations with multiple delayed arguments, Grazer mathematische Berichte, 345, (2002), 111–129. Hansjörg Albrecher, Jozef Teugels and Robert Tichy, On a gamma series expansion for the time-dependent probability of collective ruin, Insurance / Mathematics & economics, Elsevier B.V., 29(3), (2001), 345–355. 
István Berkes, Walter Philipp and Robert Tichy, Pair correlation and U-statistics for independent and weakly dependent random variables, Illinois journal of mathematics, University of Illinois at Urbana-Champaign, 45, (2001), 559–580. A. Dujella and R. F. Tichy, Diophantine equations for second order recursive sequences of polynomials, The Quarterly Journal of Mathematics, Oxford University Press, 52, (2001), 161–169. Wolfgang Müller, Jörg Thuswaldner and Robert Tichy, Fractal properties of number systems, Periodica mathematica Hungarica, Springer Netherlands, 42, (2000), 51–68. Y. F. Bilu, T. Stoll and R. F. Tichy, Octahedrons with equally many lattice points, Periodica mathematica Hungarica, Springer Netherlands, 40, (2000), 229–238. Hansjörg Albrecher and Robert Tichy, Zur Konvergenz eines Lösungsverfahrens für ein Risikomodell mit gammaverteilten Schäden, Schweizerische Aktuarvereinigung: Mitteilungen, no. 2, (2000), 115–127. Hansjörg Albrecher, Jiri Matousek and Robert Tichy, Discrepancy of point sequences on fractal sets, Publicationes Mathematicae, Kossuth Lajos Tudomanyegyetem, 56(3-4), (2000), 233–249. T. Siegl and R. F. Tichy, Ruin theory with risk proportional to free reserve and securitization, Insurance / Mathematics & economics, Elsevier B.V., 26(1), (2000), 59–73. [doi] Bilu, Yuri F. and Tichy, Robert F., The Diophantine equation f(cursive Greek chi) = g(y), Acta Arithmetica, Instytut Matematyczny, 95(3), (2000), 261–288. [doi] Bilu, Yuri F. and Robert Tichy, The diophantine equation f(x)=g(y), Acta Arithmetica, Instytut Matematyczny, 95(3), (2000), 261–288. J. M. Thuswaldner and R. F. Tichy, An Erdös-Kac theorem for systems of $q$-additive functions, Indagationes Mathematicae, Elsevier B.V., 11, (2000), 283–291. M. Pasteka and R. F. Tichy, Uniformly distributed sequences and rings of complex numbers, Rivista di matematica della Università di Parma, Università degli Studi di Parma, 3, (2000), 1–10. Gesäuse mountains (2019) Baikal Number Theory Conference 2019 You might also like to have a look at some photographs taken at regular mountain tours. Also, look at some pictures from South Africa (2003) Back to the homepage of the group
neverendingbooks Tag: nim-addition How to win transfinite Nimbers? Published January 11, 2011 by lievenlb Last time we introduced the game of transfinite Nimbers and asked a winning move for the transfinite game with stones a at position $~(2,2) $, b at $~(4,\omega) $, c at $~(\omega+2,\omega+3) $ and d at position $~(\omega+4,\omega+1) $. Above is the unique winning move : we remove stone d and by the rectangle-rule add three new stones, marked 1. We only need to compute in finite fields to solve this and similar problems. First note that the largest finite number occuring in a stone-coordinate is 4, so in this case we can perform all calculations in the field $\mathbb{F}_{2^{2^2}}(\omega) = \mathbb{F}_{2^{12}} $ where (as before) we identify $\mathbb{F}_{2^{2^2}} = { 0,1,2,\ldots,15 } $ and we have seen that $\omega^3=2 $ (for ease of notation all Nim-additions and Nim-multiplications are denoted this time by + and x instead of $\oplus $ and $\otimes $ as we did last time, so for example $\omega^3 = \omega \otimes \omega \otimes \omega $). If you're not nimble with the Nim-tables, you can check all calculations in SAGE where we define this finite field via sage: R.< x,y,z>=GF(2)[] sage: S.< t,f,o>=R.quotient((x^2+x+1,y^2+y+x,z^3+x)) and we can now calculate in $\mathbb{F}_{2^{12}} $ using the symbols t for Two, f for Four and o for $\omega $. For example, we have seen that the nim-value of a stone is the nim-multiplication of its coordinates sage: t*t sage: f*o f*o sage: (o+t)*(o+t+1) o^2 + o + 1 sage: (o+f)*(o+1) f*o + o^2 + f + o That is, the nim-value of stone a is 3, stone b is $4 \times \omega $, stone c is $\omega^2+\omega+1 $ and finally that of stone d is equal to $\omega^2+5 \times \omega +4 $. By adding them up, the nim-value of the original position is a finite number : 6. Being non-zero we know that the 2nd player has a winning strategy. Just as in ordinary nim, we compare the value of a stone to the sum of the values of the other stones, to determine the stone we will play. These sums are for the four stones : 5 for a, $4 \times \omega+6 $ for b, $\omega^2+\omega+7 $ for c and $\omega^2+5 \times \omega+2 $ for d. There is only one stone where this sum is smaller than the stone-value, so we know we have to make our move with stone d. By the Nimbers-rule we need to find a rectangle with top-right hand corner $~(\omega+4,\omega+1) $ and lower-left hand corner $~(u,v) $ such that the values of the three new corners adds up to $\omega^2+5 \times \omega+2 $, that is we have to solve $u \times v + u \times (\omega+1) + (\omega+4)\times v = \omega^2+5 \times \omega+2 $ where u and v are ordinals smaller than $\omega+4 $ and $\omega+1 $. u and v cannot be both finite, for then we wouldn't obtain a $\omega^2 $ term. Similarly (u,v) cannot be of the form $~(u,\omega) $ with u finite because then the left-hand sum would be $\omega^2+4 \times \omega + u $ and likewise it cannot be of the form $~(\omega+i,v) $ with i and v finite as then the coefficient of $\omega $ in the left-hand sum will be i+1 and we cannot take i equal to 4. The only remaining possibility is that (u,v) is of the form $~(\omega+i,\omega) $ with i finite, in which case the left-hand sum equals $~\omega^2+ 5 \times \omega + i $ whence i=2 and we have found our unique winning move! But, our opponent can make life difficult by forcing us to compute in larger and larger finite fields. For example, if she would move next by dropping the c stone down to the 256-th row, what would be our next move? 
(One possible winning move is to remove the stone at $(\omega+2,\omega)$ and add stones at the three new corners of the rectangle: $(257,2)$, $(257,\omega)$ and $(\omega+2,2)$.)
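A minimal pure-Python sketch of finite nimber arithmetic can be used to check the small values appearing above (this is an added illustration; the post's own computations use SAGE, and the transfinite values involving $\omega$ still require the field $\mathbb{F}_{2^{12}}$). Nim-addition of natural numbers is bitwise XOR, and nim-multiplication follows Conway's minimal-excludant recursion, so for instance the stone at $(2,2)$ indeed has nim-value $2 \otimes 2 = 3$:

```python
from functools import lru_cache

def nim_add(a, b):
    # Nim-addition of finite nimbers is just bitwise XOR.
    return a ^ b

@lru_cache(maxsize=None)
def nim_mul(a, b):
    # Conway's recursive definition:
    # a*b = mex{ a'*b + a*b' + a'*b' : a' < a, b' < b }   (+ is nim-addition)
    options = set()
    for x in range(a):
        for y in range(b):
            options.add(nim_mul(x, b) ^ nim_mul(a, y) ^ nim_mul(x, y))
    m = 0
    while m in options:     # minimal excludant (mex)
        m += 1
    return m

print(nim_mul(2, 2))   # 3, the nim-value of the stone at (2,2)
print(nim_mul(4, 4))   # 6, since 4 x 4 = 6 in the nimbers
```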
Schematic of a human brain, with some key landmarks/structures

The brain is a fascinating device that remains to be understood. Made of approximately $10^{11}$ neurons, each of them connected to about 10000 targets, it is thus a complex and unique network with more than $10^{15}$ connections interacting with each other in an efficient and robust manner. We all have the same organization at a macroscopic scale, the same generic cortical areas: sensory inputs coming from our senses are wired in a reproducible manner at a large scale, but the fine details of those connections, which make each of us unique, are still unknown. Understanding how the primary sensory areas of the neocortex are structured in order to process sensory inputs is a crucial step in analysing the mechanisms underlying the functional role, from an algorithmic point of view, of cerebral activity. This understanding of the sensory dynamics, at a large-scale level, implies using simplified models of neurons, such as the "integrate-and-fire" models, and a particular framework, the "balanced" random network, which allows the recreation of dynamical regimes of conductances close to those observed in vivo, in which neurons spike at low rates and with an irregular discharge.

The Neuron

Neurons are, with glia cells, the fundamental processing units of the brain. Because of their fast time constants, neurons are considered responsible for most of the information processing, while glia cells are thought to be involved in much slower mechanisms such as plasticity or metabolism. Therefore, as a (pretty strong) first assumption, glia will be ignored in the following. Neurons basically act as spatio-temporal integrators, exchanging binary information through action potentials: they receive inputs from pre-synaptic cells, and when "enough" are received, either temporally or spatially, summed with complex non-linear interactions depending on their morphologies, they emit an action potential and send it to their targets. To picture it, imagine a bathtub with a hole at the bottom, and thus leaking water. Incoming action potentials are like buckets of water poured in time into the bathtub. If enough arrive during a short period of time (thus compensating for the leak), water will overflow. The neuron then emits an action potential, and its voltage (equivalent here to the water level) is reset to a default value. This is the principle of the "integrate-and-fire" neuron. Of course, this model is a crude simplification of all the biological processes taking place in real neurons. It discards all the non-linearities and the complex operations that may be achieved in the dendrites. However, because of its tractability, from both a computational and an analytical point of view, it is widely used in computational neuroscience.

First hand drawings of cortical neurons, drawn by Ramon y Cajal

The integrate-and-fire model of Lapicque. (A) The equivalent circuit with membrane capacitance $C_{\mathrm{m}}$ and membrane resistance $R_{\mathrm{m}}$. $V_{\mathrm{m}}$ is the membrane potential, $V_{\mathrm{rest}}$ is the resting membrane potential, and $I$ is an injected current. (B) The voltage trajectory of the model. When $V_{\mathrm{m}}$ reaches a threshold value, an action potential is generated and $V_{\mathrm{m}}$ is reset to a subthreshold value. (C) An integrate-and-fire model neuron driven by a time-varying current. The upper trace is the membrane potential and the bottom trace is the input current.
The Integrate-and-fire model

From a more mathematical point of view, inputs to the neurons are described as ionic currents flowing through the cell membrane when neurotransmitters are released. Their sum is seen as a physical time-dependent current $I(t)$, and the membrane is described as an RC circuit, charged by $I(t)$ (see the figure above, taken from [Abbott 1999]). When the membrane potential $V_{\mathrm{m}}$ reaches a threshold value $V_{\mathrm{thresh}}$, a spike is emitted and the membrane potential is reset. In its basic form, the equation of the integrate-and-fire model is:

$$\tau_{\mathrm{m}} \frac{dV_{\mathrm{m}}(t)}{dt} = -V_{\mathrm{m}}(t) + RI(t)$$

where $R$ is the resistance of the membrane, with $\tau_{\mathrm{m}} = R\,C_{\mathrm{m}}$. To refine and be more precise, the neuronal input was approximated above as a fluctuating current, but synaptic drives are better modelled by fluctuating conductances: the amplitudes of the post-synaptic potentials (PSPs) evoked by neurotransmitter release from a pre-synaptic neuron depend on the post-synaptic depolarization level. A lot of studies now focus on this integrate-and-fire model with conductance-based synapses [Destexhe 2001; Tiesinga 2000; Cessac 2008; Vogels 2005]. The equation of the membrane potential dynamics is then:

$$\tau_{\mathrm{m}} \frac{dV_{\mathrm{m}}(t)}{dt} = (V_{\mathrm{rest}}-V_{\mathrm{m}}(t)) + g_{\mathrm{exc}}(t)(E_{\mathrm{exc}}-V_{\mathrm{m}}(t)) + g_{\mathrm{inh}}(t)(E_{\mathrm{inh}}-V_{\mathrm{m}}(t))$$

When $V_{\mathrm{m}}$ reaches the spiking threshold $V_{\mathrm{thresh}}$, a spike is generated and the membrane potential is held at the resting potential $V_{\mathrm{rest}}$ for a refractory period of duration $\tau_{\mathrm{ref}}$. Synaptic connections are modelled as conductance changes: when a spike is emitted, the conductance jumps and then decays exponentially with time constants $\tau_{\mathrm{exc}}$ and $\tau_{\mathrm{inh}}$ for excitatory and inhibitory post-synaptic potentials, respectively. The shape of the PSP need not be exponential; other shapes can be used, such as alpha synapses or double-exponential synapses. $E_{\mathrm{exc}}$ and $E_{\mathrm{inh}}$ are the reversal potentials for excitation and inhibition.

The chemical synapse

The synapse is a key element where the axon of a pre-synaptic neuron $A$ connects with the dendritic arbour of a post-synaptic neuron $B$. It transmits the electrical influx emitted by neuron $A$ to $B$. Synapses are crucial in shaping a network's structure, and their ability to modify their efficacy according to the activity of the pre- and the post-synaptic neuron is at the origin of synaptic plasticity and memory retention in neuronal networks. Synapses can be either chemical or electrical, but again, for a more exhaustive description, the latter will be discarded here. Focusing only on chemical synapses, the pre-synaptic neuron releases a neurotransmitter into the synaptic cleft, which then binds to receptors embedded in the plasma membrane on the surface of the post-synaptic neuron $B$. These neurotransmitters are stored in vesicles and regenerated continuously, but too strong a stimulation of the synapse may lead to a temporary lack of neurotransmitter, or to a saturation of the post-synaptic receptors on $B$. This short-term plasticity phenomenon is called synaptic adaptation. The type of neurotransmitter received by the post-synaptic neuron influences its activity. The synaptic current is cancelled at a given inversion (reversal) potential: if this inversion potential is below the voltage threshold for triggering an action potential, the net synaptic effect inhibits the neuron, and if it is above, it excites the cell. A classical neurotransmitter such as glutamate leads to a depolarization (i.e.
an increase of the membrane potential), and the synapse is said to be excitatory. In contrast, gamma-aminobutyric acid (GABA) leads to a hyper-polarization (a decrease of the membrane potential), and the synapse is said to be inhibitory. In general, a given neuron produces only one type of neurotransmitter, being either only excitatory or only inhibitory. This principle is known as Dale's principle, and is a common assumption made in models of neuronal networks.

Top: schematic illustration of a synaptic contact between two neurons. The axon of pre-synaptic neuron $A$ establishes a synapse with a dendrite of post-synaptic neuron $B$. Bottom: detail of the synaptic cleft. Neurotransmitters stored in vesicles are liberated when the pre-synaptic membrane is depolarized, and then dock onto receptors of the post-synaptic neuron.
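To make the integrate-and-fire equations above concrete, here is a minimal Python sketch of a current-based leaky integrate-and-fire neuron integrated with the forward Euler method. It is only an illustration of the model described in the text, not code from the original page; all parameter values are assumed for the example, and the variant with an explicit resting potential is used.

```python
import numpy as np

# Assumed parameters for the illustration (not taken from the original text)
tau_m    = 20.0    # membrane time constant (ms)
R        = 10.0    # membrane resistance (MOhm)
V_rest   = -70.0   # resting potential (mV)
V_thresh = -50.0   # spiking threshold (mV)
V_reset  = -70.0   # reset potential (mV)
dt       = 0.1     # time step (ms)
T        = 200.0   # total simulated time (ms)

t = np.arange(0.0, T, dt)
I = 2.5 * np.ones_like(t)          # constant injected current (nA)

V = np.empty_like(t)
V[0] = V_rest
spikes = []

for k in range(1, len(t)):
    # Forward Euler step of  tau_m dV/dt = -(V - V_rest) + R*I
    dV = (-(V[k-1] - V_rest) + R * I[k-1]) * dt / tau_m
    V[k] = V[k-1] + dV
    if V[k] >= V_thresh:           # threshold crossing: emit a spike and reset
        spikes.append(t[k])
        V[k] = V_reset

print(f"{len(spikes)} spikes, firing rate approx. {1000 * len(spikes) / T:.1f} Hz")
```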
Pheromone Diffusion and Moths Response

Sexual communication among moths is accomplished chemically by the release of an "odor" into the air. This "odor" consists of the sexual pheromones. Pheromones are molecules that can undergo a diffusion process in which the random movement of gas molecules transports the chemical away from its source (Sol I. Rubinow, Mathematical Problems in the Biological Sciences, Lecture 9). However, diffusion processes are complex, and modelling them analytically and with accuracy is difficult, even more so when the geometry is not simple. For this reason, we decided to consider a simplified model in which pheromone chemicals obey the heat diffusion equation. This equation is then solved by the Euler numeric approximation in order to obtain the spatial and temporal distribution of pheromone concentration. See more about the heat equation and the mathematical expressions for the Euler method below.

Moths seem to respond to gradients of pheromone concentration, being attracted towards the source, although there are other factors that lead moths sexually to pheromone sources, such as optomotor anemotaxis (J. N. Perry and C. Wall, A Mathematical Model for the Flight of Pea Moth to Pheromone Traps Through a Crop). However, increasing the pheromone concentration to unnaturally high levels may disrupt male orientation (W. L. Roelofs and R. T. Carde, Responses of Lepidoptera to synthetic sex pheromone chemicals and their analogues). See more about the modeling of moth flight paths below.

Using a modeling environment called NetLogo, we simulate the approximate moth behavior during the pheromone dispersion process. This will help us to predict the moth response when they are also in the presence of our synthetic plants.

Diffusion Equation

Since pheromones are chemicals released into the air, we have to consider both the motion of the fluid and that of the particles suspended in it. The motion of fluids can be described by the Navier–Stokes equations, but the frequent nonlinearity of these equations makes most problems difficult or impossible to solve, since there may be turbulence in the air flow [1]. Turning to the particles suspended in the fluid, one option for pheromone dispersion modeling consists in assuming a diffusive-like behavior of the pheromones; that is, pheromones are molecules that can undergo a diffusion process in which the random movement of gas molecules transports the chemical away from its source [2]. There are two ways to introduce the notion of diffusion: either a phenomenological approach starting with Fick's laws of diffusion and their mathematical consequences, or a physical and atomistic one, by considering the random walk of the diffusing particles.
In our case, we decided to base the description of the diffusion process on Fick's laws. It is thus postulated that the flux goes from regions of high concentration to regions of low concentration, with a magnitude that is proportional to the concentration gradient. However, diffusion processes are complex, and modelling them analytically and with accuracy is difficult, even more so when the geometry is not simple (the final distribution of our plants in the crop field). For this reason, we decided to consider a simplified model in which pheromone chemicals obey the heat diffusion equation. The diffusion equation is a partial differential equation which describes density dynamics in a material undergoing diffusion. It is also used to describe processes exhibiting diffusive-like behavior, as in our case. The equation is usually written as: $$\frac{\partial \phi (r,t) }{\partial t} = \nabla \cdot [D(\phi,r) \nabla \phi(r,t)]$$ where $\phi(r, t)$ is the density of the diffusing material at location $r$ and time $t$, $D(\phi, r)$ is the collective diffusion coefficient for density $\phi$ at location $r$, and $\nabla$ represents the vector differential operator. If the diffusion coefficient does not depend on the density, then the equation is linear and $D$ is constant. Thus, the equation reduces to the following linear differential equation: $$\frac{\partial \phi (r,t) }{\partial t} = D \nabla^2 \phi(r,t)$$ also called the heat equation. Making use of this equation, we can write the pheromone diffusion equation, with no wind effect considered, as: $$\frac{\partial c }{\partial t} = D \nabla^2 c = D \Delta c$$ where $c$ is the pheromone concentration, $\Delta$ is the Laplacian operator, and $D$ is the pheromone diffusion constant in air. If we consider the wind, we face a diffusion system with drift, and an advection term is added to the equation above: $$\frac{\partial c }{\partial t} = D \nabla^2 c - \nabla \cdot (\vec{v} c )$$ where $\vec{v}$ is the average velocity with which the quantity is moving. Thus, $\vec{v}$ would be the velocity of the air flow. For simplicity, we are not going to consider the third dimension. In 2D the equation would be: $$\frac{\partial c }{\partial t} = D \left(\frac{\partial^2 c }{\partial x^2} + \frac{\partial^2 c }{\partial y^2}\right) - \left(v_{x} \cdot \frac{\partial c }{\partial x} + v_{y} \cdot \frac{\partial c }{\partial y} \right) = D \left( c_{xx} + c_{yy}\right) - \left(v_{x} \cdot c_{x} + v_{y} \cdot c_{y}\right) $$ To determine a numeric solution for this partial differential equation, the so-called finite-difference methods are used. The technique consists of approximating differential quotients by finite differences as the step $h$ approaches zero, so they are useful to approximate differential equations. With finite-difference methods, partial differential equations are replaced by their finite-difference approximations, resulting in a system of algebraic equations. This algebraic system is solved at each node $(x_i,y_j,t_k)$. These discrete values describe the temporal and spatial distribution of the unknown function. Although implicit methods are unconditionally stable, so time steps could be larger and the calculation faster, the tool we have used to solve our heat equation is the explicit Euler method. The explicit Euler method is the simplest option to approximate spatial derivatives, in which all values are assumed known at the beginning of the time step. The equation gives the new value of the pheromone level in terms of the initial values at that node and its immediate neighbors.
Since all these values are known, the process is called explicit. $$c(t_{k+1}) = c(t_k) + dt \cdot c'(t_k)$$ Now, applying this method to the first case (with no wind considered), we followed the next steps. 1. Split time $t$ into $n$ slices of equal length $dt$: $$ \left\{ \begin{array}{l} t_0 = 0 \\ t_k = k \cdot dt \\ t_n = t \end{array} \right. $$ 2. Considering the backward difference for the explicit Euler method implies that the expression for the current pheromone level at each time step is: $$c (x, y, t) \approx c (x, y, t - dt ) + dt \cdot c'(x, y, t)$$ 3. Now considering the spatial dimension, central differences are applied to the Laplace operator $\Delta$, and backward differences to the advection term (in 2D and assuming equal steps in the x and y directions): $$c (x, y, t) \approx c (x, y, t - dt ) + dt \left( D \cdot \nabla^2 c (x, y, t) - \vec{v} \cdot \nabla c (x, y, t) \right)$$ $$ D \cdot \nabla^2 c (x, y, t) = D \left( c_{xx} + c_{yy}\right) = D \, \frac{c_{i,j-1} + c_{i,j+1} + c_{i-1,j} + c_{i+1,j} - 4 c_{i,j}}{h^2} $$ $$ \vec{v} \cdot \nabla c (x, y, t) = v_{x} \cdot c_{x} + v_{y} \cdot c_{y} = v_{x} \frac{c_{i,j} - c_{i-1,j}}{h} + v_{y} \frac{c_{i,j} - c_{i,j-1}}{h} $$ With respect to the boundary conditions, they are null since we are considering an open space. Regarding the implementation and simulation of this method, $dt$ must be small enough to avoid instability.

Moth Response

When one stares at moths, they apparently move with erratic flight paths, possibly for predator-avoidance reasons. Within this frame, the influence of sex pheromones on moth behavior is also considered. Since these pheromones are released by females in order to attract an individual of the opposite sex, it makes sense that males respond to gradients of sex pheromone concentration, being attracted towards the source. As soon as a flying male randomly comes into the conical pheromone-effective sphere of sex pheromone released by a virgin female, the male begins to seek the female in a zigzag way, approaches her and finally copulates with her [1]. In this project we approximate the resulting moth movement as a vectorial combination of a gradient vector and a random vector. The magnitude of the gradient vector comes from the change in pheromone concentration level between points separated by a differential stretch in space. More precisely, the gradient points in the direction of the greatest rate of increase of the function, and its magnitude is the slope of the graph in that direction. The random vector is restricted in this 'moth response' model by a fixed angle, assuming that the turning movement is relatively continuous and that, for example, the moth cannot turn 180 degrees at the next instant. Since the objective of this project consists in avoiding pest damage by achieving mating disruption among moths, our synthetic plants are supposed to release enough sexual pheromone so as to saturate moth perception. In this sense, the resulting moth movement vector will ultimately depend on the pheromone concentration levels in the field and on the moth's ability to follow the gradient of sex pheromone concentration more or less well. At this point, let's highlight the three main aspects we consider for the characterization of male moth behavior: Table 1. Male moths behaviour characterization.
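Returning briefly to the finite-difference scheme above, here is a minimal NumPy sketch of one explicit Euler update. It is a hypothetical re-implementation for illustration only (the team's simulation runs in NetLogo); the grid spacing h, time step dt, diffusion constant D and wind velocity are assumed values, and the open-space boundary is crudely approximated by keeping the border cells at zero.

```python
import numpy as np

# Assumed illustrative parameters (not taken from the project's NetLogo model)
D   = 0.05           # diffusion coefficient (cm^2/s)
h   = 1.0            # grid spacing (cm)
dt  = 0.1            # time step (s), small enough for stability: dt <= h**2 / (4*D)
vx, vy = 0.02, 0.0   # wind velocity components (cm/s)

c = np.zeros((200, 200))   # pheromone concentration on the grid
source = (100, 100)        # a single emitting source (e.g. a sexyplant)

def step(c):
    """One explicit Euler update of the diffusion-advection equation."""
    new = c.copy()
    # Central differences for the Laplacian (interior points only)
    lap = (c[1:-1, :-2] + c[1:-1, 2:] + c[:-2, 1:-1] + c[2:, 1:-1]
           - 4.0 * c[1:-1, 1:-1]) / h**2
    # Backward differences for the advection term
    adv = (vx * (c[1:-1, 1:-1] - c[:-2, 1:-1]) / h
           + vy * (c[1:-1, 1:-1] - c[1:-1, :-2]) / h)
    new[1:-1, 1:-1] = c[1:-1, 1:-1] + dt * (D * lap - adv)
    return new

for _ in range(1000):
    c[source] += 1.0       # constant release at the source each step
    c = step(c)

print(c.max(), c.sum())    # rough check that mass spreads away from the source
```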
In this context, this ensemble of behaviors could be translated in a sum of vectors in which the random vector has a constant module and changeable direction inside a range, whereas the gradient vector module is a function of the gradient in the field. The question now is: how do we include the saturation effect in the resulting moth shift vector? With this in mind and focusing on the implementation process, our approach consists on the following: The gradient vector instead of experiencing a change in its magnitude, this will be always the unit and its direction that of the greatest rate of increase of the pheromone concentration. A random direction vector with constant module will not be literally considered, but a random turning angle starting from the gradient vector direction. Attending to the previous question how do we include the saturation effect in the resulting moth shift vector?, here the answer: the dependence on the moth saturation level (interrelated with the pheromone concentration in the field) will state in the random turning angle. Table 1. Approximation of the male moths behaviour. This random turning angle will not follow a uniform distribution, but a Poisson distribution in which the mean is zero (no angle detour from the gradient vector direction) and the standard-deviation will be inversely proportional to the intensity of the gradient of sex pheromone concentration in the field. This approach will drive to a 'sexual confusion' of the insect as the field homogeneity increases, since the moth in its direction of displacement will fit the gradient direction with certain probabilities which depend on how saturated they are. Yoshitoshi Hirooka and Masana Suwanai. Role of Insect Sex Pheromone in Mating Behavior I. Theoretical Consideration on Release and Diffusion of Sex Pheromone in the Air. W. L. Roelofs and R. T. Carde. Responses of Lepidoptera to Synthetic Sex Pheromone Chemicals and their Analogues. Using a modeling environment called Netlogo, we try to simulate the approximate moth population behavior when pheromone diffusion processes are given. Netlogo simulator can be find in its website at Northwestern University. To download the source file of our Sexyplant simlation in Netlogo click here: sexyplants.nlogo We consider three agents: male and female moths, and sexyplants. We have two kind of sexual pheromone emission sources: female moths and sexyplants. Our scenario is an opened crop field where sexyplants are intercropped and moths fly following different patterns depending on its sex. Females, apart from emitting sexual pheromones, they move with erratic random flight paths. After mating, females are 2 hours in which they are not emitting pheromone. Males also move randomly while they are under its detection threshold. But when they detect a certain pheromone concentration, they start to follow the pheromone concentration gradients until its saturation threshold is reached. sexyplants act as continuously- emitting sources and their activity is regulated by a Switch. On the side of pheromone diffusion process, it is simulated in Netlogo by implementing the Euler explicit method. Figure 1. NETLOGO Simulation environment. When sexyplants are switched-off, males move randomly until they detect pheromone traces from females, in that case they follow them. When sexyplants are switched-on, the pheromone starts to diffuse from them, rising up the concentration levels in the field. At first, the sexyplants have an effect of pheromone traps on the male moths. Figure 2. 
On the left: sexyplants are switched-off and a male moth follows the pheromone trace from a female. On the right: sexyplants are switched on and a male moth go towards the static source like it happens with synthetic pheromone traps. As the concentration rises in the field, it becomes more homogeneous. Remember that the random turning angle of the insect follows a Poisson distribution, in which the standard-deviation is inversely proportional to the intensity of the gradient. Thus, the probability of the insect to take a bigger detour from the faced gradient vector direction is higher. This means that it is less able to follow pheromone concentration gradients, so 'sexual confusion' is induced. Figure 3. NETLOGO Simulation of the field: sexyplants, female moths, pheromone diffusion and male moths. The parameters of this model are not as well-characterized as we expected at first. Finding the accurate values of these parameters is not a trivial task, in the literature it is difficult to find a number experimentally obtained. So we decided to take an inverse engineering approach. Doing a model parameters swept, we simulate many possible scenarios, and then we come up with values of parameters corresponding to our desired one: insects get confused. This will be useful to know the limitations of our system and to help to decide the final distribution of our plants in the crop field. Diffusion coefficient Range of search: 0.01-0.2 cm^2/s References: [1], [2], [3], [5] Release rate (female) Range of search: 0.02-1 µg/h References: [4], [5] Release rate (sexyplant) The range of search that we have considered is a little wider than the one for the release rate of females. References: It generally has been found that pheromone dispensers releasing the chemicals above a certain emission rate will catch fewer males. The optimum release rate or dispenser load for trap catch varies greatly among species [4]. This certain emission rate above which male start to get confused could be the release rate from females. Detection threshold Range of search: 0.001-1 [Mass]/[Distante]^2 Saturation threshold Range of search: 1-5[Mass]/[ Distante]^2 Moth sensitivity This is a parameter referred to the capability of the insect to detect changes in pheromone concentration in the patch it is located and the neighbor patch. When the field becomes more homogeneous, an insect with higher sensitivity will be more able to follow the gradients. Range: 0-0.0009 (The maximum level of moth sensitivity has to be less than the minimum level of release rate of females, since this parameter is obtained from the difference) Range: -0.1 – 0.1 cm/sec References: [7] (700cm/sec!!!) The number of males and females can be selected by the observer. Ticks (time step) !!! We'll consider the equivalence 20 ticks= 1 hour. That is 1 tick = 3 minutes. Patches !!! The approximate velocity of a male moth flying towards the female in natural environment is 0.3 m/sec [6]. Each moth moves 1 patch per tick, so if 1 tick is equal to 3 minutes (180 sec), the patch is 54 meter long to get that velocity. One can modify the number of patches that conform the field so as to analyze its own case. Wilson et al.1969, Hirooka and Suwanai, 1976. Monchich abd Mauson, 1961, Lugs, 1968. G. A. Lugg. Diffusion Coefficients of Some Organic and Other Vapors in Air. W. L. Roelofs and R. T. Carde. Responses of Lepidoptera to Synthetic Sex Pheromone Chemicals and their Analogues, Page 386. R.W. Mankiny, K.W. Vick, M.S. Mayer, J.A. Coeffelt and P.S. 
Callahan (1980) Models For Dispersal Of Vapors in Open and Confined Spaces: Applications to Sex Pheromone Trapping in a Warehouse, Page 932, 940. Tal Hadad, Ally Harari, Alex Liberzon, Roi Gurka (2013) On the correlation of moth flight to characteristics of a turbulent plume. Average Weather For Valencia, Manises, Costa del Azahar, Spain.

The aim consists of reducing the possibility of meetings among moths of opposite sex. Thus, we will analyze the number of meetings in the three following cases: when sexyplants are switched off and males only interact with females; when sexyplants are switched on and have the effect of trapping males; and when sexyplants are switched on and males get confused because the pheromone concentration level is higher than their saturation threshold. It is also interesting to analyze a fourth case: what happens if females do not emit pheromones and males just move randomly through the field, so that males and females both move randomly? How much would our results differ from the rest of the cases? What is important is that, between the first and the third case, the number of meetings should be smaller in the latter than in the former. Then we are closer to fulfilling our objective.
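As a closing illustration of the moth-response rule described earlier (a step along the local concentration gradient plus a zero-mean random detour whose spread grows as the gradient weakens), here is a small Python sketch. It is not the team's NetLogo code; the Gaussian detour angle, the scaling constant k and the crude saturation penalty are assumptions introduced only to make the idea concrete, and the detection and saturation thresholds stand in for the parameter ranges listed above.

```python
import numpy as np

rng = np.random.default_rng(0)

def moth_step(pos, conc, h=1.0, k=0.5, step_len=1.0,
              detection=0.01, saturation=5.0):
    """Return the next grid position of a male moth on a concentration field `conc`.

    Below the detection threshold the moth walks randomly; otherwise it follows
    the gradient, with a zero-mean random detour whose standard deviation is
    inversely proportional to the gradient magnitude ('sexual confusion' when
    the field is nearly homogeneous). Interior cells are assumed.
    """
    i, j = pos
    c = conc[i, j]
    # Finite-difference estimate of the local gradient
    gx = (conc[i + 1, j] - conc[i - 1, j]) / (2 * h)
    gy = (conc[i, j + 1] - conc[i, j - 1]) / (2 * h)
    gnorm = np.hypot(gx, gy)

    if c < detection or gnorm == 0.0:
        theta = rng.uniform(0.0, 2 * np.pi)      # pure random walk
    else:
        sigma = k / gnorm                        # weaker gradient -> wider detour
        if c > saturation:
            sigma *= 10.0                        # assumed penalty: saturated antennae
        theta = np.arctan2(gy, gx) + rng.normal(0.0, sigma)

    di = int(round(step_len * np.cos(theta)))
    dj = int(round(step_len * np.sin(theta)))
    return i + di, j + dj
```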
Interpretation of a singular metric I'm interested to find out if we can say anything useful about spacetime at the singularity in the FLRW metric that occurs at $t = 0$. If I understand correctly, the FLRW spacetime is a combination of the manifold $\mathbb R^{3,1}$ and the FLRW metric $g$. The metric $g$ has a singularity at $t = 0$ because at that point the proper distance between every pair of spacetime points is zero. Presumably though, however the metric behaves, the manifold remains $\mathbb R^{3,1}$ so we still have a collection (a set?) of points in the manifold. It's just that we can no longer calculate distance between them. Is this a fair comment, or am I being overly optimistic in thinking we can say anything about the spacetime? I vaguely recall reading that the singularity is considered to be not part of the manifold, so the points with $t = 0$ simply don't exist, though I think this was said about the singularity in the Schwarzschild metric and whether it applies to all singular metrics I don't know. To try to focus my rather vague question a bit, I'm thinking about a comment made to my question Did the Big Bang happen at a point?. The comment was roughly: yes the Big Bang did happen at a point because at $t = 0$ all points were just one point. If my musings above are correct the comment is untrue because even when the metric is singular we still have the manifold $\mathbb R^{3,1}$ with an infinite number of distinct points. If my memory is correct that the points with $t = 0$ are not part of the manifold then we cannot say the Big Bang happened at a point, or the opposite, because we cannot say anything about the Big Bang at all. From the purely mathematical perspective I'd be interested to know what if anything can be said about the spacetime at $t = 0$ even if it has no physical relevance. general-relativity cosmology metric-tensor big-bang singularities $\begingroup$ Also related physics.stackexchange.com/q/148838 $\endgroup$ – user56963 Dec 7 '14 at 16:58 $\begingroup$ to quote a comment I made yeterday: a GR singularity is not necessarily topological: possibly, it's 'just' a metric degeneracy; @CristiStoica probably has something to say about that...; the gist: the FLRW singularity is 'quasi-regular', the densitized stress-energy-momentum tensor remains smooth and the Weyl curvature hypothesis holds $\endgroup$ – Christoph Dec 7 '14 at 17:06 $\begingroup$ I don't think that's the manifold. It should be $\mathbb R^3\times \mathbb R_{>0}$. $\endgroup$ – MBN Dec 7 '14 at 22:20 $\begingroup$ @MBN: entirely possible, my grasp of the maths involved is very shaky. $\endgroup$ – John Rennie Dec 8 '14 at 6:13 $\begingroup$ Yes, actually I think there is a bit of confusion: the metric is a structure you put on the manifold. The manifold is $\mathbb{R}^4$ topologically, and nothing changes that. When you consider a singular metric, you may exclude some point and restrict the maifold to a submanifold like $\mathbb{R}^4/\{0\}$. $\endgroup$ – Oscar Mar 16 '15 at 9:32 The nature of singularities in GR is a delicate issue. A good review of the difficulties presented to define a singularity are in Geroch's paper What is a singularity in GR? The problem of attaching a boundary in general to a spacetime is that there is not natural way to do it. 
for example, in the FRW metric the manifold at $t=0$ can be described by two different coordinate systems as: $$\{t,r\cos\theta,r\sin\theta \cos\phi,r\sin\theta \sin\phi\}$$ or $$\{t,a(t)r\cos\theta,a(t)r\sin\theta \cos\phi,a(t)r\sin\theta \sin\phi\}$$ In the first case we have a three dimensional surface, in the latter a point. It might be tempting to define a singularity following other physical theories as the points where the metric tensor is undefined or below $C^{2}$. However, this is troublesome because in the gravitational case the field defines also the spacetime background. This represents a problem because the size, location and shape of singularities can't be straightforward characterize by any physical measurement. The theorems of Hawking and Penrose, commonly used to show that singularities in GR are generic under certain circumstances have the conclusion that spacetime must be geodesically incomplete (Some light-paths or particle-paths cannot be extended beyond a certain proper-time or affine-parameter). As mentioned above the peculiar characteristic of GR of identifying the field and the background makes the task of assigning a location, shape or size to the singularities very delicate. If one thinks in a singularity of the gravitational potential in classical terms the statement that the field diverges at a certain location is unambiguous. As an example, take the gravitational potential of a spherical mass $$V(t,r,\theta,\phi)=\frac{GM}{r}$$ with a singularity at the point $r=0$ for any time $t$ in $\mathbb{R}$. The location of the singularity is well defined because the coordinates have an intrinsic character which is independent of $V$ and are defined with respect the static spacetime background. However, this prescription doesn't work in GR. Consider the spacetime with metric $$ds^{2}=-\frac{1}{t^{2}}dt^{2}+dx^{2}+dy^{2}+dz^{2}.$$ defined on $\{(t,x,y,z)\in \mathbb{R}\backslash \{0\}\times \mathbb{R}^{3}\}$. If we say that there is a singularity at the point $t=0$ we might be speaking to soon for two reasons. The first is that $t=0$ is not covered by our coordinate chart. It makes no sense to talk about $t=0$ as a point in our manifold using these coordinates. The second thing is that the lack of an intrinsic meaning of the coordinates in GR must be taken seriously. By making the coordinate transformation $\tau=\log(t)$ we obtain the metric $$ds^{2}=d\tau^{2}+dx^{2}+dy^{2}+dz^{2},$$ on $\mathbb{R}^{4}$ and remain isometric to the previous spacetime defined in $\{(t,x,y,z)\in \mathbb{R}\backslash \{0\}\times \mathbb{R}^{3}\}$. What we have done is find an extension of the metric to $\mathbb{R}^{4}$. The singularity was just a coordinate singularity, similar to the event horizon singularity in Schwarzschild coordinates. The extended spacetime is of course Minkowski spacetime which is non-singular. Another approach is to define a singularity in terms of invariant quantities such as scalar polynomials of the curvature. This are scalars formed by the Riemann tensor. If this quantities diverge it matches our physical idea that and object approaching regions of higher and higher values must suffer stronger and stronger deformations. Also, in many relevant cosmological models like FRW and Black Holes metrics one can show that this indeed happen. But as mentioned the domain of the gravitational field defines the location of events so a point where the curvature blow up might not be even in the domain. 
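A concrete check of this curvature blow-up can be done symbolically. For the spatially flat, matter-dominated FLRW model with scale factor $a(t)=t^{2/3}$ (an assumed choice for this illustration), the Ricci scalar comes out as $4/(3t^2)$, which diverges as $t \to 0$ even though $t=0$ itself is not a point of the manifold. A short sympy sketch of the computation, starting from the metric:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z', positive=True)
coords = [t, x, y, z]
a = t**sp.Rational(2, 3)            # assumed scale factor: matter-dominated flat FLRW
g = sp.diag(-1, a**2, a**2, a**2)   # metric g_{mu nu}, signature (-,+,+,+)
ginv = g.inv()
n = len(coords)

# Christoffel symbols Gamma^r_{m nu} = 1/2 g^{rs}(d_m g_{s nu} + d_nu g_{s m} - d_s g_{m nu})
Gamma = [[[sp.simplify(sum(ginv[r, s] * (sp.diff(g[s, nu], coords[m])
                                         + sp.diff(g[s, m], coords[nu])
                                         - sp.diff(g[m, nu], coords[s]))
                           for s in range(n)) / 2)
           for nu in range(n)] for m in range(n)] for r in range(n)]

# Ricci tensor R_{m nu} = d_r Gamma^r_{m nu} - d_nu Gamma^r_{m r}
#                         + Gamma^r_{r l} Gamma^l_{m nu} - Gamma^r_{nu l} Gamma^l_{m r}
def ricci(m, nu):
    expr = sum(sp.diff(Gamma[r][m][nu], coords[r]) - sp.diff(Gamma[r][m][r], coords[nu])
               for r in range(n))
    expr += sum(Gamma[r][r][lam] * Gamma[lam][m][nu] - Gamma[r][nu][lam] * Gamma[lam][m][r]
                for r in range(n) for lam in range(n))
    return sp.simplify(expr)

R = sp.simplify(sum(ginv[m, nu] * ricci(m, nu) for m in range(n) for nu in range(n)))
print(R)   # 4/(3*t**2) with these conventions: a scalar invariant blowing up as t -> 0
```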
Therefore, we must formalise the following: statement "The scalar diverges as we approach a point that has been cut out of the manifold.". If we were in a Riemann manifold then the metric define a distance function $$d(x,y):(x,y)\in\cal{M}\times\cal{M}\rightarrow \inf\left\{\int\rVert\dot{\gamma}\rVert \right\}\in\mathbb{R}$$ where the infimum is taken over all piecewise $C^{1}$ curves $\gamma$ from $x$ to $y$. Moreover, the distance function allows us to define a topology. A basis of that topology is given by the set $\{B(x,r):y\in{\cal{M}}| d(x,y)\le r \forall x\in \cal{M}\}$. The topology naturally induce a notion of convergence. We say the sequence $\{x_{n}\}$ converges to $y$ if for $\epsilon> 0$ there is an $N\in \mathbb{N}$ such that for any $n\ge N$ $d(x_{n},y)\le \epsilon$. A sequence that satisfies this conditions is called a Cauchy sequence. If every Cauchy sequence converges we say that $\cal{M}$ is metrically complete Notice that now we can describe points that are not in the manifold as a point of convergence of a sequence of points that are. Then the formal statement can be stated as: "The sequence $\{R(x_{n})\}$ diverges as the sequence $\{x_{n}\}$ converges to $y$" where $R(x_{n})$ is some scalar evaluated at $x_{n}$ in $\cal{M}$ and $y$ is some point not necessarily in $\cal{M}$. In the Riemannian case if every Cauchy sequence converges in $\cal{M}$ then every geodesic can be extend indefinitely. That means we can take as the domain of every geodesic to be $\mathbb{R}$. In this case we say that $\cal{M}$ is geodesically complete. In fact also the converse is true, that is if $\cal{M}$ is geodesically complete then $\cal{M}$ is metrically complete. So far, all the discussion has been for Riemann metrics, but as soon as we move to Lorentzian metrics the previous discussion can't be used as stated. The reason is that Lorentzian metrics doesn't define a distance function. They do not satisfy the triangle inequality. So we only have left the notion of geodesic completeness. The three kinds of vectors available in any Lorentzian metric define three nonequivalent notions of geodesic completeness depending on the character of the tangent vector of the curve: spacelike completeness, null completeness and timelike completeness. Unfortunately, they are not equivalent it is possible to construct spacetimes with the following characteristics: timelike complete, spacelike and null incomplete spacelike complete, timelike and null incomplete null complete, timelike and spacelike incomplete timelike and null complete, spacelike incomplete spacelike and null complete, timelike incomplete timelike and spacelike complete, null incomplete Moreover, in the Riemannian case if $\cal{M}$ is geodesically complete it implies that every curve is complete, that means every curve can be arbitrarily extended . Again, in the Lorentzian case that is not the case, Geroch construct an example of a geodesically null, timelike and spacelike complete spacetime with a inextendible timelike curve of finite length. A free falling particle following this trajectory will accelerate but in a finite amount of time its spacetime location would stop being represented as a point in the manifold. Schmidt provided an elegant way to generalise the idea of affine length to all curves, geodesic and no geodesics. Moreover, the construction in case of incomplete curves allows to attach a topological boundary $\partial\cal{M}$ called the b-boundary to the spacetime $\cal{M}$. 
The procedure consists of building a Riemannian metric on the frame bundle $\cal{LM}$. We will use the solder form $\theta$ and the connection form $\omega$ associated to the Levi-Civita connection $\nabla$ on $\cal{M}$. Explicitly, \begin{equation} G_{ab}(X_{a},Y_{a})= \theta(X_{a}) \cdot \theta(Y_{a})+\omega(X_{a})\bullet \omega(Y_{a}) \end{equation} where $X_{a},Y_{a}\in T_{p}\cal{LM}$ and $\cdot,\bullet$ are the standard inner products on $\mathbb{R}^{n}$ and $\mathfrak{g}\cong\mathbb{R}^{n^{2}}$, respectively. Let $\gamma$ be a $C^{1}$ curve through $p$ in $\cal{M}$, and let $\{E_{a}\}$ be a basis of $T_{p}$. Now choose a point $P$ in $\cal{LM}$ such that $P$ satisfies $\pi(P)=p$ and the basis of $T_{p}$ is given by $\{E_{a}\}$. Using the covariant derivative induced by the metric we can parallel propagate $\{E_{a}\}$ in the direction of $\dot{\gamma}$. This procedure defines a curve $\Upsilon$ in $\cal{LM}$; this curve is called the lift of the curve $\gamma$. The length of $\Upsilon$ with respect to the Schmidt metric, $$l=\int_{\tau}\|\dot{\Upsilon} \|_{G}\, dt,$$ is called a generalised affine parameter. If $\gamma$ is a geodesic, $l$ is an affine parameter. If every curve in a spacetime $\cal{M}$ that has finite generalised affine length has an endpoint, we call the spacetime b-complete. If it is not b-complete we call the spacetime b-incomplete. A classification of singularities in terms of the b-boundary (see Chapter 8, The Large Scale Structure of Space-Time) was done by Ellis and Schmidt. In the case of the FRW metric the b-boundary $\partial\cal{M}$ was computed in this paper; the result is that the boundary is a point. However, the resulting topology on $\partial\cal{M}\cup\cal{M}$ is non-Hausdorff. This means the singularity is in some sense arbitrarily close to any event in spacetime. This was regarded as unphysical, and attempts were made to improve the b-boundary construction, but none of them has gained wide acceptance. Also, the high dimensionality of the bundles involved makes the b-boundary a difficult working tool. Other types of boundaries can be attached. For example: conformal boundaries, used in Penrose diagrams and in the AdS/CFT correspondence; in this case the conformal boundary at $t=0$ is a three-dimensional manifold. Causal boundaries: this construction depends only on the causal structure, so it does not distinguish between boundary points at a finite distance and at infinity (see Chapter 6, The Large Scale Structure of Space-Time). Abstract boundary. I am unaware whether, in the last two cases, explicit calculations have been done for the FRW metric. $\begingroup$ still, t=0 isn't part of the manifold. The transformation R\{0} x R^3 -> R^{4}; (t,x,y,z) -> (log(t),x,y,z) is not a diffeomorphism and therefore it's not surprising that the singularity disappears. Transformations that are not diffeomorphisms change the manifold. $\endgroup$ – image Mar 17 '15 at 17:17 $\begingroup$ There is no singularity to start with. The difficulty is to know what is the maximal domain of the solution. This also makes contact with the PDE point of view of GR and the smoothness of the defined solution. $\endgroup$ – yess Mar 17 '15 at 17:30 $\begingroup$ If the result turns out non-Hausdorff, it is clear that the attempt (to sensibly attach a boundary) must have failed because you no longer have a manifold, right? $\endgroup$ – Danu Mar 21 '15 at 13:00 $\begingroup$ That is correct; in general $\cal{M}\cup\cal{\partial M}$ is not going to be a manifold.
But, as I mentioned, there were some attempts to fix the situation; nevertheless, no attempt fixes the situation entirely. $\endgroup$ – yess Mar 22 '15 at 15:10 Just for clarification: the manifolds used in general relativity are locally (in the sense of diffeomorphisms) $\mathbb R^{3+1}$. They are not $\mathbb R^{3+1}$ in general, which is Minkowski space with zero curvature. In this sense, the points at $t=0$ do not belong to the manifold, as there is no neighborhood which is diffeomorphic to $\mathbb R^{3+1}$, as @Ali Moh already stated. This means that with the FLRW metric alone, one cannot make predictions for the "big bang" (although one can make predictions around $t=0+\epsilon$ for every $\epsilon > 0$). I'm assuming you are not talking about the modern view of the history of the universe where the big bang (reheating) was preceded by inflation, and we don't know what happened before inflation or whether there is even a beginning of time. So if you want to just look at the FRW metric then I think your comment that the singularity does not belong to the manifold is correct, because the limit $t \rightarrow 0$ is not well defined. For example, looking at the volume $$\lim_{t\rightarrow 0^+} \int d^4x \sqrt{-g} = \infty$$ whereas $$\int d^4x \sqrt{-g}\,\,\Big|_{t=0} = 0$$ So you cannot smoothly add the singularity point to the manifold. Ali Moh $\begingroup$ where the big bang (reheating) was preceded by inflation Huh? The big bang is the singularity. Inflation happened after the big bang. $\endgroup$ – Ben Crowell May 7 '19 at 19:20 $\begingroup$ @BenCrowell to be fair, the term Big Bang is sometimes used to mean the reheating after inflation ended. This isn't the way I understand the term as I, like you, would take it to mean the singularity, but nevertheless I do see it used in this other sense. $\endgroup$ – John Rennie May 8 '19 at 4:28 Just as an example, you can think of the degenerate metric $ds^2=dx^2$ on $\mathbb{R}^2$. Certainly the underlying manifold is $\mathbb{R}^2$ instead of $\mathbb{R}$, but from a purely Riemannian-geometric point of view, you will see no difference if you treat it as $\mathbb{R}$. Mathematically, this is because the metric can be pushed forward along the projection map $\pi:\mathbb{R}^2\to \mathbb{R},\;(x,y)\mapsto x$ (recall that in general a metric can only be pulled back), and we say that the plane shrinks to a line. Alice Akitsuki $\begingroup$ a degenerate metric makes no sense since the imposed topology is not Hausdorff. In your example, take a number x; then every point (x,y) couldn't be distinguished from a point (x,y') with y!=y'. One can try to make that manifold Hausdorff again by the identification (x,y)~(x',y') if x=x', but then you would end up with R and not R^2 $\endgroup$ – image Mar 22 '15 at 0:25 $\begingroup$ @MarcelKöpke that's what I have said. $\endgroup$ – Alice Akitsuki Mar 22 '15 at 5:53 $\begingroup$ I didn't like the notion of a non-Hausdorff manifold, since manifolds are by definition Hausdorff spaces $\endgroup$ – image Mar 22 '15 at 11:41 $\begingroup$ but when dealing with Ricci flow or the Einstein eq., people always say "shrink" to something; that's what I want to explain $\endgroup$ – Alice Akitsuki Mar 23 '15 at 6:22 $\begingroup$ @MarcelKöpke you can think of the example of the Minkowski manifold, where the topology doesn't match with the metric.
$\endgroup$ – Alice Akitsuki Mar 23 '15 at 13:22 Wikipedia has a webpage on Big Bang Nucleosynthesis, covering from ~1 second to perhaps as long as 20 minutes. In Mangiarotti and Martens's paper "A review of electron–nucleus bremsstrahlung cross sections between 1 and 10 MeV" they discuss screening, the effect of confinement on bremsstrahlung and the influence of multiple scattering. [I'm currently reading "Spin densities in 4f and 3d magnetic systems by Ian Maskery" to determine if I can improve this answer.] "I'm interested to find out if we can say anything useful about spacetime at the singularity in the FLRW metric that occurs at t=0.". Wikipedia's webpage titled "Scale Factor" says: "The relative expansion of the universe is parametrized by a dimensionless scale factor "a". Also known as the cosmic scale factor or sometimes the Robertson-Walker scale factor, this is a key parameter of the Friedmann equations. In the early stages of the big bang, most of the energy was in the form of radiation, and that radiation was the dominant influence on the expansion of the universe. Later, with cooling from the expansion the roles of mass and radiation changed and the universe entered a mass-dominated era. Recently results suggest that we have already entered an era dominated by dark energy, but examination of the roles of mass and radiation are most important for understanding the early universe. Using the dimensionless scale factor to characterize the expansion of the universe, the effective energy densities of radiation and mass scale differently. This leads to a radiation-dominated era in the very early universe but a transition to a matter-dominated era at a later time and, since about 5 billion years ago, a subsequent dark energy-dominated era.". See the Wikipedia webpage section "Idealized Hubble's Law" which says: "The mathematical derivation of an idealized Hubble's Law for a uniformly expanding universe is a fairly elementary theorem of geometry in 3-dimensional Cartesian/Newtonian coordinate space, which, considered as a metric space, is entirely homogeneous and isotropic (properties do not vary with location or direction). Simply stated the theorem is this: Any two points which are moving away from the origin, each along straight lines and with speed proportional to distance from the origin, will be moving away from each other with a speed proportional to their distance apart. The age and ultimate fate of the universe can be determined by measuring the Hubble constant today and extrapolating with the observed value of the deceleration parameter, uniquely characterized by values of density parameters (ΩM for matter and ΩΛ for dark energy). A "closed universe" with ΩM > 1 and ΩΛ = 0 comes to an end in a Big Crunch and is considerably younger than its Hubble age. An "open universe" with ΩM ≤ 1 and ΩΛ = 0 expands forever and has an age that is closer to its Hubble age. For the accelerating universe with nonzero ΩΛ that we inhabit, the age of the universe is coincidentally very close to the Hubble age. 
That diagram uses the following exact solutions to the Friedmann equations: $${\displaystyle {\begin{cases}a(t)=H_{0}t&\Omega _{M}=\Omega _{\Lambda }=0\\{\begin{cases}a(q)={\tfrac {\Omega _{M}}{2(1-\Omega _{M})}}(\cosh q-1)\\t(q)={\tfrac {\Omega _{M}}{2H_{0}(1-\Omega _{M})^{3/2}}}(\sinh q-q)\end{cases}}&0<\Omega _{M}<1,\ \Omega _{\Lambda }=0\\a(t)=\left({\tfrac {3}{2}}H_{0}t\right)^{2/3}&\Omega _{M}=1,\ \Omega _{\Lambda }=0\\{\begin{cases}a(q)={\tfrac {\Omega _{M}}{2(\Omega _{M}-1)}}(1-\cos q)\\t(q)={\tfrac {\Omega _{M}}{2H_{0}(\Omega _{M}-1)^{3/2}}}(q-\sin q)\end{cases}}&\Omega _{M}>1,\ \Omega _{\Lambda }=0\\a(t)=\left({\tfrac {\Omega _{M}}{\Omega _{\Lambda }}}\sinh ^{2}\left({\tfrac {3}{2}}{\sqrt {\Omega _{\Lambda }}}H_{0}t\right)\right)^{1/3}&0<\Omega _{M}<1,\ \Omega _{\Lambda }=1-\Omega _{M}\end{cases}}}$$ "The comment was roughly: yes the Big Bang did happen at a point because at t=0 all points were just one point. If my musings above are correct the comment is untrue because even when the metric is singular we still have the manifold R3,1 with an infinite number of distinct points. If my memory is correct that the points with t=0 are not part of the manifold then we cannot say the Big Bang happened at a point, or the opposite, because we cannot say anything about the Big Bang at all. From the purely mathematical perspective I'd be interested to know what if anything can be said about the spacetime at t=0 even if it has no physical relevance.". A pointlike particle present at the center of a black hole has zero mass and size so theoretically an infinite number of zero sized particles can be compressed into a zero sized space; from a practical standpoint fewer than an infinite number of particles are available. As much mass as is present in the Milky Way galaxy, converted to photons or other zero sized particle, could theoretically be compressed to a point of zero size. While not a whole lot can be said, and far less proven, about what happened simultaneously with the moment of the Big Bang I feel safe saying that even less could be said about 'the other side of the Big Bang'. That said, I would speculate that 'how things worked/still operate? there' likely bears some similarities to how things work here. Perhaps a large low density black hole captured many small and high density black holes forming a object without size (mass), yet still able to exceed critical mass (or critical energy, if you prefer), that simply ripped a hole in where it was spitting it's energy into where we are. These are the illustrations that NASA has prepared on The Beginning: [Note: Images obtained from NASA's WMAP Website.] "... what if anything can be said about the spacetime at t=0 even if it has no physical relevance.". It was a point from which everything around us was derived. If the FLRW metric $dt^2- A(t) (dx^2+ dy^2 + dz^2)$ is asymptotically radiation dominant as $t → 0$ (i.e. that the curvature scalar $R ~ 0$ for $t ~ 0$), then $A(t) ~ kt$ for $t ~ 0$, where $k$ is a constant. Such a metric has the following properties: its geometry is not Riemannian or Riemann-Cartan; which limits the applicability or relevance of many folklore results (e.g. singularity theorems) it is geodesically complete; but tangent vectors on the $t = 0$ surface do not uniquely determine a geodesic the metric is signature changing across the $t = 0$ boundary, becoming locally 4+0 Euclidean for $t < 0$ the initial hypersurface is a null surface (i.e. 
light speed is infinite at $t = 0$); the $t = 0$ hypersurface is, thus, an instantaneous slice of Newton-Cartan geometry; the transition across the $t = 0$ boundary is a physical realization of both the Galilean limit (at $t = 0$) and Euclideanization (for $t < 0$); light speed goes to infinity as $t → 0$ and distances, as measured in light-speed units, therefore go to 0, though the geometry itself does not contract to a point (actually: none of the FRW geometries contract to a point; the common visualization of a universe contracting to a point is little more than a myth that's passed along in popularizations and even in professional circles as folklore); the cosmology is therefore inflationary (inflation follows as a junction condition for signature-changing cosmologies with a null initial hypersurface); the past-directed null geodesics reflect off of $t = 0$ as parabolic curves and reverse direction, becoming future-directed; each point is therefore causally connected to remote areas of the universe; the co-moving timelike geodesics go straight through $t = 0$ to the other side $t < 0$; all other timelike geodesics reflect off of $t = 0$ as catenary curves; it is acausal: your future light cone and future worldline are contained inside the past light cone; the spacelike geodesics for $t > 0$ are sinusoidal, and bounded away from $t = 0$. Rock Brentwood $\begingroup$ Curved-spacetime QFT on this background may be possible ... at least for t > 0; but subject to constraints since (a) past geodesics reverse and become future-directed at t = 0, (b) the effective Galilean limit at t = 0 and Euclideanization for t < 0 requires that the underlying field theory be formulated in a unifying framework that contains both relativistic and non-relativistic forms as special cases (as well as the Euclideanized version of field theory). $\endgroup$ – Rock Brentwood May 7 '19 at 19:10 $\begingroup$ "its geometry is not Riemannian or Riemann-Cartan" Not sure what you mean by this. The geometry is of course semi-Riemannian, as with all the spacetimes we study in GR. "which limits the applicability or relevance of many folklore results (e.g. singularity theorems)" "Folklore results" would normally be used to indicate something that isn't true but that is widely believed to be true. The Penrose and Hawking singularity theorems are not folklore results. They are rigorous mathematical theorems. The assumption of an energy condition is a problem for cosmological spacetimes, but this [...] $\endgroup$ – Ben Crowell May 7 '19 at 19:25 $\begingroup$ [...] is not really an issue for the cosmological parameters we observe (i.e., there is no "bounce"), as shown by Borde, Guth, and Vilenkin. $\endgroup$ – Ben Crowell May 7 '19 at 19:26 $\begingroup$ "it is geodesically complete" No, the FLRW spacetimes that we normally study are geodesically incomplete. "the metric is signature changing across the t=0 boundary, becoming locally 4+0 Euclidean for t<0" No, the geodesic incompleteness means that you can't carry out an analytic extension through t=0. $\endgroup$ – Ben Crowell May 7 '19 at 19:27
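As a quick numerical illustration (added here; it is not part of the original thread) of the exact $\Lambda$CDM solution quoted earlier, the remark that the age of the universe is very close to the Hubble age can be checked directly. The parameter values below ($H_{0}$, $\Omega_{M}$, $\Omega_{\Lambda}$) are assumed round numbers, not values taken from the thread:

```python
import numpy as np

# Assumed, approximate parameter values (illustrative only)
H0 = 67.4                      # km/s/Mpc
Omega_M, Omega_L = 0.3, 0.7

# Hubble time 1/H0 in Gyr (1 Mpc = 3.0857e19 km, 1 Gyr = 3.156e16 s)
hubble_time_gyr = (3.0857e19 / H0) / 3.156e16

# Invert a(t) = (Omega_M/Omega_L * sinh^2(1.5*sqrt(Omega_L)*H0*t))^(1/3) at a = 1
# to obtain the present age t0, expressed in units of the Hubble time 1/H0.
t0_in_hubble_times = 2.0 / (3.0 * np.sqrt(Omega_L)) * np.arcsinh(np.sqrt(Omega_L / Omega_M))

print(f"Hubble time ~ {hubble_time_gyr:.1f} Gyr; "
      f"age t0 ~ {t0_in_hubble_times * hubble_time_gyr:.1f} Gyr "
      f"({t0_in_hubble_times:.2f} Hubble times)")
# -> roughly 14.5 Gyr vs ~14.0 Gyr: the age is indeed close to the Hubble age
```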
CommonCrawl
Effectiveness of Echium amoenum on premenstrual syndrome: a randomized, double-blind, controlled trial Maryam Farahmand1,2, Davood Khalili3,4, Fahimeh Ramezani Tehrani2, Gholamreza Amin5 & Reza Negarandeh ORCID: orcid.org/0000-0002-7597-50941 The present study aimed to evaluate the effect of Echium amoenum (EA) on the severity of premenstrual syndrome (PMS) in comparison with placebo. The present study was a randomized double-blind controlled clinical trial. A checklist questionnaire was completed by 120 college students aged 18 to 35 years. Then, 84 eligible women (20 to 35 years old) were enrolled in the trial; they were randomly assigned to two groups of intervention (EA) and control (placebo), with 42 participants in each group. Participants in the intervention group received 450 mg of EA extract per day (one capsule three times a day) from the 21st day of their menstrual cycle until the 3rd day of their next cycle for two consecutive cycles. The severity of PMS was measured and ranked using the premenstrual symptoms screening tool (PSST). The generalized estimating equation was used to compare the total score of the severity of PMS between the two groups. Sixty-nine women with regular menstrual cycles suffering from PMS completed the study. The mean scores of the symptoms in the EA group were 35.3 and 16.1 (P ≤ 0.001) at baseline and after 2 months, respectively, while the mean scores of the symptoms in the placebo group were 31.0 and 28.3 (P = 0.09) at baseline and after 2 months, respectively. The evaluation of the first and the second follow-ups in the intervention group showed that, after adjustment for age and body mass index, the mean scores of premenstrual syndrome in the GEE analysis decreased by 6.2 and 11.6, respectively (P ≤ 0.001). Based on the results, in comparison with the placebo group, EA was found to be more effective in improving the symptoms of PMS, and is highly recommended for treatment of this syndrome. IRCT2015110822779N3; Registration date: 2015–11–27. During recent years, health has become a main priority for women. Nowadays, in addition to their emotional role at home, women have been taking on more occupational and social roles in society [1]. Premenstrual syndrome (PMS) refers to the periodic recurrence of a set of physical, psychological, and behavioral changes during the second half of the menstrual cycle; it is prevalent among women of reproductive age and affects their health [2]. This syndrome, with a high prevalence (80–90%) [3], has no clear etiology; however, there are some theories, such as sensitivity to hormonal changes, disruption of endogenous opioids during the menstrual cycle, stress, and diet, which could be related to its etiology [2, 4,5,6]. Several therapeutic options have been documented for PMS, for instance hormonal and psychotropic drugs, non-steroidal anti-inflammatory drugs, diuretics, surgery, lifestyle changes, and herbal or complementary therapies [2, 4, 7]. Echium amoenum (EA) has traditionally been used as a medicine, and the Romans were among the first to recognize its effectiveness, around 300 B.C. [8]. In addition, the famous Greek poet Homer argued that EA could be beneficial for people's mood in general [9]. In Iran, people have been growing Echium amoenum in the mountainous areas in the north of the country [10], and it has conventionally been believed that EA has sedative effects on patients, which is well documented in old Persian medical textbooks such as the Qanoon by Avicenna [11].
To date, studies conducted on EA have demonstrated the efficacy of the plant as a sedative and diaphoretic, and also a treatment for cough, sore throat and pneumonia [12, 13]. Cyanidin 3-glucoside, the most common anthocyanin found in the petals of EA, has had neuroprotective effects and has traditionally been used as an anxiolytic and antidepressant medicine in Asia [14]. Moreover, it has been recommended as mood enhancement [15] and has been promoted for a variety of its effects as a demulcent, anti-inflammatory [16], antioxidant [17, 18] analgesic, anxiolytic, sedative [19,20,21] and anticonvulsant [22]. Traditionally, EA was being used for the treatment of hyperactive gastrointestinal, respiratory and cardiovascular disorders [23], regulation of metabolism and the hormonal system [24], and menopause symptoms such as hot flash [25]. Additionally, the results of the studies in Iran context have demonstrated the positive effects of the pharmaceutical components of this medicinal plant which include antimicrobial, antiviral, and anti-inflammatory effects as well as its effects on some psychiatric symptoms such as anxiety disorders, obsession, compulsion, and depression without any severe side effects [26,27,28,29,30,31]. Given the variety of therapeutic effects of the EA and taking into account the fact that anxiety is the most common symptom of PMS [32], for the first time in the present study, the researchers have examined the effectiveness and safety of the aqueous extract of EA on the severity of PMS. The present study was a randomized double-blind controlled clinical trial (CONSORT guidelines) conducted on college students from Tehran University of Medical Sciences and Tehran University after obtaining approval and confirmation from the ethics committee of Tehran University of Medical Sciences. The 4th edition of the checklist questionnaire of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) was used in this study. After receiving written informed consent, the 11-items checklist was distributed among the college students. According to this questionnaire, the criteria for the diagnosis of PMS include observing at least 5 of the 11 symptoms of PMS 7 days before menstruation and one of the first four symptoms of PMS (1-feeling sad, hopeless, or remarkably depressed; 2- significant anxiety, tension, impatience; 3- significant mood swings such as sudden sadness; 4- continuous visible anger and irritability and increased mood changes) [33]. Eligible students were selected and their written informed consent was obtained. And then, the demographic characteristics questionnaire and premenstrual symptoms screening tool (PSST) were completed by the participants before the initiation of the intervention. The PSST questionnaire consists of 19 items in two sections: The first section contains 14 items of PMS symptoms, and the second section contains five items evaluating the effects of symptoms on women suffering from PMS. All the items were answered using a 4-point Likert scale (none, mild, average, and severe), which are assigned a score from 0 to 3 (0–1–2-3), respectively [34]. The content validity ratio and content validity index of this questionnaire were calculated to be 0.7 and 0.8, respectively. Moreover, the reliability of the scale was confirmed by a Cronbach's alpha coefficient of 0.9 [35]. Subsequently, eligible participants were randomly assigned to two study groups, including herbal medicine or EA and the placebo. Random blocks were used for randomization. 
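Purely as an illustration of the permuted-block scheme mentioned here (the block size of 4 is taken from the text below; the seed and arm labels are arbitrary, and this is not the authors' actual allocation software), the six possible block sequences and a simple allocation loop can be sketched as follows:

```python
import itertools
import random

# The 6 distinct orderings of 2 EA and 2 placebo assignments within a block of 4
blocks = sorted(set(itertools.permutations(["EA", "EA", "Placebo", "Placebo"])))
print(len(blocks))  # 6 sequences, matching C(4, 2)

# Allocate 84 participants by drawing random blocks (arbitrary seed for reproducibility)
random.seed(1)
allocation = []
while len(allocation) < 84:
    allocation.extend(random.choice(blocks))
print(allocation[:8])  # first two blocks of the allocation list
```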
Since the size of the blocks was set at 4, six sequences were created. Random sequences were made by the statistician using random allocation software. The allocator concealed the block size from the executor; thus, the random allocation of the participants was assured. It must be noted that, since the study was a double-blind clinical trial, neither the executor nor the participants were aware of the type of the consumed drugs. In addition, executors did not participate in data analysis. After implementing the intervention for two consecutive menstrual cycles, the PSST questionnaire was completed again at the end of each menstrual cycle in order to evaluate the severity of PMS symptoms. Following a pilot study, the number of samples in each group was calculated to be 32. Given a 30% sample loss probability, a total of 84 samples were selected using the following formula: $$\alpha = 0.05,\quad \beta = 0.1,\quad Z_{1-\alpha/2} = 1.96,\quad Z_{1-\beta} = 1.28,\quad \delta = 6.2\ (\text{standard deviation}),\quad d = 5\ (\text{effect size}),$$ $$n = \frac{2\left(Z_{1-\alpha/2} + Z_{1-\beta}\right)^{2}\delta^{2}}{d^{2}} = \frac{2\left(1.96 + 1.28\right)^{2}\, 38.44}{25} \simeq 32.$$ Inclusion criteria were as follows: having regular menstruation (menstrual cycles of 21 to 35 days), being 18 to 35 years old with diagnosed PMS, no record of smoking cigarettes or consuming alcoholic beverages, no record of consuming drugs including hormonal, herbal, anticonvulsant or antidepressant medications, and no record of allergic reactions to herbal medicines. In addition, the following were also considered as inclusion criteria: no history of psychological diseases or any underlying physical diseases such as diabetes, hypertension, hyperlipidemia, or cardiovascular or endocrine diseases which could affect the autonomic nervous system, such as pituitary insufficiency and thyroid disorders, and no incident or surgery during the past months or during the study [36,37,38,39]. The method used for following up the participants is shown in Fig. 1. The process of the study. Raw botanicals of EA (flowers of Echium amoenum) were purchased from the Tehran drug/medicinal herbal market. They were then approved by Professor Gholamreza Amin at the Herbarium, Faculty of Pharmacy, Tehran University of Medical Sciences and kept under voucher number PMP-559. EA was crushed and extracted through the decoction method and was then freeze-dried; 150 mg of EA extract was granulated with maize starch and filled into 250 mg capsules. The capsules were to be taken three times a day, from the 21st day of one cycle to the 3rd day of the next cycle (10 days in aggregate), for two consecutive cycles. The placebo was prepared by filling 250 mg capsules, with the same shape as the EA capsules, with maize starch only. Standardization of EA extracts Total flavonoid content was determined using a routine reference standard method and the result was 17.3 Ru/g. Total phenol content was determined using the Folin–Ciocalteu method and the result was 3.48 mg GA/g.
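Referring back to the sample-size formula given above, a short sketch reproducing the calculation (the values are taken directly from the text):

```python
import math

z_alpha = 1.96          # Z_{1 - alpha/2} for alpha = 0.05
z_beta = 1.28           # Z_{1 - beta} for beta = 0.1
sd, effect = 6.2, 5.0   # standard deviation (delta) and effect size (d)

n = 2 * (z_alpha + z_beta) ** 2 * sd ** 2 / effect ** 2
print(round(n, 1))  # ~32.3, reported as 32 per group; 42 per group (84 in total)
                    # were then recruited to allow for ~30% loss to follow-up
```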
The trial was conducted in accordance with the Declaration of Helsinki and its subsequent revisions [21] and was approved by the ethics committee of Tehran University of Medical Sciences; the study was registered in the Iranian Registry of Clinical Trials with the ID number IRCT2015110822779N3, registration date: 2015–11–27. To check the normal distribution of the continuous variables, the Kolmogorov–Smirnov test was used; all the continuous data are shown as the mean ± standard deviation (SD). Independent samples t-test was used to compare the characteristics of the participants at baseline between the placebo and the EA groups. Chi-square test was used for the categorical variables. All the study variables were found to have a normal distribution except for the specific scores for symptoms of PMS and the total score. Mann-Whitney and Friedman tests were used to compare the mean scores of the symptoms in the EA and the placebo groups according to the PSST before and after the first and the second cycles of the intervention. However, normality is not an essential assumption for the Generalized Estimating Equation (GEE); marginal modeling within GEE is considered a powerful tool for analyzing non-normally distributed data [40]. Therefore, GEE was used to compare the total scores of the intensity of PMS between the two study groups before and after the first and the second cycles of the intervention. The level of statistical significance was set at a p-value below 0.05, based on two-tailed tests. All the statistical analyses were performed using SPSS software version 20.0 for Windows (SPSS Inc., Chicago, USA). The study flowchart is presented in Fig. 1. After randomization, 15 of the eligible college students who participated in the present study (5 in the EA group and 10 in the placebo group) did not continue the study. In the placebo group, 10 participants did not finish the study: 4 due to using other drugs, 4 due to unwillingness to continue, and another 2 due to menstrual irregularity. In the EA group, 1 participant was excluded from the follow-up due to the misuse of EA capsules, and 4 due to being influenced by others' opinions about using the treatment. The baseline characteristics of the participants are reported in Table 1. No significant difference was observed between the two study groups regarding their baseline characteristics (P > 0.05). Table 1 Comparison of baseline characteristics between the two study groups Using the PSST, the mean scores of the qualitative symptoms and their interference with daily activities in both groups, before and after the intervention, are reported in Table 2. The mean scores of the different components of the PSST during the study cycles were significantly lower in the EA group than those before treatment. However, in the placebo group, the mean scores of the components of the PSST were not significantly lower than the scores before the intervention. Table 2 Comparison of the mean rank of symptoms and intensity of complaints assessed by the Premenstrual Symptoms Screening Tool (PSST) in the Echium amoenum and placebo groups before and during the first and second cycle of intervention As shown in Table 2, statistically significant differences were observed in all the components of the PSST between the EA and the placebo groups.
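The GEE analysis described in the statistical-analysis section above was run in SPSS. Purely as an illustration of the same type of model, a hedged sketch in Python/statsmodels is given below; the synthetic data frame, its column names, and the exchangeable working correlation are assumptions for the example, not details taken from the paper:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in data: one row per participant per assessment
rng = np.random.default_rng(0)
rows = []
for i in range(60):
    group = "EA" if i < 30 else "Placebo"
    age, bmi = rng.normal(25, 3), rng.normal(22, 2)
    for time in (0, 1, 2):                      # baseline, cycle 1, cycle 2
        drop = 10 * time if group == "EA" else 1 * time
        rows.append(dict(subject_id=i, group=group, time=time, age=age, bmi=bmi,
                         psst_total=rng.normal(33 - drop, 5)))
df = pd.DataFrame(rows)

# GEE with a group-by-time interaction, adjusted for age and BMI,
# using an exchangeable working correlation for the repeated measures
model = smf.gee("psst_total ~ C(group) * C(time) + age + bmi",
                groups="subject_id", data=df,
                cov_struct=sm.cov_struct.Exchangeable(),
                family=sm.families.Gaussian())
print(model.fit().summary())
```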
According to Table 2, the anxiety/tension and the tearful symptoms were the most associated symptoms, whereas the overeating/food cravings and the difficulty in concentrating symptoms were the least associated symptoms with the positive impact of EA consumption, respectively. Based on GEE analysis during the follow-up, after adjustment for age and BMI variables, the overall mean score of the PMS was significantly higher in the placebo group than the EA group (Fig. 2). GEE estimated measures of the intensity of PMS according to the Premenstrual Symptoms Screening Tool (PSST), Total score (a), 14 symptoms (b), and 5 interferences with daily activities (c) in the Echium amoenum (EA) and the Placebo groups at 2 follow-ups regarding the interaction between time and the studied group and also adjusting for age and BMI. Patterns of mean changes differ between the EA group and the Placebo group Comparing the results of the first follow-up (cycle 1) demonstrated that the total scores of PMS, its 14 symptoms, and also its interference with various activities (according to PSST) in the EA group have decreased to 6.2, 4.4 and 1.3, respectively. Moreover, these values have decreased to 11.6, 8.5, and 2.8, respectively (Table 3) in the second follow-up (cycle 2) in the EA group. Table 3 Parameter Estimates of PMS severity (total & subtotal) using GEE Model for study groups The present study is the first, in the literature, to evaluate the safety and effectiveness of EA in comparison with placebo among women suffering from PMS. According to the findings, after the implementation of the intervention for two consecutive cycles, EA was more effective than a placebo in reducing the symptoms of PMS. Findings of the most recent studies have also shown that intervention with herbal medicine has reduced the symptoms of PMS [36,37,38, 41]; this can be explained by the fact that women who use therapeutic approaches have improved self-control over their lives, i.e., the psychological effects of the placebo can reduce the intensity of PMS [39]. PMS, which has a wide range of psychological, physical, and behavioral symptoms, is one of the most common health problems among women of reproductive ages and is highly prevalent worldwide. Women who are not willing to consume chemical drugs or who are dubious about consuming them might prefer herbal medicine for the treatment of PMS. And this can be considered as one of the main reasons why the use of herbal regimes is well-received in order to treat the symptoms of PMS in these groups [42, 43]. EA, known as "Gol-e-gavzaban" in Persian, is a plant that has not yet been reported to grow or be available in Europe or other parts of the world and is found exclusively in Iran [44]; it is one of the common traditional herbal medicines used as an effective treatment for skin disorders such as eczema, arthritis, diabetes, acute respiratory distress syndrome, alcoholism, obsessive-compulsive disorder, pain and swelling, and also to prevent heart diseases and stroke [23, 29, 45]. It is also used to treat bronchitis and colds and to help enhance sweating and increase breast milk flow and production. Traditionally, it has been used in hyperactive gastrointestinal and cardiovascular disorders [23]. Naturopathic practitioners use EA for the regulation of metabolism and the hormonal system [46]; they also consider it to be a good remedy for anxiety, PMS, and menopausal symptoms such as hot flash [25]. Flowers and the leaves are the main medicinal parts of the plant. 
The plant contains gamma-linolenic acid (GLA), alpha-linolenic acid (ALA), delta6-fatty acryl desaturase, delta8-sphingolipid desaturase, pyrrolizidine alkaloids, and mucilage, resin, potassium nitrate, calcium and mineral acids [27, 47, 48]. It seems that the functional mechanism of EA depends on a fatty acid called GLA. GLA might have anti-inflammatory effects, and so EA flower might have an antioxidant effect [25]. Studies have reported the useful effects of GLA on the reduction of the severity and duration of PMS symptoms [49]. In this regard, the results of a review study (2019) approved the efficacy of evening primrose oil, which is a rich source of GLA, in the reduction of severity of PMS symptoms after 4 to 6 months of consumption [50]. PMS has a complex series of behavioral, emotional and physical symptoms, and it is a documented fact that the majority of these symptoms are psychological; research has also shown elevated high-sensitivity C-reactive protein levels during PMS. In fact, among individuals with PMS, the transformation of inflammation from physiologic to pathologic state can increase the severity of PMS symptoms [51,52,53]. Furthermore, increased oxidative stress and decreased antioxidant capacity may occur in PMS [54]. As mentioned earlier, anti-inflammatory, analgesic, antioxidant, anti-anxiety, anxiolytic, anti-obsessive-compulsive, and antidepressant effects are among certain reported properties of EA [29]. The results of a study conducted by Sayyah et al. showed that EA is effective on certain psychiatric symptoms such as anxiety disorder, obsession-compulsion disorder, and depression without any severe side effects [29,30,31]; thereby, supporting the effectiveness of EA on reducing the severity of PMS symptoms. Based on the findings of the present study, few side effects were reported by the participants; there were three cases of nausea in the placebo group, and none in the EA group and the findings were in line with those of other related studies [30, 31]. The researchers recommend that further studies should be conducted on active ingredients of EA to determine the effectiveness and safety of various doses and treatment sessions. The present study had some limitations as well. First of all, EA was only administered for two cycles. Besides, the participants were college students and could not represent the general population of women with PMS. Self-report questionnaires were used to assess the intensity of PMS, which can affect the participant's responses. Nevertheless, the main strength of this study was that it is the first research conducted on this topic and also the first to use validated PMS questionnaires. Considering the high prevalence of PMS among women of reproductive ages, effective treatments and safe strategies are highly recommended. EA has several therapeutic functions that can help reduce the severity of PMS and promote the health of women, along with its efficacy and safety in reducing the symptoms of PMS. Furthermore, conducting more studies is needed to assess the effects of EA on PMS to determine the active components, effectiveness, and safety of various doses with larger sample sizes using long-term interventions. The datasets generated and/or analyzed during the current study are not publicly available due to confidentiality considerations. 
EA: Echium amoenum PMS: PSST: Premenstrual symptoms screening tool GEE: Generalized estimating equation GLA: Gamma-linolenic acid ALA: Alpha linolenic acid Temmerman M, Khosla R, Laski L, Mathews Z, Say L. Women's health priorities and interventions. BMJ. 2015;351:h4147. PubMed Article PubMed Central Google Scholar Ryu A, Kim T-H. Premenstrual syndrome: a mini review. Maturitas. 2015;82(4):436–40. Angst J, Sellaro R, Stolar M, Merikangas KR, Endicott J. The epidemiology of perimenstrual psychological symptoms. Acta Psychiatr Scand. 2001;104(2):110–6. CAS PubMed Article PubMed Central Google Scholar Yonkers KA, O'Brien PM, Eriksson E. Premenstrual syndrome. Lancet. 2008;371(9619):1200–10. Naheed B, Kuiper JH, Uthman OA, O'Mahony F, O'Brien PMS. Non-contraceptive oestrogen-containing preparations for controlling symptoms of premenstrual syndrome. Cochrane Database Syst Rev. 2017;3:CD010503-?. Grady-Weliky TA. Premenstrual dysphoric disorder. N Engl J Med. 2003;348(5):433–8. Dietz BM, Hajirahimkhan A, Dunlap TL, Bolton JL. Botanicals and their bioactive phytochemicals for women's health. Pharmacol Rev. 2016;68(4):1026–73. Grieve M. A modern herbal, vol. 1; 1931. p. 119. Parkinson J. The theater of plans. London: Tomas Cotes; 1940. p. 765. A, Z, Medicinal plants. 7th ed. 2011, Teharn: Tehran University Press. Acuang D. Herbal medicine. CPJ. 1990:121–3. Hooper D, McNair JB, Field H. Useful plants and drugs of Iran and Iraq, vol. 9: Field Museum of Natural History; 1937. Ranjbar A, Khorami S, Safarabadi M, et al. Antioxidant activity of Iranian Echium amoenum Fisch & CA Mey flower decoction in humans: a cross-sectional before/after clinical trial. Evid Based Complement Alternat Med. 2006;3(4):469–73. Munoz-Espada AC, Watkins BA. Cyanidin attenuates PGE2 production and cyclooxygenase-2 expression in LNCaP human prostate cancer cells. J Nutr Biochem. 2006;17(9):589–96. Moemen M. Tohfat-Al-Hakim Moemen, 2nd edR Mahmoodi Press. Tehran; 1967. Abolhassani M. Antiviral activity of borage (Echium amoenum). Arch Med Sci. 2010;6(3):366. Wettasinghe M, Shahidi F. Antioxidant and free radical-scavenging properties of ethanolic extracts of defatted borage (Borago officinalis L.) seeds. Food Chem. 1999;67(4):399–414. Bandonienė D, P. V, Gruzdienė D, Murkovic M. Antioxidative activity of sage (Salvia officinalis L.), savory (Satureja hortensis L.) and borage (Borago officinalis L.) extracts in rapeseed oil. Eur J Lipid Sci Technol. 2002;104(5):286–92. Shafaghi B, N N, Tahmasb L, Kamalinejad M. Anxiolytic Effect of Echium amoenum L. in Mice. Iran J Pharm Res. 2010;1(1):37–41. Pilerood SA, P.J., Evaluation of nutritional composition and antioxidant activity of Borage (Echium amoenum) and Valerian (Valerian officinalis). J Food Sci Technol. ;51(5):845–854., 2014. Rabbani M, Sajjadi SE, Vaseghi G, Jafarian A. Anxiolytic effects of Echium amoenum on the elevated plus-maze model of anxiety in mice. Fitoterapia. 2004;75(5):457–64. Heidari MR, M A, Hosseini A, Vahedian M. Anticonvulsant Effect of Methanolic Extract of Echium amoenum Fisch and C.A Mey. Against Seizure induced by picrotoxin in mice. Pak J Biol Sci. 2006;9(4). Gilani AH, Bashir S, Khan AU. Pharmacological basis for the use of Borago officinalis in gastrointestinal, respiratory and cardiovascular disorders. J Ethnopharmacol. 2007;114(3):393–9. Amirghofran Z, M. A, Keshavarzi F. Echium amoenum stimulate of lymphocyte. proliferation and inhibit of humeral antibody synthesis. Irn J Med Sci. 2000;25:119–24. Miraj S, S. K. 
A review study of therapeutic effects of Iranian borage (Echium amoenum Fisch). Pharm Lett. 2016;8(6):102–9. Mehrabani M, Shams-Ardakani M, Ghannadi A, Ghassemi-Dehkordi N, Sajjadi-Jazi S. Production of rosmarinic acid in Echium amoenum Fisch. and CA Mey. cell cultures. Iran J Pharm Res. 2010:111–5. Ghassemi N, Sajjadi SE, Ghannadi A, Shams-Ardakani M, Mehrabani M. Volatile constituents of Amedicinal Plant of Iran, Echium Amoenim Fisch. And CA Mey. DARU J Pharm Sci. 2003;11(1):32–3. Zakerin, S., M. Rezghi, H. Hajimehdipoor, L. Ara, And M. Hamzeloo-Moghadam*, antidepressant effect of a Polyherbal syrup based on Iranian traditional medicine. Res J Pharm, 2019. 6(2): p. 49–56. Sayyah M, Boostani H, Pakseresht S, Malaieri A. Efficacy of aqueous extract of Echium amoenum in treatment of obsessive-compulsive disorder. Prog Neuro-Psychopharmacol Biol Psychiatry. 2009;33(8):1513–6. Sayyah M, Siahpoosh A, Khalili H, Malayeri A, Samaee H. A double-blind, placebo-controlled study of the aqueous extract of Echium amoenum for patients with general anxiety disorder. Iran J Pharm Res. 2012;11(2):697–701. Sayyah M, Sayyah M, Kamalinejad M. A preliminary randomized double blind clinical trial on the efficacy of aqueous extract of Echium amoenum in the treatment of mild to moderate major depression. Prog Neuro-Psychopharmacol Biol Psychiatry. 2006;30(1):166–9. Nazari NH, Birashk B, Ghasemzadeh A. Effects of group counseling with cognitive-behavioral approach on reducing psychological symptoms of premenstrual syndrome (PMS). Procedia Soc Behav Sci. 2012;31:589–92. Asmundson GJ, Frombach I, McQuaid J, Pedrelli P, Lenox R, Stein MB. Dimensionality of posttraumatic stress symptoms: a confirmatory factor analysis of DSM-IV symptom clusters and other symptom models. Behav Res Ther. 2000;38(2):203–14. Hariri FZ, Moghaddam-Banaem L, Siah Bazi S, Saki Malehi A, Montazeri A. The Iranian version of the premenstrual symptoms screening tool (PSST): a validation study. Arch Womens Ment Health. 2013;16(6):531–7. Steiner M, Macdougall M, Brown E. The premenstrual symptoms screening tool (PSST) for clinicians. Arch Womens Ment Health. 2003;6(3):203–9. Ozgoli G, Selselei EA, Mojab F, Majd HA. A randomized, placebo-controlled trial of Ginkgo biloba L. in treatment of premenstrual syndrome. J Altern Complement Med. 2009;15(8):845–51. Khayat S, Fanaei H, Kheirkhah M, Moghadam ZB, Kasaeian A, Javadimehr M. Curcumin attenuates severity of premenstrual syndrome symptoms: a randomized, double-blind, placebo-controlled trial. Complement Ther Med. 2015;23(3):318–24. Agha-Hosseini M, Kashani L, Aleyaseen A, et al. Crocus sativus L.(saffron) in the treatment of premenstrual syndrome: a double-blind, randomised and placebo-controlled trial. BJOG Int J Obstet Gynaecol. 2008;115(4):515–9. Akbarzadeh M, Dehghani M, Moshfeghy Z, Emamghoreishi M, Tavakoli P, Zare N. Effect of Melissa officinalis capsule on the intensity of premenstrual syndrome symptoms in high school girl students. Nurs Midwifery Stud. 2015;4(2). Azuero A, Pisu M, McNees P, Burkhardt J, Benz R, Meneses K. An application of longitudinal analysis with skewed outcomes. Nurs Res. 2010;59(4):301. Bryant M, Cassidy A, Hill C, Powell J, Talbot D, Dye L. Effect of consumption of soy isoflavones on behavioural, somatic and affective symptoms in women with premenstrual syndrome. Br J Nutr. 2005;93(5):731–9. Marinac JS, Buchinger CL, Godfrey LA, Wooten JM, Sun C, Willsie SK. Herbal products and dietary supplements: a survey of use, attitudes, and knowledge among older adults. 
J Am Osteopath Assoc. 2007;107(1):13–20 quiz 21-3. Menati L, Khaleghinezhad K, Tadayon M, Siahpoosh A. Evaluation of contextual and demographic factors on licorice effects on reducing hot flashes in postmenopause women. Health Care Women Int. 2014;35(1):87–99. A., Z. Medicinal Plants, vol. 4. Tehran: Tehran University Publications; 1999. Asadi S, Amini H, Akhoundzadeh S, Saiiah M, Kamalinezhad M. Efficacy of aqueous extract of Echium amoenum L. in the treatment of mild to moderate major depressive disorder: A randomized double blind clinical trial; 2004. Z, A. Medicinal Plants as Immunosuppressive Agents in Traditional Iranian Medicine. Iran J Immunol. 2010;7(2):65–73. Mehrabani M, Ghassemi N, Ghannadi ESA, Shams-Ardakani M. Main phenolic compound of petals of Echium amoenum Fisch. And CA Mey., a famous medicinal plant of Iran. DARU J Pharm Sci. 2005;13(2):65–9. Mehrabani M, Ghannadi A, Sajjadi E, Ghassemi N, Shams-Ardakani M. Toxic PYRROLIZIDINE alkaloids of ECHIUM AMOENUM FISCH. & MEY. DARU J Pharm Sci. 2006;14(3):122–7. Watanabe S, Sakurada M, Tsuji H, MATSUMOTO S, KONDO K. Efficacy of γ-linolenic acid for treatment of premenstrual syndrome, as assessed by a prospective daily rating system. J Oleo Sci. 2005;54(4):217–24. Mahboubi M. Evening primrose (Oenothera biennis) oil in Management of Female Ailments. J Menopausal Med. 2019;25(2):74–82. Gold EB, Wells C, Rasor MO. The Association of Inflammation with premenstrual symptoms. J Womens Health (Larchmt). 2016;25(9):865–74. Bertone-Johnson E, Ronnenberg A, Houghton S, et al. Association of inflammation markers with menstrual symptom severity and premenstrual syndrome in young women. Hum Reprod. 2014;29(9):1987–94. Graziottin A, Zanello P. Menstruation, inflammation and comorbidities: implications for woman health. Minerva Ginecol. 2015;67(1):21–34. Duvan CI, Cumaoglu A, Turhan NO, Karasu C, Kafali H. Oxidant/antioxidant status in premenstrual syndrome. Arch Gynecol Obstet. 2011;283(2):299–304. We wish to acknowledge Ms. Niloofar Shiva for the critical editing of English grammar and syntax of the manuscript. The authors would like to thank the participants for their contributions to this study. The authors also wish to acknowledge N. Hamzavi and R. Abdolmaleki for the contributions on sampling and data gathering. This research has been supported by Tehran University of Medical Sciences and Health Services under grant No. 94–01–99-28715. The role of the funding body was in the design of the study, data collection, analysis, and interpretation of data. 
Nursing & Midwifery Care Research Center, School of Nursing and Midwifery, Tehran University of Medical Sciences, P.O.Box: 1419733171, Mirkhani St., Tohid Sq, Tehran, Iran Maryam Farahmand & Reza Negarandeh Reproductive Endocrinology Research Center, Research Institute for Endocrine Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran Maryam Farahmand & Fahimeh Ramezani Tehrani Prevention of Metabolic Disorders Research Center, Research Institute for Endocrine Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran Davood Khalili Department of Epidemiology and Biostatistics, Research Institute for Endocrine Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran Department of Pharmacognosy, Faculty of Pharmacy, Tehran University of Medical Sciences, Tehran, Iran Gholamreza Amin Maryam Farahmand Fahimeh Ramezani Tehrani Reza Negarandeh MF conceptualized and designed the study, analyzed and interpreted the data, and drafted and revised the manuscript. DK and FRT conceptualized and designed the study, participated in analysis and interpretation of the data, and revised the manuscript. GA participated in designing the study and preparation of the intervention materials. RN designed the study and drafted and revised the manuscript. All authors have read and approved the final version of the manuscript and ensured this is the case. Correspondence to Reza Negarandeh. The present study was a randomized double-blind controlled clinical trial (CONSORT guidelines) conducted on college students in Tehran, following ethical approval and confirmation by the ethics committee of Tehran University of Medical Sciences (ethics code: 28715–99–01-94). Eligible students were selected, and their written informed consents were obtained. All the authors hereby declare that they have no conflict of interest. Farahmand, M., Khalili, D., Ramezani Tehrani, F. et al. Effectiveness of Echium amoenum on premenstrual syndrome: a randomized, double-blind, controlled trial. BMC Complement Med Ther 20, 295 (2020). https://doi.org/10.1186/s12906-020-03084-2 Phytosterol Echium Ameonum
CommonCrawl
Distribution of malaria exposure in endemic countries in Africa considering country levels of effective treatment Melissa A. Penny ORCID: orcid.org/0000-0002-4972-593X1,2, Nicolas Maire1,2, Caitlin A. Bever1,2 nAff3, Peter Pemberton-Ross1,2, Olivier J. T. Briët1,2, David L. Smith4,5, Peter W. Gething4 & Thomas A. Smith1,2 Malaria prevalence, clinical incidence, treatment, and transmission rates are dynamically interrelated. Prevalence is often considered a measure of malaria transmission, but treatment of clinical malaria reduces prevalence, and consequently also infectiousness to the mosquito vector and onward transmission. The impact of the frequency of treatment on prevalence in a population is generally not considered. This can lead to potential underestimation of malaria exposure in settings with good health systems. Furthermore, these dynamical relationships between prevalence, treatment, and transmission have not generally been taken into account in estimates of burden. Using prevalence as an input, estimates of disease incidence and transmission [as the distribution of the entomological inoculation rate (EIR)] for Plasmodium falciparum have now been made for 43 countries in Africa using both empirical relationships (that do not allow for treatment) and OpenMalaria dynamic micro-simulation models (that explicitly include the effects of treatment). For each estimate, prevalence inputs were taken from geo-statistical models fitted for the year 2010 by the Malaria Atlas Project to all available observed prevalence data. National level estimates of the effectiveness of case management in treating clinical attacks were used as inputs to the estimation of both EIR and disease incidence by the dynamic models. Results and conclusions When coverage of effective treatment is taken into account, higher country level estimates of average EIR and thus higher disease burden, are obtained for a given prevalence level, especially where access to treatment is high, and prevalence relatively low. These methods provide a unified framework for comparison of both the immediate and longer-term impacts of case management and of preventive interventions. The prevalence of Plasmodium falciparum infections is routinely measured in malaria indicator surveys (MIS), and as part of various health assessments and research projects. Prevalence data are therefore relatively widely available and are often used as a measure of endemicity in geographical comparisons and in evaluating the success of intervention programmes [1]. However, although prevalence is a consequence of malaria transmission and levels of exposure, these variables do not have a one-to-one relationship but rather a non-linear relationship modified by many factors such as naturally acquired immunity, malaria interventions and of heterogeneity in transmission rates [2]. These complicate the interpretation of age-patterns of infection and disease. The relationship between exposure and prevalence of infection also depends on the amount of treatment in the population because treatment truncates infections and (depending on the drug regimen) provides a few weeks of chemoprophylaxis (Fig. 1). If access to effective treatment is good, then prevalence may remain relatively low, even at high transmission levels. The amount of effective treatment also affects the relationships of exposure (or prevalence) with morbidity, and mortality rates (Fig. 1). 
Illustration of impact of treatment effectiveness on interactions between infection, clinical disease and exposure. Arrows indicate causal links and double lines show where treatment has a modifying effect Human exposure to malaria, one part of malaria transmission, is best quantified by the entomological inoculation rate (EIR: the number of infectious bites per human host, per unit time), which is more directly related to morbidity and mortality than is prevalence. However, measuring this quantity directly requires intensive entomological studies over the whole annual period of malaria transmission. Previously established empirical relationships between prevalence and EIR have illustrated the complications and diversity by site [3]. EIR data are consequently relatively sparse, and indirect methods, that ideally account for treatment effects, are needed for estimating EIR from available prevalence data [4, 5]. The comprehensive repository of geo-located malaria parasite prevalence data maintained by the Malaria Atlas Project (MAP) is the obvious starting point for estimating how many people are exposed to malaria at different intensities, in different endemic countries. Several different algorithms have been used to infer the distribution of exposure from prevalence maps. In particular, a linear relationship between prevalence and the logarithm of the EIR approximates the empirical relationship between these variables [6], and the MAP repository includes EIR surfaces and estimates of the uncertainty based on this relationship [7]. Other researchers use process models to estimate transmission rates surfaces from prevalence data [6, 8–10]. These analyses do not allow for effects of treatment on prevalence. At low transmission levels, where infection events are sporadic, and superinfection relatively infrequent, this omission can be remedied using rather simple models for translating prevalence into transmission estimates, conditional on the incidence of effective treatment [4]. At higher levels of transmission, both concurrent and sequential superinfection are frequent; so mechanistic models allowing for this, as well as for treatment rates, are needed. Estimates of the number of clinical malaria episodes at national level and continent-wide have been made from the MAP database by assuming a standard empirically determined relationship between prevalence and the incidence of clinical malaria in children [11, 12]. Using a similar methodology based on geographical stratification of risk, estimates of clinical incidence at national level are made yearly by the World Health Organization (WHO) for the World Malaria Report (WMR) for high-burden sub-Saharan countries [13]. This report also provides up-to-date assessments of malaria-related interventions and policies, attempting to quantify the impact on disease burden. Estimates of clinical incidence for each year have been made by adjusting for changing intervention coverage levels within each country, assuming effects match those seen in controlled trials [14]. These estimates of clinical incidence do not allow for levels of access to effective treatment. This affects both the true extent of pathology, and the observed clinical incidence, whether ascertained passively or actively. Depending on underlying exposure, high treatment levels create a virtuous cycle by averting further pathology and secondary cases. 
Estimates of worldwide and national levels of burden should, therefore, take into account effects of treatment, as well as the shifts in age patterns of prevalence [15] and of incidence that occur as a result of transmission-reducing interventions. The OpenMalaria platform supports an ensemble of models that can be used for calibrating different malariological indices against each other [16]. OpenMalaria is a stochastic, individual-based, simulation model of malaria in humans [17] linked to a deterministic model of malaria in mosquitoes [18]. The simulation model includes sub-models of infection of humans [19], blood-stage parasite densities [20], infectiousness to mosquitoes [21], incidence of morbidity including severe disease and hospitalisation [22, 23], and mortality [22]. An ensemble of 14 model variants is available [24], with each model including different assumptions for decay of natural immunity, greater within-host variability between infection and entomological exposure, heterogeneity in transmission, and heterogeneity in susceptibility to co-morbidities. Six of the OpenMalaria ensemble models were used in this work to compute estimates of the distribution of exposure (EIR) for each of 43 malaria endemic countries in sub-Saharan Africa, as well as estimates of clinical incidence (and also incidence of severe disease and malaria mortality) for 2010 levels of malaria control. These estimates are based on the pixel-level posterior distributions of parasite prevalence in 2010 published by MAP [7]. For each country, these estimates are conditional on national level estimates of the levels of access to effective treatment for malaria fevers [25]. The resulting estimates of the distribution of transmission and of the incidence of clinical malaria provide a basis for evaluating the impacts of both preventive and curative intervention programmes, allowing for the effects of existing case management on prevalence and burden of disease. Methods An overview of the methods for estimating malaria exposure distributions (as EIR) and the resulting burden is presented in Table 1, including the inputs and outputs of each method. Table 1 Description of the EIR and burden estimation methods A and B including their inputs and outputs Malaria prevalence data National levels of prevalence were taken from the prevalence surfaces estimated by the Malaria Atlas Project (MAP) for Plasmodium falciparum for 2010 [7]. Estimates of prevalence in children from age 2 up to their 10th birthday (PfPR2–10) across a 5 km × 5 km grid were extracted as posterior distributions from a Markov Chain Monte Carlo (MCMC) analysis calculated via a Bayesian geostatistical model using survey data. The primary estimates of PfPR2–10 are available from the MAP website [26] as posterior densities. National levels of effective treatment coverage National levels of access to effective malaria treatment were collated previously [25] and are detailed in Table 2. Effective malaria treatment is treatment that results in parasitological cure. In this work, effective treatment coverage is estimated as the probability, E14, that effective malaria treatment will be obtained during any 14-day period in which a fever occurs. Estimates were assembled at country level taking into account multiple factors determining the effectiveness of malaria case management, including the probability of treatment-seeking, the type of care provider, health-system compliance with the recommended anti-malarial treatment, patient adherence to the drug regimen, and the quality of the anti-malarial medications.
Table 2 Coverages of effective treatment and estimated transmission profiles (EIR mean, median and quartiles) for 43 sub-Saharan African countries estimated by method B assuming country level effective treatment Relationship between parasite prevalence and EIR Two different methods were used to estimate distributions of malaria exposure (as measured by the entomological inoculation rate, EIR) from PfPR2–10 data: Method A: statistical relationship between EIR and prevalence A previously published statistical model transforming PfPR2–10 to EIR [7] (Additional file 7 of that paper) was used, based on an earlier empirical analysis of the relationship of measured EIR values with PfPR2–10 [6]: $$x \sim \log\text{Normal}\left( \mu ,\sigma^{2} \right)$$ where x is EIR, μ = 1.768 + 7.247p, σ = 1.281, and p is PfPR2–10. This relationship is independent of the level of access to effective treatment, E14, and thus does not allow for the effects of case management on the prevalence-EIR relationship. This model allows for statistical uncertainty in both variables x and p (data and fitted curve shown in Fig. 2a). Scale factors can be used to obtain the EIR estimate that would be obtained with different measurement approaches (e.g. pyrethroid-spray catches, human landing catches, or both). This method is similar to the method used in previous analyses of the global burden of clinical malaria [11]. Relationships between malaria exposure (EIR), effective coverage, and prevalence for Method A (a) and Method B (b). a Method A: plotted empirical relationship of prevalence as a function of EIR [6] (Eq. 1) with the data used to fit this relationship. The relationship between standardized prevalence and EIR is approximately linear-log for all the data (grey curve: fitted relationship over all data). The relationship varies by study (purple dots correspond to data from a single field study, purple curve the fitted relationship to those data) and by method (red dots correspond to measurements taken via pyrethroid spray catches, and purple and blue dots to measurements taken by other methods; red and blue curves correspond to the respective fits). b Method B: OpenMalaria simulations of the relationship between prevalence and EIR (model variant R0133 only, other models shown in Additional file 2: Figure S3) for discrete levels of coverage of effective treatment (points) and the best fitted model to these data as Hill functions (curves for different levels of effective treatment) (Eq. 3). Colour indicates the level of effective treatment (E14), with red 0.001 %, yellow 5 %, light blue 20 %, dark blue 40 %. Method B: dynamic model relating EIR, prevalence, and coverage of treatment Method B uses relationships between EIR and prevalence derived from multiple transmission models of malaria epidemiology and control, incorporating the effects of treatment on the infectious reservoir. The process of translating prevalence to EIR is illustrated in Fig. 3a, b, in essence extracting prevalence distributions at each 5 by 5 km grid cell from MAP (detailed above, Fig. 3a) and converting to EIR by the fitted relationship from OpenMalaria for a given coverage of effective treatment (Fig. 3b).
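As a concrete aside on Method A above, the log-normal relationship can be sampled directly once a prevalence value is given. The sketch below is a minimal reconstruction, not code from the study, and it assumes the quoted μ and σ are on the natural-log scale.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_eir_method_a(prevalence, n_samples=10_000,
                            a=1.768, b=7.247, sigma=1.281):
        # Method A: log(EIR) ~ Normal(a + b * prevalence, sigma^2)
        mu = a + b * prevalence
        return rng.lognormal(mean=mu, sigma=sigma, size=n_samples)

    eir = sample_eir_method_a(prevalence=0.25)
    print(f"median EIR: {np.median(eir):.1f}, mean EIR: {eir.mean():.1f}")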
Schematic diagrams of the processes in estimating EIR distributions and disease burden from MAP prevalence. The figure illustrates the steps involved in estimating geographically specific EIR (Method B) and incidence levels of malaria (Methods A and B), which include the dynamic effect of treatment on transmission, and a dynamic model of clinical incidence. a, b Illustrate the process of extracting prevalence distributions from MAP [7] by pixel (5 km by 5 km) and converting to distributions of EIR using a statistical relationship relating prevalence and EIR for given levels of effective treatment derived from OpenMalaria simulations. Method A, not illustrated, is simpler in that it does not consider the effect of treatment on transmission, and uses the WMR method for estimating EIR. c Illustrates aggregation of these EIR distributions from pixel level to a larger spatial area, such as country level. d Illustrates the process of estimating country level burden for a distribution of EIR (derived from either method A or B), namely the EIR distributions are inputs to the OpenMalaria micro-simulation, with outputs of incidence of clinical cases (and mortality) calculated for a given coverage of effective treatment E14. The Gaussian distributions in a–d are illustrative only. The transmission models are six model variants from the OpenMalaria stochastic individual-based model of the dynamics of P. falciparum malaria in humans [24] (Table 3), comprising a subset of a previously published model ensemble [24], with each model variant including the same sub-model for pathogenesis [23] and case-management [27], but differing by assumptions concerning immunity decay or heterogeneity in transmission or co-morbidity (Table 3). The same parameterizations as used previously [24] were used to capture human demography and the seasonality of transmission. Each model variant has been parameterized by fitting to observed relationships between seasonal patterns of EIR and a range of outcomes, including parasite prevalence [19] and morbidity rates [23] in specific field sites. Table 3 Model-specific parameters for each model variant for the statistical model fits relating OpenMalaria EIR and prevalence among 2–10 year olds A statistical relationship was fitted between simulated PfPR2–10, p, and EIR, x, for a given level of effective treatment, E14, for each model in the ensemble (Fig. 2b illustrates an example of this relationship). These simulated predictions cover a wider range of EIR and prevalence than was originally used to parameterize the transmission models [17, 24]. The OpenMalaria simulations use a 5-day time step, and effective treatment at each 5-day time step, E5, was obtained from the 14-day estimates using a mapping based on the pattern of fevers over time in malaria-therapy data [28] (sample values shown in Additional file 1: Table S1). A Hill function was fitted by least-squares to the simulation data in order to relate PfPR2–10 and EIR, namely: $$p\left( x,E_{14} \right) = \frac{p_{max}\, x^{n\left( E_{14} \right)}}{K^{n\left( E_{14} \right)} + x^{n\left( E_{14} \right)}},$$ where p_max, K, and n are functions of E14.
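A minimal sketch of this fitting step is given below. The simulated (EIR, prevalence) pairs are invented stand-ins for OpenMalaria output, and the fit is for a single fixed level of E14; in the study, p_max, K and n are themselves re-expressed as functions of E14 and fitted separately for each model variant (Table 3), so this is only an illustration of the shape of the calculation.

    import numpy as np
    from scipy.optimize import curve_fit

    def hill(x, p_max, K, n):
        # Hill function relating annual EIR (x) to PfPR_2-10, at one level of E_14.
        return p_max * x**n / (K**n + x**n)

    # Hypothetical (EIR, prevalence) pairs standing in for OpenMalaria simulations.
    eir_sim  = np.array([0.1, 0.5, 1, 2, 5, 10, 20, 50, 100, 200], dtype=float)
    prev_sim = np.array([0.02, 0.08, 0.15, 0.25, 0.40, 0.52, 0.62, 0.70, 0.74, 0.76])

    (p_max_hat, K_hat, n_hat), _ = curve_fit(hill, eir_sim, prev_sim, p0=[0.8, 5.0, 0.8])
    print(f"p_max = {p_max_hat:.3f}, K = {K_hat:.2f}, n = {n_hat:.2f}")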
The inverse of relationship Eq. (2), relating EIR to PfPR2–10, is given by $$x = K\left( E_{14} \right) \exp\left[ \frac{1}{n\left( E_{14} \right)}\ln\left( \frac{p\left( x,E_{14} \right)}{p_{max}\left( E_{14} \right) - p\left( x,E_{14} \right)} \right) \right].$$ The functional forms for n, p_max and K were chosen among exponential, linear, and quadratic options to give the best fit of p(x, E14) to the simulated prevalence for different levels of coverage of effective treatment. The selected functions are: $$K\left( E_{14} \right) = K_{1} \exp\left( K_{2} E_{14} \right)$$ $$n\left( E_{14} \right) = n_{1} E_{14}^{2} + n_{2} E_{14} + n_{3}$$ $$p_{max}\left( E_{14} \right) = p_{1} \exp\left( p_{2} E_{14} \right),$$ where K1, K2, n1, n2, n3, p1, and p2 are fitted parameters. Separate parameter sets were fitted for each of the six model variants in the ensemble (values provided in Table 3). Estimation of EIR distributions at national level The prevalence-EIR relationships from methods A and B were used to estimate a distribution of EIR for each country from the prevalence surfaces estimated by the MAP for 2010 [7]. Prevalence values from the MCMC chains are weighted by the pixel-level population, and the percentiles of the distributions are obtained by summarizing the whole set of MCMC chains. Corresponding to the PfPR2–10 value for pixel j, from MAP, and MCMC iteration i, an EIR estimate, $x_{j}^{(i)}$, is obtained. For method B this is $$x_{j}^{(i)} = K\left( E_{14} \right) \exp\left[ \frac{1}{n\left( E_{14} \right)}\ln\left( \frac{p_{j}^{(i)}}{p_{max} - p_{j}^{(i)}} \right) \right].$$ The corresponding estimate of the distribution of EIR over the whole country (including non-endemic areas, with EIR = 0) is obtained by binning $x_{j}^{(i)}$ into a limited number, K, of ranges $X_{1}, X_{2}, \ldots, X_{K}$, and population weighting. Aggregating the estimates from the whole set of T sampled values of $p_{j}^{(i)}$ from the MCMC chains, each range is assigned probability: $$\Pr\left( X_{k} \right) = \frac{\sum_{i} \sum_{j} N_{j}\, \mathrm{I}\left( x_{j}^{(i)} \in X_{k} \right)}{T \left( \sum_{j} N_{j} \right)},$$ where $\mathrm{I}(x_{j}^{(i)} \in X_{k})$ is an indicator taking value 1 if $x_{j}^{(i)}$ is in range $X_{k}$ and zero otherwise. Here $N_{j}$ is the population assigned to the pixel as determined by the gridded population of the world [29, 30]. For computational convenience we carried out the summation over i before summing over j. The resulting distributions describe the proportion of each country's population that one would expect to be living at a given level of EIR. In many of the countries analysed, a proportion of the gridded population from [30] falls outside the boundary of the area defined by MAP as being within the spatial limits of endemic malaria transmission [7]. This proportion of the population was assigned an EIR of zero.
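A sketch of how this inversion and binning step might be implemented is shown below. The coefficient values, bin edges and input arrays are placeholders (the fitted, model-variant-specific coefficients are in Table 3, not reproduced here); only the structure of the calculation follows the text, so this is an assumption-laden reconstruction rather than the study's code.

    import numpy as np

    def eir_from_prevalence(p, e14, K1, K2, n1, n2, n3, p1, p2):
        # Invert the fitted Hill relationship at effective-treatment level E_14,
        # using the functional forms K(E14), n(E14), p_max(E14) given above.
        K = K1 * np.exp(K2 * e14)
        n = n1 * e14**2 + n2 * e14 + n3
        p_max = p1 * np.exp(p2 * e14)
        p = np.clip(p, 1e-12, p_max - 1e-12)   # keep the log argument finite
        return K * np.exp(np.log(p / (p_max - p)) / n)

    def national_eir_distribution(prev_draws, populations, e14, bin_edges, coeffs):
        # prev_draws: (T iterations, J pixels) posterior samples of PfPR_2-10;
        # populations: (J,) pixel populations N_j; returns Pr(X_k) for each EIR bin.
        eir = eir_from_prevalence(prev_draws, e14, *coeffs)
        T = prev_draws.shape[0]
        weights = np.broadcast_to(populations, eir.shape) / (T * populations.sum())
        pr, _ = np.histogram(eir, bins=bin_edges, weights=weights)
        return pr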
Two different estimates of the transmission distribution per geographic area were calculated by estimation method B, to examine sensitivity to the estimated level of access to effective treatment. To capture the situation before the recent scale-up of artemisinin combination treatment (ACT), a common value of access to effective treatment was assumed for all countries, at a value previously used in OpenMalaria simulations [31]. This value equates to approximately 15 % of all malaria cases receiving treatment resulting in parasitological cure. In addition, analyses were conducted using country-specific estimates of access to effective treatment [25]. Country levels of coverage of effective treatment are listed in Table 2 and illustrated by map in Additional file 2: Figure S1 and in Fig. 3b. National level estimates of the incidence of clinical malaria were projected from the EIR distributions derived from Method A and Method B using OpenMalaria simulations. These incorporate dynamic models of clinical incidence and treatment parameterized with Senegalese and Tanzanian data [23, 31] and models for severe disease and mortality [22], and hence provide clinical incidence estimates as an extension of EIR estimation (process illustrated in Fig. 3d). Separate estimates were made using the EIR estimates from Method A, those from Method B with E14 = 0.15, and those made with Method B with country-specific E14 values. Estimated burden, via clinical incidence, derived by both methods was compared with the national level estimates of clinical malaria from the WMR. For most sub-Saharan African countries, these use a standard empirical relationship between clinical incidence and endemicity. Clinical incidence values were assigned to each endemicity level based on estimates of the numbers of events recorded in longitudinal surveys of febrile malaria episodes in children, detected either actively or passively [32–34], established independently of effects of treatment rates [12]. For countries with low endemicity, WMR uses national surveillance data to estimate burden, with adjustments to allow for incomplete reporting. National level prevalence distributions National levels of PfPR2–10, aggregated at country level after extracting posterior distributions from MAP [7] at each 5 by 5 km pixel, show high average levels of PfPR2–10 in 2010 in many African countries, and for many countries also a wide distribution of prevalence levels (values summarized in Table 4 and Additional file 1: Table S3 and shown as distributions in Additional file 2: Figure S2). Regional differences, local variation, and uncertainty within areas all contribute differently to the overall distributions, with the average levels of transmission highest in West and Central Africa. Much of Namibia, Botswana, and South Africa, and also several Sahelian countries are malaria free, as are highland areas of East Africa. Some of the variation is also a result of differences in the extent of recent intervention programmes. In some countries, intervention programmes have had little impact on 2010 prevalence (e.g. Benin, Burkina Faso, Côte d'Ivoire), while elsewhere prevalence has been considerably reduced in the last decade (e.g. Senegal, Tanzania, Zambia), or much of the population lives in areas on the margins of stable transmission (Somalia, North Sudan). The location of some major urban centres such as Nairobi and Lusaka at relatively high altitudes, with low transmission, strongly influences some of these profiles. Table 4 Prevalence distributions, summarized for each country: estimated prevalence (mean, median and quartiles) for 43 sub-Saharan African countries estimated from MAP prevalence posteriors at 5 km by 5 km grids, aggregated to country level and weighted by population.
Mean EIR estimates for Method A and Method B assuming effective coverage (E14) of 15 % for all countries Modelled relationships between EIR and prevalence Where PfPR2–10 is high, the OpenMalaria models predict on average a slightly higher EIR at a given prevalence than does the empirical model (method A), with relatively little influence of effective treatment (E14) (Fig. 2a, b). The fitted prevalence to EIR relationships for Method B are shown in Fig. 2b (model variant R0133) and Additional file 2: Figure S3 (all 6 model variants). The six model variants all predict broadly similar, but nevertheless distinct, prevalence-EIR relationships. The general pattern for Method A is for prevalence to increase steeply with EIR at low transmission levels, but to saturate at higher transmission (Fig. 2a). The considerable variation around the best fitting curve for Method A, after adjusting for the different EIR measurement techniques used, is treated as random variation that contributes to uncertainty in the estimate of EIR from prevalence. This analysis does not allow for variations in the coverage or effectiveness of case management in the different studies; however, such variation could account for much of this unexplained dispersion (compare with Fig. 2b). At lower transmission levels the fitted curves for Method B (OpenMalaria) vary considerably with E14, suggesting that effectiveness of case management is a particularly important driver of prevalence in such settings, with Method B estimating lower EIR at a given prevalence than the empirical model unless E14 is high (Fig. 2b). This is partly because Method B constrains estimated EIR to be zero at zero prevalence, while the empirically-based Method A does not capture or force this constraint. National level EIR distributions The differences between the two relationships for prevalence and EIR are reflected in the estimated distributions of EIR by country (Fig. 4; Additional file 2: Figure S4). The EIR distributions are generally much more highly skewed than are the prevalence distributions. The distributions obtained with the empirical model (Method A: Fig. 4; Table 4 and Additional file 1: Table S4) and with the simulation models (Method B) that assume \(E_{14} = 0.15\) (Table 4 and Additional file 1: Table S5) are similar to each other for most countries, though the estimated median EIRs are generally somewhat higher for the estimates from the simulation models (Fig. 5). Where effectiveness of case management is high, the country-specific assumptions for system effectiveness make substantial differences to the estimated EIR distributions (Fig. 4; Table 2; Additional file 2: Figure S2). In these countries, notably Zambia, Tanzania, São Tomé and Principe, the EIR distribution shifts to the right when the country-specific value of \(E_{14}\) is used, reflecting lower prevalence than in a situation with the same EIR pattern but less effective case management. The estimate of median EIR for these countries is thus much higher when country-specific effectiveness is considered. Conversely, in a few countries, where median prevalence is low and case management is also poor, the estimated EIR distribution allowing for country-specific effectiveness is shifted slightly to the left (e.g. both South and North Sudan). Distribution of EIR for each of the 43 countries. Distribution of EIR (including non-endemic areas, which are assigned values of 0) for each of the 43 countries.
Calculated from MAP using both the empirical model (Method A) (black); the simulation models (Method B) with a common value for access to care (yellow) (\(E_{14} = 0.15)\); and country-specific values of \(E_{14}\)(blue). Countries are indicated by country code Relationship of estimated average EIR to prevalence at country level. Calculated from MAP using both the empirical model (Method A) (black); the simulation models (Method B) with a common value for access to care, (yellow) (E14 = 0.15); and country-specific values of \({\text{E}}_{14}\)(blue) The EIR distributions are highly skewed, so that the arithmetic means are much higher than the medians (Figs. 4, 5). Except in some cases where prevalence is very low, the average EIR is higher when there is allowance for treatment (Method B), with a much larger shift in the mean than in the median of the distribution. When country specific E 14 values are used (which are mostly higher than the 15 % shown in yellow), this makes little difference to the mean EIR, but substantial differences to the medians, reflecting the stronger relationship between treatment rates and prevalence when EIR is low, than when EIR is high. National levels of burden of disease The OpenMalaria simulations predict that steady state clinical incidence (over all ages) increases linearly with EIR in low transmission settings, tending to plateau at high EIR (with a suggestion, driven by the specific Senegalese data used to parameterize the models for older children and adults [35] that there may be a maximum in the curve at high prevalence). The initial slope is greater when \(E_{14}\) is higher, but the plateau occurs at a similar level of incidence irrespective of effective treatment level. These patterns are a consequence of the age-specific relationships between incidence and EIR shown in Fig. 6. Models of the relationship between EIR and clinical incidence. Incidence of clinical episodes by EIR in OpenMalaria models with light blue \(E_{14}\) = 0.15 and dark blue \(E_{14}\) = 0.45. The continuous lines indicate the mean prediction of the overall incidence. The shading around the continuous lines indicates the range of predictions made from simulations with different model variants and random number seeds. The dashed lines indicate the incidence of clinical episodes that are treated [31] When these models are used to infer country-specific incidence of clinical malaria, there is a clear increase in incidence with average prevalence at the country level (Fig. 7), and no plateau is reached because even the countries with highest average transmission have only small populations in the very high EIR categories (Table 4 and Additional file 1: Table S2). The relationships between country-level EIR and estimated clinical malaria incidence are similar, irrespective of whether the EIR is estimated by Method A or Method B. Similarly, Method B estimates similar relationships between country-level EIR and clinical malaria incidence, irrespective of whether a common value, or a country specific estimate is used for the effectiveness of case management. Predicted incidence of clinical events by national level average EIR. Predicted incidence of clinical events estimated from empirical model (Method A) (black); the simulation models (Method B) with a common value for access to care (yellow) (E14 = 0.15); and country-specific values of \(E_{14}\)(blue). 
a uncomplicated clinical episodes (cases); b severe malaria episodes; c hospitalizations; d deaths directly attributable to malaria; e hospital deaths; f all malaria deaths (including those with co-morbidities). All rates are expressed as events per 100,000 person years at risk over all ages of hosts. The model and parameters for severe disease and mortality follow Ross et al. [22], with a common hospitalization rate assumed for severe disease across all countries. Country-specific estimates of clinical incidence using country EIR distributions are compared to published malaria cases from the World Malaria Report [14] (Fig. 8; Additional file 2: Figure S5). In general, projections of incidence using EIR derived from method B produce higher predictions than using EIR from method A, but in both cases the simulation models predict substantially more episodes of malaria than the cases reported in the World Malaria Report 2013 (Fig. 8a). There is also a much less steep relationship between the incidence rate and the overall burden (Fig. 8b). This can be explained by the empirical relationships between prevalence and case incidence used by WMR [12], which refer back to field research carried out prior to the widespread use of ACT [33, 36], and therefore do not allow for the level of treatment. Moreover, only clinical episodes in children under 5 years of age are considered. The effect of high levels of treatment on reducing prevalence leads to a much higher ratio of case incidence to prevalence than there would be without treatment (Fig. 7). OpenMalaria correctly predicts that in low transmission countries the majority of the clinical burden is in older age groups (Fig. 6). The difference between the methods is particularly evident for the low-burden countries Namibia and Botswana, for which very low case numbers are reported by the WMR, with estimates based on adjustments to surveillance data. Numbers of episodes per annum estimated using different approaches. a Numbers of clinical cases; b incidence rate (episodes or cases per 100,000 person-years). Black points: based on EIR estimates calculated using Method A; yellow points: based on EIR estimates made using Method B with a common value for access to care (\(E_{14} = 0.15\)); blue points: based on EIR estimates made using Method B with country-specific values of \(E_{14}\). The diagonal line corresponds to a 1:1 relationship; the horizontal and vertical lines represent minimum and maximum ranges. It has been incontrovertible since Laveran's first studies of the malaria parasite [37] that effective treatment of clinical malaria results in clearance of blood-stage parasites. Treatment lowers the overall prevalence associated with malaria (or other parasites [38]), the infectiousness of the human population and the transmission level [31], which synergize with effects of other interventions on transmission. Most immediately, effective treatment reduces the length of illness and the incidence of adverse outcomes, including severe disease, neurological sequelae, and death. This reduces the burden of disease that can potentially be averted by other interventions. All these effects need to be considered in estimating the current burden of disease, in analyses of the impact of case management and treatment on the burden of disease, and in analyses of how treatment modifies the public health impact achievable with other curative and preventive interventions.
Results presented here clearly indicate that incorporating the dynamic effects of treatment is essential for valid estimation of EIR, of clinical incidence itself, and of downstream outcomes including the incidence of severe disease and mortality rates, with substantive differences in the estimates depending on whether these effects are included or excluded. Overall, the model-based method proposed in this work (method B) provides estimates of transmission intensity, as measured by EIR, that are somewhat higher than those estimated by method A [7], especially in low endemicity countries and where case management is relatively effective. The downstream country level clinical incidence estimates are also higher than those previously reported in the World Malaria Report [13]. At the country level, allowing for uncertainty in the inputs makes a substantial difference to average values of both EIRs and disease rates, as a result of the skewness of their distributions. This means that incorporating uncertainty and spatial variation into the estimation has important consequences for both burden estimates and prediction of average health impacts of interventions, which in general vary non-linearly with EIR. The OpenMalaria models also predict, as one would expect, that the effectiveness of uncomplicated malaria treatment has a substantial impact on the incidence of severe disease and malaria mortality. Preventive interventions like insecticide-treated nets (ITNs), which affect prevalence only via their impact on exposure, do not change the relationship between exposure and prevalence. Consequently, coverages of preventive interventions can be useful covariates for estimating EIR or prevalence surfaces where direct measurements are sparse, but the coverage of these interventions is not directly relevant when making estimations of disease burden from prevalence. In contrast, treatment of malaria reduces the prevalence at a given level of EIR, by preventing infections from persisting, thus modifying the relationship between the two metrics (Fig. 1). So the same prevalence can result from very different average exposures depending on the level of treatment, and the effective coverage of case management (like the degree of transmission heterogeneity [2]) should be taken into account in modeling the relationships between EIR and prevalence. Nevertheless, at least in high endemicity settings, prevalence remains the best measure on which to base geographically specific models of malaria transmission. This is because prevalence data are actively collected based on representative sampling of populations, are widely available, have been compiled into publicly accessible databases [7, 39], and have been analysed using geostatistical models to produce high resolution maps of the distribution of infection in space [7, 40]. In most sub-Saharan African countries prevalence is therefore likely to remain the main metric used in deciding when and where to distribute or target interventions. In low transmission settings such as those in Asia, Latin America, and selected African countries, the annual parasite index (API) rather than the prevalence is the main metric used for monitoring and evaluation, and WMR has estimated burden in these countries using an API-based algorithm [12]. Prevalence-EIR-treatment relationships in such low transmission settings can be captured by relatively simple empirical mathematical models [4].
However in areas of moderate or high transmission it is important to allow for effects of superinfection and natural immunity, and thus mechanistic models that account for dynamics of immunity are needed. The use of simulation models that take both prevalence and treatment rates as inputs provides a generalizable way of generating national level estimates of transmission and disease burden, applicable across the range of transmission intensities. This generalizability will be important for monitoring progress as malaria is further controlled to the point where measurement of API becomes the main metric used by many more country programmes. The approach will capture in a natural way the transitions between the different metrics, and the age shifts in the pattern of disease where transmission rates change [41, 42]. The approach can be made more robust by employing a larger ensemble including other simulation models with different assumptions about transmission heterogeneity, immunity, and pathogenesis [10, 43]. For the method to provide the best estimates of malaria attributable mortality, geographical variation in access to appropriate in-patient treatment of severe disease also needs to be taken into account. Previous methodologies for estimating burden have applied both estimates of intervention protective efficacy derived from meta-analyses of controlled trials and/or household survey data, leading to circular reasoning. Local variability has also been ignored [1], in particular variations in access, compliance, or adherence, and also the medium- and long-term dynamics resulting from intervention-induced reductions in transmission, which include shifts of disease into older age groups [41, 42]. The burden estimation procedures proposed in this paper will allow empirical analysis of the relationships between intervention coverage and burden independently of field trial results and conditional on all these factors. This will provide a basis for assessing the impacts of both preventive and curative interventions on an equivalent basis, ensuring correct attribution of the effects of different interventions. The method can be extended to give time-dependent estimates of burden by using time-period specific input data. By linking these to intervention coverage, this will provide valid estimates of intervention impacts in time and space. Although results are presented only at country level in this work, this methodology can, in principle, be applied to any level of spatial aggregation. However, applying the approach to data disaggregated in smaller spatial units would raise additional methodological issues, as the simulation models are parameterized mainly using village-level data. This paper demonstrates the dual importance of capturing the effects of treatment when estimating disease burden based on infection prevalence: to both improve the accuracy of those estimates and to correctly quantify the impact of treatment on reduced malaria transmission and illness. These insights are currently being incorporated into a revised WHO methodology that will lead to more refined burden estimates and ultimately better information for national and international malaria control decision-making processes. Gething PW, Battle KE, Bhatt S, Smith DL, Eisele TP, Cibulskis RE, et al. Declining malaria in Africa: improving the measurement of progress. Malar J. 2014;13:39. Ross A, Smith T. 
Interpreting malaria age-prevalence and incidence curves: a simulation study of the effects of different types of heterogeneity. Malar J. 2010;9:132. Beier JC, Killeen G, Githure JI. Short report: entomologic inoculation rates and Plasmodium falciparum malaria prevalence in Africa. Am J Trop Med Hyg. 1999;61:109–13. Yukich J, Briet O, Bretscher MT, Bennett A, Lemma S, Berhane Y, et al. Estimating Plasmodium falciparum transmission rates in low-endemic settings using a combination of community prevalence and health facility data. PLoS One. 2012;7:e42861. Stuckey EM, Smith T, Chitnis N. Seasonally dependent relationships between indicators of malaria transmission and disease provided by mathematical model simulations. PLoS Comput Biol. 2014;10:e1003812. Smith DL, Dushoff J, Snow RW, Hay SI. The entomological inoculation rate and Plasmodium falciparum infection in African children. Nature. 2005;438:492–5. Gething PW, Patil AP, Smith DL, Guerra CA, Elyazar IR, Johnston GL, et al. A new world malaria map: plasmodium falciparum endemicity in 2010. Malar J. 2011;10:378. Gemperli A, Vounatsou P, Sogoba N, Smith T. Malaria mapping using transmission models: application to survey data from Mali. Am J Epidemiol. 2006;163:289–97. Gemperli A, Sogoba N, Fondjo E, Mabaso M, Bagayoko M, Briet OJ, et al. Mapping malaria transmission in West and Central Africa. Trop Med Int Health. 2006;11:1032–46. Griffin JT, Hollingsworth TD, Okell LC, Churcher TS, White M, Hinsley W, et al. Reducing Plasmodium falciparum malaria transmission in Africa: a model-based evaluation of intervention strategies. PLoS Med. 2010;7:e1000324. Hay SI, Okiro EA, Gething PW, Patil AP, Tatem AJ, Guerra CA, et al. Estimating the global clinical burden of Plasmodium falciparum malaria in 2007. PLoS Med. 2010;7:750. Cibulskis RE, Aregawi M, Williams R, Otten M, Dye C. Worldwide incidence of malaria in 2009: estimates, time trends, and a critique of methods. PLoS Med. 2011;8:e1001142. WHO. World Malaria Report 2013. Geneva: World Health Organization; 2014. Griffin JT, Ferguson NM, Ghani AC. Estimates of the changing age-burden of Plasmodium falciparum malaria disease in sub-Saharan Africa. Nat Commun. 2014;5:3136. Stuckey EM, Smith TA, Chitnis N. Estimating malaria transmission through mathematical models. Trends Parasitol. 2013;29:477–82. Smith T, Killeen GF, Maire N, Ross A, Molineaux L, Tediosi F, et al. Mathematical modeling of the impact of malaria vaccines on the clinical epidemiology and natural history of Plasmodium falciparum malaria: overview. Am J Trop Med Hyg. 2006;75:1–10. Chitnis N, Hardy D, Smith T. A periodically-forced mathematical model for the seasonal dynamics of malaria in mosquitoes. Bull Math Biol. 2012;74:1098–2024. Smith T, Maire N, Dietz K, Killeen GF, Vounatsou P, Molineaux L, et al. Relationship between the entomologic inoculation rate and the force of infection for Plasmodium falciparum malaria. Am J Trop Med Hyg. 2006;75:11–8. Maire N, Smith T, Ross A, Owusu-Agyei S, Dietz K, Molineaux L. A model for natural immunity to asexual blood stages of Plasmodium falciparum malaria in endemic areas. Am J Trop Med Hyg. 2006;75:19–31. Ross A, Killeen GF, Smith T. Relationships of host infectivity to mosquitoes and asexual parasite density in Plasmodium falciparum. Am J Trop Med Hyg. 2006;75(Suppl 2):32–7. Ross A, Maire N, Molineaux L, Smith T. An epidemiologic model of severe morbidity and mortality caused by Plasmodium falciparum. Am J Trop Med Hyg. 2006;75:63–73. 
Smith T, Ross A, Maire N, Rogier C, Trape JF, Molineaux L. An epidemiologic model of the incidence of acute illness in Plasmodium falciparum malaria. Am J Trop Med Hyg. 2006;75:56–62. Smith T, Ross A, Maire N, Chitnis N, Studer A, Hardy D, et al. Ensemble modeling of the likely public health impact of a pre-erythrocytic malaria vaccine. PLoS Med. 2012;9:e1001157. Galactionova K, Tediosi F, Savigny DD, Smith TA, Tanner M. Effective coverage and systems effectiveness for malaria case management in Sub-Saharan African countries. PLoS One. 2015;10:e0127818. Malaria Atlas Project. http://www.map.ox.ac.uk/. Accessed 15 Apr 2015. Tediosi F, Hutton G, Maire N, Smith TA, Ross A, Tanner M. Predicting the cost-effectiveness of introducing a pre-erythrocytic malaria vaccine into the expanded program on immunization in Tanzania. Am J Trop Med Hyg. 2006;75:131–43. Crowell V, Yukich J, Briet OJ, Ross A. A novel approach for measuring the burden of uncomplicated Plasmodium falciparum malaria: application to data from Zambia. PLoS One. 2013;8:e57297. Balk D, Deichmann U, Yetman G, Pozzi F, Hay S, Nelson A. Determining global population distribution: methods, applications and data. Adv Parasitol. 2006;62:119–56. Center for International Earth Science Information Network (CIESIN) CU: Global Rural-Urban Mapping Project (GRUMP): Urban extents. New York: Palisades; 2004. Tediosi F, Maire N, Smith T, Hutton G, Utzinger J, Ross A, et al. An approach to model the costs and effects of case management of Plasmodium falciparum malaria in sub-saharan Africa. Am J Trop Med Hyg. 2006;75:90–103. Snow R, Craig M, Deichmann U, Marsh K. Estimating mortality, morbidity and disability due to malaria among Africa's non-pregnant population. Bull World Health Organ. 1999;77:624–40. Roca-Feltrer A, Carneiro I, Armstrong Schellenberg JR. Estimates of the burden of malaria morbidity in Africa in children under the age of 5 years. Trop Med Int Health. 2008;13:771–83. Trape JF, Rogier C. Combating malaria morbidity and mortality by reducing transmission. Parasitol Today. 1996;12:236–40. Snow R, Craig M, Newton C, Steketee RW. The public health burden of Plasmodium falciparum malaria in Africa: Deriving the numbers. Bethesda: Fogarty International Center, National Institutes of Health; 2003. Laveran A. Nature parasitaire des accidents de l'impaludisme: description d'un nouveau parasite trouvé dans le sang des malades atteints de fièvre palustre. Paris: J. B. Bailliere; 1881. Anderson RM, May RM. Helminth infections of humans: mathematical models, population dynamics, and control. Adv Parasitol. 1985;24:1–101. Colloboration MARA. Towards an Atlas of Malaria Risk in Africa. Durban: MARA/ARMA; 1998. Gosoniu L, Vounatsou P, Sogoba N, Smith T. Bayesian modelling of geostatistical malaria risk data. Geospatial Health. 2006;1:127–39. Woolhouse ME. Patterns in parasite epidemiology: the peak shift. Parasitol Today. 1998;14:428–34. Smith T, Hii J, Genton B, Muller I, Booth M, Gibson N, et al. Associations of peak shifts in age-prevalence for human malarias with bed net coverage. Trans R Soc Trop Med Hyg. 2001;95:1–6. Eckhoff P. Mathematical models of within-host and transmission dynamics to determine effects of malaria interventions in a variety of transmission settings. Am J Trop Med Hyg. 2013;88:817–27. MAP, CB, NM and TAS designed the experiments and analyzed results. PPR, OJTB, DLS, and PWG contributed to the methodology and carried out data analyses. MAP and TAS drafted the manuscript. TAS, MAP and NM conceived of the study. 
All authors read and approved the final manuscript. The authors would like to thank Michael Tarantino and Erin Stuckey for the help with running simulations. The authors would also like to thank the many volunteers who made their computers available to run simulations via malariacontrol.net. Compliance with ethical guidelines Competing interests The authors declare that they have no competing interests. MAP and TAS acknowledge funding by the Bill and Melinda Gates Foundation (#OPP1032350) and PATH-Malaria Vaccine Initiative (MVI). PWG is a Career Development Fellow (#K00669X) jointly funded by the UK Medical Research Council (MRC) and the UK Department for International Development (DFID) under the MRC/DFID Concordat agreement and also receives support from the Bill and Melinda Gates Foundation (#OPP1068048). DLS acknowledges funding from the Bill and Melinda Gates Foundation (#OPP1110495). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Caitlin A. Bever Present address: Institute for Disease Modeling, Bellevue, WA, 98005, USA Department of Epidemiology and Public Health, Swiss Tropical and Public Health Institute, 4051, Basel, Switzerland Melissa A. Penny, Nicolas Maire, Caitlin A. Bever, Peter Pemberton-Ross, Olivier J. T. Briët & Thomas A. Smith University of Basel, Petersplatz 1, Basel, Switzerland Department of Zoology, University of Oxford, Tinbergen Building, South Parks Road, Oxford, OX1 3PS, UK David L. Smith & Peter W. Gething Sanaria Institute of Global Health and Tropical Medicine, Rockville, MD, 20850, USA David L. Smith Melissa A. Penny Nicolas Maire Peter Pemberton-Ross Olivier J. T. Briët Peter W. Gething Thomas A. Smith Correspondence to Melissa A. Penny. 12936_2015_864_MOESM1_ESM.pdf Additional file 1. This file includes additional Tables that support and expand some of the results in the main text, but whose inclusion would detract from the main argument. Additional file 2. This file includes additional Figures of results that support and expand some of the results in the main text, but whose inclusion would detract from the main argument. Penny, M.A., Maire, N., Bever, C.A. et al. Distribution of malaria exposure in endemic countries in Africa considering country levels of effective treatment. Malar J 14, 384 (2015). https://doi.org/10.1186/s12936-015-0864-3 Case-management
Yuichi Ike (The University of Tokyo) Persistence-like distance on Tamarkin's category and symplectic displacement energy (JAPANESE) The microlocal sheaf theory due to Kashiwara and Schapira can be regarded as Morse theory with sheaf coefficients. Recently it has been applied to symplectic geometry, after the pioneering work of Tamarkin. In this talk, I will propose a new sheaf-theoretic method to estimate the displacement energy of compact subsets in cotangent bundles. In the course of the proof, we introduce a persistence-like pseudo-distance on Tamarkin's sheaf category. This is a joint work with Tomohiro Asano. Michiya Mori (Univ. Tokyo) Tingley's problem for operator algebras Kazuhiro Kuwae (Department of Applied Mathematics, Faculty of Science, Fukuoka University) Yuta Koike (Univ. Tokyo) Hiromichi Takagi (The University of Tokyo) On classification of prime Q-Fano 3-folds with only 1/2(1,1,1)-singularities and of genus less than 2 I classified prime Q-Fano threefolds with only 1/2(1,1,1)-singularities and of genus greater than 1 (2002, Nagoya Math. J.). In this talk, I will explain how the method in that paper can be extended to the case of genus less than 2. The method is the so-called two-ray game. By this method, I can classify the possibilities of such Q-Fano's. The classification is not yet completed since constructions of examples in certain cases are difficult. I will also explain some pretty examples in this talk. Norbert A'Campo (University of Basel) NUMERICAL ANALYSIS, COBORDISM OF MANIFOLDS AND MONODROMY. (ENGLISH) http://fmsp.ms.u-tokyo.ac.jp/FMSPLectures_ACampo_abst.pdf http://fmsp.ms.u-tokyo.ac.jp/FMSPLectures_ACampo.pdf Yuta Nozaki (The University of Tokyo) An invariant of 3-manifolds via homology cobordisms (JAPANESE) For a closed 3-manifold X, we consider the topological invariant defined as the minimal integer g such that X is obtained as the closure of a homology cobordism over a surface of genus g. We prove that the invariant equals one for every lens space, which is in contrast to the fact that some lens spaces do not admit any open book decomposition whose page is a surface of genus one. The proof is based on the Chebotarev density theorem and binary quadratic forms in number theory. Junha Tanaka (The University of Tokyo) Wrapping projections and decompositions of Kleinian groups (JAPANESE) Let $S$ be a closed surface of genus $g \geq 2$. The deformation space $AH(S)$ consists of (conjugacy classes of) discrete faithful representations $\rho:\pi_{1}(S) \to PSL_{2}(\mathbb{C})$. McMullen, and Bromberg and Holt showed that $AH(S)$ can self-bump, that is, the interior of $AH(S)$ has self-intersecting closure. Both of them demonstrated the existence of self-bumping under the existence of a non-trivial wrapping projection from an algebraic limit to a geometric limit which wraps an annulus cusp into a torus cusp. In this talk, given a representation $\rho$ at the boundary of $AH(S)$, we characterize a wrapping projection to a geometric limit associated to $\rho$, by the information of the actions of decomposed Kleinian groups of the image of $\rho$. Yu Kawakami (Kanazawa University) Recent topics on the study of the Gauss images of minimal surfaces In this talk, we give a survey of recent advances on the study of the images of the Gauss maps of complete minimal surfaces in Euclidean space. Tomohiro Hayase (Univ.
Tokyo) On Cauchy noise loss in a stochastic parameter optimization of random matrices Makiko Sasada (Graduate School of Mathematical Science, the University of Tokyo) Ana Caraiani (Imperial College) On the vanishing of cohomology for certain Shimura varieties (ENGLISH) I will prove that the compactly supported cohomology of certain unitary or symplectic Shimura varieties at level Gamma_1(p^\infty) vanishes above the middle degree. The key ingredients come from p-adic Hodge theory and studying the Bruhat decomposition on the Hodge-Tate flag variety. I will describe the steps in the proof using modular curves as a toy model. I will also mention an application to Galois representations for torsion classes in the cohomology of locally symmetric spaces for GL_n. This talk is based on joint work in preparation with D. Gulotta, C.Y. Hsu, C. Johansson, L. Mocz, E. Reineke, and S.C. Shih. Samuel Colin (CBPF, Rio de Janeiro, Brasil) 17:00-17:50 Quantum matter bounce with a dark energy expanding phase (ENGLISH) The ``matter bounce'' is an alternative scenario to inflationary cosmology, according to which the universe undergoes a contraction, followed by an expansion, the bounce occurring when the quantum effects become important. In my talk, I will show that such a scenario can be unambiguously analyzed in the de Broglie-Bohm pilot-wave interpretation of quantum mechanics. More specifically, I will apply the pilot-wave theory to a Wheeler-DeWitt equation obtained from the quantization of a simple classical mini-superspace model, and show that there are numerical solutions describing bouncing universes with many desirable physical features. For example, one solution contains a dark energy phase during the expansion, without the need to postulate the existence of a cosmological constant in the classical action. This work was done in collaboration with Nelson Pinto-Neto (CBPF, Rio de Janeiro, Brasil). Further details available at https://arxiv.org/abs/1706.03037. Thomas Durt (Aix Marseille Université, Centrale Marseille, Institut Fresnel) 17:50-18:40 Mass of the vacuum: a Newtonian perspective (ENGLISH) One could believe that special relativity forces us to totally renounce to the idea of an aether, but the aether reappears in general relativity which teaches us that space-time is structured by the local metrics. It also reappears in quantum field theory which teaches us that even at zero temperature space is filled by the quantum vacuum energy. Finally, aether reappears in modern astronomy where it was necessary to introduce ill-defined concepts such as dark matter and dark energy in order to explain apparent deviations from Newtonian dynamics (at the level of galactic rotation curves). Newton dynamics being the unique limit of general relativistic dynamics in the classical regime, dark matter and dark energy can be seen as an ultimate, last chance strategy, aimed at reconciling the predictions of general relativity with astronomical data. In our talk we shall describe a simple model, derived in the framework of Newtonian dynamics, aimed at explaining puzzling astronomical observations realized at the level of the solar system (Pioneer anomaly) and at the galactic scale (rotation curves), without adopting ad hoc hypotheses about the existence of dark matter and/or dark energy. The basic idea is that Newtonian gravity is modified due to the presence of a (negative) density, everywhere in space, of mass-energy. 
Jimenez Pascual Adrian (The University of Tokyo) On adequacy and the crossing number of satellite knots (JAPANESE) It has always been difficult to prove results regarding the (minimal) crossing number of knots. In particular, apparently easy problems such as knowing the crossing number of the connected sum of knots, or bounding the crossing number of satellite knots have been conjectured through decades, yet still remain open. Focusing on this latter problem, in this talk I will prove that the crossing number of a satellite knot is bounded from below by the crossing number of its companion, when the companion is adequate. Yumehito Kawashima (The University of Tokyo) A new relationship between the dilatation of pseudo-Anosov braids and fixed point theory (JAPANESE) A relation between the dilatation of pseudo-Anosov braids and fixed point theory was studied by Ivanov. In this talk we reveal a new relationship between the above two subjects by showing a formula for the dilatation of pseudo-Anosov braids by means of the representations of braid groups due to B. Jiang and H. Zheng. Federico Pasqualotto (Princeton) - Large data global solutions for the shallow water system in one space dimension http://fmsp.ms.u-tokyo.ac.jp/FMSP_180116.pdf Naoto Kaziwara (U. Tokyo) - Introduction to the maximal Lp-regularity and its applications to the quasi-linear parabolic equations Shinya Akagawa (Osaka University) Vanishing theorems of $L^2$-cohomology groups on Hessian manifolds A Hessian manifold is a Riemannian manifold whose metric is locally given by the Hessian of a function with respect to flat coordinates. In this talk, we discuss vanishing theorems of $L^2$-cohomology groups on complete Hessian Manifolds coupled with flat line bundles. In particular, we obtain stronger vanishing theorems on regular convex cones with the Cheng-Yau metrics. Further we show that the Cheng-Yau metrics on regular convex cones give rise to harmonic maps to the positive symmetric matrices. Kento Fujita (RIMS) K-stability of log Fano hyperplane arrangements (English) We completely determine which log Fano hyperplane arrangements are uniformly K-stable, K-stable, K-polystable, K-semistable or not. Narutaka Ozawa (RIMS, Kyoto University) Kazhdan's property (T) and semidefinite programming Masato Yamamichi (Department of General Systems Studies, The University of Tokyo) Theoretical approaches to understand eco-evolutionary feedbacks
Approximation of $e^{-x^2}$ I'm doing the applications of differentiation problem sheet from MIT single variable calculus and I don't understand the solution given in the question. I can solve the question using the Taylor approximation, however, I don't think that is what you're meant to do judging by the solutions. https://ocw.mit.edu/courses/mathematics/18-01sc-single-variable-calculus-fall-2010/unit-2-applications-of-differentiation/part-a-approximation-and-curve-sketching/problem-set-3/MIT18_01SC_pset2sol.pdf question 2A-12 c Martin I don't think you mean $e^{(-x)^2}$, do you? That would be the same as $e^{x^2}$. – TonyK May 15 at 14:40 And surely it's $2A$-$12c$, not $2A$-$12b$? Make an effort! – TonyK May 15 at 14:42 sorry wrote this question in a rush before I left the house – Martin May 15 at 14:51 We have: $$e^x\approx 1+x\Rightarrow e^{-x^2}\approx 1-x^2$$ Grey Fox I don't know if this is a stupid question or not, but how come you're allowed to sub in $-x^2$ into the linear approximation to get the quadratic approximation? – Martin May 15 at 14:58 Because $-x^2$ is the exponent of e. – richard1941 May 15 at 15:00 would you not use $1+x+\frac12 x^2$ as that is the quadratic approximation, then sub in $-x^2$ into that? – Martin May 15 at 15:08 And it all depends on what you mean by a quadratic approximation. Some might say that the quadratic approximation is what you get by substituting $-x^2$ into $e^x = 1+x+\frac{x^2}{2}$, which would result in a fourth degree polynomial. – richard1941 May 15 at 15:09 I thought quadratic approximation meant specifically a Taylor series up to the second derivative? ocw.mit.edu/courses/mathematics/… – Martin May 15 at 15:14 The solution actually tells you to use the Taylor series approximation for $e^x$, which is $\sum_{n=0}^{\infty}x^n/n!$, and plug in $-x^2$ for $x$ to get the approximation for $e^{-x^2}$. $$e^x\approx 1+x \land x \mapsto -x^2 \implies e^{-x^2}\approx 1-x^2$$ Turns out that this approximation looks good for $x\in \left[-0.5, 0.5 \right]$, which obviously depends on what use this approximation is being put to and what restrictions on permissible error are imposed. Paras Khosla And for better approximations far from x=0, go to the chapter on probability functions in AMS 55, the Handbook of Mathematical Functions. Alas, there is no single approximation that is good everywhere. – richard1941 May 15 at 15:13 If $x$ is small (in absolute value), then $$e^x\approx 1+x.$$ For instance, with $x=-0.01$, $$e^{-0.01}=0.99004983\approx 1-0.01.$$ But it makes no difference if we write $$e^{-x^2}\approx 1-x^2$$ and try $x=0.1$. Just two ways to write the same thing. By the way, maths don't go wrong. The derivatives of $e^x$ are $$e^x,e^x,e^x,e^x,e^x,e^x,\cdots$$ which evaluate as $1,1,1,1,1,1,\cdots$ at $x=0$, giving the Taylor coefficients $1,1,\dfrac12,\dfrac16,\dfrac1{24},\dfrac1{120},\cdots$. On the other hand, the derivatives of $e^{-x^2}$ are $$e^{-x^2},\;-2xe^{-x^2},\;(4x^2-2)e^{-x^2},\;(12x-8x^3)e^{-x^2},\;(16x^4-48x^2+12)e^{-x^2},\;(-32x^5+160x^3-120x)e^{-x^2},\cdots$$ which evaluate at $x=0$ as $$1,0,-2,0,12,0,\cdots$$ and, as they should, give the Taylor coefficients $$1,0,-1,0,\frac12,0,-\frac16,0,\frac1{24},\cdots$$ Yves Daoust
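A quick numerical check makes the range claim above concrete: since $f(x)=e^{-x^2}$ has $f(0)=1$, $f'(0)=0$ and $f''(0)=-2$, its degree-2 Taylor polynomial is exactly $1-x^2$, and the short Python sketch below (not part of the original thread) shows how the error grows with $|x|$.

    import numpy as np

    # Compare e^(-x^2) with its quadratic approximation 1 - x^2.
    for x in [0.1, 0.25, 0.5, 0.75, 1.0]:
        exact = np.exp(-x**2)
        approx = 1 - x**2
        print(f"x = {x:4.2f}   e^(-x^2) = {exact:.4f}   1 - x^2 = {approx:.4f}   "
              f"|error| = {abs(exact - approx):.4f}")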
May 2015, 9(2): 317-335. doi: 10.3934/ipi.2015.9.317 On the range of the attenuated magnetic ray transform for connections and Higgs fields Gareth Ainsworth 1, and Yernat M. Assylbekov 2, Trinity College, Cambridge, CB2 1TQ, United Kingdom Department of Mathematics, University of Washington, Seattle, WA 98195-4350, United States Received November 2013 Revised July 2014 Published March 2015 For a two-dimensional simple magnetic system, we study the attenuated magnetic ray transform $I_{A,\Phi}$, with attenuation given by a unitary connection $A$ and a skew-Hermitian Higgs field $\Phi$. We give a description for the range of $I_{A,\Phi}$ acting on $\mathbb{C}^n$-valued tensor fields. Keywords: inverse problems, ray transforms, tensor tomography, magnetic geodesics. Mathematics Subject Classification: Primary: 53C2. Citation: Gareth Ainsworth, Yernat M. Assylbekov. On the range of the attenuated magnetic ray transform for connections and Higgs fields. Inverse Problems & Imaging, 2015, 9 (2) : 317-335. doi: 10.3934/ipi.2015.9.317
Inverse Problems & Imaging, 2013, 7 (1) : 27-46. doi: 10.3934/ipi.2013.7.27 Gareth Ainsworth. The magnetic ray transform on Anosov surfaces. Discrete & Continuous Dynamical Systems - A, 2015, 35 (5) : 1801-1816. doi: 10.3934/dcds.2015.35.1801 François Monard. Efficient tensor tomography in fan-beam coordinates. Inverse Problems & Imaging, 2016, 10 (2) : 433-459. doi: 10.3934/ipi.2016007 Venkateswaran P. Krishnan, Plamen Stefanov. A support theorem for the geodesic ray transform of symmetric tensor fields. Inverse Problems & Imaging, 2009, 3 (3) : 453-464. doi: 10.3934/ipi.2009.3.453 Shui-Nee Chow, Ke Yin, Hao-Min Zhou, Ali Behrooz. Solving inverse source problems by the Orthogonal Solution and Kernel Correction Algorithm (OSKCA) with applications in fluorescence tomography. Inverse Problems & Imaging, 2014, 8 (1) : 79-102. doi: 10.3934/ipi.2014.8.79 Hiroshi Isozaki. Inverse boundary value problems in the horosphere - A link between hyperbolic geometry and electrical impedance tomography. Inverse Problems & Imaging, 2007, 1 (1) : 107-134. doi: 10.3934/ipi.2007.1.107 Yanfei Wang, Dmitry Lukyanenko, Anatoly Yagola. Magnetic parameters inversion method with full tensor gradient data. Inverse Problems & Imaging, 2019, 13 (4) : 745-754. doi: 10.3934/ipi.2019034 Herbert Egger, Manuel Freiberger, Matthias Schlottbom. On forward and inverse models in fluorescence diffuse optical tomography. Inverse Problems & Imaging, 2010, 4 (3) : 411-427. doi: 10.3934/ipi.2010.4.411 Zhenhua Zhao, Yining Zhu, Jiansheng Yang, Ming Jiang. Mumford-Shah-TV functional with application in X-ray interior tomography. Inverse Problems & Imaging, 2018, 12 (2) : 331-348. doi: 10.3934/ipi.2018015 Kaili Zhang, Haibin Chen, Pengfei Zhao. A potential reduction method for tensor complementarity problems. Journal of Industrial & Management Optimization, 2019, 15 (2) : 429-443. doi: 10.3934/jimo.2018049 Colin Guillarmou, Antônio Sá Barreto. Inverse problems for Einstein manifolds. Inverse Problems & Imaging, 2009, 3 (1) : 1-15. doi: 10.3934/ipi.2009.3.1 Sergei Avdonin, Pavel Kurasov. Inverse problems for quantum trees. Inverse Problems & Imaging, 2008, 2 (1) : 1-21. doi: 10.3934/ipi.2008.2.1 Maciej Zworski. A remark on inverse problems for resonances. Inverse Problems & Imaging, 2007, 1 (1) : 225-227. doi: 10.3934/ipi.2007.1.225 Guanghui Hu, Peijun Li, Xiaodong Liu, Yue Zhao. Inverse source problems in electrodynamics. Inverse Problems & Imaging, 2018, 12 (6) : 1411-1428. doi: 10.3934/ipi.2018059 Gareth Ainsworth Yernat M. Assylbekov
Gmat Math Practice Problems Home » Hire Someone to do GMAT Verbal » Gmat Math Practice Problems Gmat Math Practice Problems in Mathematics, Algebra and Probability, and Theories, Springer, 1988, available at A. Hiller Yong *An introduction to [*Fractals*]{} (Springer Verlag, New York, 1997). Q. Herbal *Elements of mathematical physics* (Freeman, San Jose, CA, 1980). J. see it here *Measure theory* (2nd edition, Wiley-Interscience, New York, 1996). D. Jakovac and A. A. Podlubny, *"Profitabilities and statistics of qubit entanglement breaking with weak Ising model"*, Math. Structure of the theory of Generalized Quantum Potentials, 69 J. Syst. 2, 2517–2599 (1989). A. Berg and Y. Yoshida, *Phys. Rev. E* [**55**]{}, R1074–R1094 (1997). Do My Work For Me E. Bai and C. T. Thurn, *J. Phys. C: Math. Theor.* [**47**]{}, 101201–9 (2010). I. Gokhale and R. Szalai, *Rev. Mod. Phys.* [**79**]{}, 1255–1297 (2008). Y. Fomin, *Gaps in Hamiltonians* (Springer, Berlin, 1993). J. Gaiotto, *Quantum Potentials, Electrodynamics and Adiabatic Operators* (Cambridge University Press, Cambridge, 1997). [^1]: The purpose of both presentation and later arguments are to make the connections more meaningful for the readers. While the argumentation of $\sum_{i=x}^{y}(e^{i\theta_i}-d/\theta_i)$ can be regarded as leading to a more sensible question, we require a different understanding of the relation: It should be added that even when $\theta_i$ is the characteristic $\theta$-value of a linear functional in the direction of a shift, we have made small perturbation theory, whereas when $\theta_i$ is a positive $\theta$ for which we have linear functional, $\theta_i$ seems to be effectively shifted by a positive $\theta_j$. Pay For Homework Assignments For no such "change" to happen with perturbative theory is it necessary to resort to a precise (i.e. finite) dimensionless (i.e. time dependent) shift in the direction of a particular dimensionless parameter. This need not necessarily be necessary, but was the main cause for what happens. informative post the interesting question is, "or should we be looking at a point $x$ of (loc. C) space?" A. Alvarado *Spontaneous fermionic phases in $M_2(\mathbb{R})$: Eigenfields, variational problems and a general-relation approach* Symp. Theor. Math. Phys. [**79**]{} 37–59, 6–11 (2010). C. Palapov and D. Zagier, private communication. C. J. C. Sáez *Apterelphysica e stronze. Do My Homework Online * Fortschr. Physik, Springer, Berlin, 1969 (1st Edition 1968). E. M. Fomin *Rev. Mod.Phys. Sts.* [**29**]{} 177–173 (1986). J. L. Mas' ***Elements of quantum theory*** (New York, 1962). M. C. Igoe and P. G. Ryan, *Comment. Phys. Lett.* [**53**]{} 113–119 (1976). Can I Take An Ap Exam Without Taking The Class? D. Kouza, *Fortschr. Math.* [**39**]{} 573–581 (Gmat Math Practice Problems in Mathematicians We use the following formulae for matrices: If $X = a_0^h+a_1^k$, then the following matrix equation holds for any $1 \le k \le h$: * $X' = a_0$ If $X' = a_0^hx^2 + a_1X + a'_0^hx + a'_1y^m$, will have the form if $a_0 + a_1x + a'_0^h = -1$, $(a_0, a_1, a_0), (a_0^h, a_1, a'_0)$. And if the matrix equation is $A$, every $B \in \mathbb{C}^*$ is in the polynomial ring whose components are $P^c = \begin{pmatrix} 0 & 0 & 1 \\ 1 & -1 & 0 \end{pmatrix}$, $B = (A + B)/2$. 
Hence we get a proof of the proposition: If $A$ is a scalar matrix, then the following linear combinations of $ABC$ and $-A$, $(A, -C, 0)$, $( -A, -Q, -C, 0)$, where $C$ is a non-zero vector, and click here for more = C^{-1}$ have the form If $A$ and $B$ both square-free, $a_0 = a_0^h, b_0 = a_0{\gmod}h/2$, we have the above linear combinations as $$\begin{aligned} \prod_{i=1}^h(0, 0, a_i^h x^2 + b_i^h y^m) &=\sum_{i=1}^h\{w^m \vert C^{-1}A'w^i \vert \} \\ \quad &=\sum_{i=1}^h(3-m/2),\;\;\;\;\;\; \prod_{i=1}^h\{w^m \vert C^{-1}x^i \vert – w^m \vert Aw^i \vert \} \\ \quad &= \sum_{i=1}^h(p_i^m + p_i{\gmod}h/2), \;\;\;\;\;\; \prod_{i=1}^h\{x^i \vert (1-w)^2 – (1+w)/2 \vert C^{-1}x^{i} \vert \} \\ \quad &= \prod_{i=1}^h(3-p^m/2),\;\;\;\;\;\;\; \prod_{i=1}^h\{x^i \vert (1-w)^2 – (w)/2 \vert C^{-1}x^{i} \vert \} \\ \quad &= \sum_{i=1}^h(p_i^m +p_i\mathbf{1}b_i),\;\;\;\;\;\;\; \prod_{i=1}^h\{x^i \vert b_i \vert +w^m \vert C^{-1}x^{i} \vert \}\end{aligned}$$ From the following relation of $\mathbf{1}$-series, we should have $$\prod_{i=1}^k \{x^{mk}(w)^k \vert C^{-1}x^{i} \vert \} = \prod_{w=1}^{k} (3-p^m/2)^{w^m} \vert C^{-1}x^{w} \vert^{1-mb_2} = Gmat Math Practice Problems 3:26 p., [1] Abstract: The objective term, "finitized", is a concept "having a structure like that… [present]". Its meaning is usually determined by the structure of its environment (we shall call it the "environment"). The idea is that a small set of structures interacts well with the larger network so that by knowing what is embedded in the environment, one can define an instantiated set that gets its own, i.e. set whose elements are embedded in the particular environment – perhaps in the environment is set to some object of particular size, or a set of these elements can typically be found out in the environment. Some of the obvious features by which this is done are: the reference system the group structure and so on. The objects of a system have some relationship to the environment-related sets that usually involve elements of sets other than the environment as a whole (here we assume the environment is the common ancestor of the environment and the reference system). A set but the environment must be specified in some way that is consistent with its environment, i.e. the environment have a common (generally physical) set of numbers and symbols. Generally, we would expect the object of the system to be an empty set. Take Online Courses For You Subsequently, processes to create a set that is valid can be given access to a set (a set of tuples and strings that is valid for some set to members). A Get More Info with its members can thus be set to its set members. In other words: 2 The Sets The sets are formed by the words you use in your vocabulary (those in English), words that are set and those in this vocabulary. Here's a quick example: You say that a set of words is created by "The System" (4, "Set of Words" or whatever it is) – the system is called the System – then you come to the same word "Set of Words" by which you say that a set is created by a word? The statement is generally in Latin, as opposed to English. There are two concepts of the "Set" that are connected by the concept of the system: the system and the set, the universal set and the set. A set consists roughly of elements (two distinct sets of items) and thesystem. The system consists of the items they form and the list of the items that they form. 
As the system is the well-known language, the first concept of the system is called the system of items, and the second the setof items. In this example, the system is known as the system of items, and the set which we Look At This is called the system of items. The system and the items are all the same in both languages. The system and the items are all present in the environment and are completely new in the systemand are there for the system design. Why does this all need to be the systemand set (as in the example above), and why is the system the set of items (the system of items)? Simple why is set. Secondly, simple reason for the find here Some parts of the system can't exist in real world in their whole form, but what is real world? Many places, that may only exist for their whole form, do exist and they don't exist at all. Why is the set of items always created? Another type of system that has a common component in both languages is called the set of items. Each set of items (sometimes called "members" or "members are", say the least) is part of the system. There are no rules for their operation here – in the first example, they are shown how to access an object with those members in it. The rules for accessing the members of the set are quite simple: only the items can have members. In the second example, each member may be in anyone's set but not in the set by itself. Here is the second example: We would characterize this as a change in the set, adding members to it and making it possible to see objects and associations they are based on. Pay For College Homework This is called change-and-add. When we say "set is How Long Is Gmat Quant? How To Get 40 In Verbal Gmat
A laboratory study of internal gravity waves incident upon slopes with varying surface roughness
Yu-Hao He, Bu-Ying-Chao Cheng, Ke-Qing Xia
Journal: Journal of Fluid Mechanics / Volume 942 / 10 July 2022. Published online by Cambridge University Press: 20 May 2022, A26
We report a laboratory study on the scattering, energy dissipation and mean flow induced by internal gravity waves incident upon slopes with varying surface roughness. The experiment was performed in a rectangular box filled with thermally stratified water. The roughness of the slope surface, $\lambda$, defined as the height of a roughness element over its base width, and the off-criticality $\gamma =(\alpha -\beta )/\beta$, with $\alpha$ and $\beta$ being the angles of the incident wave and the slope, are used as two control parameters. The distribution of energy dissipation in the direction normal to the slope is found to be more uniform in the rough surface cases. Counter-intuitively, both the maximum value in the dissipation profile and the total energy dissipation near the slope are reduced by surface roughness under most circumstances. The measured peak width (the full width at half-maximum of the peaks) of the dissipation profile is found to be broadened significantly in the rough surface cases. We also observed that there exists a non-zero optimal off-criticality ($\gamma =0.17$ for the present measurement resolution) for the normalized average dissipation and total dissipation, which may be due to the strongest wave energy near the slope at this $\gamma$. Unlike surface roughness, the off-criticality has a small effect on the distribution of energy dissipation. Moreover, surface roughness is also found to change the structure of the scattering-induced mean flow and enhance its strength. The present study provides new perspectives on how the surface roughness on topographic features influences energy dissipation.
Universal fluctuations in the bulk of Rayleigh–Bénard turbulence
Yi-Chao Xie, Bu-Ying-Chao Cheng, Yun-Bing Hu, Ke-Qing Xia
Journal: Journal of Fluid Mechanics / Volume 878 / 10 November 2019. Published online by Cambridge University Press: 06 September 2019, R1
We present an investigation of the root-mean-square (r.m.s.) temperature $\sigma_{T}$ and the r.m.s. velocity $\sigma_{w}$ in the bulk of Rayleigh–Bénard turbulence, using new experimental data from the current study and experimental and numerical data from previous studies. We find that, once scaled by the convective temperature $\theta_{\ast}$, the value of $\sigma_{T}$ at the cell centre is a constant ($\sigma_{T,c}/\theta_{\ast}\approx 0.85$) over a wide range of the Rayleigh number ($10^{8}\leqslant Ra\leqslant 10^{15}$) and the Prandtl number ($0.7\leqslant Pr\leqslant 23.34$), and is independent of the surface topographies of the top and bottom plates of the convection cell. A constant close to unity suggests that $\theta_{\ast}$ is a proper measure of the temperature fluctuation in the core region. On the other hand, $\sigma_{w,c}/w_{\ast}$, the vertical r.m.s. velocity at the cell centre scaled by the convective velocity $w_{\ast}$, shows a weak $Ra$-dependence (${\sim}Ra^{0.07\pm 0.02}$) over $10^{8}\leqslant Ra\leqslant 10^{10}$ at $Pr\sim 4.3$ and is independent of plate topography. Similar to a previous finding by He & Xia (Phys. Rev. Lett., vol. 122, 2019, 014503), we find that the r.m.s. temperature profile $\sigma_{T}(z)/\theta_{\ast}$ in the region of the mixing zone with a mean horizontal shear exhibits a power-law dependence on the distance $z$ from the plate, but now the universal profile applies to both smooth and rough surface topographies and over a wider range of $Ra$. The vertical r.m.s. velocity profile $\sigma_{w}(z)/w_{\ast}$ obeys a logarithmic dependence on $z$. The study thus demonstrates that the typical scales for the temperature and the velocity are the convective temperature $\theta_{\ast}$ and the convective velocity $w_{\ast}$, respectively. Finally, we note that $\theta_{\ast}$ may be utilised to study the flow regime transitions in ultrahigh-$Ra$-number turbulent convection.
Subsection 6.3.2: Existence of Localizations (cite) 6.3.2 Existence of Localizations Our goal in this section is to prove the following: Proposition 6.3.2.1 (Existence of Localizations). Let $\operatorname{\mathcal{C}}$ be a simplicial set and let $W$ be a collection of edges of $\operatorname{\mathcal{C}}$. Then there exists an $\infty $-category $\operatorname{\mathcal{D}}$ and a morphism of simplicial sets $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ which exhibits $\operatorname{\mathcal{D}}$ as a localization of $\operatorname{\mathcal{C}}$ with respect to $W$. Remark 6.3.2.2 (Uniqueness of Localizations). Let $\operatorname{\mathcal{C}}$ be a simplicial set and let $W$ be a collection of edges of $\operatorname{\mathcal{C}}$. Proposition 6.3.2.1 asserts that there exists an $\infty $-category $\operatorname{\mathcal{D}}$ and a morphism $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ which exhibits $\operatorname{\mathcal{D}}$ as a localization of $\operatorname{\mathcal{C}}$ with respect to $W$. In this case, for every $\infty $-category $\operatorname{\mathcal{E}}$, composition with $F$ induces a bijection \[ \operatorname{Hom}_{ \mathrm{h} \mathit{\operatorname{Cat}_{\infty }}}( \operatorname{\mathcal{D}}, \operatorname{\mathcal{E}}) = \pi _0( \operatorname{Fun}( \operatorname{\mathcal{D}}, \operatorname{\mathcal{E}})^{\simeq } ) \rightarrow \pi _0( \operatorname{Fun}(\operatorname{\mathcal{C}}[W^{-1}], \operatorname{\mathcal{E}})^{\simeq } ) \] (Proposition 6.3.1.13). In other words, the $\infty $-category $\operatorname{\mathcal{D}}$ corepresents the functor \[ \mathrm{h} \mathit{\operatorname{Cat}_{\infty }} \rightarrow \operatorname{Set}\quad \quad \operatorname{\mathcal{E}}\mapsto \pi _0( \operatorname{Fun}(\operatorname{\mathcal{C}}[W^{-1}], \operatorname{\mathcal{E}})^{\simeq } ). \] It follows that $\operatorname{\mathcal{D}}$ is uniquely determined (up to canonical isomorphism) as an object of the homotopy category $\mathrm{h} \mathit{\operatorname{Cat}_{\infty }}$. We will sometimes emphasize this uniqueness by referring to $\operatorname{\mathcal{D}}$ as the localization of $\operatorname{\mathcal{C}}$ with respect to $W$, and denoting it by $\operatorname{\mathcal{C}}[W^{-1}]$. Beware that the localization $\operatorname{\mathcal{C}}[W^{-1}]$ is not well-defined up to isomorphism as a simplicial set: in fact, any equivalent $\infty $-category can also be regarded as a localization of $\operatorname{\mathcal{C}}$ with respect to $W$ (Remark 6.3.1.19). Warning 6.3.2.3. Let $\operatorname{\mathcal{C}}$ be a simplicial set, let $W$ be a collection of edges of $\operatorname{\mathcal{C}}$, and let $\operatorname{\mathcal{E}}$ be an $\infty $-category. We have now given two different definitions for the $\infty $-category $\operatorname{Fun}( \operatorname{\mathcal{C}}[W^{-1}], \operatorname{\mathcal{E}})$: According to Notation 6.3.1.1, $\operatorname{Fun}( \operatorname{\mathcal{C}}[W^{-1}], \operatorname{\mathcal{E}})$ denotes the full subcategory of $\operatorname{Fun}(\operatorname{\mathcal{C}}, \operatorname{\mathcal{E}})$ spanned by those diagrams $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{E}}$ which carry each edge of $W$ to an isomorphism in $\operatorname{\mathcal{E}}$. 
By the convention of Remark 6.3.2.2, $\operatorname{\mathcal{C}}[W^{-1}]$ denotes an $\infty $-category equipped with a diagram $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{C}}[W^{-1}]$ which exhibits $\operatorname{\mathcal{C}}[W^{-1}]$ as a localization of $\operatorname{\mathcal{C}}$ with respect to $W$. We can then consider the $\infty $-category of functors from $\operatorname{\mathcal{C}}[W^{-1}]$ to $\operatorname{\mathcal{E}}$, which we will temporarily denote by $\operatorname{Fun}'( \operatorname{\mathcal{C}}[W^{-1}], \operatorname{\mathcal{E}})$. Beware that these $\infty $-categories are not identical. However, they are equivalent: if $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{C}}[W^{-1}]$ exhibits $\operatorname{\mathcal{C}}[W^{-1}]$ as a localization of $\operatorname{\mathcal{C}}$ with respect to $W$, then composition with $F$ induces an equivalence of $\infty $-categories $\operatorname{Fun}( \operatorname{\mathcal{C}}[W^{-1}], \operatorname{\mathcal{E}}) \rightarrow \operatorname{Fun}'( \operatorname{\mathcal{C}}[W^{-1}], \operatorname{\mathcal{E}})$ (Proposition 6.3.1.13). Note that the $\infty $-category $\operatorname{Fun}( \operatorname{\mathcal{C}}[W^{-1}], \operatorname{\mathcal{E}})$ does not depend on any auxiliary choices: it is well-defined up to equality as a simplicial subset of $\operatorname{Fun}(\operatorname{\mathcal{C}}, \operatorname{\mathcal{E}})$. By contrast, the $\infty $-category $\operatorname{Fun}'( \operatorname{\mathcal{C}}[W^{-1}],\operatorname{\mathcal{E}})$ depends on the choice of the functor $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{C}}[W^{-1}]$ (and is therefore well-defined up to equivalence, but not up to isomorphism). Our proof of Proposition 6.3.2.1 will make use of the following: Lemma 6.3.2.4. Let $Q$ be a contractible Kan complex, let $e: \Delta ^1 \hookrightarrow Q$ be a monomorphism of simplicial sets, and let $W = \{ \operatorname{id}_{ \Delta ^1} \} $ consist of the single nondegenerate edge of $\Delta ^1$. Then, for any $\infty $-category $\operatorname{\mathcal{E}}$, precomposition with $e$ induces a trivial Kan fibration of simplicial sets \[ \theta : \operatorname{Fun}(Q, \operatorname{\mathcal{E}}) \rightarrow \operatorname{Fun}( \Delta ^1[ W^{-1}], \operatorname{\mathcal{E}}) = \operatorname{Isom}(\operatorname{\mathcal{E}}). \] Proof. Since $e$ is a monomorphism, Corollary 4.4.5.3 immediately implies that $\theta $ is an isofibration when regarded as a functor from $\operatorname{Fun}(Q,\operatorname{\mathcal{E}})$ to $\operatorname{Fun}( \Delta ^1, \operatorname{\mathcal{E}})$. Using the pullback diagram \[ \xymatrix@R =50pt@C=50pt{ \operatorname{Fun}(Q, \operatorname{\mathcal{E}}) \ar [d]^{\theta } \ar [r] & \operatorname{Fun}(Q, \operatorname{\mathcal{E}}) \ar [d]^{\theta } \\ \operatorname{Isom}(\operatorname{\mathcal{E}}) \ar [r] & \operatorname{Fun}( \Delta ^1, \operatorname{\mathcal{E}}), } \] we deduce that $\theta $ is also an isofibration when regarded as a functor from $\operatorname{Fun}(Q, \operatorname{\mathcal{E}})$ to $\operatorname{Isom}(\operatorname{\mathcal{E}})$. Consequently, to show that $\theta $ is a trivial Kan fibration, it will suffice to show that it is an equivalence of $\infty $-categories (Proposition 4.5.5.20). In other words, we are reduced to proving that the morphism $e$ exhibits $Q$ as a localization of $\Delta ^1$ with respect to $W$. Let $q: Q \rightarrow \Delta ^0$ denote the projection map. 
Since $Q$ is contractible, the morphism $q$ is an equivalence of $\infty $-categories. By virtue of Remark 6.3.1.19, we are reduced to proving that the composite map $\Delta ^1 \xrightarrow {e} Q \xrightarrow {q} \Delta ^0$ exhibits $\Delta ^0$ as a localization of $\Delta ^1$ with respect to $W$, which follows from Example 6.3.1.14. $\square$ We will deduce Proposition 6.3.2.1 from the following more precise result: Proposition 6.3.2.5. Let $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be a morphism of simplicial sets, where $\operatorname{\mathcal{D}}$ is an $\infty $-category. Let $W$ be a collection of edges of $\operatorname{\mathcal{C}}$ such that, for each $w \in W$, the image $F(w)$ is an isomorphism in $\operatorname{\mathcal{D}}$. Then $F$ factors as a composition \[ \operatorname{\mathcal{C}}\xrightarrow {G} \operatorname{\mathcal{C}}[W^{-1}] \xrightarrow {H} \operatorname{\mathcal{D}}, \] where $G$ exhibits $\operatorname{\mathcal{C}}[W^{-1}]$ as a localization of $\operatorname{\mathcal{C}}$ with respect to $W$ and $H$ is an inner fibration (so that $\operatorname{\mathcal{C}}[W^{-1}]$ is also an $\infty $-category). Moreover, this factorization can be chosen to depend functorially on the diagram $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ and the collection of edges $W$, in such a way that the construction $(F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}, W) \mapsto \operatorname{\mathcal{C}}[W^{-1}]$ commutes with filtered colimits. Proof. For each element $w \in W$, the image $F(w)$ can be regarded as a morphism from $\Delta ^1$ to the core $\operatorname{\mathcal{D}}^{\simeq }$. By virtue of Proposition 3.1.7.1, we can (functorially) choose a factorization of this morphism as a composition \[ \Delta ^1 \xrightarrow { i_{w} } Q_{w} \xrightarrow { q_{w} } \operatorname{\mathcal{D}}^{\simeq }, \] where $i_{w}$ is anodyne and $q_{w}$ is a Kan fibration. Since $\operatorname{\mathcal{D}}^{\simeq }$ is a Kan complex, $Q_{w}$ is also a Kan complex, which is contractible by virtue of the fact that $i_{w}$ is anodyne. Form a pushout diagram of simplicial sets \[ \xymatrix@R =50pt@C=50pt{ \coprod _{w \in W} \Delta ^1 \ar [r] \ar [d]^{\coprod _{w \in W} i_ w} & \operatorname{\mathcal{C}}\ar [d]^{i} \\ \coprod _{w \in W} Q_ w \ar [r] & \operatorname{\mathcal{C}}'. } \] We first claim that $i: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{C}}'$ exhibits $\operatorname{\mathcal{C}}'$ as a localization of $\operatorname{\mathcal{C}}$ with respect to $W$. Let $\operatorname{\mathcal{E}}$ be an $\infty $-category. Note that if $G: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{E}}$ is a morphism of simplicial sets which factors through $\operatorname{\mathcal{C}}'$, then for each $w \in W$ the morphism $G(w)$ belongs to the image of a functor $Q_ w \rightarrow \operatorname{\mathcal{E}}$, and is therefore an isomorphism in $\operatorname{\mathcal{E}}$. It follows that composition with $i$ induces a functor $\theta : \operatorname{Fun}( \operatorname{\mathcal{C}}', \operatorname{\mathcal{E}}) \rightarrow \operatorname{Fun}( \operatorname{\mathcal{C}}[W^{-1}], \operatorname{\mathcal{E}})$, and we wish to show that $\theta $ is an equivalence of $\infty $-categories. 
This follows by inspecting the commutative diagram \[ \xymatrix@R =50pt@C=50pt{ \operatorname{Fun}(\operatorname{\mathcal{C}}', \operatorname{\mathcal{E}}) \ar [r]^-{\theta } \ar [d] & \operatorname{Fun}(\operatorname{\mathcal{C}}[W^{-1}], \operatorname{\mathcal{E}}) \ar [r] \ar [d] & \operatorname{Fun}(\operatorname{\mathcal{C}}, \operatorname{\mathcal{E}}) \ar [d] \\ \prod _{w \in W} \operatorname{Fun}(Q_ w,\operatorname{\mathcal{E}}) \ar [r]^-{\theta '} & \prod _{w \in W} \operatorname{Isom}(\operatorname{\mathcal{E}}) \ar [r] & \prod _{w \in W} \operatorname{Fun}(\Delta ^1, \operatorname{\mathcal{E}}). } \] The outer rectangle is a pullback square by the definition of $\operatorname{\mathcal{C}}'$, and the right square is a pullback by the definition of $\operatorname{Fun}( \operatorname{\mathcal{C}}[W^{-1}], \operatorname{\mathcal{E}})$. It follows that the left square is also a pullback. Lemma 6.3.2.4 implies that $\theta '$ is a trivial Kan fibration, so that $\theta $ is also a trivial Kan fibration (hence an equivalence of $\infty $-categories by Proposition 4.5.3.11). Note that the morphism $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ and the collection of morphisms $\{ q_{w}: Q_ w \rightarrow \operatorname{\mathcal{D}}^{\simeq } \subseteq \operatorname{\mathcal{D}}\} _{w \in W}$ can be amalgamated to a single morphism of simplicial sets $F': \operatorname{\mathcal{C}}' \rightarrow \operatorname{\mathcal{D}}$. Applying Proposition 4.1.3.2, we can (functorially) factor $F'$ as a composition $\operatorname{\mathcal{C}}' \xrightarrow {G'} \operatorname{\mathcal{C}}[W^{-1}] \xrightarrow {H} \operatorname{\mathcal{D}}$, where $G'$ is inner anodyne and $H$ is an inner fibration. We conclude by observing that the composite map $G = (G' \circ i): \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{C}}[W^{-1}]$ exhibits $\operatorname{\mathcal{C}}[W^{-1}]$ as a localization of $\operatorname{\mathcal{C}}$ with respect to $W$, by virtue of Remark 6.3.1.19. $\square$ Proof of Proposition 6.3.2.1. Apply Proposition 6.3.2.5 in the special case $\operatorname{\mathcal{D}}= \Delta ^{0}$. $\square$
Research | Open | Published: 09 May 2018 Research on multi-constellation GNSS compatible acquisition strategy based on GPU high-performance operation Chengjun Guo ORCID: orcid.org/0000-0002-1466-29751, Bingyan Xu2 & Zhong Tian2 With the continuous development of satellite navigation, how to make full use of the compatibility and interoperability among the four constellations deserves our deep thinking. Signal acquisition is a critical technology that affects the performance of GNSS receivers. However, its implementation has high requirements on both resource consumption and processing time. In recent years, graphics processing unit (GPU) technology with a large number of parallel processing units is gradually applied to the navigation field. In this paper, a multi-constellation-compatible capture strategy based on GPU is proposed. Compared with the traditional GNSS signal processing, the design has better flexibility and portability, and the acquisition parameters of the system can be configured flexibly through the interface. The experimental results show that the proposed scheme makes full use of the powerful parallel processing capability of GPU, which greatly reduces the acquisition time and improves the efficiency of signal processing, with the increase of data processing. In addition, as the amount of signal processing data increases, the advantages of the CPU high-performance computing platform will be more obvious. Recently, GNSS applications will face a revolution due to the construction and modernization of global navigation satellite systems (GNSS), and research on multi-system and multi-frequency compatible receivers has become the trend. Integrated navigation technology can greatly improve the reliability and the precise of navigation system [1]. For the GNSS multi-constellation navigation system, because the four satellite navigation systems (GPS, GLONASS, Galileo signal, and Beidou) have different signal characteristics, including modulation mode, carrier frequency, and the generation of pseudo random noise code (PRN) code, making the consideration between the four satellite systems compatibility and interoperability is very important in the design of the GNSS receiver [2]. Interoperability refers to the use of the same receiver system without any hardware modifications and only needs to set up software to receive signals from the four systems at the same time. Signal processing is the core part of GNSS receiver, which is mainly composed of signal capture and signal tracking. In the initial stage of the development of the receiver, it is mainly implemented by hardware. Then, the technology of digital signal processing (DSP), field programmable gate array (FPGA), and advanced RISC machines (ARM) have been gradually applied to the design of the signal sampling or processing part of the GNSS software receiver [3,4,5,6,7,8]. Compared to the traditional hardware receiver using application-specific integrated circuit (ASIC), FPGA or DSP offer flexibility similar to software and speed similar to hardware, but there are difficulties in implementing high-complexity calculations, and they are still expensive niche products for dedicated applications [9, 10]. In addition, the flexibility of receivers based on FPGA, DSP, or FPGA+DSP is still very limited. At the same time, the central processing unit (CPU) has gradually been used in the design of the signal processing part of the GNSS software receiver [11]. 
But limited by the framework of CPU's own structure, the efficiency of FFT processing is not ideal, which makes it difficult to realize real-time processing. Until recently, the graphics processing unit (GPU) technology with high-performance parallel computing has gradually developed [12,13,14]. GPU is very different from a CPU dedicated to general sequential tasks, which has the characteristics of high parallel number and large data throughput. GPU includes hundreds to thousands of special purpose processors for graphic processing and aims to carry out programming so that the same function can be performed on multiple data [15]. With the continuous development of GPU technology, its application range extends from the graphic field to more high-performance computing areas that require very strong computing power [16, 17]. The GPU has been expected to be another way for implementation which allows to realize the GNSS radio [18]. In [19], a novel GPU-based correlator architecture for GNSS software receivers is proposed. In [20], scholars have studied the GPU-based real-time GPS software receiver. It can be seen that the existing research results prove that GPU technology can be applied to the signal processing part of a software receiver and bring the improvement of signal processing efficiency, but the existing results mostly focus on single constellation signal processing. On this basis, this paper makes a more in-depth study of the capture technology that is compatible with multiple constellation signal capabilities. A good compatible signal capture strategy can improve not only the acquiring performance of receiver but also the processing capabilities of weak GNSS signal and greatly reduce the consumption of resources. However, the difference of the signal system of each system brings some difficulties to the design of the compatible capture strategy. The compatible acquisition strategy based on GPU architecture proposed in this paper can not only capture GPS and other single constellation signals but also achieve compatible acquisition of four major system signals with different code lengths, code periods, or code rates by configuring system parameters. In addition, the design makes full use of the advantages of GPU parallel processing and improves the efficiency of signal acquisition. The experimental results show that the successful capture of the eight satellites takes only 2.312 s. Analysis of acquisition algorithm The purpose of signal acquisition is to determine the satellite that can be observed and to estimate the rough code phase and carrier Doppler shift [21, 22]. The essence of the capture is to use the strong auto-correlation of the pseudo random noise (PRN) code in the navigation signal to identify the navigation signal from the noise [23]. In this paper, the fast acquisition method based on fast Fourier transformation (FFT) is adopted. The core is to transform the time-domain correlation computation between the received signal and the local pseudo code into the multiplication calculation in frequency domain by using the circle correlation theorem, thus greatly reducing the acquisition time. The principle of the frequency domain capture algorithm based on FFT is shown in Fig. 1. Principle of capture algorithm. The figure describes the principles and processes of frequency domain capture algorithm based on FFT Usually, when the algorithm is used to capture the signal, the number of arithmetic points should be an integer power of 2. 
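To make the flow in Fig. 1 concrete, a minimal CUDA/cuFFT sketch of one Doppler bin of the frequency-domain (parallel code-phase) search is given below. It assumes the 1 ms signal block has already been mixed with a candidate carrier and copied to GPU memory as complex samples; the function and buffer names are illustrative assumptions and are not taken from the authors' implementation.

#include <cufft.h>
#include <cuda_runtime.h>

// Element-wise multiplication of the signal spectrum by the conjugate code spectrum;
// one thread per frequency bin.
__global__ void mul_conj(const cufftComplex* sig_f, const cufftComplex* code_f,
                         cufftComplex* out_f, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        cufftComplex a = sig_f[i], b = code_f[i];
        out_f[i].x = a.x * b.x + a.y * b.y;   // real part of a * conj(b)
        out_f[i].y = a.y * b.x - a.x * b.y;   // imaginary part of a * conj(b)
    }
}

// One Doppler bin of the search: corr = IFFT( FFT(signal) .* conj(FFT(code)) ).
// The code-phase estimate is the index of the largest |corr|^2 sample.
void correlate_one_bin(cufftHandle plan, cufftComplex* d_sig, cufftComplex* d_code,
                       cufftComplex* d_corr, int n)
{
    cufftExecC2C(plan, d_sig,  d_sig,  CUFFT_FORWARD);   // FFT of the received block
    cufftExecC2C(plan, d_code, d_code, CUFFT_FORWARD);   // FFT of the local PRN code
    int threads = 256, blocks = (n + threads - 1) / threads;
    mul_conj<<<blocks, threads>>>(d_sig, d_code, d_corr, n);
    cufftExecC2C(plan, d_corr, d_corr, CUFFT_INVERSE);   // back to the code-phase domain
    // peak search over |d_corr|^2 (e.g. a parallel reduction) is omitted here
}

In practice the forward transform of the local code can be computed once per satellite and cached, so that only two transforms are needed per Doppler bin; the sketch keeps all three to mirror the circular-correlation identity directly. Note also that a conventional radix-2 implementation of these transforms would require the block length n to be a power of two.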
Therefore, it is necessary to process the data through a data pre-processing unit before the FFT operation. But this paper uses GPU high-performance computing platform, its internal cuFFT system kernel function library will automatically optimize and accelerate 2a × 3b × 5c × 7d. So there is no need to consider whether the length of the processed data meets the basic 2-FFT operation. Therefore, the data pre-processing operation designed in this paper is mainly used to improve the compatibility of GNSS signal acquisition, so that it can bring the data processing efficiency of the receiver and improve the performance and speed of the signal capture. Principle of compatible acquisition This section gives an analysis of the principle of GNSS signal compatibility acquisition based on GPU technology. Whether they capture GPS signals or other constellation signals, they all have the same capture and acquisition process, but there are also some differences caused by different signal characteristics [24]. In this paper, GPS L1 C/A, Beidou B1I, Galileo E1B, and GLONASS L1 signals are studied as examples. The difference between the four signals is shown in Table 1. It can be seen from the table that the differences of the four large system signals include the code length, the code rate, the modulation mode, and the way of the RPN code generation. Especially in the modulation mode, different from the traditional BPSK signal, the auto-correlation function of the BOC signal has many peaks, and it is necessary to pay attention to it. Therefore, in the design process of the compatible capture strategy, the similarities and differences of the system signal should be fully considered. Table 1 The difference between four major constellation systems In this design, in order to make the system compatible with the signals with different characteristics, the common part is integration design into a public module, the differential part (PRN code generator module and data processing module) are designed into an independent sub module, and realizes the seamless connection between the common module and the difference between modules, so as to improve the system code reuse rate and data processing efficiency. As shown in Fig. 1, the system contains two data pre-processing modules, one of which is data pre-processing for the output of the PRN code generator module and the other one is the pretreatment of the intermediate frequency (IF) sampling data. The former uses the method of up-sampling to achieve the expansion of the local PRN code, so that it can meet the needs of acquisition all of GNSS signal. According to the Nyquist sampling theorem, this paper define the bit wide of data as two times of 4.092 MHz, which means that the module will produce PRN code data with a frequency of 8.182 MHz. Therefore, during the process of capture, the length of the pseudo-code data is 8182 after the pre-processing of each input 1 ms. And the other pre-processing module mainly uses the methods of down-sampling and sub-sampling to reduce the sampling rate of the IF data, so that the width of input GNSS signal can meet the requirements of the compatible capture. In this paper, the short-time coherent integration algorithm is used to reduce the sampling data of the intermediate frequency of the GNSS signal. Usually, satellite data contains a number of satellite signals, and one of the satellite signals is used for analysis. 
Suppose that the IF sampling data S(t) obtained in the satellite with PRN K and the result of the multiplication of the co-phase component and the quadrature component in the local carrier generator are seen as two intermediate variables S i (t) and Sq(t), that is $$ {S}_I(t)=s(t)\cdot \sqrt{2}\cos \left[\left({\omega}_{IF}+\Delta \omega \right)t\right]={D}_k\left({nt}_s\right){C}_k\left({nt}_s\right)\cos \left(\Delta \omega t\right) $$ $$ {S}_{\mathrm{Q}}(t)=s(t)\cdot \sqrt{2}\sin \left[\left({\omega}_{IF}+\Delta \omega \right)t\right]={D}_k\left({nt}_s\right){C}_k\left({nt}_s\right)\sin \left(\Delta \omega t\right) $$ Here, ts is the sampling period, showing N ts = 1 ms, where C k is the PRN code, D k is the bits of the navigation message, and ω is the Doppler frequency. If the IF signal S(t) after ADC has the same frequency with the local carrier generator, then ω is 0. When the short-time PRN code does not jump, the S i (t) and the Sq(t) values are constant, the strength of accumulated data block signal has been enhanced; when the frequency is not at the same time, the S i (t) and Sq(t) values of the existence of a Doppler frequency sine and cosine component of ω, the cumulative signal strength reduce, which means not related to accumulated signal data in the block strength has been weakened. When ω = 0 and short-time PRN code jumping change takes place, such a situation only takes place for once in the complete 1 RRN code period and it will not cause an influence on the PRN code signal treatment in the entire period. Design of compatible capture strategy based on GPU Based on the above discussion, this article presents a compatible capture structure based on GPU, whose structure is shown in Fig. 2. Depending on the CPU platform, GNSS signal acquisition can be carried out in parallel by multithreading. Each thread can reuse the signal acquisition module, so that different data resources can be processed by using the same instructions. The overall structure diagram of acquisition system. The figure shows the six core modules of the multi-constellation-compatible capture system and the relationship between each module As shown in the diagram, the system is mainly composed of channel parameter control setting module, related energy accumulation and signal acquisition module, pseudo-code generator, sampling date memory, and two data pre-processing modules. Among them, the first input first output (FIFO) and channel parameter control are used to store the data and control parameters for each channel, respectively. The multi-channel PRN generator is responsible for producing a local pseudo code of each system, and the pseudo code should be sent into the data pre-processing module to realize the bit expansion of the code data. The sampling data memory is used to store the 1-ms satellite signal to be captured, and the output data of this module also needs to be preprocessed. Two data pre-processing modules and a related energy accumulation and signal acquisition module are designed to realize the parallel treatment for GNSS signal capture, so as to achieve the features of multi-logic channel capture and multi-functional reuse. Because the official ICD document of the Galileo system does not give the way of its PRN code generation, the PRN code generation of Galileo signal can only be realized by register storage. Therefore, the system's pseudo-code generator module includes a universal code generator and a memory code controller. 
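As an illustration of the IF-data pre-processing described above, the following CUDA kernel sketches how carrier wipe-off and short-time coherent integration can be fused: every R consecutive samples are accumulated into a single complex output point, lowering the effective sampling rate before the FFT-based search while preserving the signal energy as long as the PRN chip does not change within a block. The kernel name, its parameters and the choice to fold the carrier mixing into the same kernel are assumptions made for illustration rather than details taken from the paper; the constant square-root-of-two scaling in the expressions above is dropped, since acquisition only compares relative correlation energy.

#include <cuda_runtime.h>

// Fused carrier wipe-off and short-time coherent integration (down-sampling by R).
// s        : real digitized IF samples
// if_freq  : nominal intermediate frequency in Hz
// doppler  : candidate Doppler offset in Hz
// ts       : sampling period in seconds
// R        : number of consecutive samples summed per output point
// out      : complex (I, Q) output at the reduced rate, n_out points
__global__ void block_integrate(const float* s, float if_freq, float doppler,
                                float ts, int R, int n_out, float2* out)
{
    int k = blockIdx.x * blockDim.x + threadIdx.x;
    if (k >= n_out) return;
    float i_acc = 0.0f, q_acc = 0.0f;
    for (int m = 0; m < R; ++m) {
        int   idx = k * R + m;
        float ph  = 2.0f * 3.14159265f * (if_freq + doppler) * idx * ts;
        i_acc += s[idx] * cosf(ph);   // in-phase accumulation, as in S_I(t) above
        q_acc += s[idx] * sinf(ph);   // quadrature accumulation, as in S_Q(t) above
    }
    out[k] = make_float2(i_acc, q_acc);
}

Here R would be chosen so that the reduced sampling rate matches the fixed capture width used for the up-sampled local code, which is what allows signals with different native sampling rates and code rates to share the same correlation and acquisition module.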
The former is used to generate PRN codes for GPS, BDS, and GLONASS systems, and the latter is used in the Galileo system. The flow of the GNSS compatible acquisition algorithm is shown in Fig. 3. The capture module performs the signal capture operation of the GNSS based on the captured execution command information received by the system load monitoring module. When the capture phase begins, the acquisition module acquires the state of signal acquisition from the interior of the GNSS receiver system. During the process of capturing, the receiver depending on the GPU high-performance processor concurrently opens the Kernel function of multiple cores, and each Kernel function performs PRN code acquisition in a frequency domain for a satellite. After the capture is finished, the system stores the capture results and transmits them to the tracking channels for subsequent signal processing. The system also makes full use of the flow processing mechanism of the GPU high-performance computing platform. In the process of acquisition processing, another thread can execute signal data from the CPU memory to the GPU memory copy operation, so as to make more efficient capture. Flow chart of acquisition algorithm. The figure shows a complete capture process for the multi-constellation compatibility capture strategy designed in this article In this design, the related energy accumulation and signal acquisition algorithm modules are designed into separate thread safety structures, so as to achieve parallel acquisition of multiple signals without confusion. In addition, the structure can independently carry out coherent accumulation and incoherent accumulation of data point by point. The processing process of the correlation energy accumulation and frequency domain capture of a logical channel is shown in the Fig. 4. Procedure of logical channel. The figure shows the processing process of the correlation energy accumulation and frequency domain capture of a logical channel in the system Summary of system characteristics and compatibility Based on the above design structure and processing flow, the design scheme proposed in this paper can implement the different system-compatible acquisition according to different satellite navigation signals. Once acquisition is achieved, a rough estimate value of the code phase and Doppler can be obtained. The compatible capture designed in this article has the following characteristics. Multi-channel parallel search: Based on the GPU platform, the GNSS signal capture is carried out in parallel by multithreading. And each thread reuses the signal capture module to perform the different dataproessing with the same instruction. Support different signals with 1023 integer multiples of the code length: The up-sampling of the PRN code through the data pretreatment module which ensures the width of the local pseudo-code data meets the design requirements of wide energy accumulation and signal capture module, according to the Nyquist criteria set receiver capture width 8192, making it suitable for all the GNSS signal capture requirements. In addition, the length of the correlative integral time is set by the channel parameters, so that the system can support the acquisition of signals with different code periods. Compatible with BPSK and BOC modulation modes: The use of a single logical channel can support BPSK signal acquisition. 
If the two logic channel carrier frequencies are set to the two peak values of the BOC signal and the incoherent cumulative results of the two are added together, the BOC signal can be captured. Support different code rate and sampling rate signals: The two data pre-processing modules mentioned in the previous article are the key to the implementation of this part of the compatibility Support the fine parallel acquisition of carrier Doppler frequency: By increasing the number of the FFT transform of the signal after dispreading, the finer Doppler frequency estimation can be obtained, which is helpful for tracking a channel to track the signal more accurately. The number of code phase search is variable: By setting the channel parameters, the number of processing points and the number of code phase slips can be controlled, in order to achieve a variable number of code phase search. Accelerating capture by stream processing: Performance computing platform, when the GNSS signal capture operations, another thread will copy the GNSS signal data from the CPU memory to the GPU memory operation, thus causes the receiver to capture more quickly and efficiently. In addition, one or several satellite systems can be selectively captured by the system load monitor to improve capture efficiency. System prototype The GNSS-compatible acquisition strategy proposed in this paper is based on the GPU high-performance computing platform, mainly built by NVIDIA GeForce GTX 850M graphics card and developed by compute unified device architecture (CUDA) integrated development kit on Visual Studio2010 platform. CUDA is a common parallel computing architecture launched by NVIDIA in 2006, which enables GPU to solve complex computing problems [25]. The system prototype designed based on GPU architecture proposed in this paper is mainly divided into the display layer, the processing layer, the interface layer, and the device layer, as shown in Fig. 5. Display layer: It provides the user with a simple system parameter configuration interface and present processing results. Processing layer: The processing requests submitted by the user are invoked efficiently through a good interface, and the capture processing of different signals can be executed according to the difference in the system settings. In addition, it makes full use of the advantages of GPU parallel operations to process signals and import programs in a modular way. Interface layer: It is used to specify the access methods of the corresponding devices and files, as well as the format of the data storage. Equipment layer: It is the lowest level of the whole system. The system is a high-performance computing platform based on GPU, where GPU is responsible for massive data high-performance parallel computing, and CPU is responsible for process control of program execution. In order to achieve the compatible capture of GNSS signals, the key and difficult point of this design is the signal capture part of the system processing layer. The specific development environment is shown as follows: Operating system: Windows10 x64 pro CPU: Intel(R) Core(TM) i7-4710MQ GPU: NVIDIA(R) GeForce(R) GTX 850M。 CUDA version: Version 7.5.18 System prototype architecture diagram. 
The figure shows the system prototype of the compatible capture system designed in this article, which is mainly composed of the display layer, the processing layer, the interface layer, and the device layer System implementation and test Here, the capture structure based on GPU is tested, and the system's capture test interface is shown in Fig. 6. Before capturing, users must configure relevant information, including intermediate frequency, sampling rate, search bandwidth, and selection of satellite system. After setting up, signal acquisition operation can be carried out. Because of the limited hardware resources in the laboratory, the GPS digital IF signal is used as an example to test the system architecture designed in this paper. In order to highlight the high performance of GPU and the practical application ability of CPU to deal with complex data, the test uses the large file data sampled by high sampling frequency. Therefore, the input signal is that the GPS satellite signal is sampled by the 38.192 MHz sampling frequency and the 4 bit sampling width of 50 s, and the frequency of the original signal is 9.548 MHz. Acquisition parameter setting interface. This figure shows the interface of the system, where the user can configure the capture parameters through the interface before the capture starts Here, the system successfully captured eight GPS satellites. The specific data of the satellite number, code phase, carrier frequency, and peak ratio of the eight satellites are shown as shown in Table 2. Figure 7 shows the visual effect of the capture results. Table 2 The satellite data captured by the GPU experimental platform The proportional peak diagram of satellite acquisition. This is an intuitive display of the results of the capture of 32 GPS satellites in the test. In the figure, the green bars represent successful acquisition and the blue bars represent a capture that failed Analysis of system acquisition efficiency In order to further verify the advantage of GPU for signal capture in efficiency, the capture efficiency of GPU and CPU is compared. In the same experimental environment, 20 groups of independent acquisition experiments based on CPU and GPU were completed respectively by using the same data. The average execution time of the two experimental groups to capture a single satellite and capture eight satellites was calculated. Among them, the execution time of GPU is determined by the special timing API provided by NAVIDA, and the running time of CPU is determined by the timer function provided by MFC. The comparison results are shown in Table 3. Table 3 Time consuming comparison between GPU and CPU The test results show that when the single satellite is captured, the signal capture base on GPU needs more execution time than the signal capture base on CPU. This is because the initialization of the CUDA computing and the memory allocation will take a certain amount of time. And using GPU-based capture structure to traverse all satellites takes only 2.312 s time, which is nearly five times faster than that of all satellites captured by CPU. In fact, the highest frequency of i7-4710MQ CPU is up to 3.5 GHz, but its execution speed is still less than that of the GTX850M card with the highest 2.5 G frequency. It can be seen that the advantages of parallel processing of GPU are remarkable. The following is a further analysis of the efficiency of this design using the GPU architecture for signal capture. 
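Before turning to the per-kernel breakdown, it is worth sketching how GPU-side timings such as those in Table 3 are typically collected with the CUDA event API referred to above. The pattern below is generic and not necessarily the authors' exact measurement code; launch_acquisition is a hypothetical placeholder for the capture kernels and cuFFT calls of one satellite search.

#include <cuda_runtime.h>

// Generic CUDA-event timing of GPU-side work.
// launch_acquisition() stands in for the kernels and cufftExecC2C calls of one search.
float time_acquisition_ms(void (*launch_acquisition)(void))
{
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start, 0);    // enqueue a start marker on the default stream
    launch_acquisition();         // the work being measured
    cudaEventRecord(stop, 0);     // enqueue a stop marker
    cudaEventSynchronize(stop);   // block the host until the stop marker has executed

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);   // elapsed GPU time in milliseconds
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return ms;
}

Only the work enqueued between the two markers is counted, so whether one-off costs such as CUDA initialization and device memory allocation appear in the measurement depends on where they are placed relative to the events.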
Figure 8 shows the percentage of time consumed by each kernel when capture is performed under this structure. Time consumed percentages of each kernel. The figure shows the proportion of execution time taken by each kernel during the capture process of the system designed in this article. Among them, the three FFT operations account for up to 55.99% of the time. From the graph, it can be seen that the time spent by the capture structure in this paper is mainly concentrated in the FFT computation performed by the three cufftExecC2C kernel calls, which account for 55.99% of the total computation (the three FFT operations combined). This part is the key to the GPU's acceleration over the CPU's execution of signal capture operations. In addition, the number of points in the FFT operation designed in this article is variable, and the number of parallel capture channels can also be set freely. As the number of FFT points and the number of parallel acquisition channels increase, the advantage of the GPU as a high-performance parallel computing platform becomes more and more obvious. Of course, this acceleration does not necessarily keep growing as the amount of data increases. The time consumed by an FFT operation in this design varies with the number of channels and the number of points, as shown in Fig. 9. Time consumed change diagram of FFT operation. The figure shows how the time consumed by the FFT operation changes as the number of channels and FFT points changes in the system. To sum up, the new compatible capture structure based on the GPU proposed in this paper has high computing efficiency and a very considerable time compression ratio. This work builds on research into CPU-based multi-constellation GNSS signal acquisition technology. Compatible capture is a key technology for developing GNSS receivers. The compatible acquisition scheme proposed in this paper not only broadens the range of captured signal formats, supporting almost all existing satellite system signals, but also achieves parallel processing of GNSS signal data using the GPU, which improves the efficiency of capture processing. Using this structure, capturing eight satellites in parallel takes only 2.312 s, which is about five times faster than the CPU in the same case. In addition, the GPU high-performance computing platform used in this paper is built mainly around the NVIDIA GeForce GTX 850M graphics card and is implemented in conjunction with the CUDA programming environment. It is very convenient for an ordinary computer to build the GPU high-performance computing platform proposed in this paper: it only requires an NVIDIA graphics card that supports CUDA programming. Therefore, the design of this paper has good implementability and portability. In other words, making full use of the advantages of the GPU and applying it to the processing of GNSS satellite signals is a new direction for the design of compatible receivers. Moreover, the design idea of this paper, redesigning the GNSS system, can provide a reference for new research on how to allocate resources rationally and build an ideal multi-country shared civil GNSS and the next generation of satellite navigation systems.
ARM: Advanced RISC machines CUDA: Compute unified device architecture DSP: FFT: Fast Fourier transformation FIFO: First input first output FPGA: Field programmable gate array GNSS: Global navigation satellite system Graphics processing unit IF: Intermediate frequency PRN: Pseudo random noise code Z Zhou, Y Li, J Liu, et al., Equality constrained robust measurement fusion for adaptive Kalman filter based heterogeneous multi-sensor navigation. IEEE Trans. Aerosp. Electron. Syst. 49(4), 2146–2157 (2013) J Leclere, C Botteron, PA Farine, Comparison framework of FPGA-based GNSS signals acquisition architectures. IEEE Trans. Aerosp. Electron. Syst 49(3), 1497–1518 (2013) Fortin M, Bourdeau F, Landry R, Implementation strategies for a software-compensated FFT-based generic acquisition architecture with minimal FPGA resources. Navigation - J Institute Navigation. 62(3), 71–188 (2015). J Leclère, C Botteron, PA Farine, Acquisition of modern GNSS signals using a modified parallel code-phase search architecture. Signal Process. 95(2), 177–191 (2014) Dovis F, Mulassano P, Gramazio A, SDR technology applied to Galileo receivers. Proceedings of the International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GPS '02), 2002. S Khan, A Borsic, P Manwaring, et al., FPGA based high speed data acquisition system for electrical impedance tomography. J Phys Conf Se. 434(1), 012081 (2013) Xiaolei, Yongrong, Jianye, et al., Design and realization of synchronization circuit for GPS software receiver based on FPGA. J. Syst. Eng. Electron. 21(1), 20–26 (2010) Z Zhou, Y Li, J Zhang, et al., Integrated navigation system for a low-cost quadrotor aerial vehicle in the presence of rotor influences. J. Surv. Eng. 143(1), 05016006 (2016) H Wang, J Chang, J Lv, et al., An implementation scheme of GNSS signal simulator based on DSP and FPGA. International Conference on Computer Science and Service System, IEEE (2011), pp. 27–29 V Chakravarthy, J Tsui, D Lin, et al., Software GPS receiver. GPS Solut 5(2), 63–70 (2001) T Jokitalo, K Kaisti, V Karttunen, et al., A CPU-friendly approach to on-demand positioning with a software GNSS receiver. Ubiquitous Positioning Indoor Navigation and Location Based Service (UPINLBS), IEEE (2010), pp. 14–15 H Shamoto, K Shirahata, A Drozd, et al., GPU-accelerated large-scale distributed sorting coping with device memory capacity. IEEE Transactions on Big Data 2(1), 57–69 (2016) J Wu, Z Song, G Jeon, GPU-parallel implementation of the edge-directed adaptive intra-field deinterlacing method. J. Disp. Technol. 10(9), 746–753 (2017) Lobeiras J, Amor M, Doallo R, Designing efficient index-digit algorithms for CUDA GPU architectures. IEEE Transactions on Parallel & Distributed Systems, 27(5), 1331–1343 (2016). M Garland, SL Grand, J Nickolls, et al., Parallel computing experiences with CUDA. Micro IEEE 28(4), 13–27 (2008) Z Yu, L Eeckhout, N Goswami, et al., GPGPU-MiniBench: accelerating GPGPU micro-architecture simulation. IEEE Trans. Comput. 64(11), 3153–3166 (2015) S Ando, F Ino, T Fujiwara, et al., Enumerating joint weight of a binary linear code using parallel architectures: multi-core CPUs and GPUs. IJNC 5(2), 290–303 (2015) SH Im, GI Jee, Software-based real-time GNSS signal generation and processing using a graphic processing unit (GPU). J Positioning Navigation Timing 3(3), 99–105 (2014) L Xu, NI Ziedan, X Niu, et al., Correlation acceleration in GNSS software receivers using a CUDA-enabled GPU. 
The authors would like to thank Prof. Wei Guo at the National Key Laboratory of Science and Technology on Communications of UESTC for his help, and Prof. Long Jin and Prof. Yonglun Luo at the Research Institute of Electronic Science and Technology of UESTC for their assistance with the GPU and DSP. The authors also want to thank the Research Institute of Electronic Science and Technology and the Key Laboratory of Integrated Electronic System, Ministry of Education, for their support of this research. However, the opinions expressed in this paper are solely those of the authors.

National Key Laboratory of Science and Technology on Communications, University of Electronic Science and Technology of China, Chengdu, China: Chengjun Guo
Research Institute of Electronic Science and Technology, University of Electronic Science and Technology of China, Chengdu, China: Bingyan Xu & Zhong Tian

CG is the main writer of this paper. He put forward the main idea, completed the experimental tests, and analysed the results. The overall design scheme of the GPU-based compatible acquisition algorithm was mainly proposed by CG, and BX assisted him in completing the design. ZT gave some important suggestions and help on GNSS signal processing technology. All authors read and approved the final manuscript.

Correspondence to Chengjun Guo.

Keywords: Global navigation satellite system (GNSS), Graphics processing unit (GPU), Signal acquisition
Enhanced control of self-doping in halide perovskites for improved thermoelectric performance

Tianjun Liu1,2, Xiaoming Zhao2, Jianwei Li3, Zilu Liu3, Fabiola Liscio4, Silvia Milita4, Bob C. Schroeder3 (ORCID: orcid.org/0000-0002-9793-631X) & Oliver Fenwick1,2

Metal halide perovskites have emerged as promising photovoltaic materials, but, despite ultralow thermal conductivity, progress on developing them for thermoelectrics has been limited. Here, we report the thermoelectric properties of all-inorganic tin-based perovskites with enhanced air stability. Fine tuning of the thermoelectric properties of the films is achieved by self-doping through the oxidation of tin(II) to tin(IV) in a thin surface layer that transfers charge to the bulk. This separates the doping defects from the transport region, enabling enhanced electrical conductivity. We show that this arises due to a chlorine-rich surface layer that acts simultaneously as the source of free charges and as a sacrificial layer protecting the bulk from oxidation. Moreover, we achieve a figure-of-merit (ZT) of 0.14 ± 0.01 when chlorine doping and the degree of oxidation are optimised in tandem.

With rapidly rising greenhouse gas emissions to the atmosphere, it is paramount to develop technologies able to generate energy at negligible cost to the environment, and to reverse the currently accelerating climatic changes. However, to successfully fulfil the transition from fossil fuels to renewable energy sources, we can no longer rely solely on existing materials, but must focus on the synthesis of other material classes with improved properties. Halide perovskites have been recognized as promising photovoltaic materials1,2,3, achieving a power conversion efficiency exceeding 25%4, due to their large absorption coefficients, high charge carrier mobilities5 and large carrier diffusion lengths6. They are a highly versatile class of semiconductors, with a band gap that is tuneable through the composition of the inorganic framework, the choice of organic or inorganic cation, stoichiometry, and through self-assembly into layered structures7,8,9,10 and nanoparticles11. This diversity in structure has enabled the range of applications of these materials to extend to other optoelectronic devices, including light-emitting diodes (LEDs)12,13,14, X-ray detectors15,16 and lasers17,18.
Despite intense research on halide perovskite materials for optoelectronics, there have only been a small number of experimental studies on their thermoelectric properties, where a temperature gradient across the material can move free charge carriers and generate a thermal voltage. Thermoelectric generators can produce electrical power from temperature gradients, and to do so efficiently, must use materials possessing a high figure-of-merit, ZT: $${\mathrm{ZT}} = \sigma \alpha ^2T/\kappa$$ where σ, α and κ are the electrical conductivity, Seebeck coefficient and thermal conductivity, respectively. T is the temperature. Halide perovskites have an ABX3 stoichiometry comprising a network of inorganic (metal-halide) octahedra with loosely bound organic or inorganic cations occupying the cavities between octahedra. These cations provide rattling modes which scatter phonons, enabling ultralow values of thermal conductivity that are now well-documented19,20. Combined with the high charge mobilities5 observed in many halide perovskites, the relatively small number of experimental reports of ZT to date21,22 in these materials is perhaps surprising. In 2014, He et al. studied thermoelectric properties of methylammonium lead iodide (MAPbI3) and methylammonium tin iodide (MASnI3) by ab initio calculations23. They found that both materials exhibit small carrier effective mass and weak phonon-phonon and hole-phonon couplings, and predicted ZT in the range of 1–2, in-line with state-of-the-art thermoelectric materials. Shortly afterwards, Mettan et al.21 measured the thermoelectric properties of MAPbI3 and MASnI3 bulk crystals21 finding that photo-induced doping of MAPbI3 and chemical doping of MASnI3 improved ZT. They concluded that MAPbI3 would be a good candidate for the thermoelectric applications due the high hole mobility, large Seebeck coefficient and a remarkably low thermal conductivity. However, the low charge carrier density is a barrier to further development. Low charge carrier density in these materials is a product of ionic compensation of charged point defects24, as well as a defect tolerant electronic structure arising from bonding orbitals at the conduction band minimum, and antibonding orbitals at the valence band maximum25. The resulting low density of deep defects is an excellent feature for optoelectronic applications since defects can quench electroluminescence in LEDs or lead to recombination of photo-generated charges in solar cells. On the other hand, thermoelectric applications require charge densities typical of heavily doped semiconductors ~1018–1020 cm−3, and doping would usually come from defect sites, such as substitution of a higher valency metal atom on the perovskite B-site26. This makes development of halide perovskites for thermoelectrics challenging. An exception are the lead-free tin halide perovskites, such as the cubic perovskite CH3NH3SnI3, which shows metallic conductivity27. Takahashi et al.28 noted that high conductivity in CH3NH3SnI3 bulk crystals arises from a self-doping process through the oxidation of Sn2+ to Sn4+28. In 2017, Lee et al. reported the ultralow thermal conductivity of a single CsSnI3 nanowire and a ZT of 0.11 at 320 K22, whilst Saini et al. report a ZT in thin films of 0.137 at 292 K29. However, the underlying physical mechanisms that determine thermoelectric performance of halide perovskite materials are not completely understood, and significant issues remain unaddressed such as identification of ZT optimisation strategies. 
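For orientation, the figure-of-merit defined above can be evaluated directly from measured transport coefficients. The short Python sketch below does this with the unit conversions spelled out; the input values are rough placeholders of the same order of magnitude as those discussed later for doped tin halide films, not measured data from this study.

```python
def figure_of_merit(sigma_s_per_cm, alpha_uV_per_K, kappa_W_per_mK, T_kelvin):
    """Compute ZT = sigma * alpha^2 * T / kappa with unit conversions.

    sigma_s_per_cm : electrical conductivity in S/cm
    alpha_uV_per_K : Seebeck coefficient in microvolts per kelvin
    kappa_W_per_mK : total thermal conductivity in W m^-1 K^-1
    T_kelvin       : absolute temperature in K
    """
    sigma = sigma_s_per_cm * 100.0        # S/cm -> S/m
    alpha = alpha_uV_per_K * 1e-6         # uV/K -> V/K
    return sigma * alpha**2 * T_kelvin / kappa_W_per_mK

# Illustrative placeholder values of the right order for a doped tin halide film.
print(round(figure_of_merit(sigma_s_per_cm=120.0,
                            alpha_uV_per_K=110.0,
                            kappa_W_per_mK=0.45,
                            T_kelvin=345.0), 2))   # prints ZT ~ 0.11 for these numbers
```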
In this work, we develop a series of vacuum thermal evaporation methods to fabricate lead-free CsSnI3 perovskite thin films. We find air stability and electrical conductivity of our films to be highly tuneable by the deposition process with films formed by sequential deposition of the precursors yielding electrical conductivity 25 times that of films formed by co-evaporation of the same precursors. Compared with organic-inorganic hybrid perovskites, all-inorganic halide perovskites present significant improvements in thermal stability30,31, but we enhance this further by developing a method to substitutionally dope chlorine into the perovskite structure in the top 10 nm of our films. A by-product of air exposure is the oxidation of Sn2+ to Sn4+ (self-doping), and we exploit this in a controlled manner to fine tune the electrical conductivity and thermoelectric properties of the mixed halide CsSnI3−xClx perovskite thin films. We quantify the Sn oxidation states as a function of depth in mixed halide perovskite films using Auger electron spectroscopy, showing an unusual mechanism whereby an oxidised top surface-layer (<10-nm thick) is responsible for electrical doping the underlying film (250–300-nm thick). In this surface doping configuration, the dopants do not disrupt the crystal structure in the part of the film responsible for charge transport. In fact our Seebeck measurements indicate that the electrical doping levels in our films rise in tandem with the amount of Cl substituted in the top layers, showing that chlorine doping is simultaneously providing free charges to the system and acting as a sacrificial surface layer that slows oxidation of the bulk. We furthermore verify the applicability of the Wiedemann-Franz law in this class of materials with a value of the Lorenz number close to the Sommerfeld value, and achieve a ZT of 0.14 at 345 K upon simultaneous optimisation of the degree of Cl-doping and the degree of oxidation. Thermal vapour deposition of CsSnI3 perovskite films Past approaches to synthesize CsSnI3 have included solution processing by spin-coating31 and growth of single crystals22,32. In our case we have developed thermal vapour deposition approaches in order to achieve a high quality of films with fine control over morphology and composition. Starting from the precursors caesium iodide (CsI) and stannous iodide (SnI2), we developed three different vacuum deposition methods to prepare the perovskite films: co-evaporation, sequential deposition and seed layer plus sequential deposition (SLS) (Fig. 1a–c). For the co-evaporation process (Fig. 1a), the perovskite was obtained directly from simultaneous vacuum thermal evaporation of the two precursor materials (SnI2 and CsI). For the sequential deposition method (Fig. 1b), CsI and SnI2 were sequentially deposited to form a bilayer film which was then baked to form the perovskite. For the SLS method (Fig. 1c), a co-evaporated perovskite seed layer was introduced before sequential deposition, and the film was post-baked to form the perovskite structure. Co-evaporated films were mirror-black, characteristic of the CsSnI3 perovskite, whilst sequentially deposited and SLS films were red-brown, but became mirror-black after baking at 170 °C in nitrogen atmosphere (Supplementary Fig. 1). Scanning electron microscopy (SEM) revealed the dense polycrystalline morphology of the vapour-deposited CsSnI3 thin films (Fig. 1d–f). 
Sequential deposition produced perovskite thin films with around 1 μm diameter grains, which were larger than grains in the co-evaporated perovskite thin film (300–500 nm diameter). The SLS perovskite thin films also contained sub-micron grains, yet with a rougher surface morphology. As shown in Fig. 1g, X-ray diffraction patterns of CsSnI3 films made by all three deposition procedures showed features of the orthorhombic black phase, B-γ32 of CsSnI3, with peaks at 25.02° and 29.15° (2θ) corresponding to (220) and (202) planes, respectively. The sequentially processed films have a dominant peak at 29.15°, showing a preferred orientation of the (202) crystal plane parallel to the substrate. On the other hand, the co-evaporated films present preferential orientation of the (220) plane parallel to the surface. In the case of the SLS processed films, multiple peaks were observed, including both (220) and (202), indicating mixed orientations of crystallites in the film. The films are present in the B-γ phase regardless of deposition method, which is confirmed with grazing-incidence X-ray diffraction (GIXRD) experiments (Supplementary Fig. 2), and there was no evidence of diffraction peaks associated with Cs2SnI6 or the precursor materials. The thickness of all films studied was between 250 and 300 nm. Fig. 1: Morphology, crystal structure and electrical conductivity of CsSnI3 films. a–c Schematics of film deposition of the co-evaporation, sequential and SLS methods, respectively (before any annealing steps). d–f Corresponding scanning electron microscopy (SEM) images of the films after any annealing steps. g X-ray diffraction spectra of three types of thin film with lattice plane indices of the most prominent peaks in each case. Electrical conductivity of three types of perovskite film in nitrogen atmosphere (h) and in air (i). Electrical conductivity and stability To characterise the electrical stability of our films, we performed time-dependent electrical conductivity measurements both in inert atmosphere (N2 glovebox) and in air. CsSnI3 thin films from all three deposition methods showed high stability when tested in a N2 atmosphere, in fact showing a modest increase in electrical conductivity over a period of 1 h (Fig. 1h). In total over that period a reproducible increase in conductivity by a factor of 3.6 was observed for co-evaporated films, 1.3 for sequentially evaporated and 1.2 for SLS films, reaching maximum conductivities of 8.5 × 10−3, 7.3 and 6.8 S cm−1, respectively. When the thin films were exposed to air, σ increased by a factor of 2 to 7 in all cases (Fig. 1i), which would be expected from a self-doping process during oxidation of Sn2+ to Sn4+33,34. σ of co-evaporated thin films continuously increased for 45 min, while σ of sequentially deposited films increased for just 5 min before degradation caused a rapid decrease. The thin films deposited by the SLS method can sustain increases in σ for 11 min, reaching a value 7 times the initial one and remain reasonably stable afterwards, showing only 30% reduction in σmax over the following 50 min. CsSnI3 thin films deposited by SLS show a similar maximum electrical conductivity (37.1 S cm−1) to sequentially deposited films (32.2 S cm−1), which is ~25 times higher than the maximum for co-evaporated films (1.2 S cm−1), a value we consider too low for thermoelectric applications. Co-evaporated films with dominate orientation (220) therefore show the best air stability but lowest electrical conductivity. 
Sequentially deposited films with dominant orientation (202) show poor air stability despite their larger grain sizes, but do have higher electrical conductivity. Previous work has shown that grain orientation can have a significant effect on degradation rates of halide perovskite films35 and this is likely to be the case here. Since SLS films have enhanced stability compared with the sequentially deposited films, we chose SLS produced films as the platform from which to optimise the thermoelectric properties of CsSnI3 perovskites. To further improve the air stability of SLS perovskite thin films, chloride was introduced in the deposition process, as mixed halide perovskites are known to exhibit improved air stability over analogous single-halide materials31,36. This was done by thermal deposition of a thin layer (<20 nm) of tin chloride (SnCl2) on top of the 250–300-nm thick SLS films prior to thermal annealing (schematic in Fig. 2a). Deposition was followed by baking under nitrogen atmosphere at 170 °C. An initial SEM investigation revealed an elongated grain structure on the top surface of our films (Fig. 2b), which was attributed to a pure SnCl2 phase. As baking progresses (Fig. 2c), the typical polycrystalline perovskite morphology with polygonal grains emerges, although a small number of the elongated crystals remain on top. On further baking, the remaining elongated crystals show reduced aspect ratio and the underlying perovskite grains merge into micron-sized features (Fig. 2d), until after 40 min of baking (Fig. 2e), there was little evidence of the elongated crystals at all. The mixed halide films have XRD features similar to CsSnI3 with an absence of peaks that could be assigned to SnCl2 or CsSnCl3 (Supplementary Figs. 3 and 4). It should be noted that SnCl2 can sublime at 170 °C, so we used SEM and STEM combined with energy-dispersive X-ray spectroscopy (EDS) (Supplementary Fig. 5 and Fig. 2f–k, respectively) to confirm residual Cl incorporation into our samples. Furthermore, high-resolution transmission electron microscopy (HRTEM) of a single grain of our mixed halide perovskite (Fig. 2f) showed two regions of different crystal lattices (marked with red and yellow squares). The lattice spacing of 0.328 nm measured in the red region corresponds to the CsSnI3 crystal (122) plane, whilst the lattice spacing of 0.546 nm measured in the yellow region corresponds to the (001) plane of the CsSnCl3 cubic lattice. This indicates a degree of nanoscale phase separation between chlorine-rich and iodine-rich phases within perovskite grains. The absence of CsSnCl3 features in the XRD spectra is due to the low concentration of Cl in our films. Corroborating evidence for the incorporation of chlorine into perovskite structures is provided by X-ray photoelectron spectroscopy (XPS), with the Cl 2p peak of our mixed halide films showing a significant broadening compared with SnCl2 (Supplementary Fig. 6)31. Moreover, we used XPS to get a depth profile of the Cl concentration in our films (Supplementary Fig. 7), finding that Cl was present in the top layer, penetrating only a few nanometres into the bulk. We could not detect any chlorine by XPS at depths larger than 10 nm from the film surface. In what follows, we studied 0.5, 1, 3 and 5% SnCl2 mixed halide CsSnI3−xClx perovskite films. The percentage we use refers to the mass of SnCl2 relative to SnI2 in our thin films before the baking step. The final atomic % of Cl in the film will be much lower.
Fig. 2: Mixed halide CsSnI3−xClx perovskite morphology, structure and elemental distribution. a Schematic of the mixed halide perovskite deposition method. The white squares represent the CsSnCl3 structures in the top layers of the B-γ-CsSnI3 perovskite films. b–e SEM images of the morphological development of mixed halide perovskite films as a function of baking time. f High-resolution transmission electron microscopy (HRTEM) of mixed halide perovskite structures. The yellow square and red square correspond to CsSnCl3 and CsSnI3 crystal lattices, respectively. g STEM-HAADF image of isolated grains of mixed halide CsSnI3−xClx formed on an amorphous carbon support and (h–k) STEM-EDS elemental mapping in the area denoted by the red square in (g).

To demonstrate the improved stability of our mixed halide perovskite films, we studied the quenching of the optical absorption peak at 420 nm (Supplementary Fig. 8). 5% Cl-doped SLS films show enhanced stability, with just 3% quenching of the 420 nm peak after 100 min air exposure, whereas SLS CsSnI3 films without Cl-doping showed a 40% quenching of the peak under the same conditions. In fact, in terms of their optical properties, the 5% Cl-doped SLS films are more stable than undoped co-evaporated films, showing less than half of the quenching of the absorption after 500 min in air.

Quantitative analysis of Sn oxidation states

As the origin of high conductivity in tin halide perovskites comes from hole doping due to the oxidation of Sn2+ to Sn4+, we used XPS analysis to probe the oxidation state of Sn in our films. Shifts in the Sn 3d5/2 peak are relatively modest as a function of oxidation state (Supplementary Fig. 9), so we focussed on the Auger region of the spectrum. Since Auger electron spectroscopy (AES) probes a three-electron process, it is a much more sensitive measure of oxidation state. We did this as a function of depth in CsSnI3−xClx films (1% SnCl2) which had undergone a short air exposure (Fig. 3a). The Sn MNN AES spectrum shows a broad line shape, including several Sn MNN peaks (fitting curves labelled a, b, c and d, with details in Supplementary Table 1). In the Sn0 metal M5N4,5N4,5 AES spectrum reported by Barlow et al.37, 1S0 has a peak at a kinetic energy of 421.2 eV, and shows a large broadening after oxidation. In our case, the 1S0 peak (fitted curve a) is broad, confirming the absence of Sn0 states. Fitted curve b (425–430 eV) includes multiplet splitting of the 1G4, 3P2, 3F2,3 and 3F4 states (Supplementary Table 1). This broad peak shifts to higher kinetic energy with increasing etching depth, corresponding to oxidation in the top layer compared with the bulk38,39,40. This is even more evident from peak c, which is linked to Sn4+ states38,39,40 and is prominent at the surface, but disappears completely within an etching depth of 7.5 nm (Fig. 3b).

Fig. 3: Sn oxidation state in 1% Cl-doped CsSnI3−xClx perovskite thin films. a Auger electron spectra of Sn MNN at different etching depths from 0 to 10 nm. b Photoelectron counts of fitted curves in (a) as a function of etching depth. c Sn 3d5/2 Wagner plot with the modified Auger parameter of our samples (circles) and reference values for Sn0, Sn2+ and Sn4+.

The modified Auger parameter (α′) can be used for a more robust identification of chemical states of elements in molecules or solids, and is not susceptible to shifts caused by sample charging38,39,40,41.
It is defined as the sum of the kinetic energy of a core-core-core Auger line, Ek, and the binding energy, Eb, of a core electron, \(\alpha ^\prime = E_{\mathrm{k}} + E_{\mathrm{b}}\), and can be viewed more intuitively if plotted in a Wagner format42 (a plot of Ek versus Eb recorded from all chemical states of the atom). The Wagner plot in Fig. 3c combines our own data (as a function of depth) with some literature references for Sn0, Sn2+ and Sn4+ states39,40,43, and clearly illustrates a mixture of Sn2+ and Sn4+ oxidation states in our film (Sn core binding energy, Auger kinetic energy and Auger parameters detailed in Supplementary Tables 2–3 and Supplementary Fig. 10)44. Moreover, there is a progressive change from majority Sn4+ states at the surface of the film to Sn2+ at a depth of 10 nm, further evidence that the oxidation process only occurs in the top 7.5 nm of the film, the same region of the film that incorporates chlorine dopants. From this, we can conclude that the top surface layer of the mixed CsSnI3−xClx acts as a sacrificial layer where initial oxidation occurs. This layer provides hole doping to the bulk (vide infra) from the surface Sn4+ species. This mechanism, which separates the doping layer from the transport region, minimises the structural impact of doping on charge mobility, and enables our mixed halide perovskite structure to present high electrical conductivity whilst retaining a reasonable degree of air stability.

Thermoelectric properties of CsSnI3−xClx thin films

We performed thermoelectric property measurements as shown in Fig. 4a–f. The temperature dependence of σ and the sign of α for 1% Cl-doped CsSnI3−xClx in the range 290–360 K (Fig. 4a, b) indicates band-like transport and that the majority charge carriers are holes, as reported previously for CsSnI3 single crystal nanowires22, validating the high quality of our mixed halide perovskite films. α increases approximately linearly with temperature due to the shift of the Fermi level away from the valence band, following the Fermi-Dirac distribution within the mobility edge model of the Seebeck coefficient for heavily doped semiconductors45.

Fig. 4: Thermoelectric properties of 1% mixed CsSnI3−xClx perovskite thin films. Temperature dependence of electrical conductivity σ (a), Seebeck coefficient α (b), thermal conductivity κtotal (c) and figure-of-merit, ZT (d). The differently coloured curves represent different degrees of oxidation according to the legend in plot (a). Error bars in (b) are too small to be visible, typically representing <2% of the value. e Charge carrier density and Hall mobility as a function of air exposure time. f Maximum figure-of-merit, ZT, as a function of SnCl2 incorporation.

We can fine tune the electrical conductivity of our films by exposing them to air (3 min at a time) to further oxidise Sn2+ to Sn4+. At room temperature, the initial electrical conductivity, σ0, was 8.0 ± 0.6 S cm−1 (Fig. 4a); it dramatically increased to 69.0 ± 5.2 S cm−1 after a further air exposure (σ3), and saturated at 126.5 ± 9.7 S cm−1 after 9 min air exposure (σ9). Further air exposure led to a slower, but steady decrease in electrical conductivity (σ12 = 119.6 ± 9.2 S cm−1).
The dependence of σ on air exposure time comes from the competition between the enhanced charge carrier concentration owing to self-doping from Sn4+ species and reduced carrier mobility owing to defects caused by air exposure, which can take the form of degradation in the bulk or at the grain boundaries during the oxidation process, or even increased ionised impurity scattering. The Seebeck coefficient, α, in Fig. 4b shows a steady decrease with exposure time (at room temperature, α0 = 144.7 ± 1.5 μV K−1, and after 12 min air exposure α12 = 103.0 ± 1.0 μV K−1), consistent with a steady increase in the charge concentration shifting the Fermi energy level, Ef, towards the valence band. The observation that the Seebeck coefficient continues to decrease with air exposure when the electrical conductivity has already peaked is further evidence that the degradation in conductivity after extended exposure to air is due to mobility-lowering processes. To verify this hypothesis, we used Hall measurements to determine the charge carrier concentration as a function of air exposure, showing an increase with air exposure from 2.38 × 1018 to 1.06 × 1019 cm−3 after 12 min (Fig. 4e). Meanwhile, the Hall mobility decreases from an initial value of 76.1 to 50.1 cm2 V−1 s−1 after oxidation. We note that in Lee et al.'s work22, lower α (79 μV K−1) at room temperature with higher σ (282 S cm−1) indicates a higher level of self-doping, whereas our control of oxidation level allows us to precisely tune the α/σ ratio and ultimately optimise ZT. The measured temperature-dependent thermal conductivity for 1% SnCl2 perovskite thin films is presented in Fig. 4c. At room temperature after a minimal air exposure of 30 s, the thermal conductivity is 0.38 ± 0.01 W m−1 K−1 and it increases with air exposure to 0.47 ± 0.01 W m−1 K−1. To obtain the lattice thermal conductivity κlattice from the measured κtotal (= κlattice + κelectronic), we plotted κtotal as a function of σ (Supplementary Fig. 11). The electronic thermal conductivity, κelectronic, is described by the Wiedemann-Franz law (κelectronic = σLT), enabling us to determine κlattice and the Lorenz number, L, from the intercept and slope, respectively, of a linear fit. We note that since electrical doping is provided by a thin (<10 nm) surface layer, we can assume that the lattice thermal conductivity in the bulk of the film is not strongly affected by the doping process, which is a requirement for this analysis. We found κlattice = 0.38 ± 0.01 W m−1 K−1 at room temperature, which is consistent with Lee's work (0.38 ± 0.04 W m−1 K−1), and extract a Lorenz number of (2.40 ± 0.33) × 10−8 W Ω K−2 at room temperature or an average of (2.26 ± 0.13) × 10−8 W Ω K−2 over the full temperature range, close to the Sommerfeld value for free electrons. Furthermore, we can confirm that polycrystalline CsSnI3−xClx thin films exhibit a temperature dependence of κlattice that is consistent with the Callaway model46, with Umklapp scattering processes dominating in this temperature range, as has been reported for methylammonium lead iodide perovskites21,47. Finally, the thermoelectric figure-of-merit, ZT, of our CsSnI3−xClx perovskite films increases for oxidation time, τ, in the range 0–6 min, and then decreases for τ greater than 6 min, as shown in Fig. 4d. The largest ZT is 0.14 ± 0.01 at 345 K for 1% CsSnI3−xClx, a factor of 7 higher than that of the τ = 0 sample (ZT = 0.02 at 355 K).
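The separation of κlattice from the electronic contribution described above amounts to a straight-line fit of κtotal against σ at fixed temperature, since κtotal = κlattice + LσT. The sketch below shows this fit with NumPy; the data points are synthetic placeholders generated from a known lattice term and Lorenz number purely to illustrate that the procedure recovers them, not measured values from this work.

```python
import numpy as np

L_SOMMERFELD = 2.44e-8  # W Ohm K^-2, Sommerfeld value of the Lorenz number

def fit_lattice_and_lorenz(sigma, kappa_total, T):
    """Linear fit of kappa_total = kappa_lattice + L * T * sigma at temperature T.

    sigma       : electrical conductivities in S/m (one value per oxidation step)
    kappa_total : corresponding total thermal conductivities in W m^-1 K^-1
    Returns (kappa_lattice, L): the intercept and slope/T of the fit.
    """
    slope, intercept = np.polyfit(sigma, kappa_total, 1)
    return intercept, slope / T

# Synthetic illustration: points built from a known lattice term and Lorenz
# number (placeholders), then recovered by the fit.
T = 300.0
sigma = np.array([800.0, 4000.0, 8000.0, 12000.0])       # S/m
kappa = 0.38 + L_SOMMERFELD * T * sigma                   # W m^-1 K^-1
kappa_lattice, L = fit_lattice_and_lorenz(sigma, kappa, T)
print(round(kappa_lattice, 3), f"{L:.2e}")                # -> 0.38 2.44e-08
```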
The figure-of-merit shows a 32% reduction after 10 h in air, and a 30% reduction after 10 days storage in nitrogen atmosphere (Supplementary Figs. 12 and 13). We also note that thinner films showed higher electrical conductivities, but no improvement in ZT (Supplementary Fig. 14). This high degree of control over ZT through tuning of σ and α indicates the effectiveness of self-doping in the thermoelectric performance of Sn-halide perovskites. Interestingly, the maximum ZT is a function of the degree of Cl-doping (Fig. 4f), with a sharp increase of ZTmax from 0.07 at 0% Cl to a peak of ZTmax = 0.14 ± 0.01 at 1% Cl and steady decrease upon further Cl-inclusion. In parallel, we observe that the Seebeck coefficient decreases as a function of Cl-doping (Supplementary Figs. 15, 18), implying that the more heavily Cl-doped the films are, the higher the charge carrier density that is achieved. This is further evidence that the chlorine-rich surface layer is acting not only as a protective layer slowing down oxidation of the underlying CsSnI3, but also as a sacrificial source of holes in this system that are donated from the surface to the bulk. This separation of the dopants from the charge transport channel prevents the introduction of scattering defects in the transport channel which can reduce charge mobility, and is the reason that our Cl-doped films can achieve up to four times the maximum electrical conductivity of our pristine CsSnI3 films (Fig. 1 and Supplementary Fig. 18). Our work sheds light on optimisation strategies of halide perovskites for thermoelectrics, with wider implications for the development of halide perovskite films with targeted properties across other application areas such as photovoltaics, photodetectors, thin film transistors and light-emitting diodes. We have demonstrated a number of thermal vapour deposition methods for the formation of high quality CsSnI3 thin films from its precursor materials. These films are self-doping through oxidation of Sn2+ to Sn4+, and we have shown that the stability and electrical conductivity of the films is highly dependent on whether a sequential or co-evaporation process is adopted, with the former offering higher electrical conductivity and the latter higher stability. For this reason, we developed a hybrid of the two approaches (SLS) to offer a suitable platform from which to optimise thermoelectric properties. Beyond this and building on knowledge that mixed halide approaches can improve atmospheric stability of halide perovskites, we adopted a unique approach to chlorine doping of our CsSnI3 films, that results in substitution of chlorine into a perovskite crystal lattice in a region <10 nm from the surface. We have shown that the Cl-dopants not only enhance stability but simultaneously act as a sacrificial source of free charges. The electrical doping is therefore coming from the outer atomic layers of the film, but dopes to the pristine bulk, thus dividing the film into a thin doping layer and a thicker charge transport channel and ensuring that the introduction of dopants does not degrade mobility in the transport channel. The accessible free charge carrier concentration is therefore determined by the Cl concentration and by tuning this in combination with the degree of oxidation, we have optimised thermoelectric performance, achieving ZT = 0.14 ± 0.01, and verified that the Wiedemann-Franz law is valid in these materials with a Lorenz number similar to the Sommerfeld value for free electrons. 
These results are important in identifying routes to develop the halide perovskite class of materials for thermoelectric applications, but the process of their optimisation for thermoelectrics has revealed a deeper understanding of their thermal and electrical transport properties, as well as strategies for controlled doping, which has implications across all areas of their application. The potential advantages of halide perovskite materials for thermoelectrics are elemental abundance as well as mechanical flexibility, solution processability and large area scalability. Finally, we note that the stability of this tin-based perovskite material could be further improved by adding a strong reductant with favourable Goldschmidt tolerance into the structure, pursuing layered structures, surface passivation or adopting mixed metal approaches.

Film deposition

We present three types of evaporated films: co-evaporated, sequentially evaporated and seed layer plus sequential deposition (SLS). For co-evaporated films, tin (II) iodide (SnI2, 99.99%, Sigma-Aldrich) and caesium iodide (CsI, 99.99%, Sigma-Aldrich) were simultaneously deposited at 10−7 mbar. The deposition rate was 1 Å s−1 for SnI2 (achieved with a crucible temperature of 160 °C) and 3 Å s−1 for CsI (achieved with a crucible temperature of 430 °C). The mirror-black films were directly obtained from the co-evaporation method without annealing. For the sequential deposition method, SnI2 was thermally evaporated at 10−7 mbar at 2 Å s−1 (170 °C), followed by CsI at 6 Å s−1 (450 °C). The initially red-brown thin films were removed from the vacuum chamber for baking at 170 °C in nitrogen atmosphere. Upon baking, the appearance of the films became mirror-black, indicating that CsSnI3 thin films were successfully fabricated. For SLS films, a 50 nm co-evaporated layer was first deposited as a seed layer. Above the seed layer, a layer was deposited by the sequential method without breaking vacuum. Dark brown films were obtained from the SLS method, forming mirror-black CsSnI3 films upon baking at 170 °C. For mixed halide perovskite samples, tin (II) chloride (SnCl2, 99.99%, Sigma-Aldrich) was evaporated at 0.5 Å s−1 (achieved with a crucible temperature of 130 °C) on top of SLS films (which had not been baked) without breaking vacuum. The mixed halide films were also baked at 170 °C in nitrogen atmosphere to form mirror-black mixed halide perovskite films. The surface morphology of the films was characterised on a field-emission scanning electron microscope (FEI Inspect-F).

Optical absorption

UV-Vis absorption spectra were measured with a Shimadzu UV-2600 spectrophotometer, using 10 min intervals for time-dependent air stability studies. X-ray diffraction was performed on a Siemens D5000 X-Ray Powder diffractometer using a Cu Kα source (λ = 1.54 Å).

Grazing-incidence X-ray diffraction

GIXRD measurements were performed at the XRD1 beamline of the ELETTRA synchrotron facility in Trieste (Italy). The X-ray beam had a wavelength of 0.7 Å and a beam size of 200 × 200 μm2. 2D-GIWAXS images were collected using a 2M Pilatus silicon pixel X-ray detector (DECTRIS Ltd.) positioned perpendicular to the incident beam, at a distance of 260 mm from the sample. The grazing incidence angle was fixed at αi = 0.5° to probe the full thickness of the film.

Scanning transmission electron microscopy-energy-dispersive X-ray spectroscopy

Transmission electron microscopy (TEM) and high-resolution TEM imaging were carried out on a Tecnai G2 F20 S-TWIN at 200 kV.
High angle annular dark field scanning transmission electron microscopy (HAADF-STEM) imaging and energy-dispersive X-ray spectroscopy (EDS) elemental mapping were performed on a JEM-ARM 200F at 200 kV. The TEM specimen fabrication was by the evaporation process of CsSnI3−xClx perovskite mentioned in the Film deposition section onto a copper grid with an amorphous carbon film on top. X-ray photoelectron spectroscopy XPS was performed on Thermo Scientific K-Alpha X-ray photoelectron spectrometer with a monochromatic Al Kα X-ray source under high vacuum (2 × 10−8 mbar). Etching of the films for depth profiling was by in situ sputtering at room temperature using a beam of 3 keV Ar+ ions. The etching depth profile was calculated from the etching time required to etch through to the silicon substrate. Fitting was performed on the CasaXPS package, incorporating Voigt line shapes and a Shirley background. Thermoelectric properties measurement In-plane thermoelectric properties (σ, κ and α) were measured simultaneously on the same sample with a Linseis Thin Film Analyser (described elsewhere48,49,50). In this measurement geometry, in-plane thermal conductivity is measured on a suspended SiN membrane by a 3-ω method. Electrical conductivity is measured by the van der Pauw method with four needle like electrodes at the four corners of the films. The Seebeck coefficient measurement uses a thermometer and a heater on the suspended membrane to achieve a temperature gradient (schematised in the Supplementary Fig. 19). Samples fabricated in the glovebox were transferred to the Linseis Thin Film Analyser with <2 min exposure to air. The humidity in lab was around 40%. All measurements were performed under vacuum and in the dark. Hall effect measurements were performed on PPMS-9 from Quantum Design Inc. When we wished to partially oxidise the films, the measurement chamber was refilled with air to atmospheric pressure for a designated time before pumping down again for the next measurement. The data that support the findings of this work are available from the corresponding author on request. Kojima, A., Teshima, K., Shirai, Y. & Miyasaka, T. Organometal halide perovskites as visible-light sensitizers for photovoltaic cells. J. Am. Chem. Soc. 131, 6050–6051 (2009). Burschka, J. et al. Sequential deposition as a route to high-performance perovskite-sensitized solar cells. Nature 499, 316–319 (2013). Lee, M. M., Teuscher, J., Miyasaka, T., Murakami, T. N. & Snaith, H. J. Efficient hybrid solar cells based on meso-superstructured organometal halide perovskites. Science 338, 643–647 (2012). Best Research-Cell Efficiency Chart. https://www.nrel.gov/pv/cell-efficiency.html (2019). Dong, Q. F. et al. Electron-hole diffusion lengths >175 μm in solution-grown CH3NH3PbI3 single crystals. Science 347, 967–970 (2015). Shi, D. et al. Low trap-state density and long carrier diffusion in organolead trihalide perovskite single crystals. Science 347, 519–522 (2015). Akkerman, Q. A. et al. Solution synthesis approach to colloidal cesium lead halide perovskite nanoplatelets with monolayer-level thickness control. J. Am. Chem. Soc. 138, 1010–1016 (2016). Shamsi, J. et al. Colloidal synthesis of quantum confined single crystal CsPbBr3 nanosheets with lateral size control up to the micrometer range. J. Am. Chem. Soc. 138, 7240–7243 (2016). Song, J. Z. et al. Monolayer and few-layer all-inorganic perovskites as a new family of two-dimensional semiconductors for printable optoelectronic devices. Adv. Mater. 28, 4861–4869 (2016). 
Dou, L. T. et al. Atomically thin two-dimensional organic-inorganic hybrid perovskites. Science 349, 1518–1521 (2015). Schmidt, L. C. et al. Nontemplate synthesis of CH3NH3PbBr3 perovskite nanoparticles. J. Am. Chem. Soc. 136, 850–853 (2014). Zou, W. et al. Minimising efficiency roll-off in high-brightness perovskite light-emitting diodes. Nat. Commun. 9, 608 (2018). Xing, J. et al. Color-stable highly luminescent sky-blue perovskite light-emitting diodes. Nat. Commun. 9, 3541 (2018). Gong, X. et al. Electron-phonon interaction in efficient perovskite blue emitters. Nat. Mater. 17, 550–556 (2018). Shrestha, S. et al. High-performance direct conversion X-ray detectors based on sintered hybrid lead triiodide perovskite wafers. Nat. Photon 11, 436–440 (2017). Pan, W. C. et al. Cs2AgBiBr6 single-crystal X-ray detectors with a low detection limit. Nat. Photon 11, 726–732 (2017). Zhu, H. et al. Lead halide perovskite nanowire lasers with low lasing thresholds and high quality factors. Nat. Mater. 14, 636–642 (2015). Yakunin, S. et al. Low-threshold amplified spontaneous emission and lasing from colloidal nanocrystals of caesium lead halide perovskites. Nat. Commun. 6, 8056 (2015). Hata, T., Giorgi, G. & Yamashita, K. The effects of the organic-inorganic interactions on the thermal transport properties of CH3NH3PbI3. Nano Lett. 16, 2749–2753 (2016). Yue, S. Y., Zhang, X. L., Qin, G. Z., Yang, J. Y. & Hu, M. Insight into the collective vibrational modes driving ultralow thermal conductivity of perovskite solar cells. Phys. Rev. B 94, 115427 (2016). Mettan, X. et al. Tuning of the thermoelectric figure of merit of CH3NH3MI3 (M=Pb,Sn) photovoltaic perovskites. J. Phys. Chem. C. 119, 11506–11510 (2015). Lee, W. et al. Ultralow thermal conductivity in all-inorganic halide perovskites. Proc. Natl Acad. Sci. USA 114, 8693–8697 (2017). He, Y. P. & Galli, G. Perovskites for solar thermoelectric applications: a first principle study of CH3NH3Al3 (A = Pb and Sn). Chem. Mater. 26, 5394–5400 (2014). Walsh, A., Scanlon, D. O., Chen, S., Gong, X. G. & Wei, S.-H. Self-regulation mechanism for charged point defects in hybrid halide perovskites. Angew. Chem. Int. Ed. 54, 1791–1794 (2015). Brandt, R. E., Stevanović, V., Ginley, D. S. & Buonassisi, T. Identifying defect-tolerant semiconductors with high minority-carrier lifetimes: beyond hybrid lead halide perovskites. MRS Commun. 5, 265–275 (2015). Abdelhady, A. L. et al. Heterovalent dopant incorporation for bandgap and type engineering of perovskite crystals. J. Phys. Chem. Lett. 7, 295–301 (2016). Mitzi, D. B., Feild, C. A., Harrison, W. T. A. & Guloy, A. M. Conducting tin halides with a layered organic-based perovskite structure. Nature 369, 467–469 (1994). Takahashi, Y., Hasegawa, H., Takahashi, Y. & Inabe, T. Hall mobility in tin iodide perovskite CH3NH3SnI3: evidence for a doped semiconductor. J. Solid State Chem. 205, 39–43 (2013). Saini, S., Baranwal, A., Yabuki, T., Hayase, S. & Miyazaki, K. Growth of halide perovskites thin films for thermoelectric applications. MRS Adv. 4, 1719–1725 (2019). Wang, P. Y. et al. Solvent-controlled growth of inorganic perovskite films in dry environment for efficient and stable solar cells. Nat. Commun. 9, 2225 (2018). Marshall, K. P., Walker, M., Walton, R. I. & Hatton, R. A. Enhanced stability and efficiency in hole-transport-layer-free CsSnI3 perovskite photovoltaics. Nat. Energy 1, 16178 (2016). Chung, I. et al. CsSnI3: semiconductor or metal? 
High electrical conductivity and strong near-infrared photoluminescence from a single material. High hole mobility and phase-transitions. J. Am. Chem. Soc. 134, 8579–8587 (2012). Takahashi, Y. et al. Charge-transport in tin-iodide perovskite CH3NH3SnI3: origin of high conductivity. Dalton T 40, 5563–5568 (2011). Kontos, A. G. et al. Structural stability, vibrational properties, and photoluminescence in CsSnI3 perovskite upon the addition of SnF2. Inorg. Chem. 56, 84–91 (2017). Ma, Y. C. et al. Controlled crystal facet of MAPbI3 perovskite for highly efficient and stable solar cell via nucleation modulation. Nanoscale 11, 170–177 (2019). Chung, I., Lee, B., He, J. Q., Chang, R. P. H. & Kanatzidis, M. G. All-solid-state dye-sensitized solar cells with high efficiency. Nature 485, 486–494 (2012). Barlow, S. M., Bayatmokhtari, P. & Gallon, T. E. M4,5N4,5N4,5 auger spectrum of tin and oxidized tin. J. Phys. C. Solid State 12, 5577–5584 (1979). Kövér, L. et al. High-resolution photoemission and auger parameter studies of electronic-structure of tin oxides. J. Vac. Sci. Technol. A 13, 1382–1388 (1995). Kövér, L. et al. Electronic structure of tin oxides: high-resolution study of XPS and auger-spectra. Surf. Interface Anal. 23, 461–466 (1995). Lee, A. F. & Lambert, R. M. Oxidation of Sn overlayers and the structure and stability of Sn oxide films on Pd(111). Phys. Rev. B 58, 4156–4165 (1998). Fenwick, O. et al. Tuning the energetics and tailoring the optical properties of silver clusters confined in zeolites. Nat. Mater. 15, 1017–1022 (2016). Satta, M. & Moretti, G. Auger parameters and Wagner plots. J. Electron Spectrosc. 178, 123–127 (2010). Asbury, D. A. & Hoflund, G. B. A Surface study of the oxidation of polycrystalline tin. J. Vac. Sci. Technol. A 5, 1132–1135 (1987). Cortecchia, D. et al. Lead-Free MA2CuClxBr4-x hybrid perovskites. Inorg. Chem. 55, 1044–1052 (2016). Lu, N. D., Li, L. & Liu, M. A review of carrier thermoelectric-transport theory in organic semiconductors. Phys. Chem. Chem. Phys. 18, 19503–19525 (2016). Callaway, J. Model for lattice thermal conductivity at low temperatures. Phys. Rev. 113, 1046–1051 (1959). Article ADS CAS MATH Google Scholar Pisoni, A. et al. Ultra-low thermal conductivity in organic-inorganic hybrid perovskite CH3NH3PbI3. J. Phys. Chem. Lett. 5, 2488–2492 (2014). Linseis, V., Volklein, F., Reith, H., Nielsch, K. & Woias, P. Advanced platform for the in-plane ZT measurement of thin films. Rev. Sci. Instrum. 89, 015110 (2018). Burton, M. R. et al. Thin film tin selenide (SnSe) thermoelectric generators exhibiting ultralow thermal conductivity. Adv. Mater. 30, 1801357 (2018). Volklein, F., Reith, H. & Meier, A. Measuring methods for the investigation of in-plane and cross-plane thermal conductivity of thin films. Phys. Status Solidi A 210, 106–118 (2013). The research was financed under O.F.'s Royal Society University Research Fellowship (UF140372). B.S. acknowledges financial support by the British Council (Grant No: 337323). T.L., X.Z., J.L. and Z.L. were supported by the Chinese Scholarship Council (CSC). School of Engineering and Material Sciences, Queen Mary University of London, Mile End Road, London, E1 4NS, UK Tianjun Liu & Oliver Fenwick The Organic Thermoelectrics Laboratory, Materials Research Institute, Queen Mary University of London, Mile End Road, London, E1 4NS, UK Tianjun Liu, Xiaoming Zhao & Oliver Fenwick Department of Chemistry, University College London, 20 Gordon Street, London, WC1H 0AJ, UK Jianwei Li, Zilu Liu & Bob C. 
Schroeder
Istituto per la Microelettronica e Microsistemi (IMM)-Consiglio Nazionale delle Ricerche (CNR), Via Gobetti 101, 40129, Bologna, Italy: Fabiola Liscio & Silvia Milita

T.L. performed the experimental work on film deposition, structure and thermoelectric property characterization. X.Z. and J.L. performed the XRD measurements and data analysis. T.L., Z.L. and B.S. performed the XPS and AES measurements. F.L. and S.M. performed the GIWAXS measurements. This project was conceived and planned by O.F. and T.L. and supervised by O.F. The paper was written with contributions from all authors.

Correspondence to Oliver Fenwick.

Peer review information: Nature Communications thanks Peng Gao and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.

Liu, T., Zhao, X., Li, J. et al. Enhanced control of self-doping in halide perovskites for improved thermoelectric performance. Nat Commun 10, 5750 (2019). https://doi.org/10.1038/s41467-019-13773-3
Effective machine-learning assembly for next-generation amplicon sequencing with very low coverage

Louis Ranjard1 (ORCID: orcid.org/0000-0002-7622-4823), Thomas K. F. Wong1 & Allen G. Rodrigo1
BMC Bioinformatics volume 20, Article number: 654 (2019)

In short-read DNA sequencing experiments, the read coverage is a key parameter to successfully assemble the reads and reconstruct the sequence of the input DNA. When coverage is very low, the original sequence reconstruction from the reads can be difficult because of the occurrence of uncovered gaps. Reference-guided assembly can then improve these assemblies. However, when the available reference is phylogenetically distant from the sequencing reads, the mapping rate of the reads can be extremely low. Some recent improvements in read mapping approaches aim at modifying the reference according to the reads dynamically. Such approaches can significantly improve the alignment rate of the reads onto distant references, but the processing of insertions and deletions remains challenging. Here, we introduce a new algorithm to update the reference sequence according to previously aligned reads. Substitutions, insertions and deletions are performed in the reference sequence dynamically. We evaluate this approach to assemble a western-grey kangaroo mitochondrial amplicon. Our results show that more reads can be aligned and that this method produces assemblies of length comparable to the truth while limiting the error rate when classic approaches fail to recover the correct length. Finally, we discuss how the core algorithm of this method could be improved and combined with other approaches to analyse larger genomic sequences.

We introduced an algorithm to perform dynamic alignment of reads on a distant reference. We showed that such an approach can improve the reconstruction of an amplicon compared to classically used bioinformatic pipelines. Although not portable to genomic scale in its current form, we suggested several improvements to be investigated to make this method more flexible and allow dynamic alignment to be used for large genome assemblies.

De novo assembly algorithms classically use graph approaches, de Bruijn or overlap-layout-consensus, to join short sequencing reads into longer contigs. However, when the short-read coverage is very low, only short contigs can be reconstructed because of the occurrence of uncovered gaps in the sequence [1]. In this case, the availability of a reference sequence can be beneficial to connect and order these contigs, an approach known as reference-guided assembly or homology-guided assembly [2, 3]. The reads are mapped onto this reference and a contig is constructed by taking the consensus of the short-reads at each position. However, some gaps in the mapping of the reads onto the reference may remain if the available reference is too distant phylogenetically from the sequence the short-reads originate from. This is because the short-reads that cannot, or can only partially, be mapped to the distant reference are discarded or trimmed. The information contained in the discarded or trimmed sequences of the reads is therefore lost. Hence, improvements in the alignments of the reads to the reference that are able to take advantage of this unexploited information should improve the assemblies. Iterative referencing proposes to align all the reads to the reference and then update the reference sequence by calling the consensus of the reads.
Once the reference has been updated, several additional iterations of read mapping/reference update can be performed to progressively improve the results [4–8]. Significant improvements in the mapping accuracy of the reads is achieved thanks to this approach [9]. Subsequently, it has been shown that dynamic approaches can offer comparable improvements while performing less data processing, i.e. only requiring a single iteration of read mapping [9]. In dynamic mapping, the reference is updated continuously as the reads are aligned onto it in an online fashion. Hence, the information obtained from the alignments of previous reads is used to map future reads. Dynamic strategies can be especially useful when the read sequences are highly divergent from the reference [9]. However, the treatment of insertions and deletions (indels) remains a problem to dynamic mappers as the coordinates of the reads have to be continuously recalculated [9] with a new indexing of the reference. Here, we introduce a new online read aligner, Nucleoveq [10], and assess how it can improve the alignment of the reads when the reference is distant phylogenetically from the reads. This is a difficult task because, in this case, a large portion of the reads cannot be mapped to the reference. Using a machine learning approach, we present an algorithm that is able to dynamically perform substitutions and indels in the reference. The probability of each base at each position is learned from the past read alignments. A dynamic time warping algorithm uses these probability vectors directly to measure the edit distance between a read and the reference at the best alignment position. This is contrasting from previously proposed dynamic mapping approaches that record a counter for the different possible variants between the sequential updates of the reference [9]. In the present method, the reference is updated after every read alignments. Note that our algorithm allows the reference to be updated with insertions and deletions at any position in the reference. We show that, because the reference sequence is continuously updated according to the alignment of the previous reads, the alignment of the read gradually improves. We demonstrate that this feature allows us to take advantage of distantly related reference sequence and improve the resulting short-reads assembly. In order to assess our method, we asked whether the improved read alignment provided by a dynamic approach results in better guided assemblies. We compared the assembly obtained from the dynamic aligner to classic assembly techniques. Briefly, we tested three assembly pipelines referred to as: mapping, mapping of all the reads to the reference followed by update of the reference; learning, dynamic time warping alignment of the reads with simultaneous machine learning approach to update the reference (Nucleoveq [10], see online Methods for details); de novo, reference-free assembly of the reads using a de Bruijn graph approach. Additionally, two hybrid approaches were evaluated, the de novo + mapping and the de novo + learning pipelines where the contigs obtained by the de novo assembly of the reads are respectively mapped and aligned before updating the reference. A set of computer simulations was performed to compare the reconstructed sequence obtained by these strategies when coverage is very low (1−5×) and with varying phylogenetic distances between the original sequence and the sequence used as reference. 
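To make the dynamic-update idea concrete, the sketch below keeps a per-position vector of base probabilities for the reference and nudges it towards each newly aligned read, so that later reads are compared against an already partially corrected reference. This is only an illustrative toy of the general principle (fixed learning rate, substitutions only, alignment positions assumed known); it is not the actual update rule, data structures or dynamic time warping implementation of Nucleoveq.

```python
import numpy as np

BASES = "ACGT"

def init_profile(reference):
    """One probability vector per reference position, initialised from the reference base."""
    profile = np.full((len(reference), 4), 0.05)
    for i, base in enumerate(reference):
        profile[i, BASES.index(base)] = 0.85
    return profile / profile.sum(axis=1, keepdims=True)

def update_profile(profile, read, start, rate=0.3):
    """Move the probability vectors towards an aligned read (substitutions only)."""
    for offset, base in enumerate(read):
        one_hot = np.zeros(4)
        one_hot[BASES.index(base)] = 1.0
        pos = start + offset
        profile[pos] = (1.0 - rate) * profile[pos] + rate * one_hot
    return profile

def consensus(profile):
    """Call the most probable base at every position."""
    return "".join(BASES[i] for i in profile.argmax(axis=1))

# Toy illustration: a distant 'reference' is gradually corrected by reads
# drawn from the true sequence (alignment positions are assumed known here).
truth = "ACGTACGTTGCA"
reference = "ACGAACGTTACA"            # two mismatches with the truth
profile = init_profile(reference)
for start in (0, 4, 2, 6):             # overlapping 6-mers sampled from the truth
    profile = update_profile(profile, truth[start:start + 6], start)
print(consensus(profile))              # converges to the true sequence
```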
We used sequencing short-reads obtained from a study of mitochondrial amplicons of the western-grey kangaroo, Macropus fuliginosus [11, 12]. Focusing on a 5,000 bp amplicon allowed us to conduct extensive re-sampling of the reads. Published mitochondrial reference sequences from the following species were used as references: the eastern-grey kangaroo (Macropus giganteus, Genbank accession NC_027424), the swamp wallaby (Wallabia bicolor, Genbank accession KJ868164), the Tasmanian devil (Sarcophilus harrisii, Genbank accession JX475466) and the house mouse (Mus musculus, Genbank accession NC_005089). The computer simulations were performed using the most divergent amplicon (Amplicon 3) identified by [11], which is located from position 11,756 to 16,897 in the eastern-grey kangaroo mitochondrial genome, with a total length of 5,130 bp. This region contains the mitochondrial D-loop and, at the time of this study, the nucleotide sequence is not covered in the western-grey kangaroo mitochondrial genome (Genbank accession KJ868120). These species were chosen at increasing phylogenetic distance from the western-grey kangaroo (Table 1) but with no changes in their gene order. The homologous regions were selected in each species by aligning the amplicon sequence to each mitochondrial genome in Geneious version 10.2.4 [13]. Then, a region spanning from position 11,000 bp to 1,200 bp was used for each circular reference genome except the eastern-grey kangaroo. For the eastern-grey sequence the homologous amplicon region was used [11]. This was done to reduce computational time while still keeping some part of the sequences located outside of the target region, i.e. from which the short-reads originate. The quality of the different assemblies was evaluated by using two statistics: first, the number of errors while aligning the reconstructed amplicon and the true western-grey kangaroo amplicon sequences; second, the length of the reconstructed sequence.

Table 1 The four different reference sequences used to guide the reconstruction of the western-grey kangaroo mitochondrial amplicon from short sequencing reads

Reference positions covered

The total read coverage in the reference was recorded for both the mapping and learning approaches to assess whether dynamic reference updates increase the read alignment rate. As expected, the number of bases covered increases with the number of reads sampled (Fig. 1). However, with distant reference sequences, i.e. the Tasmanian devil and the house mouse, the mapping rate of the reads is very low while the alignment rate is less affected by the increasing phylogenetic distance of the reference. Moreover, with these two species used as reference, the mapping rate remains low even though the depth of coverage increases. Generally, it appears that the variance in the mapping rate is higher than for the alignment rate.

Fig. 1 Realised coverage obtained by mapping (MAPPING) or aligning (LEARNING) sequencing reads to increasingly distant homologous reference sequences. The short-reads originate from a western-grey kangaroo amplicon of length 5,130 bp with 5× coverage, therefore the expected number of bases covered is ∼ 25,000 (dashed line)

Assembly evaluation

A total of 2000 computer simulations were conducted. For coverage values ranging from 1× to 5×, the number of reads required to achieve such coverage was calculated and a corresponding subset of reads was randomly chosen among the full set. Then, for each of the four species' reference sequences, the five pipelines were tested.
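The number of reads drawn for a target depth follows directly from the amplicon length and the read length. The snippet below illustrates this subsampling step; the 150 bp read length and the dummy read identifiers are assumptions made for illustration only, since the actual read lengths and data handling of the study are not restated here.

```python
import math
import random

def sample_reads(reads, target_coverage, amplicon_length, read_length=150):
    """Randomly draw the number of reads needed to reach a target depth.

    Expected depth = (number of reads * read_length) / amplicon_length,
    so the read count is solved from the requested coverage.
    """
    n_reads = math.ceil(target_coverage * amplicon_length / read_length)
    return random.sample(reads, min(n_reads, len(reads)))

# Example: 1x to 5x subsets for a 5,130 bp amplicon, using dummy read IDs.
all_reads = [f"read_{i}" for i in range(20000)]
for cov in range(1, 6):
    subset = sample_reads(all_reads, cov, amplicon_length=5130)
    print(cov, len(subset))   # e.g. 5x -> 171 reads of 150 bp
```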
A total of 100 replicates were performed for each setting. To compute the number of errors and the length of the reconstructed sequence, the pairwise alignment was computed using the Needleman-Wunsch algorithm with an affine gap penalty scheme, the NUC44 scoring matrix and null gap penalties at the ends of the sequences. The non-aligned sequences at the beginning and at the end of the alignment were discarded and the remaining sequence length was reported for comparisons between pipelines. The number of errors was computed as the Hamming distance between the remaining aligned sequences. Overall, the learning approaches offered the best compromise between limiting the error rate and recovering the true length of the amplicon sequence (Fig. 2). In all simulation settings, the de Bruijn graph assemblies (de novo assembly) achieved a very low error rate. On the other hand, this approach was only able to generate relatively short assemblies compared to the other pipelines (Fig. 2). However, with increasing coverage the length of the de novo assembled contigs increased, confirming the suitability of de Bruijn graph based methods for assembling short-reads when the depth of coverage is high. Specifically, our simulations showed that at least a 20× coverage is required to reconstruct the full-length amplicon with this approach (Fig. 3).
Fig. 2: Number of errors and length in nucleotides of the reconstructed amplicon for each bioinformatic pipeline and simulation setting. The 95% intervals are shown as solid lines for each method along both dimensions (reconstructed amplicon length and error rate).
Fig. 3: With more than 20× coverage, the de Bruijn graph assembly is able to reconstruct the expected amplicon length (5,130 bp).
When using distant references (Tasmanian devil and the house mouse), the hybrid approaches (de novo + mapping and de novo + learning) produced fewer errors than the same algorithms used on the raw reads (Fig. 2). However, when using more closely related sequences as references, the de novo + mapping method produced more errors than the mapping pipeline. This is putatively the consequence of the low coverage of the de novo assembly of the reads, i.e. the de novo step only generated very short contigs. On the other hand, the de novo + learning and learning pipelines generated similar amounts of errors with closely related reference sequences used as guides. With more distant reference sequences, the de novo + learning pipeline produced fewer errors than the learning pipeline. While both pipelines benefit from an increase in read coverage, the de novo + learning pipeline returned the lowest number of errors with distant references. When the reference sequence was chosen phylogenetically close to the read sequences, i.e. eastern-grey kangaroo and swamp wallaby, and the coverage was set to 5×, all pipelines, except de novo assembly, generated assemblies of length comparable to the truth. With decreasing coverage, the reconstructed sequence length also decreased for all methods. This is particularly noticeable for approaches that use mapping of the reads, as the mapping rate strongly decreases with increasing phylogenetic distance of the reference (Fig. 1). On the other hand, the two methods that use dynamic programming to align the reads were able to reconstruct sequences of length comparable to the western-grey amplicon using distant references (Fig. 2). It is noticeable that in these cases the variance of both the length and the error rate for the mapping-based pipelines is comparatively very high. This is highly likely to be the consequence of the higher variance in the mapping rate for these pipelines and it may indicate that the mapping-based methods are more sensitive to a non-uniform coverage of the re-sampled reads. Moreover, the variation between the different mitochondrial genomes is not uniformly distributed and the mapping of the reads would be more difficult when they originate from highly divergent regions.
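A minimal sketch of the two evaluation statistics described at the start of this section is given below. It assumes the global pairwise alignment (e.g. Needleman-Wunsch with end gaps unpenalised) has already been produced as two equal-length gapped strings; treating every terminal column that contains a gap as part of the unaligned overhang is a simplification made here for illustration.

```python
def evaluation_stats(aln_recon, aln_truth, gap="-"):
    """Error count (Hamming distance) and compared length from two aligned, gapped strings."""
    assert len(aln_recon) == len(aln_truth)
    cols = list(zip(aln_recon, aln_truth))
    # discard the non-aligned overhangs at both ends of the alignment
    while cols and gap in cols[0]:
        cols.pop(0)
    while cols and gap in cols[-1]:
        cols.pop()
    length = len(cols)                       # reported reconstructed length
    errors = sum(a != b for a, b in cols)    # mismatches (and internal indel columns)
    return errors, length

print(evaluation_stats("--ACGTAC-GT", "GGACGTACTGT"))  # (1, 9)
```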
Comparison to iterative referencing
Additionally, an iterative mapping approach was implemented by repeating the mapping pipeline five times, using the updated reference obtained at the previous iteration. This approach was tested with the Tasmanian devil reference sequence at coverage 5×, as it is expected that the best improvements would be obtained with higher coverage. As expected, iterative mapping improved the sequence reconstruction (Table 2). Each additional iteration of the mapping of the reads allowed the error rate to decrease as more reads could be mapped. However, the improvements were limited. After five iterations, the error rate and the length of the reconstructed sequence were still worse than the ones obtained with the de novo + learning pipeline (Fig. 2). Similar limited improvements were obtained using the other reference sequences and coverage values. No improvements in the number of bases covered were observed after three iterations for the eastern-grey kangaroo and swamp wallaby references, and after eight iterations for the more distant references (Fig. 4).
Fig. 4: Increasing the number of mapping iterations of the same reads does improve the number of aligned reads, measured as the number of bases covered, but only to a limited extent. The short-reads originate from an amplicon of length 5,130 bp with 5× coverage, therefore the expected number of bases covered is ∼ 25,000 (dashed line).
Table 2: Iterative mapping lowers the error rate and the length of the reconstructed sequences.
Assembly of Macropus fuliginosus mitochondrial genome
To demonstrate the applicability of the method, a full mitochondrial genome was assembled from short-reads using a sister species reference sequence. At the time of this study, the western-grey kangaroo mitochondrial genome is only partial and lacks the hyper-variable region (Genbank accession KJ868120) [11]. We used our method to reconstruct the full mitochondrial genome of the individual identified as "KA" in [11]. First, the partial mitochondrial genome of the western-grey kangaroo was completed using the eastern-grey kangaroo reference (Genbank accession NC_027424), generating a hybrid full genome template. The sequencing reads generated from three western-grey kangaroo mitochondrial amplicons, of length 4641 bp, 4152 bp and 5140 bp (83% of the genome, [11]), were then aligned to this reference template using Nucleoveq. One of the amplicons fully spans the missing region in the western-grey kangaroo mitochondrial genome reference. Reads were sub-sampled so as to obtain a coverage of 5×. Because the coverage was low, ten iterations were conducted to ensure that the reference was fully covered by randomly sampled reads. The ten replicates of the mitochondrial genome assembly were aligned with an average of 99% identity. Visual inspections of the alignment of the replicates showed that these differences occurred in regions with no coverage. The consensus sequence of the ten replicates was compared to the high-coverage assembly of the mitochondrial genome from [11].
As expected, some errors were observed at the beginning or end of the three mitochondrial amplicons. Because the short-read coverage was extremely low in these regions, it was very unlikely that the sub sampling of the reads retrieved these sequences. A new mitochondrial genome was generated by correcting the consensus sequence with the high coverage information. The newly assembled western-grey mitochondrial genome was annotated in Geneious version 10.2.4 [13] using the eastern-grey kangaroo mitochondrial genome as a reference. The western-grey complete mitochondrial genome is on Genbank under accession number MH717106. By iteratively aligning short sequencing reads and updating the reference sequence, we were able to improve the reconstruction of the read sequence, resulting in assemblies of comparable length to the truth while limiting the number of errors. The improvement of this dynamic alignment method over de Bruijn graph- or the mapping-based approaches tested here can be explained by two factors. First, the alignment rate is higher when using dynamic programming over the Burrows-Wheeler transform approach used for mapping the reads. Second, the progressive modifications of the reference, as reads are aligned onto it, facilitate the alignment of the following reads because the reference is continuously pulled closer to the reads sequence [9]. This is particularly useful when only a phylogenetically distant reference sequence is available for a reference-guided assembly. Actually, our results showed that the static mapping of the reads is not possible when the reference is too distant from the reads, as demonstrated by a very low mapping rate. The drawback of our dynamic programming method for read alignment is memory usage. The memory required to build the alignment matrix M (see Methods) precludes the direct usage of this method for large genome assemblies. While our approach is relevant to small genome assemblies, e.g. mitochondrial, supplementary work would be required to adapt this approach to large genome read alignments. For example, while it is not possible to directly align the reads to a large genome, a first search could help identify short windows, i.e. few thousands bases, in the reference sequence where the reads could then be aligned more accurately by our algorithm. In the current implementation of the method, it is optionally possible to take advantage of the known mapping positions of the reads by passing a mapping file as argument. This technique can massively reduce the memory requirements as only a window of specified size around these positions will be considered for performing the alignment. Our algorithm could also be combined with other methods to find the potential locations of each read in the genome prior to performing the alignments. The seed-based algorithm used by Blast [14] or some kmer-based seed searches [15, 16] are obvious candidates. However, when the reference sequence is distant from the reads, it is not possible to initially map all the reads onto it. It is therefore inevitable to re-align or re-map these reads once the reference has been partially updated. Our method improves previous dynamic reference building approaches in that it allows the reference to be updated with insertions and deletions. Previously, Liao and co-authors [15] proposed a seed and vote approach to locate indels. [9] proposed a dynamic mapping approach where the reference is iteratively updated with the read sequences but indels were not fully supported [17]. 
Our method not only locates but also aligns and corrects the reference sequence with indels, further facilitating the subsequent read alignments. This approach comes at the computational cost of realigning each read onto the reconstructed reference. However, in our algorithm each read is treated independently and the updates of the reference are only performed according to the information from one read at a time. This is different from graph-based and iterative referencing methods that need all reads to be aligned before calling the variants. As a consequence, parallelization may be used to distribute batches of reads to be analysed independently prior to merging the several assemblies. The threshold limit for performing insertions and deletions was set to be equal to the learning rate (see Methods). Therefore, indels will not be performed when the read alignment is poor. However, there is no particular reason to use this value and other values could be used based on other statistics. Preliminary tests (data not shown) indicated that this value nevertheless returned the best assemblies. Similarly, the indel cost was set equal to the maximum possible distance between a pair of nucleotide vectors. Preliminary tests using grid search showed that similar results were obtained while varying their values (data not shown). However, these hyper-parameters could also be set to depend on other parameters measured on the data, and further investigations could be conducted to explore these possibilities. Finally, the learning rate hyper-parameter was set to depend on the alignment distance. Classically in machine learning algorithms, the learning rate is set to decay through the learning process [18, 19]. Conversely, in our algorithm, it is expected that the rate will increase as the reference sequence gets closer to the reads. Alternative learning rate schedules could be tested, for example the cyclic methods proposed by [20] for training deep neural networks. Moreover, we only considered one epoch for learning, i.e. one iteration over the full set of reads. In other words, the total read set is only seen once to learn the amplicon sequence. Because the reads are chosen in a random order, the assembled sequence will potentially differ between distinct runs of the algorithm and there is no guarantee of converging on the best assembly. Performing the learning over multiple epochs could potentially improve the convergence among runs at the cost of processing time. The presented method can therefore improve assemblies in experiments with low sequencing coverage of the input DNA material. While it is not common to design targeted sequencing strategies with low coverage, such situations can nevertheless be encountered, for example when only a low amount of DNA is available, e.g. in ancient DNA studies or under challenging DNA extraction conditions. Moreover, assemblies are sometimes conducted from experiments that were designed for different purposes. For instance, the reads obtained for a transcript sequencing experiment could be used to sequence the mitochondrial genome of a species lacking a reference [21]. Permitting assembly from a lower number of reads would therefore allow researchers to extract more information from sequencing experiments.
Learning from dynamic programming alignment of the reads to the reference
In essence, the algorithm consists in aligning the reads to the reference using dynamic time warping.
Then, an "average" sequence of the aligned region is computed from the best path of the local free-ends alignment [22]. This approach was originally designed to perform unsupervised clustering of bioacoustic sequences [23]. In this work, a similar algorithm is implemented to analyse nucleotide sequences: each nucleotide position in a sequence is represented as a four-element vector, the Voss representation [24], encoding the probability of each base according to previously aligned reads. This numerical representation of DNA sequences is appropriate for the comparison of DNA sequences [25] and their classification [26]. In molecular biology, a similar algorithm has been applied to the clustering of amino acid sequences [27], where vector quantization is used to estimate the probability density of amino acids. In the area of genomic signal processing, dynamic time warping approaches have been successful at classifying various representations of genomic data [28–31]. We consider two sequences of nucleotide vectors, a reference $F=f_1...f_l$ and a read $R=r_1...r_n$, respectively representing the reference sequence of length $l$ and a read of length $n$ aligned onto it. The vectors $f_x$, where $1\leq x\leq l$, and $r_y$, where $1\leq y\leq n$, represent the probability vectors of each nucleotide at position $x$ in the reference and position $y$ in the read, respectively. Through a statistical learning process and vector quantization, the reference sequence vectors are updated according to the sequencing read nucleotides. Ultimately, the goal is to reconstruct, i.e. assemble, the original sequence $S$ which the reads come from. A probability vector $r_y$ is calculated according to the quality scores of each base at position $y$ in the read, with equal probability given to the alternative bases. More precisely, if the base $b$ was called with calling error probability $q$ at position $y$, then $r_{yb}=1-q$ and $r_{yb'}=q/3$ for $b'$ in $\{1..4\}\setminus\{b\}$. At initialisation, all $f_x$ are binary vectors defined by the reference sequence. Additionally, a "persistence" vector $P=p_1...p_l$, where the $p_i$ for $1\leq i\leq l$ are all initialised to 1, is updated when indels occur at each nucleotide position in the reference. The distance between a pair of nucleotide vectors is defined as
$$d(f_{x},r_{y}) = d([f_{x1},f_{x2},f_{x3},f_{x4}],\,[r_{y1},r_{y2},r_{y3},r_{y4}]) = |f_{xi}-r_{yi}| \quad \text{for } i=\operatorname{argmax}_{j}\, r_{yj},\ j=1...4.$$
Therefore, only the nucleotide with the highest probability in the read is taken into account. A dynamic programming approach is used to align the reads to the reference sequence. Let $M(x,y)$ be the minimum edit distance over all possible suffixes of the reference from position 1 to $x$ and the read from position 1 to $y$:
$$M(x,0) = 0 \quad \text{for } 0 \leq x \leq l,$$
$$M(0,y) = c\, y \quad \text{for } 1 \leq y \leq n,$$
$$M(x,y) = \min \begin{cases} M(x-1,y-1) + d(f_{x-1},r_{y-1}) \\ M(x-1,y) + c \\ M(x,y-1) + c \end{cases} \quad \text{for } 1 \leq x \leq l \text{ and } 1 \leq y \leq n,$$
where the insertion/deletion cost is $c=1$. The three elements correspond to the three edit operations: insertion, deletion and substitution. The value $e_{FR}=\min_{1\leq x\leq l}M(x,n)$ is therefore an edit distance between the read and the reference sequence of nucleotide vectors. It is then normalised by the length of the read to obtain a read "edit rate", $\hat{e}_{FR}$.
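The following is a small, self-contained sketch of the distance and of the free-end-gap dynamic programming recurrence written out above (the traceback used to update the reference, described next, is omitted). It is an illustration of the equations rather than the actual Nucleoveq implementation; read vectors are built from the called base and its error probability exactly as in the text.

```python
import numpy as np

BASES = "ACGT"

def read_vector(base, q_error):
    """Probability vector for a called base with calling error probability q_error."""
    v = np.full(4, q_error / 3.0)
    v[BASES.index(base)] = 1.0 - q_error
    return v

def d(f_x, r_y):
    """Distance between two nucleotide vectors: only the read's most likely base is compared."""
    i = int(np.argmax(r_y))
    return abs(f_x[i] - r_y[i])

def align_read(F, R, c=1.0):
    """Free-end-gap DP alignment of read R (n x 4) onto reference F (l x 4).

    Returns the DP matrix M and the normalised edit rate of the best end position.
    """
    l, n = len(F), len(R)
    M = np.zeros((l + 1, n + 1))
    M[0, 1:] = c * np.arange(1, n + 1)   # M(0, y) = c * y
    # M(x, 0) stays 0: the read may start anywhere along the reference
    for x in range(1, l + 1):
        for y in range(1, n + 1):
            M[x, y] = min(M[x - 1, y - 1] + d(F[x - 1], R[y - 1]),  # substitution/match
                          M[x - 1, y] + c,                          # gap in the read
                          M[x, y - 1] + c)                          # gap in the reference
    e_FR = M[1:, n].min()                # best alignment over all reference end positions
    return M, e_FR / n                   # normalised edit rate

# toy usage: a 4-base read aligned to a 6-base reference
F = np.array([read_vector(b, 0.0) for b in "ACGTAC"])
R = np.array([read_vector(b, 0.01) for b in "GTAC"])
print(round(align_read(F, R)[1], 3))
```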
The optimal path is traced back and, at each position, the reference vector is updated. In the case of a substitution, $f_x=w\,f_x+(1-w)\,r_y$ with a learning rate $w$ (see below). In the case of a deletion or an insertion, $f_x$ remains unchanged but the corresponding position in the persistence vector decreases or increases, respectively, by an amount equal to $(1-w)$. Then, the persistence value is assessed against a threshold: if $p_x>1+w$ or $p_x<1-w$, then an insertion or a deletion is performed at position $x$ in the reference sequence. For insertions, the inserted nucleotide vector is initialised to the value $r_y$, i.e. the nucleotide probability vector at position $y$ of the read aligned to the inserted position in the reference. All the reads are chosen in random order and sequentially aligned to the reference sequence according to this procedure (Fig. 5).
Fig. 5: Overview of the algorithm. Reads are taken in random order and iteratively aligned to the reference. After each alignment, the reference sequence is updated according to the learning rate w, which is proportional to the normalised edit distance between the read and the reference. In this case, there is one substitution between the reference and the read; the read has a G with Phred quality score of 15 while the reference is T. One deletion and one insertion are treated thanks to a persistence vector. The persistence value p indicates the tendency of a base to be inserted or deleted at each position in the reference. This value can trigger an indel update in the reference when it goes beyond a threshold.
The learning rate $(1-w)$ is set to depend on the edit rate and governs how much the reference is updated. For low values of $(1-w)$ the reference mostly remains unmodified. When the distance between the read and the reference is low, there is high certainty in the positioning of the read onto the reference. Therefore, the learning rate can be increased to facilitate the update of the reference toward the sequence of the read. On the other hand, when the alignment of the read is more difficult, i.e. the edit distance is high, the learning rate is set to a low value so that the reference is only slightly updated and misalignments or errors in the read sequence do not affect the learning process. Computer simulations were conducted in order to determine the distribution of the edit distances between reads and increasingly divergent reference sequences. First, a nucleotide sequence of length drawn from $\mathcal{U}(500,5000)$ was generated by randomly choosing nucleotides with 50% GC content. A read sequence of length 150 was generated by randomly choosing a position in the original sequence and using an error rate of 1%, with the errors uniformly distributed along the sequence. Then, mutations were introduced in the original sequence at a rate of {1, 5, 10, 30, 50}%, and single-nucleotide indels were introduced at a rate of 10%. Additionally, random reference sequences of similar length were generated to build a random distribution of the distance. The process was repeated 1,000 times (Fig. 6).
Fig. 6: Distribution of the normalised edit distance between reads and increasingly distant reference sequences. The mutation rate of the reference sequence is indicated on the y-axis. The top row (Random) shows the distribution of the edit distance when reads were aligned to randomly generated nucleotide sequences. For the lowest row, the reads were aligned to their original sequence, and the departure from 0 of the edit distance only results from the simulated sequencing errors.
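Below is a hedged sketch of the per-read update rules described at the start of this section (and illustrated in Fig. 5): substitutions blend the reference vector toward the read, while insertions and deletions adjust the persistence values and can trigger an indel once the threshold is crossed. The encoding of the traceback as (x, y, op) tuples and the deferral of the actual splicing of the reference are choices made here for illustration; they are not prescribed by the text.

```python
import numpy as np

def update_reference(F, P, path, R, w):
    """Update reference vectors F and persistence values P along one traced-back path.

    F: list of 4-element reference probability vectors, P: list of persistence values
    (initialised to 1.0), R: read probability vectors, path: list of (x, y, op) with
    1-based positions and op in {"sub", "del", "ins"}. Returns the positions where an
    indel was triggered; splicing them into F and P is left out of this sketch.
    """
    deletions, insertions = [], []
    for x, y, op in path:
        if op == "sub":                             # blend the reference toward the read
            F[x - 1] = w * F[x - 1] + (1 - w) * R[y - 1]
        elif op == "del":                           # the read lacks this reference base
            P[x - 1] -= (1 - w)
            if P[x - 1] < 1 - w:
                deletions.append(x)
        elif op == "ins":                           # the read carries an extra base here
            P[x - 1] += (1 - w)
            if P[x - 1] > 1 + w:
                # the inserted vector is initialised from the read, as in the text
                insertions.append((x, np.asarray(R[y - 1], dtype=float)))
    return deletions, insertions
```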
From the empirical distributions of the distance (Fig. 6), the learning rate was determined to be equal to 0.95 when the distance is below 0.05, which corresponds to the range of distances expected due to sequencing errors. It is set to 0.05 when the distance is above 0.35, i.e. the distance expected when the read and the reference sequence have less than 70% sequence similarity. Between normalised edit distances of 0.05 and 0.35, the rate was set to vary linearly, i.e. $w=3 \times \frac{\hat{e}_{FR}}{n} - 0.1$.
Five assembly pipelines
First, the whole set of reads, with an average coverage of ∼ 2000×, was mapped to the eastern-grey kangaroo to determine the western-grey kangaroo mitochondrial sequence for the amplicon (see [11] for details). Then, five different bioinformatic pipelines were tested at lower coverage. At first, the reads were preprocessed before running each pipeline: Illumina adapters and low quality bases were removed (Trimmomatic version 0.36, [32]) using a sliding window of 15 nucleotides with steps of four bases, and the resulting reads below length 36 were discarded. Additionally, kmer error correction was performed using Tadpole (BBMap version 37.95, Brian Bushnell). The five assembly pipelines (Fig. 7) are described below:
Mapping was performed using Bowtie2 version 2.2.6 [33]. Both "local" alignment (with soft clipping) and "end-to-end" alignment of the reads were tested. In general, local alignment resulted in higher alignment rates and was therefore used in all simulations. Once the reads were aligned to the reference, Samtools version 1.5 [34] was used to order the reads. Freebayes version 1.1.0 [35] then allowed us to identify variants. Calls with a high probability of being false positives, Phred score < 20, were removed with Vcffilter (Vcflib version 1.0.0) [36]. The consensus sequence was generated using Bcftools version 1.6 [34] by applying the alternative variants to the reference sequence. Finally, the uncovered parts at the beginning and at the end of the reference were removed.
Learning consisted in iteratively aligning the reads and dynamically updating the reference according to the machine learning approach previously described; the algorithm is implemented in Nucleoveq [10]. For these simulations, all the reads were aligned to the reference and no prior information about the mapping position was utilised to perform read alignments. At the end of the learning process, the uncovered regions located at the beginning and end of the reference were truncated to generate the final assembly.
De novo assembly was done with Trinity version 2.4.0 [37], using a kmer size of 17 and setting the minimum contig length to 100 so that assembly could be performed when coverage was very low. After assembly, the longest contig was selected for evaluation.
De novo + mapping consisted in mapping all the de novo assembly contigs obtained from Trinity to the reference in an effort to connect them into a longer sequence. The same approach as for the mapping pipeline was used to generate the consensus.
De novo + learning consisted in feeding all the de novo assembly contigs obtained from Trinity to our machine learning algorithm. The same steps as for the above learning pipeline were performed while regarding the contigs instead of the reads as input.
Fig. 7: Five bioinformatic pipelines for assembly. Dashed line: it is possible to pass a priori mapping positions of the reads to Nucleoveq to decrease memory requirements and speed up computation (option not used in the reported comparisons).
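A sketch of the piecewise learning-rate rule quoted just before the "Five assembly pipelines" heading is given below. Following the text, the linear formula returns the weight w and the learning rate applied to the reference is (1 − w); the edit rate is taken here to be already normalised by the read length, which folds the division by n in the printed formula into the input.

```python
def weight_from_edit_rate(e_hat):
    """Weight w as a function of the normalised edit rate; the reference update uses 1 - w.

    1 - w is 0.95 for distances below 0.05 (sequencing-error range) and 0.05 for
    distances above 0.35 (< 70% similarity); in between, w follows the quoted
    linear interpolation.
    """
    if e_hat < 0.05:
        return 0.05            # learning rate 1 - w = 0.95
    if e_hat > 0.35:
        return 0.95            # learning rate 1 - w = 0.05
    return 3.0 * e_hat - 0.1   # rises linearly from 0.05 to 0.95

for e in (0.02, 0.05, 0.20, 0.35, 0.50):
    print(e, round(weight_from_edit_rate(e), 2))
```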
Software
Nucleoveq is freely available at https://github.com/LouisRanjard/nucleoveq. Sequencing reads are available on the Sequence Read Archive (SRA: SRP121381, BioProject: PRJNA415669).
Abbreviations
indels: insertions and deletions
References
1. Miller JR, Koren S, Sutton G. Assembly algorithms for next-generation sequencing data. Genomics. 2010; 95(6):315–27.
2. Rausch T, Koren S, Denisov G, Weese D, Emde A-K, Döring A, Reinert K. A consistency-based consensus algorithm for de novo and reference-guided sequence assembly of short reads. Bioinformatics (Oxford, England). 2009; 25(9):1118–24.
3. Lischer HEL, Shimizu KK. Reference-guided de novo assembly approach improves genome reconstruction for related species. BMC Bioinformatics. 2017; 18(1):474.
4. Otto TD, Sanders M, Berriman M, Newbold C. Iterative Correction of Reference Nucleotides (iCORN) using second generation sequencing technology. Bioinformatics (Oxford, England). 2010; 26(14):1704–7.
5. Tsai IJ, Otto TD, Berriman M. Improving draft assemblies by iterative mapping and assembly of short reads to eliminate gaps. Genome Biol. 2010; 11(4):41.
6. Dutilh BE, Huynen MA, Gloerich J, Strous M. Iterative Read Mapping and Assembly Allows the Use of a More Distant Reference in Metagenome Assembly. In: Handbook of Molecular Microbial Ecology I. Hoboken: John Wiley & Sons, Inc.: 2011. p. 379–85.
7. Ghanayim A. Iterative referencing for improving the interpretation of DNA sequence data. Technical Report CS-2013-05, Technion, Computer Science Department. 2013. http://www.cs.technion.ac.il/users/wwwb/cgi-bin/tr-get.cgi/2013/CS/CS-2013-05.pdf.
8. Hahn C, Bachmann L, Chevreux B. Reconstructing mitochondrial genomes directly from genomic next-generation sequencing reads – a baiting and iterative mapping approach. Nucleic Acids Res. 2013; 41(13):129.
9. Břinda K, Boeva V, Kucherov G. Dynamic read mapping and online consensus calling for better variant detection. arXiv. 2016:1–21.
10. Ranjard L. Nucleoveq. GitHub. 2018. https://github.com/LouisRanjard/nucleoveq.
11. Ranjard L, Wong TKF, Rodrigo AG. Reassembling haplotypes in a mixture of pooled amplicons when the relative concentrations are known: A proof-of-concept study on the efficient design of next generation sequencing strategies. PLoS ONE. 2018; 13(4):0195090.
12. Wong TKF, Ranjard L, Lin Y, Rodrigo AG. HaploJuice: Accurate haplotype assembly from a pool of sequences with known relative concentrations. bioRxiv. 2018:307025.
13. Kearse M, Moir R, Wilson A, Stones-Havas S, Cheung M, Sturrock S, Buxton S, Cooper A, Markowitz S, Duran C, Thierer T, Ashton B, Meintjes P, Drummond A. Geneious Basic: An integrated and extendable desktop software platform for the organization and analysis of sequence data. Bioinformatics. 2012; 28(12):1647–9.
14. Altschul SF, Gish W, Miller W, Myers EW, Lipman DJ. Basic local alignment search tool. J Mol Biol. 1990; 215(3):403–10.
15. Liao Y, Smyth GK, Shi W. The Subread aligner: fast, accurate and scalable read mapping by seed-and-vote. Nucleic Acids Res. 2013; 41(10):108.
16. Břinda K, Sykulski M, Kucherov G. Spaced seeds improve k-mer-based metagenomic classification. Bioinformatics. 2015; 31(22):3584–92.
17. Břinda K, Boeva V, Kucherov G. Ococo: an online consensus caller. arXiv preprint arXiv:1712.01146. 2017.
18. Ranjard L, Withers SJ, Brunton DH, Ross HA, Parsons S. Integration over song classification replicates: Song variant analysis in the hihi. J Acoust Soc Am. 2015; 137(5):2542–51.
19. Ruder S. An overview of gradient descent optimization algorithms. arXiv preprint arXiv:1609.04747. 2016.
20. Smith LN. Cyclical Learning Rates for Training Neural Networks. arXiv preprint arXiv:1506.01186. 2015.
21. Ranjard L, Wong TKF, Kulheim C, Rodrigo AG, Ragg NLC, Patel S, Dunphy BJ. Complete mitochondrial genome of the green-lipped mussel, Perna canaliculus (Mollusca: Mytiloidea), from long nanopore sequencing reads. Mitochondrial DNA Part B. 2018; 3(1):175–6.
22. Ranjard L, Ross HA. Unsupervised bird song syllable classification using evolving neural networks. J Acoust Soc Am. 2008; 123(6):4358–68.
23. Ranjard L, Withers SJ, Brunton DH, Parsons S, Ross HA. Geographic patterns of song variation reveal timing of song acquisition in a wild avian population. Behav Ecol. 2017; 28(4):1085–92.
24. Voss RF. Evolution of long-range fractal correlations and 1/f noise in DNA base sequences. Phys Rev Lett. 1992; 68(25):3805–8.
25. Mendizabal-Ruiz G, Román-Godínez I, Torres-Ramos S, Salido-Ruiz RA, Morales JA. On DNA numerical representations for genomic similarity computation. PLoS ONE. 2017; 12(3):0173288.
26. Mendizabal-Ruiz G, Román-Godínez I, Torres-Ramos S, Salido-Ruiz RA, Vélez-Pérez H, Morales JA. Genomic signal processing for DNA sequence clustering. PeerJ. 2018; 6:4264.
27. Olshen AB, Cosman PC, Rodrigo AG, Bickel PJ, Olshen RA. Vector quantization of amino acids: Analysis of the HIV V3 loop region. J Stat Plan Infer. 2005; 130(1-2):277–98.
28. Legrand B, Chang CS, Ong SH, Neo S-Y, Palanisamy N. Chromosome classification using dynamic time warping. Pattern Recogn Lett. 2008; 29(3):215–22.
29. Skutkova H, Vitek M, Babula P, Kizek R, Provaznik I. Classification of genomic signals using dynamic time warping. BMC Bioinformatics. 2013; 14(Suppl 10):1.
30. Skutkova H, Vitek M, Sedlar K, Provaznik I. Progressive alignment of genomic signals by multiple dynamic time warping. J Theor Biol. 2015; 385:20–30.
31. Loose M, Malla S, Stout M. Real-time selective sequencing using nanopore technology. Nat Methods. 2016; 13(9):751–4.
32. Bolger AM, Lohse M, Usadel B. Trimmomatic: a flexible trimmer for Illumina sequence data. Bioinformatics. 2014; 30(15):2114–20.
33. Langmead B, Salzberg SL. Fast gapped-read alignment with Bowtie 2. Nat Methods. 2012; 9(4):357–9.
34. Li H. A statistical framework for SNP calling, mutation discovery, association mapping and population genetical parameter estimation from sequencing data. Bioinformatics. 2011; 27(21):2987–93.
35. Garrison E, Marth G. Haplotype-based variant detection from short-read sequencing. 2012.
36. Garrison E. Vcflib: a simple C++ library for parsing and manipulating VCF files. GitHub. 2016. https://github.com/vcflib/vcflib.
37. Grabherr MG, Haas BJ, Yassour M, Levin JZ, Thompson DA, Amit I, Adiconis X, Fan L, Raychowdhury R, Zeng Q, Chen Z, Mauceli E, Hacohen N, Gnirke A, Rhind N, di Palma F, Birren BW, Nusbaum C, Lindblad-Toh K, Friedman N, Regev A. Full-length transcriptome assembly from RNA-Seq data without a reference genome. Nat Biotechnol. 2011; 29(7):644–52.
Acknowledgements
We thank the anonymous reviewers for their constructive comments and ideas, which helped to improve the manuscript. This research was funded by an Australian Research Council Discovery Project Grant #DP160103474. The funding body had no role in the design of the study and collection, analysis, and interpretation of data and in writing the manuscript.
The Research School of Biology, The Australian National University, Canberra, Australia: Louis Ranjard, Thomas K. F. Wong & Allen G. Rodrigo
LR conceived the ideas; LR, TKFW and AGR designed methodology; LR wrote the code, analysed the data and led the writing of the manuscript. All authors contributed critically to the drafts and gave final approval for publication. Correspondence to Louis Ranjard.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Ranjard, L., Wong, T.K.F. & Rodrigo, A.G. Effective machine-learning assembly for next-generation amplicon sequencing with very low coverage. BMC Bioinformatics 20, 654 (2019). doi:10.1186/s12859-019-3287-2
Two bodies are in equilibrium when suspended in water from the arms of a balance. The mass of one body is $36\ g$ and its density is $9\ g/cm^{3}$. If the mass of the other is $48\ g$, its density in $g/cm^{3}$ is:
The correct option is C: $3$.
For the balance to stay in equilibrium, the net downward force (weight minus buoyant force) must be the same for both bodies. For the body of mass $36\ g$ (taking $\rho_w = 1\ g/cm^3$ and $g = 10$):
$$F_{net} = 36\times 10 - \frac{36}{9}\,\rho_w \times 10 = 36\times 10\left[\frac{8}{9}\right] = 320.$$
For the body of mass $48\ g$ and density $\rho$:
$$320 = 480 - \frac{48\times 1\times 10}{\rho} \implies \frac{480}{\rho} = 160 \implies \rho = \frac{480}{160} = 3.$$
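A quick numerical check of this buoyancy argument (same assumptions as above: $g = 10$ and water density $1\ g/cm^3$; units are left in the mixed gram-centimetre convention used by the solution):

```python
g, rho_w = 10, 1.0   # gravitational factor and water density, as assumed above

def net_force(mass, density):
    """Apparent weight in water: weight minus buoyant force on the displaced volume."""
    volume = mass / density
    return mass * g - volume * rho_w * g

target = net_force(36, 9)                   # 320 for the first body
rho = 48 * rho_w * g / (48 * g - target)    # solve net_force(48, rho) == target
print(target, rho)                          # 320.0 3.0
```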
What's wrong with this "proof" that Gödel's first incompleteness theorem is wrong? Edit: I've added an answer myself, based on the other answers and comments. Here is a very very informal "proof" (sketch) that Gödel's theorem is wrong (or at least that the idea of the proof is wrong) : Roughly, the proof of Gödel's theorem is as follows: For any decidable and consistent set of axioms $\Phi$ that contain (or imply) the first order Peano's axioms in first order language, we can construct a Gödel sentence $G^\Phi$, such that neither $\Phi\vdash G^\Phi$ nor $\Phi\vdash \neg G^\Phi$, but where we know from an argument in the meta-language that $G^\Phi$ is true. For any such $\Phi$, we will therefore have a counterexample to the completeness of the theory $Th(\Phi)$. Therefore we know that no such $\Phi$ can be complete (where complete means that all first order statements can be proven or disproven from it). Here is a failed proposal to "circumvent" Gödel's theorem that I have already heard someone make: Just define $\Phi_1=\Phi\cup \{G^\Phi\}$, and since we know that $G^\Phi$ is true in arithmetic, we know that $\Phi_1$ is consistent. The problem of course is: We can now formulate a new Gödel sentence $G^{\Phi_1}$, which cannot be proven from in $\Phi_1$, even though it is true in standard arithmetic. Now here is my proposal: Rather than trying to add individual Gödel sentences to the set of axioms, we simply take the enumeration procedure, such that it enumerates $\phi_i \in \Phi$ for the original set of axioms $\Phi$, and also enumerates all successive Gödel sentences $G^{\Phi}, G^{\Phi_1}, G^{\Phi_2},...$. This is possible, since $\Phi$ is decidable, and decidable sets of finite strings are enumerable, so we can enumerate them successively, as $\phi_1, \phi_2, \phi_3$, where $\phi_1$ is the first statement of the enumeration of $\Phi$, and $\phi_2 = G^\Phi$, and $\phi_3$ is the second statement of the enumeration of $\Phi$, etc... We can then define the set of axioms $\Phi_\infty = \{\phi_1, \phi_2, ...\}$. This will also have a Gödel sentence $G^{\Phi_\infty}$. But what we can simply do, is add this to the enumeration procedure as well. And then the next one, and the next, and so forth. We take this process to infinity, just as we did for $\Phi$, and just keep going. Every time a Gödel sentence pops up, we simply add it to the enumeration. Now note that: since the set of first order sentences is countable, the set of Gödel sentences is countable as well (since it is a subset of the set of first order sentences). Therefore we can in this procedure described above enumerate all possible Gödel sentences. The resulting set of sentences forms an enumerable and consistent set of sentences $\Psi$ that contains the original $\Phi$, and additionally contains the Gödel sentences of all possible sets of axioms $\Phi_x$. Therefore The Gödel sentence of $\Psi$ must be in $\Psi$ itself. Moreover, we can then create a "decidable version" of $\Psi$, by defining $\Psi^*=\{\psi_1, \psi_1 \land \psi_2, \psi_1 \land \psi_2\land \psi_3, ... \}$, for all $\psi_1, \psi_2,... \in \Psi$. We therefore have a consistent and decidable set of first order sentences that are true in standard arithmetic, contain Peano's axioms, and bypass Gödel's proof of incompleteness. This is obviously a contradiction with Gödel's theorem. So where is my "proof sketch" wrong? 
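As a toy illustration of the $\Psi^*$ construction above (the "decidable version" built from conjunctions), the snippet below treats formulas as opaque strings and only shows the bookkeeping: the $k$-th element of the new list is the conjunction of the first $k$ axioms, which is why membership in $\Psi^*$ stays decidable whenever $\Psi$ is merely enumerable. The axiom names are placeholders, not actual Peano axioms.

```python
from itertools import islice

def craig_conjunctions(axiom_stream):
    """Yield psi_1, (psi_1 & psi_2), (psi_1 & psi_2 & psi_3), ... from an enumeration."""
    conjuncts = []
    for formula in axiom_stream:
        conjuncts.append(f"({formula})")
        yield " & ".join(conjuncts)   # the k-th output has exactly k conjuncts

axioms = (f"phi_{i}" for i in range(1, 1000))   # placeholder axiom names
print(list(islice(craig_conjunctions(axioms), 3)))
# ['(phi_1)', '(phi_1) & (phi_2)', '(phi_1) & (phi_2) & (phi_3)']
```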
fake-proofs incompleteness
Related: math.stackexchange.com/questions/1703489/… – Henning Makholm Apr 1 '18 at 10:54
Please, pay attention to this: when you say "the proof of Gödel's theorem is as follows: For any decidable and consistent set of axioms Φ that contain (or imply) the first order Peano's axioms..." you are omitting the fact that actually Gödel's first incompleteness theorem holds for every semidecidable (which is more general than decidable) and consistent set of first-order axioms that imply the Peano axioms. – Taroccoesbrocco Apr 1 '18 at 11:10
@CarlMummert - Do you refer to Craig's theorem? I had forgotten it, thank you for the reminder. Sorry for my worthless comment. – Taroccoesbrocco Apr 1 '18 at 22:01
Yes, that is the theorem. Your comment was something that seems like it should be right, so many people have thought it was right over time. The key point, of course, is that we can move to a different set of axioms for the same theory. – Carl Mummert Apr 1 '18 at 22:10
Cool, I didn't know that had a name. Though I'll probably have forgotten the name the next time I need to refer to the result... – Henning Makholm Apr 2 '18 at 11:42
There are at least two problems here. First, when you say "we take this process to infinity and just keep going", that is a very informal description, and without spending some work on making it more concrete you have no good reason to expect it can actually be made to work. Fortunately, such work has in fact been done, and the standard way of making it concrete is to speak about process steps indexed by transfinite ordinal numbers, which I'm going to suppose is what you are proposing. Then, however, a real problem arises: Gödel's procedure only works when the original theory is recursively axiomatized, that is, it is computable whether a given proposed sentence is one of its axioms or not. In order to do this for one of your intermediate theories, you need to be able to algorithmically describe the process that produced the theory. And there is (vividly speaking, but can be made precise) so far to infinity that there are not enough Turing machines to describe the structure of each of the theories you encounter along the way. So at some point you're going to approach a point where the process that produced the axioms you already have is so complex that the combined theory is not recursively axiomatized anymore, and then the incompleteness theorem stops working.
A second, comparatively minor, problem comes later in your argument: "since the set of first order sentences is countable, the set of Gödel sentences is countable as well (since it is a subset of the set of first order sentences). Therefore we can in this procedure described above enumerate all possible Gödel sentences." This argument seems to be of the form: "There are a countable infinity of foos in total; here we have a set of countably-infinite many foos; therefore the set contains all of them", which is not valid -- consider e.g. the situation where a "foo" is a natural number and the set in question contains all the perfect squares. (Note also that you don't seem to have defined what a "possible Gödel sentence" means, which you really ought to do before claiming that you have all of them.) – Henning Makholm
I think you misunderstood the OP regarding the second problem. I think the claim was: the set is countable (as a subset of a countable set), hence it can be enumerated. [The definition of "can be enumerated" probably shifted during the argument.] – Carsten S Apr 1 '18 at 14:31
@CarstenS, yes indeed, my point was that the set of Gödel sentences is countable, and can therefore be enumerated. I made this point because the set of possible sets of sentences is not countable, and hence one might think that since each such set has a Gödel sentence, the set of Gödel sentences is not countable either. I made the point that this is obviously a wrong conclusion, and that instead the set of Gödel sentences is countable, and can therefore potentially be enumerated. – user56834 Apr 1 '18 at 15:31
A question about your point: "And there is (vividly speaking, but can be made precise) so far to infinity that ..." Could you specify what you mean by "far to infinity"? I assume you don't mean that they are uncountable? Also, could you elaborate (at least informally) why the combined theory is not recursively axiomatized anymore? And why there are not enough Turing machines to "describe" these theories? What exactly does "describe" mean here? Represent them in natural numbers? Intuitively it seems to me that the process I described already gives a blueprint for such Turing machines. – user56834 Apr 1 '18 at 15:36
@Programmer2134, Hurkyl's answer addresses your latest questions. The idea is that, as Henning said, if you want to formalize the notion of "going to infinity and then continuing", your theories need to correspond to ordinals (to actually enumerate the axioms, you'll want ordinal notations, but the idea is the same without worrying about that). But whatever ordinal you use for your "final" theory, there's always a bigger ordinal that you haven't used, for the same reason that there's no biggest integer; and your theory will be missing the corresponding axiom. – Robin Saunders Apr 2 '18 at 1:19
There is a limit to how large ordinals can get whilst still possessing a notation, i.e. a way to use that ordinal to computably enumerate axioms (or anything else). The smallest ordinal that has no notation is the Church-Kleene ordinal mentioned by Hurkyl. But there's no largest ordinal that does have a notation, because "take the next ordinal instead" is a computable operation on ordinal notations. – Robin Saunders Apr 2 '18 at 1:22
Taking this argument, refined by Henning Makholm's observation about quantifying it with ordinals, and combining it with the fact that the conclusion of the incompleteness theorem says you can't actually achieve your goal, proves an interesting theorem:
Theorem: There are countable ordinal numbers that cannot be computed by a Turing machine.
I don't think I've encountered this before, but I found some references on this phenomenon. Here are the Wikipedia links:
Recursive ordinal — those well-orderings that can be expressed by computable functions
Church-Kleene ordinal — the first nonrecursive ordinal. It is countable.
Large countable ordinal — more stuff
In my opinion, what's going on here in regard to computable functions is really the same phenomenon as uncountability is in set theory.
Compare, for an infinite set $S$:
"$S$ is recursively enumerable" means "there is a computable bijection $\mathbb{N} \to S$";
"$S$ is countable" means "there exists a bijection $\mathbb{N} \to S$".
The only difference between the two notions is what kind of functions we allow: whether we draw functions from a universe of sets or merely from the universe of Turing machines. In alternative language emphasizing this analogy, Gödel proves that every complete, consistent extension of PA is computably uncountable. The limitations of your argument are the fact that you can't reach the first computably uncountable ordinal $\omega_1^{CK}$. – Hurkyl (edited by Henning Makholm)
In particular, each of the Gödel sentences constructed along the way refers to a particular theory, which itself refers to a particular well ordered set of previous theories. So a "possible Gödel sentence" would include an ordinal notation for some countable ordinal $\eta$ and a sequence of theories $T_\alpha : \alpha < \eta$. This is why we can't enumerate "all possible" Gödel sentences, because every particular r.e. ordinal notation system is bounded strictly below $\omega_1^{CK}$. This is the relevant Wikipedia article: Ordinal notation – Carl Mummert Apr 1 '18 at 21:31
(Actually, that article is pretty incomprehensible, but it should be the relevant article, in a perfect world.) – Carl Mummert Apr 1 '18 at 21:35
@Programmer2134: An ordinal is computable if there is a program that decides a subset of $\mathbb N\times\mathbb N$ such that this subset is an order relation and $\mathbb N$ with this order is order-isomorphic to the ordinal we're talking about. – Henning Makholm Apr 2 '18 at 11:23
@Programmer2134: Recursively enumerable is a standard concept in computability theory. I don't think there's a wide consensus whether "enumerable" should mean "countable" or "recursively enumerable", so people generally just use the two latter terms instead. – Henning Makholm Apr 2 '18 at 11:33
@Programmer2134: Just because the mapping exists doesn't mean that the mapping is computable, which is what is needed here. (Your argument here sounds like it would also prove that every infinite subset of $\mathbb N$ is computable, which we know is not the case). – Henning Makholm Apr 2 '18 at 12:00
We do not in fact need an oracle to tell us the next sentence in the latter sense: the oracle is needed to give a single uniform listing of all the Gödel sentences, but the order in which it produces that list cannot correspond with the ordering of the corresponding ordinals. To illustrate this, recall that your original idea depended on (recursive) lists of ordinals going some way beyond the finite ones. Such lists do indeed exist, but the order of the list cannot correspond to the natural order of the ordinals, since otherwise we'd never get past the finite ones. For example, here's a recursive list of all ordinals below ω+ω (where ω is the first infinite ordinal): 0, ω, 1, ω+1, 2, ω+2, ... ⁽¹⁾ The reference to "the intended model" is needed in order to accommodate the following fact: for any recursively enumerable theory of arithmetic T, there are models of T, i.e. sets of "numbers" together with definitions of 0, 1, +, and · which satisfy the axioms of T, and which are not "the intended model" because e.g. a given Gödel sentence is false in those models. This fact is a result of applying Gödel's completeness theorem (not to be confused with his incompleteness theorems!) to the existence a Gödel sentence. Technically, the well-definedness of "the intended model", and hence of True Arithmetic, depends on the assumption that all sentences in the language of arithmetic do in fact have a "correct" truth value - an assumption that most working mathematicians take for granted, but that some people interested in the foundations of mathematics reject or at least question: see for instance http://jdh.hamkins.org/question-for-the-math-oracle/ ⁽²⁾ If by "bypassing Gödel's theorem" you mean that the list of axioms obtained at the end of this process should be complete, i.e. sufficient for deducing all sentences of True Arithmetic, you might need to be careful about not only which Gödel sentences you list, but what order you list them in: this is the subject of ordinal notations, which I've mostly avoided except to note that the Church-Kleene ordinal, which we're effectively using here, doesn't have a recursively enumerable one. There is more detail on this in the answers to https://mathoverflow.net/questions/67214/pi1-sentence-independent-of-zf-zfconzf-zfconzfconzfconzf-etc - a question close in spirit to yours, though it uses ZFC instead of Peano arithmetic, and the second incompleteness theorem rather than the first. Robin SaundersRobin Saunders Here is a condensation of my thoughts based on the other answers and comments. This enumeration that I proposed can be compared with functions on the natural numbers: While there are a countable number of natural numbers, there are an uncountable number of functions from natural numbers to natural numbers. Since there are a countable number of Turing machines (since a TM must be described by a finite string), there must therefore be an uncountable number of functions from $N$ to $N$ that cannot be computed. The same principle holds for the sequence of godel sentences that I formulated: This sequence of godel sentences "in principle" exists, and is countable, and could therefore "in principle" be made into to an enumerable set of axioms (combined with PA). By " in principle" I mean that if an oracle told us the next godel sentence of this enumeration every time we asked for it, we could, using this oracle, enumerate all the godel sentences in my construction, and thereby "bypass" the limitations implied by godel's theorem. 
However, we don't have an oracle, and have to use a TM to compute these godel sentences. And the problem is, that just as there are incomputable functions $f:N\to N$ due to uncountability of the space of such functions, there are also an uncountable number of orderings on the natural numbers, so that there will necessarily be countable ordinals whose enumeration in terms of natural numbers cannot be computed. In other words if we have a collection with order type equal to such an ordinal, we cannot compute the enumeration of the elements of this collection. (Even though the sentences in this set all consists of finite strings, and the set is countable) Moreover, since Godel's theorem holds for any enumerable set of axioms, we could employ transfinite induction: Let $\alpha(\Phi)$ mean something like: "Either there is a godel sentence $G^\Phi$ for $\Phi$ (which is not implied by $\Phi$), or $\Phi$ is not recursively enumerable". We could use transfinite induction combined with godel's theorem to show that for any ordinal $\beta$, it must hold that $\alpha(\Phi_\beta)$. But if $\Phi_\alpha$ is recursively enumerable, we can always add its godel sentence and those of all its successors to the next $\Phi_\gamma$, so if we continue with this process that I described, adding all the godel sentences together, then we must eventually reach a set $\Psi^*$ whose elements are no longer recursively enumerable. Moreover, if we let $\Psi=\Psi^* / PA$ (where $PA$ are the peano axioms), then $\Psi$ is also not recursively enumerable, yet this set contains only those godel sentences which we know from meta-logical analysis are true in standard arithmetic, but unprovable from $PA$. Hence we can state the following extension of Godel's theorem: Theorem. Let $\Phi$ be a recursively enumerable and consistent set of axioms containing the peano axioms. Then there exists a non-recursively enumerable and countably infinite set of sentences $\Psi$, such that for all $\psi\in \Psi$: Neither $\Phi \vdash \psi$, nor $\Phi \vdash \neg \psi$ This result (insofar as my informal proof sketch is correct) is much stronger than godel's theorem, and follows quite directly from godel's theorem. Essentially it says: There are an infinite amount of sentences that are true in standard arithmetic, but we can never know which sentences they are, let alone prove them from a set of first-order axioms. (we can enumerate a subset of them, even an infinite subset, but there is also an infinite subset that we can never find). EDIT: I think the theorem can be strengthened further: Note that the fact that there exist such a non-enumerable $\Psi$ may by itself not be that interesting, because in principle, it may be that if we take the list of all statements that are true but not provable from $PA$, that this list is suddenly enumerable. ($\Psi$ is a subset of this set, and if a subset is not enumerable, it may still be that the superset is enumerable). But this cannot be the case: Assume that the set $\Omega$ of sentences that are true but unprovable from $PA$ are enumerable. Then we can create a new set $\Phi_\Omega=PA \cup \Omega$, which is enumerable. But then by godel's theorem there exists a godel sentence $G^{\Phi_\Omega}$ such that it, and its negation, are not provable from $\Phi_\Omega=PA \cup \Omega$, but it is still true. But then by the definition of $\Omega$, we have $G^{\Phi_\Omega}\in \Omega$, so that $\Omega \vdash G^{\Phi_\Omega}$. This is a contradiction. Therefore, $\Omega$ cannot be enumerable. Theorem. 
Let $\Phi$ be a recursively enumerable and consistent set of axioms containing the Peano axioms. Then the set of sentences $\Omega=\{\psi : \text{neither } \Phi \vdash \psi \text{ nor } \Phi \vdash \neg \psi\}$ is countably infinite and not recursively enumerable.
I'm not sure quite how Stack Exchange's notification system is set up, so just in case: I've posted a response to this in a separate answer. Hope it helps! – Robin Saunders Apr 4 '18 at 18:46
19w5220 HomeConfirmed Participants Workshop Files Final Report (PDF) Testimonials Schedule for: 19w5220 - Asymptotic Algebraic Combinatorics Arriving in Banff, Alberta on Sunday, March 10 and departing Friday March 15, 2019 16:00 - 17:30 Check-in begins at 16:00 on Sunday and is open 24 hours (Front Desk - Professional Development Centre) 17:30 - 19:30 Dinner ↓ A buffet dinner is served daily between 5:30pm and 7:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building. (Vistas Dining Room) 20:00 - 22:00 Informal gathering (Corbett Hall Lounge (CH 2110)) 07:00 - 08:45 Breakfast ↓ Breakfast is served daily between 7 and 9am in the Vistas Dining Room, the top floor of the Sally Borden Building. 08:45 - 09:00 Introduction and Welcome by BIRS Staff ↓ A brief introduction to BIRS with important logistical information, technology instruction, and opportunity for participants to ask questions. (Max Bell 252) 09:00 - 10:00 Robin Pemantle: A survey of applications of asymptotic combinatorics to probability ↓ I will survey applications of exact and asymptotic combinatorial methods to problems in probability theory. The methods will be familiar to combinatorialists: bijections (including RSK), inclusion-exclusion and other determinantal methods, lattice path enumeration results, the transfer matrix method, and analytic methods based on generating functions in one or more variables. The questions may be less familiar. These include non-intersecting Brownian motions and Brownian watermelons, SLE and Liouville Quantum Gravity, random tilings and quantum walks. 10:00 - 10:30 Coffee Break (Corbett Hall Lounge (CH 2110)) 10:30 - 11:30 Duncan Dauvergne: The Archimedean limit of random sorting networks ↓ Consider a list of n particles labelled in increasing order. A sorting network is a way of sorting this list into decreasing order by swapping adjacent particles, using as few swaps as possible. Simulations of large-n uniform random sorting networks reveal a surprising and beautiful global structure involving sinusoidal particle trajectories, a semicircle law, and a theorem of Archimedes. Based on these simulations, Angel, Holroyd, Romik, and Virag made a series of conjectures about the limiting behaviour of sorting networks. In this talk, I will discuss how to use the local structure of random sorting networks to prove these conjectures. (Max Bell) 11:30 - 12:00 Svante Linusson: Limit shape of shifted staircase SYT ↓ A shifted tableau of staircase shape has row lengths n,n-1,...,2,1 adjusted on the right side. I will present the limit shape for a uniformly random shifted Young tableau. This implies via properties of the Edelman– Greene bijection results about random 132-avoiding sorting networks, including limit shapes for trajectories and intermediate permutations. (Based on joint work with Samu Potka and Robin Sulzgruber.) 12:00 - 13:30 Lunch ↓ Lunch is served daily between 11:30am and 1:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building. 13:00 - 14:00 Guided Tour of The Banff Centre ↓ Meet in the Corbett Hall Lounge for a guided tour of The Banff Centre campus. (Corbett Hall Lounge (CH 2110)) 14:00 - 15:00 Vadim Gorin: Boundaries of branching graphs from 80s till present ↓ I will review the progress in problems related to the branching graphs in asymptotic combinatorics and representation theory during the last 40 years. 
We will start from characters of infinite-dimensional unitary group, pass through Gelfand-Tsetlin graph and asymptotics of Schur polynomials, and end with a very recent topic of q-deformations. 15:00 - 15:10 Group Photo ↓ Meet in the foyer of the Max Bell building, in front of the meeting room, for the group photo. Dress for the weather, as the photo will be outside. Don't be late, or you may not be in the group photo! (Max Bell Foyer) 15:30 - 16:30 Alejandro Morales: Hook formulas for enumeration and asymptotics of skew tableaux (Max Bell 252) 16:30 - 17:00 break (Corbett Hall Lounge (CH 2110)) 17:00 - 17:30 Jehanne Dousse: Asymptotics of skew standard Young tableaux ↓ A standard Young tableau (SYT) is a filling of the boxes of a Young diagram of size n with the numbers 1 to n, such that the rows and columns are increasing. The hook-length formula of Frame, Robinson and Thrall allows one to compute the number of SYTs of a certain shape. However, when one considers SYTs of skew shapes (a diagram obtained by removing a Young diagram $\mu$ from the top left corner of a larger Young diagram $\lambda$), there is no such simple formula, and it is therefore harder to count them. In this talk, we will study the asymptotics of the number of SYTs of skew shapes. Our technique relies on bounds for characters of the symmetric group. This is joint work with Valentin Féray. 07:00 - 09:00 Breakfast (Vistas Dining Room) 09:00 - 10:00 Cyril Banderier: Analytic combinatorics, urn models, and limit surface of random triangular Young Tableaux ↓ Pólya urns are urns where at each unit of time a ball is drawn and replaced with some other balls according to its colour. We introduce a more general model: the replacement rule depends on the colour of the drawn ball and the value of the time (mod p). We extend the work of Flajolet et al. on Pólya urns: the generating function encoding the evolution of the urn is studied by methods of analytic combinatorics. We show that the initial partial differential equations lead to ordinary linear differential equations which are related to hypergeometric functions (giving the exact state of the urns at time n). When the time goes to infinity, we prove that these periodic Pólya urns have asymptotic fluctuations which are described by a product of generalized gamma distributions. With the additional help of what we call the density method (a method which offers access to enumeration and random generation of poset structures), we prove that the law of the south-east corner of a triangular Young tableau follows asymptotically a product of generalized gamma distributions. This allows us to tackle some questions related to the continuous limit of large random Young tableaux and links with random surfaces. Joint work with Philippe Marchal and Michael Wallner. 10:30 - 11:30 Stephen Melczer: Asymptotic regime change for multivariate generating functions ↓ The asymptotic study of multivariate generating functions comprises the domain of Analytic Combinatorics in Several Variables (ACSV). Analogously to the univariate setting, the techniques of ACSV show how the singularities of a (typically rational) multivariate generating function dictate asymptotics of its coefficients. Unlike the univariate case, however, a multivariate generating function encodes a wealth of sequences: one can take a direction vector R and examine asymptotics of the coefficient sequence on positive integer multiples of R. 
Although this definition is a priori only non-trivial when R contains rational entries, the techniques of ACSV show asymptotics typically vary in a uniformly predictable way as R varies smoothly, meaning asymptotics can be defined in a limit sense for "generic" directions. In this talk we survey the techniques of ACSV, discuss a new study of asymptotic transitions between different generic asymptotic regions, and highlight some new software implementations. Includes joint work with Yuliy Baryshnikov and Robin Pemantle, Bruno Salvy, and Éric Schost and Kevin Hyun.
12:00 - 13:30 Lunch (Vistas Dining Room)
14:00 - 15:00 Valentin Féray: Large permutations and permutons ↓ I will present the recently developed theory of permutons, which are limits of permutation sequences. The convergence in terms of permutons can be seen either as the convergence of the rescaled permutation matrix, or as the convergence of pattern proportions. We will survey recent results involving permutons: limit of the so-called Mallows model, large deviation theory for permutations, limits of uniform random permutations in permutation classes with finite specification...
15:30 - 16:00 Olga Postnova: Asymptotic of multiplicities and of character distributions for large tensor products of representations of simple Lie algebras ↓ Let $\mathfrak{g}$ be a simple Lie algebra and $V_i$, $i=1,\cdots,m$ be finite dimensional representations of $\mathfrak{g}$. The asymptotic of the multiplicity of irreducible representations in the tensor product $\prod_{i=1}^m V_i^{\otimes N_i}$ is derived in the limit $N_i\to \infty$, with the ratios $N_1:\cdots:N_m$ kept finite. This asymptotic is used to compute the asymptotic of the character measure in this limit. This is joint work with N. Reshetikhin and V. Serganova.
16:10 - 16:40 Maciej Dołęga: Jack-deformed random Young diagrams ↓ We introduce a large class of random Young diagrams which can be regarded as a natural one-parameter deformation of some classical Young diagram ensembles; a deformation which is related to Jack polynomials and Jack characters. We show that each such random Young diagram converges asymptotically to some limit shape and that the fluctuations around the limit are asymptotically Gaussian. This is joint work with Piotr Śniady.
17:00 - 17:30 Piotr Śniady: Spin characters and enumeration of maps ↓ Spin characters of the symmetric groups, with the right choice of the normalization, form a beautiful collection of polynomial functions on the set of shifted Young diagrams. During the talk I will present two explicit formulas for spin characters in terms of maps (=bicolored graphs drawn on surfaces). Bonus: I leave it as an open problem to the participants of the workshop to fill in the gap in an alternative, conceptually new proof of these formulas. References: Sho Matsumoto, Piotr Śniady. Stanley character formula for the spin characters of the symmetric groups. https://arxiv.org/abs/1810.13255 Sho Matsumoto, Piotr Śniady. Linear versus spin: representation theory of the symmetric groups. https://arxiv.org/abs/1811.10434
18:00 - 19:30 Dinner (Vistas Dining Room)
19:30 - 21:30 Open problem session (Max Bell 252)
09:00 - 10:00 Leonid Petrov: From matrices over finite fields to square ice ↓ Asymptotic representation theory of symmetric groups is a rich and beautiful subject with deep connections with probability, mathematical physics, and algebraic combinatorics.
A one-parameter deformation of this theory related to infinite random matrices over a finite field leads to a randomization of the classical Robinson-Schensted correspondence between words and Young tableaux. Exploring such randomizations we find unexpected applications to six vertex (square ice) type models and traffic systems on a 1-dimensional lattice. 10:00 - 10:30 Sevak Mkrtchyan: The point processes at turning points of large lozenge tilings ↓ In the thermodynamic limit of the lozenge tiling model the frozen boundary develops special points where the liquid region meets with two different frozen regions. These are called turning points. It was conjectured by Okounkov and Reshetikhin that in the scaling limit of the model the local point process near turning points should converge to the GUE corners process. We will discuss various results showing that the point process at a turning point is the GUE corner process and that the GUE corner process is there in some form even when at the turning point the liquid region meets two frozen regions of arbitrary (non-lattice) rational slope. The last regime arises when weights in the model are periodic in one direction with arbitrary fixed finite period. 11:00 - 12:00 Collaboration/discussion (Max Bell 252) 13:30 - 17:30 Free Afternoon (Banff National Park) 09:00 - 10:00 Sara Billey: Cyclotomic Generating Functions ↓ It is a remarkable fact that for many combinatorial statistics, the roots of the corresponding generating function are each either a complex root of unity or zero. We call such generating functions \textit{cyclotomic} and study the possible limit distributions of their coefficients using cumulants. We consider three main examples of cyclotomic generating functions. First, we use Stanley's $q$-hook length formula to study the major index on standard tableaux of block diagonal skew shape. We give a simple statistic on partitions, \textit{aft}, which completely classifies all possible normalized limit laws for major index on any sequence of partition shapes, resulting in the uniform-sum and normal distributions. Our classification provides a common generalization of earlier work due to Canfield--Janson--Zeilberger, Chen--Wang--Wang, Diaconis, Feller, Mann--Whitney, and others on limit distributions of $q$-multinomial coefficients and $q$-Catalan numbers. In our second example, we consider the coefficients of Stanley's $q$-hook-content formula and illustrate a variety of normal and non-normal limit laws in this case. Finally, we consider $q$-hook length formulas of Bj\"orner--Wachs for the generating functions of the major index and inversion number on linear extensions of labeled forests. We conclude with several open problems concerning unimodality, log-concavity, and local limit laws. This talk is based on joint works with Matjaž Konvalinka and Joshua Swanson. 10:30 - 11:30 Alexander Yong: Complexity, combinatorial positivity, and Newton polytopes ↓ The Nonvanishing Problem asks if a coefficient of a polynomial is nonzero. Many families of polynomials in algebraic combinatorics admit combinatorial counting rules and simultaneously enjoy having saturated Newton polytopes (SNP). Thereby, in amenable cases, Nonvanishing is in the complexity class ${\sf NP} \cap {\sf coNP}$ of problems with "good characterizations". This suggests a new algebraic combinatorics viewpoint on complexity theory. This paper focuses on the case of Schubert polynomials. 
These form a basis of all polynomials and appear in the study of cohomology rings of flag manifolds. We give a tableau criterion for Nonvanishing, from which we deduce the first polynomial time algorithm. These results are obtained from new characterizations of the Schubitope, a generalization of the permutahedron defined for any subset of the n x n grid, together with a theorem of A. Fink, K. Meszaros and A. St. Dizier, which proved a conjecture of C. Monical, N. Tokcan and the speaker. 14:00 - 15:00 Christian Krattenthaler: Advanced Determinant Calculus ↓ I shall explain, and illustrate by examples, how I go about evaluating determinants. 15:30 - 16:00 Jang Soo Kim: Generalized Schur function determinants using Bazin-Sylvester identity ↓ In the literature there are several determinant formulas for Schur functions: the Jacobi--Trudi formula, the dual Jacobi--Trudi formula, the Giambelli formula, the Lascoux--Pragacz formula, and the Hamel--Goulden formula, where the Hamel--Goulden formula implies the others. In this talk we use the Bazin--Sylvester identity to derive a determinant formula for Macdonald's ninth variation of Schur functions. As consequences we obtain a generalization of the Hamel--Goulden formula and a Lascoux--Pragacz-type determinant formula for factorial Schur functions conjectured by Morales, Pak and Panova. This is joint work with Meesue Yoo. 16:00 - 16:30 Fedor Petrov: Asymptotics of Plancherel measure on graded graphs via asymptotics of uniform measure on paths to far level ↓ Let $G$ be a graded graph with levels $V_0,V_1,\dots$. Fix $m$ and choose a vertex $v$ on the level $V_n, n\geqslant m$. Consider the uniform measure on the paths from $V_0$ to the vertex $v$. Each such a path has a unique vertex on the level $V_m$, and so the measure $\nu_v^m$ on $V_m$ is induced. It is natural to expect that such measures have a limit when vertex $v$ goes to infnity by somehow ``regular'' way. This limit is then natural to call the Plancherel measure (on the set $V_m$). We justify such approach for the graphs of Young and Schur (of Young diagrams and strict Young diagrams, respectively). For them the regularity is understood as follows: the proportion of the boxes contained in the first row and the first column goes to 0. For Young graph this was essentially proved in the seminal work of Vershik and Kerov. We propose more straightforward and elementary approach and discuss the appearing polynomial identities. 19:30 - 21:00 Future directions, panel discussion (Max Bell 252) 09:00 - 09:30 Sylvie Corteel: Cylindric partitions ↓ The lecture hall partitions were introduced by Bousquet-Mélou and Eriksson in 1997 by showing that they are the inversion vectors of elements of the parabolic quotient $\tilde{C}_n/C_n$. Since 1997, a lot of beautiful combinatorial techniques were developed to study these objects and their generalisations. These use basic hypergeometric series, geometric combinatorics, real rooted polynomials... Some of those results can be found in the survey paper by C. D. Savage "The Mathematics of lecture hall partitions". Here we take a different approach and show that these objects are also multivariate moments of the Little q-Jacobi polynomials. The multivariate moments were introduced by Williams and me in the context of asymmetric exclusion processes. The benefit of this new approach is that we define a tableau analogue of lecture hall partitions and we show that their generating function is a beautiful product. 
This uses a mix of orthogonal polynomials techniques, non intersecting lattice paths (i.e. determinants) and q-Selberg integral. This is joint work with Jang Soo Kim (SKKU). 09:30 - 10:00 David Keating: Lecture hall tableaux ↓ In this talk we present some asymptotic of bounded Lecture Hall Tableaux of a given shape. We describe how to view the tableaux as a collection of nonintersecting paths. We then use tangent method, developed by Colomo and Sportiello, to recover a parametrization of the arctic curves arising in the thermodynamic limit. 10:30 - 11:00 Damir Yeliussizov: and Igor Pak: "On the largest Kronecker and Littlewood-Richardson coefficients" ↓ We give new bounds and asymptotic estimates for Kronecker and Littlewood--Richardson coefficients. Notably, we resolve Stanley's questions on the shape of partitions attaining the largest Kronecker and Littlewood–Richardson coefficients. We apply the results to asymptotics of the number of standard Young tableaux of skew shapes. Joint work of: Igor Pak, Greta Panova, Damir Yeliussizov. 11:30 - 12:00 Checkout by Noon ↓ 5-day workshop participants are welcome to use BIRS facilities (BIRS Coffee Lounge, Max Bell and Reading Room) until 3 pm on Friday, although participants are still required to checkout of the guest rooms by 12 noon. The Front Desk has a luggage storage service. (Front Desk - Professional Development Centre) 12:00 - 13:30 Lunch from 11:30 to 13:30 (Vistas Dining Room)
Your Esoteric Language is Useless You'd think that programmers would get over these ridiculous language wars. The consensus should be that any one programmer is going to use whatever language they are most comfortable with that gets the job done most efficiently. If someone knows C/C++, C#, and Java, they're probably going to use C++ to write a console game. You can argue that language [x] is terrible because of [x], but the problem is that ALL languages are terrible for one reason or another. Every aspect of a languages design is a series of trade-offs, and if you try to criticize a language that is useful in one context because it isn't useful in another, you are ignoring the entire concept of a trade-off. These arguments go on for hours upon hours, about what exactly is a trade-off and what languages arguably have stupid features and what makes someone a good programer and blah blah blah blah SHUT UP. I don't care what language you used, if your program is shit, your program is shit. I don't care if you wrote it in Clojure or used MongoDB or used continuations and closures in whatever esoteric functional language happens to be popular right now. Your program still sucks. If someone else writes a better program in C without any elegant use of anything, and it works better than your program, they're doing their job better than you. I don't care if they aren't as good a programmer as you are, by whatever stupid, arbitrary standards you've invented to make yourself feel better, they're still doing a better job than you. I don't care if your haskell editor was written in haskell. Good for you. It sucks. It is terribly designed. It's workflow is about as conducive as a blob of molasses on a mountain in January. I don't care if you are using a fantastic stack of professionally designed standard libraries instead of re-inventing the wheel. That guy over there re-invented the wheel the wrong way 10 times and his program is better than yours because it's designed with the user in mind instead of a bunch of stupid libraries. I don't care if you're using Mercurial over SVN or Git on Linux using Emacs with a bunch of extensions that make you super productive. Your program still sucks. I am sick and tired of people judging programmers on a bunch of rules that don't matter. Do you know functional programming? Do you know how to implement a LAMP stack? Obviously you don't use C++ or anything like that, do you? These programmers have no goddamn idea what they're talking about. But that isn't what concerns me. What concerns me is that programmers are so obsessed over what language is best or what tool is best or what library they should use when they should be more concerned about what their program actually DOES. They get so caught up in building whatever elegant crap they're trying to build they completely forget what the end user experience is, especially when the end user has never used the program before. Just as you are not a slave to your tools, your program is not enslaved to your libraries. Your program's design should serve the user, not a bunch of data structures. The Irrationality of Idiots If Everyone Else is Such an Idiot, How Come You're Not Rich? - Megan McArdle I run around calling a whole lot of people and/or things stupid, dump, moronic, or some other variation of idiot. As the above quote exemplifies, saying such things tends to be a bit dangerous, since if everyone else was an idiot, you should be rich as hell. 
My snarky reaction to that, of course, would be that I'm not rich yet (and even then, "rich" in the sense of the quote is really just a metaphor for success, depending on how you define it for yourself), but in truth there are very specific reasons I call someone an idiot, and they don't necessarily involve actual intelligence. To me, someone is an idiot if they refuse to argue in a rational manner. If you ignore evidence or use nonsensical reasoning and logical fallacies to support your beliefs, you're an idiot. If you don't like me calling you an idiot, that's just fine, because I acknowledge your existence about as much as I acknowledge the existence of dirty clothes on my bedroom floor. It's only when there is such a pile of dirty laundry lying around that it impedes movement that I really notice and clean it up. In the case of suffocating amounts of stupidity, I usually just go somewhere else. The rest of the time, stupid people can only serve to grudgingly function in a society, not take part in running it. This is because designing and running a society requires rational thinking and logical arguments, or nothing gets done. I can get really angry about certain things, but I must yield to opinions that have a reasonable basis, if only to acknowledge that I might be wrong, even if I think I'm not. Everything I say or do must have some sort of logical basis, even if it originated from pure intuition. So long as you can poke legitimate holes in an accepted theory, you can hold some pretty crazy opinions that can't be considered illogical, though perhaps still incredibly risky or unlikely. All the other times I call someone an idiot, I'm usually being lazy when I should really be calling the action idiotic. For example, I can't legitimately call Mark Zuckerberg an idiot. If I call him an idiot, I'm not forming a legitimate opinion, and its probably because he did something that pissed me off and I'm ranting about it, and you are free to ignore my invalid opinion, at least until I clarify that what he did was idiotic, not him. Of course, sometimes people repeatedly do things that are just so mind-bogglingly stupid that it is entirely justified to actually call them a moron, because they are displaying a serious lack of bona fide intelligence. Usually, though, most people are entirely capable of rational thought, but simply do not care enough to exercise it, in which case their idiocy stems from an unwillingness to use rationality, not actual intelligence. I bring this up, because it seems to be a serious problem. What happens when we lose rationality? People can't compromise anymore, and we get a bunch of stupendously idiotic proposals borne out of ignorance that no longer has to pass through a filter of logical argumentation. All irrational disputes become polarized because neither side is willing to listen to the other, and the emotions that are intrinsically tied to the dispute prevent any meaningful progress from being made. Society breaks down in the face of irrationality because irrationality refuses to acknowledge things like, people are different. Well gee, that sounds like our current political mess. I am an aggressive supporter of educational reform, and one of the things that I believe should be taught in schools is not only rational thought and logical arguments, but how rational thought can complement creativity and irrational emotions. 
We cannot rid ourselves of illogical beliefs, because then we've turned into Vulcans, but we must learn, as a species, when our emotions are appropriate, and when we need to exercise our ability to be rational agents. As it is, we are devolving into a prehistoric mess of irrational demands and opinions that only serve to drag society backwards, just as we begin unlocking the true potential of our technology. Relevent: The Great Mystery of Linear Gradient Lighting A long, long time ago, in pretty much the same place I'm sitting in right now, I was learning how one would do 2D lighting with soft shadows and discovered the age old adage in 2D graphics: linear gradient lighting looks better than mathematically correct inverse square lighting. I brushed it off as artistic license and perceptual trickery, but over the years, as I dug into advanced lighting concepts, nothing could explain this. It was a mystery. Around the time I discovered microfacet theory I figured it could theoretically be an attempt to approximate non-lambertanian reflectance models, but even that wouldn't turn an exponential curve into a linear one. This bizarre law even showed up in my 3D lighting experiments. Attempting to invoke the inverse square law would simply result in extremely bright and dark areas and would look absolutely terrible, and yet the only apparent fix I saw anywhere was simply calculating light via linear distance in clear violation of observed light behavior. Everywhere I looked, people calculated light on a linear basis, everywhere, on everything. Was it the equations? Perhaps the equations being used operated on linear light values instead of exponential ones and so only output the correct value if the light was linear? No, that wasn't it. I couldn't figure it out. Years and years and years would pass with this discrepancy left unaccounted for. A few months ago I noted an article on gamma correction and assumed it was related to color correction or some other post process effect designed to compensate for monitor behavior, and put it as a very low priority research point on my mental to-do-list. No reason fixing up minor brightness problems until your graphics engine can actually render everything properly. Yesterday, though, I happened across a Hacker News posting about learning modern 3D engine programming. Curious if it had anything I didn't already know, I ran through its topics, and found this. Gamma correction wasn't just making the scene brighter to fit with the monitor, it was compensating for the fact that most images are actually already gamma-corrected. In a nutshell, the brightness of a monitor is exponential, not linear (with a power of about 2.2). The result is that a linear gradient displayed on the monitor is not actually increasing in brightness linearly. Because it's mapped to a curve, it will actually increase in brightness exponentially. This is due to the human visual system processing luminosity on a logarithmic scale. The curve in question is this: Source: GPU Gems 3 - Chapter 24: The Importance of Being Linear You can see the effect in this picture, taken from the article I mentioned: The thing is, I always assumed the top linear gradient was a linear gradient. Sure it looks a little dark, but hey, I suppose that might happen if you're increasing at 25% increments, right? WRONG. The bottom strip is a true linear gradient1. The top strip is a literal assignment of linear gradient RGB values, going from 0 to 62 to 126, etc. 
While this is, digitally speaking, a mathematical linear gradient, what happens when it gets displayed on the screen? It gets distorted by the CRT gamma curve seen in the above graph, which makes the end value exponential. The bottom strip, on the other hand, is gamma corrected - it is NOT a mathematical linear gradient. Its values go from 0 to 134 to 185. As a result, when this exponential curve is displayed on your monitor, its values are dragged down by the exact inverse exponential curve, resulting in a true linear curve. An image that has been "gamma-corrected" in this manner is said to exist in sRGB color space.

The thing is, most images aren't linear. They're actually in the sRGB color space, otherwise they'd look totally wrong when we viewed them on our monitors. Normally, this doesn't matter, which is why most 2D games simply ignore gamma completely. Because all a 2D engine does is take a pixel and display it on the screen without touching it, if you enable gamma correction you will actually over-correct the image and it will look terrible. This becomes a problem with image editing, because digital artists are drawing and coloring things on their monitors and they try to make sure that everything looks good on their monitor. So if an artist were visually trying to make a linear gradient, they would probably make something similar to the already gamma-corrected strip we saw earlier. Because virtually no image editors linearize images when saving (for good reason), the resulting image an artist creates is actually in sRGB color space, which is why only turning on gamma correction will usually simply make everything look bright and washed out, since you are normally using images that are already gamma-corrected. This is actually a good thing due to subtle precision issues, but it creates a serious problem when you start trying to do lighting calculations.

The thing is, lighting calculations are linear operations. It's why you use Linear Algebra for most of your image processing needs. Because of this, when I tried to use the inverse-square law for my lighting functions, the resulting value that I was multiplying on to the already-gamma-corrected image was not gamma corrected! In order to do proper lighting, you would have to first linearize the gamma-corrected image, perform the lighting calculation on it, and then re-gamma-correct the end result.

Wait a minute, what did we say the gamma curve value was? It's $$x^{2.2}$$, so $$x^{0.45}$$ will gamma-correct the value $$x$$. But the inverse square law states that the intensity of a light is actually $$\frac{1}{x^2}$$, so if you were to gamma correct the inverse square law, you'd end up with: \[ \bigg(\frac{1}{x^2}\bigg)^{0.45} = \big(x^{-2}\big)^{0.45} = x^{-0.9} \approx x^{-1} \] That's almost linear!2

That's it! The reason I saw linear curves all over the place was because it was a rough approximation to gamma correction! The reason linear lighting looks good in a 2D game is because it's actually an approximation to a gamma-corrected inverse-square law! Holy shit! Why didn't anyone ever explain this?!3 Now it all makes sense! Just to confirm my findings, I went back to my 3D lighting experiment, and sure enough, after correcting the gamma values, using the inverse square law for the lighting gave correct results! MUAHAHAHAHAHAHA!

For those of you using OpenGL, you can implement gamma correction as explained in the article mentioned above.
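Independent of the API-specific switches (OpenGL above, DirectX below), here is a minimal CPU-side sketch of the round trip just described - decode sRGB to linear, light with the inverse square law, re-encode - using the same 2.2 approximation as the rest of this post. The helper names are my own invention, and a real engine would do this per channel on the GPU with the exact piecewise sRGB curve:

#include <algorithm>
#include <cmath>

// Approximate sRGB decode/encode with a plain 2.2 gamma, as in the discussion above.
static float srgb_to_linear(float v) { return std::pow(v, 2.2f); }
static float linear_to_srgb(float v) { return std::pow(v, 1.0f / 2.2f); }

// Light one sRGB-encoded channel value with a point light at distance 'dist',
// doing the actual lighting math in linear space.
float light_channel(float srgb_value, float light_intensity, float dist)
{
  float linear = srgb_to_linear(srgb_value);               // 1. linearize the stored pixel
  float lit = linear * (light_intensity / (dist * dist));  // 2. inverse square falloff - a linear operation
  return linear_to_srgb(std::min(lit, 1.0f));              // 3. re-encode for the monitor
}

Skip steps 1 and 3 and swap the 1/d² term for a linear falloff, and you get roughly the same picture - which is exactly the coincidence described above.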
For those of you using DirectX9 (not 10), you can simply enable D3DSAMP_SRGBTEXTURE on whichever texture stages are using sRGB textures (usually only the diffuse map), and then enable D3DRS_SRGBWRITEENABLE during your drawing calls (a gamma-correction stateblock containing both of those works nicely). For things like GUI, you'll probably want to bypass the sRGB part. Like OpenGL, you can also skip D3DRS_SRGBWRITEENABLE and simply gamma-correct the entire blended scene using D3DCAPS3_LINEAR_TO_SRGB_PRESENTATION in the Present() call, but this has a lot of caveats attached. In DirectX10, you no longer use D3DSAMP_SRGBTEXTURE. Instead, you use an sRGB texture format (see this presentation for details). 1 or at least much closer, depending on your monitors true gamma response 2 In reality I'm sweeping a whole bunch of math under the table here. What you really have to do is move the inverse square curve around until it overlaps the gamma curve, then apply it, and you'll get something that is roughly linear. 3 If this is actually standard course material in a real graphics course, and I am just really bad at finding good tutorials, I apologize for the palm hitting your face right now. Signed Integers Considered Stupid (Like This Title) Unrelated note: If you title your article "[x] considered harmful", you are a horrible person with no originality. Stop doing it. Signed integers have always bugged me. I've seen quite a bit of signed integer overuse in C#, but it is most egregious when dealing with C/C++ libraries that, for some reason, insist on using for(int i = 0; i < 5; ++i). Why would you ever write that? i cannot possibly be negative and for that matter shouldn't be negative, ever. Use for(unsigned int i = 0; i < 5; ++i), for crying out loud. But really, that's not a fair example. You don't really lose anything using an integer for the i value there because its range isn't large enough. The places where this become stupid are things like using an integer for height and width, or returning a signed integer count. Why on earth would you want to return a negative count? If the count fails, return an unsigned -1, which is just the maximum possible value for your chosen unsigned integral type. Of course, certain people seem to think this is a bad idea because then you will return the largest positive number possible. What if they interpret that as a valid count and try to allocate 4 gigs of memory? Well gee, I don't know, what happens when you try to allocate -1 bytes of memory? In both cases, something is going to explode, and in both cases, its because the person using your code is an idiot. Neither way is more safe than the other. In fact, signed integers cause far more problems then they solve. One of the most painfully obvious issues here is that virtually every single architecture in the world uses the two's complement representation of signed integers. When you are using two's complement on an 8-bit signed integer type (a char in C++), the largest positive value is 127, and the largest negative value is -128. That means a signed integer can represent a negative number so large it cannot be represented as a positive number. What happens when you do (char)abs(-128)? It tries to return 128, which overflows back to... -128. This is the cause of a host of security problems, and what's hilarious is that a lot of people try to use this to fuel their argument that you should use C# or Java or Haskell or some other esoteric language that makes them feel smart. 
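To see that abs() wrap-around concretely, here is a tiny sketch; it assumes the usual 8-bit, two's complement char that the discussion above is about:

#include <cstdio>
#include <cstdlib>

int main()
{
  signed char c = -128;                     // the most negative 8-bit value
  signed char a = (signed char)std::abs(c); // abs() computes 128, which doesn't fit in 8 bits...
  printf("%d\n", a);                        // ...so on a typical platform it wraps right back to -128
  return 0;
}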
The fact is, any language with fixed size integers has this problem. That means C# has it, Java has it, most languages have it to some degree. This bug doesn't mean you should stop using C++, it means you need to stop using signed integers in places they don't belong. Observe the following code:

if (*p == '*')
  ++p;
total_width += abs (va_arg (ap, int));

This is retarded. Why on earth are you interpreting an argument as a signed integer only to then immediately call abs() on it? So a brain damaged programmer can throw in negative values and not blow things up? If it can only possibly be valid when it is a positive number, interpret it as an unsigned int. Even if someone tries putting in a negative number, it will serve only to make the total_width abnormally large, instead of potentially putting in -128, causing abs() to return -128 and creating a total_width that is far too small, causing a buffer overflow and letting someone hack into your program. And don't go declaring total_width as a signed integer either, because that's just stupid. Using an unsigned integer here closes a potential security hole and makes it even harder for a dumb programmer to screw things up1.

I can only attribute the vast overuse of int to programmer laziness. unsigned int is just too long to write. Of course, that's what typedefs are for, so that isn't an excuse, so maybe they're worried a programmer won't understand how to put a -1 into an unsigned int? Even if they didn't, you could still cast the int to an unsigned int to serve the same purpose and close the security hole. I am simply at a loss as to why I see ints all over code that could never possibly be negative. If it could never possibly be negative, you are therefore assuming that it won't be negative, so it's a much better idea to just make it impossible for it to be negative instead of giving hackers 200 possible ways to break your program.

1 There's actually another error here in that total_width can overflow even when unsigned, and there is no check for that, but that's beyond the scope of this article.

Why Kids Hate Math

They're teaching it wrong. And I don't just mean teaching the concepts incorrectly (although they do plenty of that), I mean their teaching priorities are completely backwards. Set Theory is really fun. Basic Set Theory can be taught to someone without them needing to know how to add or subtract. We teach kids Venn Diagrams but never teach them all the fun operators that go with them? Why not? You say they won't understand? Bullshit. If we can teach third graders binary, we can teach them set theory.

We take forever to get around to teaching algebra to kids, because it's considered difficult. If something is a difficult conceptual leap, then you don't want to delay it, you want to introduce the concepts as early as possible. I say start teaching kids algebra once they know basic arithmetic. They don't need to know how to do crazy weird stuff like x * x = x² (they don't even know what ² means), but you can still introduce them to the idea of representing an unknown value with x. Then you can teach them exponentiation and logs and all those other operators first in the context of numbers, and then in the context of unknown variables. Then algebra isn't some scary thing that makes all those people who don't understand math give up, it's something you simply grow up with.

In a similar manner, what the hell is with all those trig identities? Nobody memorizes those things!
You memorize like, 2 or 3 of them, and almost only ever use sin² + cos² = 1. In a similar fashion, nobody ever uses integral trig identities because if you are using them you should have converted your coordinate system to polar coordinates, and if you can't do that then you can just look them up for crying out loud. Factoring and completing the square can be useful, but forcing students to do these problems over and over when they almost never actually show up in anything other than spoon-fed equations is insane. Partial Fractions, on the other hand, are awesome and fun and why on earth are they only taught in intermediate calculus?! Kids are ALWAYS trying to pull apart fractions like that, and we always tell them to not do it - why not just teach them the right way to do it? By the time they finally got around to teaching me partial fractions, I was thinking that it would be some horrifically difficult, painful, complex process. It isn't. You just have to follow a few rules and then 0 out some functions. How can that possibly be harder than learning the concept of differentiation? And its useful too! Lets say we want to teach someone basic calculus. How much do they need to know? They need to know addition, subtraction, division, multiplication, fractions, exponentiation, roots, algebra, limits, and derivatives. You could teach someone calculus without them knowing what sine and cosine even are. You could probably argue that, with proper teaching, calculus would be about as hard, or maybe a little harder, than trigonometry. Trigonometry, by the way, has an inordinate amount of time spent on it. Just tell kids how right triangles work, sine/cosine/tangent, SOHCAHTOA, a few identities, and you're good. You don't need to know scalene and isosceles triangles. Why do we even have special names for them? Who gives a shit if a triangle has sides of the same length? Either its a right triangle and its useful or its not a right triangle and you have to do some crazy sin law shit that usually means your algorithm is just wrong and so the only time you ever actually need to use it you can just look up the formula because it is a obtuse edge case that almost never comes up. Think about that. We're grading kids by asking them to solve edge cases that never come up in reality and grading how well they are in math based off of that. And then we're confused when they complain about math having no practical application? Well duh. The sheer amount of time spent on useless topics is staggering. Calculus should be taught to high school freshman. Differential equations and complex analysis go to the seniors, and by the time you get into college you're looking at combinatorics and vector analysis, not basic calculus. I have already seen some heavily flawed arguments against this. Some people say that people aren't interested in math, so this will never work. Since I'm saying that teaching kids advanced concepts early on will make them interested in math, this is a circular argument and invalid. Other people claim that the kids will never understand because of some bullshit about needing logical constructs, which just doesn't make sense because you should still introduce the concepts. Introducing a concept early on and having the student be confused about it is a good thing because it means they'll try to work it out over time. The more time you give them, the more likely it will click. 
Besides, most students aren't understanding algebra with the current system anyway, so I fail to see the point of that argument. It's not working now so don't try to change it or you'll make it worse? That's just pathetic.

TL;DR: Stop teaching kids stupid, pointless math they won't need and maybe they won't rightfully conclude that what they are being taught is useless.

Don't Work on Someone Else's Dream

When I complain to my friends about a recent spate of not being productive, they often remind me of the occasional 10 hours I spend forgetting to eat while hunting down a bug. When introducing myself, I am always clear that, most of the time, I am either busy, or trying to be busy. Everything to me is work, everything that makes me proud of myself is work, everything in my future will, hopefully, be more work. The entire concept of retiring to me is madness. I never want to stop working. This is often mistaken as an unhealthy obsession with work, which is not entirely true. I am not torturing myself every day for 10 hours just so I can prove myself, I'm doing exactly what I want to do.

I'm 21 years old, I can drink and smoke (but never do), I can drive (but I take the bus anyway), I go to college (but rarely attend classes), and in general am supposed to be an adult. Most people my age are finishing college and inevitably taking low paying jobs while they search for another low paying internship at a company so they can eventually get a high paying job that actually uses what they learned in college after they're too old to care. If I really wanted, I could be at Facebook or Microsoft right now. I even had a high school internship at Microsoft, and probably could have gotten a college one too. I could have spent my time learning all the languages the companies want you to learn, and become incredibly well-versed in everything that everyone else already knows. I could have taught myself proper documentation and proper standards and proper guidelines and kept up my goody two-shoes act for the rest of my fucking life and get congratulated for being such a well-behaved and successful clone.

I am 21 years old, and I'm going to spend it doing what I like doing, working on the projects I want to work on, and figuring out a way to make a living out of it even if I have to live out of my parents' house for another 6 months. I am not going to get a job doing what other people tell me is important. While I am often very critical of myself as a person, realistically speaking, my only regrets are the moments I spent not working, or wasting time on things that weren't important. It doesn't matter that I've been working on a project most people dismiss as a childish fantasy since I was 18. It doesn't matter that I have no income and no normal job and no programming skills that would get me hired at a modern tech company because everyone hates C++ and only cares about web development. I'm not working on something a CEO thinks is important, I'm working on something I think is important. I'm going to start a company so I can continue to work on what I think is important, and every single employee I will ever hire will work on something they think is important. This doesn't necessarily mean it's fun - finding a rogue typecast is anything but fun - but rather it's something that you are willing to ride the highs and lows through because it is intrinsically important to you, as a person.

You should not wait until you're 35 with a family and a wife to worry about. Do it now.
Do whatever is necessary to make it possible for you to start working on whatever you think is important and then do it so hard you can make a living out of it. Don't waste the best 10 years of your life working on someone else's dream. (don't waste 10 years of your life forgetting to eat, either. That just isn't healthy)

C# to C++ Tutorial - Part 3: Classes and Structs and Inheritance (OH MY!)

[ 1 · 2 · 3 · 4 · 5 · 6 · 7 ]

Classes in C#, like most object-oriented languages, are very similar to their C++ counterparts. They are declared with class, exist between curly braces and inherit classes using a colon ':'. Note, however, that all classes in C++ must end with a semicolon! You will forget this semicolon, and then all the things will break. You can do pretty much everything you can do with a C# class in a C++ class, except that C++ does not have partial classes, and in C++ classes themselves cannot be declared public, protected or private. Both of these features don't exist because they are made irrelevant by how classes are declared in header files.

In C# you usually just have one code file with the class declared in it along with all the code for all the functions. You can just magically use this class everywhere else and everything is fun and happy with rainbows. As mentioned before, C++ uses header files, and they are heavily integrated into the class system. We saw before how in order to use a function somewhere else, its prototype must first be declared in the header file. This applies to both classes and pretty much everything else. You need to understand that unlike C#, C++ does not have magic dust in its compiler. In C++, it just goes down the list of .cpp files, does a bit of dependency optimization, and then simply compiles each .cpp file by taking all the content from all the headers that are included (including all the headers included in the headers) and pasting it before the actual code from the .cpp file, and compiling. This process is repeated separately for every single code file, and no order inconsistencies are allowed anywhere in the code, the headers, or even the order that the headers are included in. The compiler literally takes every single #include statement as it is and simply replaces it with the code of the header it points to, wherever this happens to be in the code. This can (and this has happened to me) result in certain configurations of header files working even though one header file is actually missing a dependency. For example:

//Rainbow.h
class Rainbow
{
  Unicorn _unicorns[5]; // 5 unicorns dancing on rainbows
}; // DO NOT FORGET THE SEMICOLON

//Unicorn.h
class Unicorn
{
  int magic;
};

//main.cpp
#include "Unicorn.h"
#include "Rainbow.h"

Rainbow rainbow;

Compiling main.cpp will succeed in this case, even though Rainbow.h is referencing the Unicorn class without it ever being declared. The reason behind this is what happens when the compiler expands all the includes. Right before compiling main.cpp (after the preprocessor has run), main.cpp is just the contents of Unicorn.h, followed by the contents of Rainbow.h, followed by the original code (see the expanded sketch below). It is now obvious that because Rainbow.h was included after Unicorn.h, the Unicorn reference was resolved since it was declared before Rainbow. However, had we reversed the order of the include files, we would have had an anachronism: an inconsistency in our chronological arrangement.
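For reference, here is roughly what the preprocessor hands to the compiler for main.cpp in the working include order above; swap the two #include lines and Rainbow shows up before Unicorn has been declared, which is exactly the anachronism just described:

// main.cpp after #include expansion
class Unicorn
{
  int magic;
};

class Rainbow
{
  Unicorn _unicorns[5]; // resolved: Unicorn was declared just above
};

Rainbow rainbow;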
It is very bad practice to construct headers that are dependent on the order in which they are included, so we usually resolve something like this by having Rainbow.h simply include Unicorn.h, and then it won't matter what order they are included in. Left as is, however, and we run into a problem. Lets try compiling main.cpp: We've just declared Unicorn twice! Obviously one way to solve this in our very, very simplistic example is to just remove the spurious #include statement, but this violates the unwritten rule of header files - any header file should be able to be included anywhere in any order regardless of what other header files have been included. This means that, first, any header file should include all the header files that it needs to resolve its dependencies. However, as we see here, that simply makes it extremely likely that a header file will get included 2 or 3 or maybe hundreds of times. What we need is an include guard. #ifndef __UNICORN_H__ #define __UNICORN_H__ Understanding this requires knowledge of the C Preprocessor, which is what goes through and processes your code before its compiled. It is very powerful, but right now we only need to know the basics. Any statement starting with # is a preprocessor command. You will notice that #include is itself a preprocessor command, which makes sense, since the preprocessor was replacing those #include's with the code they contained. #define lets you define a constant (or if you want to be technical, an object-like macro). It can be equal to a number or a word or just not be equal to anything and simply be in a defined state. #ifdef and #endif are just an if statement that allows the code inside of it to exist if the given constant is defined. #ifndef simply does the opposite - the code inside only exists if the given constant doesn't exist. So, what we do is pick a constant name that probably will never be used in anything else, like __UNICORN_H__, and put in a check to see if it is defined. The first time the header is reached, it won't be defined, so the code inside #ifndef will exist. The next line tells the preprocessor to define __UNICORN_H__, the constant we just checked for. That means that the next time this header is included, __UNICORN_H__ will have been defined, and so the code will be skipped over. Observe: Our problem is solved! However, note that //Unicorn.h was left in, because it was outside the include guard. It is absolutely critical that you put everything inside your include guard (ignoring comments), or it will either not work properly or be extremely inefficient. #ifndef __RAINBOW_H__ //WRONG WRONG WRONG WRONG WRONG #define __RAINBOW_H__ In this case, the code still compiles, because the include guards prevent duplicate definitions, but its very taxing on the preprocessor that will repeatedly attempt to include Unicorn.h only to discover that it must be skipped over anyway. The preprocessor may be powerful, but it is also very dumb and is easily crippled. The thing is slow enough as it is, so try to keep its workload to a minimum by putting your #include's inside the include guard. Also, don't put semicolons on preprocessor directives. Even though almost everything else in the entire language wants semicolons, semicolons in preprocessor directives will either be redundant or considered a syntax error. #ifndef __RAINBOW_H__ #include "Unicorn.h" // SMILES EVERYWHERE! Ok, so now we know how to properly use header files, but not how they are used to declare classes. 
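Before moving on to classes, here is the whole guarded example assembled in one place - both headers guarded, and Rainbow.h pulling in its own dependency so the include order in main.cpp no longer matters. This is my own assembly of the snippets above, not a listing from the original tutorial:

//Unicorn.h
#ifndef __UNICORN_H__
#define __UNICORN_H__

class Unicorn
{
  int magic;
};

#endif

//Rainbow.h
#ifndef __RAINBOW_H__
#define __RAINBOW_H__

#include "Unicorn.h" // SMILES EVERYWHERE!

class Rainbow
{
  Unicorn _unicorns[5];
}; // DO NOT FORGET THE SEMICOLON

#endif

//main.cpp
#include "Rainbow.h"
#include "Unicorn.h" // either order works now

Rainbow rainbow;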
Let's take a class declared in C#, and then transform it into an equivalent prototype in C++. public class Pegasus : IComparable<Pegasus> private Rainbow rainbow; protected int magic; protected bool flying; const int ID=10; static int total=0; const string NAME="Pegasus"; public Pegasus() flying=false; magic=1; IncrementTotal(); ~Pegasus() public void Fly() flying=true; private void Land() public static string GetName() private static void IncrementTotal() ++total; public int CompareTo(Pegasus other) class Pegasus : public IComparable<Pegasus> Pegasus(); ~Pegasus(); void Fly(); virtual int CompareTo(Pegasus& other); static const int ID=10; static int total; static const char* NAME; inline static void IncrementTotal() { ++total; } bool flying; void Land(); Immediately, we are introduced to C++'s method of dealing with public, protected and private. Instead of specifying it for each item, they are done in groups. The inheritance syntax is identical, and we've kept the static variables, but now only one of them is being initialized in the class. In C++, you cannot initialize a static variable inside a class unless it is a static const int (or any other integral type). Instead, we will have to initialize total and NAME when we get around to implementing the code for this class. In addition, while most of the functions do not have code, as expected, IncrementTotal does. As an aside, C# does not have static const because it considers it redundant - all constant values are static. C++, however, allows you to declare a const variable that isn't static. While this would be useless in C#, there are certain situations where it is useful in C++. If a given function's code doesn't have any dependencies unavailable in the header file the class is declared in, you can define that method in the class prototype itself. However, as I mentioned before, code in header files runs the danger of being compiled twice. While the compiler is usually good about properly instancing the class, it is usually a good idea to inline any functions defined in the header. Functions that are inline'd are embedded inside code that calls them instead of being explicitly called. That means instead of pushing arguments on to the stack and returning, the compiler simply embeds the function inside of the code that called it, like so: #include "Pegasus.h" // Before compilation Pegasus::IncrementTotal() // After compilation ++Pegasus::total; The consequence of this means that the function itself is never actually instantiated. In fact the function might as well not exist - you won't be able to call it from a DLL because the function was simply embedded everywhere that it was used, kind of like a fancy macro. This neatly solves our issue with code in header files, and will be important later on. This also demonstrates how one accesses static variables and functions in a class. Just like before, the C# method of using . no longer works, you must use the Scope Resolution Operator (::) to access static members and functions of a class. This same operator is what allows us to declare the code elsewhere without confusing the compiler. //Pegasus.cpp int Pegasus::total = 0; const char* Pegasus::NAME = "Pegasus"; Pegasus::Pegasus() : IComparable<Pegasus>(), magic(1), flying(false) Pegasus::~Pegasus() void Pegasus::Fly() void Pegasus::Land() string Pegasus::GetName() int Pegasus::CompareTo(Pegasus other) This looks similar to what our C# class looked like, except the functions aren't in the class anymore. 
Pegasus:: tells the compiler what class the function you are defining belongs in, which allows it to assign the class function prototype to the correct implementation, just like it did with normal functions before. Notice that static is not used when defining GetName() - All function decorations (inline, static, virtual, explicit, etc.) are only allowed on the function prototype. Note that all these rules apply to static variable initialization as well; both total and NAME are resolved using Pegasus:: and don't have the static decorator, only their type. Even though we're using const char* instead of string, you can still initialize a constant value using = "string". The biggest difference here is in the constructor. In C#, the only things you bother with in the constructor after the colon is either initializing a subclass or calling another constructor. In C++, you can initialize any subclasses you have along with any variables you have, including passing arguments to whatever constructors your variables might have. Most notably is the ability to initialize constant values, which means you can have a constant integer that is set to a value passed through the constructor, or based off a function call from somewhere else. Unfortunately, C++ traditionally does not allow initializing any variables in any sub-classes, nor does it allow calling any of your own constructors. C++0x partially resolves this problem, but it is not fully implemented in VC++ or other modern compilers. This blow, however, is mitigated by default arguments in functions (and by extension, constructors), which allows you to do more with fewer functions. The order in which variables are constructed is occasionally important if there is an inter-dependency between them. While having such inter-dependencies are generally considered a bad idea, they are sometimes unavoidable, and you can take advantage of a compiler's default behavior of initializing the values in a left to right order. While this behavior isn't technically guaranteed, it is sufficiently reliable for you to take use of it in the occasional exceptional case, but always double-check that the compiler hasn't done crazy optimization in the release version (usually, though, this will just blow the entire program up, so it's pretty obvious). Now, C# has another datatype, the struct. This is a limited datatype that cannot have a constructor and is restricted to value-types. It is also passed by-value through functions by default, unlike classes. This is very similar to how structs behaved in C, but have no relation to C++'s struct type. In C++, a struct is completely identical to a class in every way, save for one minor detail: all members of a class are private by default, while all members of a struct are public by default. That's it. You can take any class and replace it with struct and the only thing that will change is the default access modifier. Even though there is no direct analogue to C#'s struct, there is an implicit equivalent. If a class or struct (C++ really doesn't care) meets the requirements of a traditional C struct (no constructor, only basic data types), then it's treated as Plain Old Data, and you are then allowed to skip the constructor and initialize its contents using the special bracket initialization that was touched on before. Yes, you can initialize constant variables using that syntax too. 
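As a quick sketch of that bracket initialization (the struct and values here are made up for illustration, not taken from the tutorial):

// Plain Old Data: no constructor, no virtual functions, only basic members.
struct PegasusStats
{
  int magic;
  bool flying;
  const char* name;
};

// Bracket initialization - no constructor required.
PegasusStats stats = { 1, false, "Pegasus" };

// The same syntax works for constant variables too.
const PegasusStats DEFAULT_STATS = { 0, false, "none" };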
One thing I've skipped over is the virtual code decorator in the C++ prototype of Pegasus, which is not actually necessary, because the function is already attempting to override another virtual function declared in IComparable, which implicitly makes it virtual. However, in C#, IComparable is implemented as an interface, which is not present in C++. Of course, if you really think about it, an interface is kind of like a normal class, just with all abstract methods (ignore the inheritance issues with this for now). So, we could rewrite the C# implementation of IComparable as a class with abstract methods: public class IComparable<T> public abstract int CompareTo(T other); As it turns out, this has a direct C++ analogue: template<class T> class IComparable virtual int CompareTo(T other)=0; This virtual function, instead of being implemented, has an =0 on the end of it. That makes the function pure virtual, which is just another way of saying abstract. So the C++ version of abstract is a pure virtual function, and a C++ version of interfaces is just a class made entirely out of pure virtual functions. Just as C# prevents you from instantiating an abstract class or interface, C++ considers any class that either declares or inherits pure virtual functions without giving them code as an abstract class that cannot be instantiated. Unfortunately C++ does not have anything like sealed, override, etc., so you are on your own there. Keep in mind that public IComparable<T> could easily be replaced with protected or private for more control. The reason C# has interfaces at all is because C# only allows you to inherit a single class, regardless of whether or not its abstract. If its got code, you can only inherit it once. Interfaces, however, have no code, and so C# lets you pile them on like candy. This isn't done in C++, because C++ supports multiple inheritance. In C++ you can have any class inherit any other class, no matter what, but you can only instantiate a class if it provides implementations for all pure virtual functions somewhere along its inheritance line. Unfortunately, there are a lot of caveats about multiple inheritance, the most notorious being the Diamond Problem. Let's say you have a graphics engine that has an Image class, and that image class inherits from an abstract class that holds its position. Obviously, any image on the screen is going to have a position. Then, let's take a physics engine, with a basic object that also inherits from an abstract class that holds its position. Obviously any physics object must have a position. So, what happens when you have a game object that is both an image and a physics object? Since the image and the physics object are in fact the same thing, both of them must have the same position at all times, but both inherit the abstract class storing position separately, resulting in two positions. Which one is the right position? When you call SetPosition, which position are you talking about? Virtual inheritance was introduced as an attempt to solve this problem. It works by creating a single instance of a derived class for the entire inheritance change, such that both the physics object and the image share the same position, as they are supposed to. Unfortunately, it can't resolve all the ambiguities, and it introduces a whole truckload of new problems. It has a nasty habit of being unable to resolve its own virtual functions properly and introducing all sorts of horrible weirdness. 
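The diamond described above is easier to see in code. The following is a minimal, hypothetical sketch (Positional, Image, PhysicsObject and GameObject are made-up names, and the base is given a body here rather than being abstract, just to keep it short) of the problem and of the virtual inheritance fix:

    class Positional                // common base that stores a position
    {
    public:
      virtual void SetPosition(float x, float y) { _x = x; _y = y; }
    protected:
      float _x;
      float _y;
    };

    class Image : public Positional { /* drawing code */ };
    class PhysicsObject : public Positional { /* physics code */ };

    // Without virtual inheritance, GameObject contains TWO copies of _x and _y,
    // one through Image and one through PhysicsObject, so a call like
    // object.SetPosition(1,2) is ambiguous and won't compile.
    class GameObject : public Image, public PhysicsObject { };

    // With virtual inheritance on the common base, only one shared copy exists:
    class VImage : public virtual Positional { /* drawing code */ };
    class VPhysicsObject : public virtual Positional { /* physics code */ };
    class VGameObject : public VImage, public VPhysicsObject
    {
    public:
      virtual void SetPosition(float x, float y) { _x = x; _y = y; } // unambiguous now
    };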
Most incredibly bizarre is a virtually inherited class's constructor - it must be initialized in the last class in the inheritance chain, and is one of the first classes to get its constructor called, regardless of where it might be in the hierarchy. It's destructor order is equally as bizarre. Virtual inheritance is sometimes useful for certain small utility classes that must be shared through a wide variety of situations, like a flag class. As a rule of thumb, you should only use virtual inheritance in a class that either relies on the default constructor or only offers a constructor that takes no arguments, and has no superclasses. This allows you to just slap the virtual keyword on and forget about all the wonky constructor details. class Pegasus : virtual IComparable<Pegasus> If you ever think you need to use virtual inheritance on something more complicated, your code is broken and you need to rethink your program's architecture (and the compiler probably won't be able to do it properly anyway). On a side-note, the constructors for any given object are called from the top down. That is, when your object's constructor is called, it immediately calls all the constructors for all it's superclasses, usually before even doing any variable initialization, and then those object constructors immediately call all their superclass constructors, and so on until the first line of code executed in your program is whatever the topmost class was. This then filters down until control is finally returned to your original constructor, such that any constructor code is only executed after all of its base classes have been constructed. The exact reverse happens for destructors, with the lowest class destructor being executed first, and after its finished, the destructors for all its base classes are called, such that a class destructor is always called while all of its base classes still exist. Hopefully you are familiar with C#'s enum keyword. While it used to be far more limited, it has now been extended to such a degree it is identical to C++, even the syntax is the same. The only difference between the two is that the C++ version can't be declared public, protected or private and needs to have a semicolon on the end (like everything else). Like in C#, enums, classes and structs can be embedded in classes, except in C++ they can also be embedded in structs (because structs are basically classes with a different name). Also, C++ allows you to declare an enum/class/etc. and a variable inside the class at the same time using the following syntax: class Pegasus enum Count { Uno=2, Dos, Tres, Quatro, Cinco } variable; enum { Uno=2, Dos, Tres, Quatro, Cinco } var2; //When used to immediately declare a variable, enums can be anonymous //Same as above enum Count { Uno=2, Dos, Tres, Quatro, Cinco }; //cannot be anonymous Count variable; Count var2; Unions are exclusive to C++, and are a special kind of data structure where each element occupies the same address. To understand what that means, let's look at an example: union //Unions are usually anonymous, but can be named struct { // The anonymity of this struct exposes its internal members. __int32 low; __int32 high; __int64 full; __int32 and __int64 are simply explicitly declaring 32-bit and 64-bit integers. This union allows us to either set an entire 64-bit integer, or to only set its low or high portion. This happens because the data structure is laid out as follows: Both low and full are mapped to the exact same place in memory. 
The only difference is that low is a 32-bit integer, so when you set that to 0, only the first four bytes are set to zero. high is pointing to a location in memory that is exactly 4 bytes in front of low and full. So, if low and full were located at 0x000F810, high would be located at 0x000F814. Setting high to zero sets the last four bytes to zero, but doesn't touch the first four. Consequently, if you set high to 0, reading full would always return the same value as low, since it would essentially be constrained to a 32-bit integer. Unions, however, do not have to have matching memory layouts: char pink[5] __int32 fluffy; __int64 unicorns; The layout of this union is: Any unused space is simply ignored. This same rule would apply for any structs being used to group data. The size of the union is simply the size of its largest member. Setting all 5 elements of pink here would result in fluffy being equal to zero, and only the last 24-bits (or last 3 bytes) of unicorns be untouched. Likewise, setting fluffy to zero would zero out the first 4 elements in pink (indexes 0-3), leaving the 5th untouched. These unions are often used in performance critical areas where a single function must be able to recieve many kinds of data, but will only ever recieve a single group of data at a time, and so it would be more efficient to map all the possible memory configurations to a single data structure that is large enough to hold the largest group. Here is a real world example: struct __declspec(dllexport) cGUIEvent cGUIEvent() { memset(this,0,sizeof(cGUIEvent)); } cGUIEvent(unsigned char _evt, const cVecT<int>* mousecoords, unsigned char _button, bool _pressed) : evt(_evt), subevt(0), mousecoords(mousecoords), button(_button), pressed(_pressed) {} cGUIEvent(unsigned char _evt, const cVecT<int>* mousecoords, unsigned short _scrolldelta) : evt(_evt), subevt(0), mousecoords(mousecoords), scrolldelta(_scrolldelta) {} unsigned char evt; unsigned char subevt; unsigned short realevt; struct { const cVecT<int>* mousecoords; unsigned char button; bool pressed; }; struct { const cVecT<int>* mousecoords; short scrolldelta; }; struct { //the three ctrl/shift/alt bools (plus a held bool) here are compressed into a single byte bool down; unsigned char keycode; //only used by KEYDOWN/KEYUP char ascii; //Only used by KEYCHAR wchar_t unicode; //Only used by KEYCHAR char sigkeys; struct { float value; short joyaxis; }; //JOYAXIS struct { bool down; short joybutton; }; //JOYBUTTON* Here, the GUI event is mapped to memory according to the needs of the event that it is representing, without the need for complex inheritance or wasteful memory usage. Unions are indispensable in such scenarios, and as a result are very common in any sort of message handling system. One strange decorator that has gone unexplained in the above example is the __declspec(dllexport) class decorator. When creating a windows DLL, if you want anything to be usable by something inheriting the DLL, you have to export it. In VC++, this can be done with a module definition file (.def), which is useful if you'll be using GetProcAddress manually, but if you are explicitly linking to a DLL, __declspec(dllexport) automatically exports the function for you when placed on a function. When placed on a class, it automatically exports the entire class. However, for anyone to utilize it, they have to have the header file. This arises to DLLs being distributed as DLLs, linker libraries (.lib), and sets of header files, usually in an "include" directory. 
In certain cases, only some portions of your DLL will be accessible to the outside, and so you'll want two collections of header files - outside header files and internal ones that no one needs to know about. Consequently, utilizing a large number of C++ DLLs usually involves substantial organization of a whole lot of header files. Due to the compiler-specific nature of DLL management, they will be covered in Part 6. For now, its on to operator overloading, copy semantics and move semantics! Part 4: Operator Overload The Problem of Vsync If you were to write directly to the screen when drawing a bouncing circle, you would run into some problems. Because you don't do any buffering, your user might end up with a quarter circle drawn for a frame. This can be solved through Double Buffering, which means you draw the circle on to a backbuffer, then "flip" (or copy) the completed image on to the screen. This means you will only ever send a completely drawn scene to the monitor, but you will still have tearing issues. These are caused by trying to update the monitor outside of its refresh rate, meaning you will have only finished drawing half of your new scene over the old scene in the monitor's video buffer when it updates itself, resulting in half the scanlines on the screen having the new scene and half still having the old scene, which gives the impression of tearing. This can be solved with Vsync, which only flips the backbuffer right before the screen refreshes, effectively locking your frames per second to the refresh rate (usually 60 Hz or 60 FPS). Unfortunately, Vsync with double buffering is implemented by simply locking up the entire program until the next refresh cycle. In DirectX, this problem is made even worse because the API locks up the program with a 100% CPU polling thread, sucking up an entire CPU core just waiting for the screen to enter a refresh cycle, often for almost 13 milliseconds. So your program sucks up an entire CPU core when 90% of the CPU isn't actually doing anything but waiting around for the monitor. This waiting introduces another issue - Input lag. By definition any input given during the current frame can only come up when the next frame is displayed. However, if you are using vsync and double buffering, the current frame on the screen was the LAST frame, and the CPU is now twiddling its thumbs until the monitor is ready to display the frame that you have already finished rendering. Because you already rendered the frame, the input now has to wait until the end of the frame being displayed on the screen, at which point the frame that was already rendered is flipped on to the screen and your program finally realizes that the mouse moved. It now renders yet another frame taking into account this movement, but because of Vsync that frame is blocked until the next refresh cycle. This means, if you were to press a key just as a frame was put up on the monitor, you would have two full frames of input lag, which at 60 FPS is 33 ms. I can ping a server 20 miles away with a ping of 21 ms. You might as well be in the next city with that much latency. There is a solution to this - Triple Buffering. The idea is a standard flip mechanism commonly used in dual-thread lockless synchronization scenarios. With two backbuffers, the application can write to one and once its finished, tell the API and it will mark it for flipping to the front-buffer. 
Then the application starts drawing on the second, after waiting for any flipping operation to finish, and once its done, marks that for flipping to the front-buffer and starts drawing on the first again. This way, the application can draw 2000 frames a second, but only 60 of those frames actually get flipped on to the monitor using what is essentially a lockless flipping mechanism. Because the application is now effectively rendering 2000 frames per second, there is no more input lag. Problem Solved. Except not, because DirectX implements Triple Buffering in the most useless manner possible. DirectX just treats the extra buffer as a chain, and rotates through the buffers as necessary. The only advantage this has is that it avoids waiting for the backbuffer copy operation to finish before writing again, which is completely useless in an era where said copy operation would have to be measured in microseconds. Instead, it simply ensures that vsync blocks the program, which doesn't solve the input issue at all. However, there is a flag, D3DPRESENT_DONOTWAIT, that forces vsync to simply return an error if the refresh cycle isn't available. This would allow us to implement a hack resembling what triple buffering should be like by simply rolling our own polling loop and re-rendering things in the background on the second backbuffer. Problem solved! Except not. It turns out the Nvidia and Intel don't bother implementing this flag, forcing Vsync to block no matter what you do, and to make matters worse, this feature doesn't have an entry in D3DCAPS9, meaning the DirectX9 API just assumes that it exists, and there is no way to check if it is supported. Of course, don't complain about this to anyone, because of the 50% of people who asked about this who weren't simply ignored, almost all of them were immediately accused of bad profiling, and that the Present() function couldn't possibly be blocking with the flag on. I question the wisdom of people who ignore the fact that the code executed its main loop 2000 times with vsync off and 60 times with it on and somehow come to the conclusion that Present() isn't blocking the code. Either way, we're kind of screwed now. Absolutely no feature in DirectX actually does what its supposed to do, so there doesn't seem to be a way past this input lag. There is, however, another option. Clever developers would note that to get around vsync's tendency to eat up CPU cycles like a pig, one could introduce a Sleep() call. So long as you left enough time to render the frame, you could recover a large portion of the wasted CPU. A reliable way of doing this is figuring out how long the last frame took to render, then subtracting that from the FPS you want to enforce and sleep in the remaining time. By enforcing an FPS of something like 80, you give yourself a bit of breathing room, but end up finishing rendering the frame around the same time it would have been presented anyway. By timing your updates very carefully, you can execute a Sleep() call, then update all the inputs, then render the scene. This allows you to cut down the additional lag time by nearly 50% in ideal conditions, almost completely eliminating excess input lag. Unfortunately, if your game is already rendering at or below 100 FPS, it takes you 10 milliseconds to render a frame, allowing you only 2.5 milliseconds of extra time to look for input, which is of limited usefulness. 
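A bare-bones sketch of that sleep-based yield might look something like the following. This is an illustrative skeleton, not the article's actual code: UpdateInput, RenderFrame and Present are placeholders for whatever your engine and graphics API actually call, and it uses the standard <chrono> and <thread> facilities in place of the Win32 Sleep() mentioned above.

    #include <chrono>
    #include <thread>

    void UpdateInput();  // placeholder: poll the input devices
    void RenderFrame();  // placeholder: draw the scene to the backbuffer
    void Present();      // placeholder: flip the backbuffer (may still block on vsync)

    void GameLoop()
    {
      using namespace std::chrono;
      const milliseconds target(1000 / 80);   // enforce ~80 FPS to leave breathing room
      auto lastRender = steady_clock::now();

      for (;;) // a real loop would have an exit condition
      {
        auto now = steady_clock::now();
        auto frameTime = duration_cast<milliseconds>(now - lastRender);

        if (frameTime < target)                            // time to spare, so give it
          std::this_thread::sleep_for(target - frameTime); // back instead of spinning

        UpdateInput();                    // read input as late as possible
        lastRender = steady_clock::now(); // measure only the render + present time
        RenderFrame();
        Present();
      }
    }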
This illustrates why Intel and Nvidia are unlikely to care about D3DPRESENT_DONOTWAIT - modern games will never render fast enough for substantial input lag reduction. Remember when implementing the Yield that the amount of time it takes to render the frame should be the time difference between the two render calls, minus the amount of time spent sleeping, minus the amount of time Present() was blocking. As a composer, I have been on the receiving end of a lot of musical criticism - some useful, most ridiculous. I have given out quite a bit of criticism myself, but after discovering that most people aren't interested in brutally honest opinions, I have since avoided it. However, one thing that continues to come up over and over again is someone complaining about Genres. "This is too fast to be trance." "This isn't real Drum'n'Bass." Sometimes people will even slam entire swathes of subgenres, like Ishkur's rant on Epic Trance (and by extension almost anything related to it), literally accusing it as betraying the entire idea of trance: "There must be a word to describe the pain one feels when witnessing (or hearing, rather) something once pure and brilliant completely sold down the river. Sometime in the mid-90s trance decided to drop the technique of slowly introducing complicated layers and building adequate tension over long stretches, replacing them with cutesy little insta-melodies ... The average attention span, way too ritalin-freaked to pay attention to the slow, brooding trance in its original form, liked the anthemic singalong tone of the NEW McTrance, and that's why all you trance crackers are reading this right now. Not because you grew a taste for this super awesome underground music ... But because trance reformed its sound and delivery to suit [YOU]." This is repeated for something like half the listed subgenres of trance, and in fact the entire trance genre in his "Guide" is just one giant extended rant about how Trance sucks now and it used to be awesome and now we've all ruined it forever. This kind of stuck-up, bigoted, brain-melting stupidity has grated against my nerves for years, half because I just don't like stupid stuck-up dipshits, but mostly because it is simply wrong. Genres do not define music. Genres were invented so people could find music similar to songs that they liked. That's all. There are no rules for any genre other than "it should sound kind of like other songs in said genre", and even then it's commonplace to have songs associated with multiple genres. Genres are a categorization system, and nothing else. Many people try and justify their opinions by saying that they're criticizing the classification of the song instead of the song itself, and suggesting that it should be put in some kind of subgenre instead. When the inevitable subgenre usually fails to exist because the composer is being creative like their supposed to, they'll suggest something asinine, like "put it in Miscellaneous, that's what its there for." Really? Put this obviously heavily drum'n'bass influenced song in Miscellaneous with a bunch of off-the-wall experimental stuff instead of songs that, you know, actually sound like it, just because it doesn't conform to a bunch of imaginary rules you pulled out of your ass to "qualify" the song for the genre? Well why don't we just invent another subgenre? We've only got like a couple hundred of them now, 80% of which are basically the same damn thing. 
People try to defend their perceived sanctity of genres, but the problem is that its all bullshit. Let's remind ourselves, why do genres exist? Genres exist so people can find songs that sound similar to music they like. If you have a bajillion subgenres, no one's going to be able to accurately classify every single song into its own little niche, and what's more infuriating is that this misses the point completely. The vast majority of people do not have laser-guided musical tastes. They just listen to whatever the heck music they like. If they're looking for a song, they don't want to have to filter through hundreds of meaningless subgenres, because all they're really looking for is something like, Trance, or maybe Melodic Trance, and that's about as qualifying as you can get while still being useful. Consequently if your song is weird, you are better off picking the closest well-known genre of music that it sounds like and slapping it in there. And yet, it still doesn't stop. People start throwing on ridiculous prescriptive rules like, a trance song has to be mixable, and to be club friendly you have to have 1 minute of intro with no bass, or it has to be between 116-148 BPM, or you have to use these types of instruments, or you have to do X, or X, or X. Music is art, god damn it, what matters is what a song feels like. If it feels like trance even though its flying along at 166 BPM, and a lot of people who like trance also like that song, then it belongs in trance no matter how much you complain about it. Maybe stick it in "Energy Trance", it kinda gets the idea across, but its still Trance, so who cares, and even then this point is usually moot, because these arguments always come up on websites with either a set list of genres, or one that operates on keywords. In the former case, you can't qualify your genre with anything more than "trance" because the only thing they offer is "Trance" and "Techno". In the latter case, you'll have to tag it with Trance no matter what you do because otherwise no one will ever know your song exists. Attacking a song because of its perceived genre is the dumbest, most useless criticism you can ever give, unless the artist explicitly states that they are trying for a very specific sound, and even then its rarely a genre and usually more of an abstract concept used across several subgenres, in which case you should be referring to the idea, not the genre. People need to understand that if I slap a "Trance" label on to my song, it doesn't automatically mean I am trying to make whatever anglicized version of "Trance" they have deluded themselves into thinking encapsulates the entire genre (which is completely different from everyone else's), it is simply there to help them find the damn song. C# to C++ Tutorial - Part 2: Pointers Everywhere! We still have a lot of ground to cover on pointers, but before we do, we need to address certain conceptual frameworks missing from C# that one must be intimately familiar with when moving to C++. Specifically, in C# you mostly work with the Heap. The heap is not difficult to understand - its a giant lump of memory that you take chunks out of to allocate space for your classes. Anything using the new keyword is allocated on the heap, which ends up being almost everything in a C# program. However, the heap isn't the only source of memory - there is also the Stack. The Stack is best described as what your program lives inside of. I've said before that everything takes up memory, and yes, that includes your program. 
The thing is that the Heap is inherently dynamic, while the Stack is inherently fixed. Both can be re-purposed to do the opposite, but trying to get the Stack to do dynamic allocation is extremely dangerous and is almost guaranteed to open up a mile-wide security hole. I'm going to assume that a C# programmer knows what a stack is. All you need to understand is that absolutely every single piece of data that isn't allocated on the heap is pushed or popped off your program's stack. That's why most debuggers have a "stack" of functions that you can go up and down. Understanding the stack in terms of how many functions you're inside of is ok, but in reality, there are also variables declared on the stack, including every single parameter passed to a function. It is important that you understand how variable scope works so you can take advantage of declaring things on the stack, and know when your stack variables will simply vanish into nothingness. This is where { and } come in. int bunny = 1; int carrot=3; int lettuce=8; bunny = 2; // Legal //carrot=2; //Compiler error: carrot does not exist int carrot = 3; //Legal, since the other carrot no longer exists int lettuce = 0; //int carrot = 1; //Compiler error: carrot already defined int grass = 9; bunny = grass; //Still legal bunny = carrot; // Also legal //bunny = grass; //Illegal bunny = lettuce; //Legal //bunny = lettuce; //Illegal { and } define scope. Anything declared inside of them ceases to exist outside, but is still accessible to any additional layers of scope declared inside of them. This is a way to see your program's stack in action. When bunny is declared, its pushed on to the stack. Then we enter our first scope area, where we push carrot and lettuce on to the stack and set bunny to 2, which is legal because bunny is still on the stack. When the scope is then closed, however, anything declared inside the scope is popped from the stack in the exact opposite order it was pushed on. Unfortunately, compiler optimization might change that order behind the scenes, so don't rely on it, but it should be fairly consistent in debug builds. First lettuce is de-allocated (and its destructor called, if it has one), then carrot is de-allocated. Consequently, trying to set carrot to 2 outside of the scope will result in a compiler error, because it doesn't exist anymore. This means we can now declare an entirely new integer variable that is also called carrot, without causing an error. If we visualize this as a stack, that means carrot is now directly above bunny. As we enter a new scope area, lettuce is then put on top of carrot, and then grass is put on top of lettuce. We can still assign either lettuce or carrot to bunny, since they are all on the stack, but once we leave this inner scope, grass is popped off the stack and no longer exists, so any attempt to use it causes an error. lettuce, however, is still there, so we can assign lettuce to bunny before the scope closes, which pops lettuce off the stack. Now the only things on the stack are bunny and carrot, in that order (if the compiler hasn't moved things around). We are about to leave the function, and the function is also surrounded by { and }. This is because a function is, itself, a scope, so that means all variables declared inside of that scope are also destroyed in the order they were declared in. 
First carrot is destroyed, then bunny is destroyed, and then the function's parameters argc and argv are destroyed (however the compiler can push those on to the stack in whatever order it wants, so we don't know the order they get popped off), until finally the function itself is popped off the stack, which returns program flow to whatever called it. In this case, the function was main, so program flow is returned to the parent operating system, which does cleanup and terminates the process. You can declare anything that has a size determined at compile time on the stack. This means if you have an array that has a constant size, you can declare it on the stack: int array[5]; //Array elements are not initialized and therefore are undefined! int array[5] = {0,0,0,0,0}; //Elements all initialized to 0 //int array[5] = {0}; // Compiler error - your initialization must match the array size You can also let the compiler infer the size of the array: int array[] = {1,2,3,4}; //Declares an array of 4 ints on the stack initialized to 1,2,3,4 Not only that, but you can declare class instances and other objects on the stack. Class instance(arg1, arg2); //Calls a constructor with 2 arguments Class instance; //Used if there are no arguments for the constructor //Class instance(); //Causes a compiler error! The compiler will think its a function. In fact, if you have a very simple data structure that uses only default constructors, you can use a shortcut for initializing its members. I haven't gone over classes and structs in C++ yet (See Part 3), but here is the syntax anyway: struct Simple int b; const char* str; Simple instance = { 4, 5, "Sparkles" }; //instance.a is now 4 //instance.b is now 5 //instance.str is now "Sparkles" All of these declare variables on the stack. C# actually does this with trivial datatypes like int and double that don't require a new statement to allocate, but otherwise forces you to use the Heap so its garbage collector can do the work. Wait a minute, stack variables automatically destroy themselves when they go out-of-scope, but how do you delete variables allocated from the Heap? In C#, you didn't need to worry about this because of Garbage Collection, which everyone likes because it reduces memory leaks (but even I have still managed to cause a memory leak in C#). In C++, you must explicitly delete all your variables declared with the new keyword, and you must keep in mind which variables were declared as arrays and which ones weren't. In both C# and C++, there are two uses of the new keyword - instantiating a single object, and instantiating an array. In C++, there are also two uses of the delete keyword - deleting a single object and deleting an array. You cannot mix up delete statements! int* Fluffershy = new int(); int* ponies = new int[10]; delete Fluffershy; // Correct //delete ponies; // WRONG, we should be using delete [] for ponies delete [] ponies; // Just like this //delete [] Fluffershy; // WRONG, we can't use delete [] on Fluffershy because we didn't // allocate it as an array. int* one = new int[1]; //delete one; // WRONG, just because an array only has one element doesn't mean you can // use the normal delete! delete [] one; // You still must use delete [] because you used new [] to allocate it. As you can see, it is much easier to deal with stack allocations, because they are automatically deallocated, even when the function terminates unexpectedly. 
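If you want to watch the stack unwinding happen, a throwaway class with a noisy destructor makes it visible. This is a small illustrative sketch, not code from the original post; note that destruction runs in the reverse of declaration order, and that it happens even if the scope is left early:

    #include <iostream>

    struct Noisy
    {
      Noisy(const char* name) : _name(name) { std::cout << "construct " << _name << std::endl; }
      ~Noisy() { std::cout << "destruct " << _name << std::endl; }
      const char* _name;
    };

    int main()
    {
      Noisy bunny("bunny");
      {
        Noisy carrot("carrot");
        Noisy lettuce("lettuce");
        // leaving this scope destroys lettuce, then carrot
      }
      // leaving main destroys bunny, even on an early return
      return 0;
    }

Running this prints the constructions in declaration order and the destructions in the opposite order: lettuce, carrot, and finally bunny.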
std::auto_ptr takes advantage of this by taking ownership of a pointer and automatically deleting it when it is destroyed, so you can allocate the auto_ptr on the stack and benefit from the automatic destruction. However, in C++0x, this has been superseded by std::unique_ptr, which operates in a similar manner but uses some complex move semantics introduced in the new standard. I won't go into detail about how to use these here as its out of the scope of this tutorial. Har har har. For those of you who like throwing exceptions, I should point out common causes of memory leaks. The most common is obviously just flat out forgetting to delete something, which is usually easily fixed. However, consider the following scenario: void Kenny() int* kenny = new int(); throw "BLARG"; delete kenny; // Even if the above exception is caught, this line of code is never reached. Kenny(); } catch(char * str) { //Gotta catch'em all. return 0; //We're leaking Kenny! o.O Even this is fairly common: int* kitty = new int(); *kitty=rand(); if(*kitty==0) return 0; //LEAK delete kitty; These situations seem obvious, but they will happen to you once the code becomes enormous. This is one reason you have to be careful when inside functions that are very large, because losing track of if statements may result in you forgetting what to delete. A good rule of thumb is to make sure you delete everything whenever you have a return statement. However, the opposite can also happen. If you are too vigilant about deleting everything, you might delete something you never allocated, which is just as bad: int* rarity = new int(); int* spike; if(rarity==NULL) spike=new int(); delete rarity; delete spike; // Suddenly, in an alternate dimension, earth ceased to exist delete rarity; // Since this only happens if the allocation failed and returned a NULL // pointer, this will also blow up. delete spike; Clearly, one must be careful when dealing with allocating and destroying memory in C++. Its usually best to encapsulate as much as possible in classes that automate such things. But wait, what about that NULL pointer up there? Now that we're familiar with memory management, we're going to dig into pointers again, starting with the NULL pointer. Since a pointer points to a piece of memory that's somewhere between 0 and 4294967295, what happens if its pointing at 0? Any pointer to memory location 0 is always invalid. All you need to know is that the operating system does some magic voodoo to ensure that any attempted access of memory location 0 will always throw an error, no matter what. 1, 2, 3, and any other double or single digit low numbers are also always invalid. 0xfdfdfdfd is what the VC++ debugger sets uninitialized memory to, so that pointer location is also always invalid. A pointer set to 0 is called a Null Pointer, and is usually used to signify that a pointer is empty. Consequently if an allocation function fails, it tends to return a null pointer. Null pointers are returned when the operation failed and a valid pointer cannot be returned. Consequently, you may see this: int* blink = new int(); if(blink!=0) delete blink; blink=0; This is known as a safe deletion. It ensures that you only delete a pointer if it is valid, and once you delete the pointer you set the pointer to 0 to signify that it is invalid. Note that NULL is defined as 0 in the standard library, so you could also say blink = NULL. Since pointers are just integers, we can do pointer arithmetic. What happens if you add 1 to a pointer? 
If you think of pointers as just integers, one would assume it would simply move the pointer forward a single byte. This isn't what happens. Adding 1 to a pointer of type integer results in the pointer moving forward 4 bytes. Adding or subtracting an integer $i$ from a pointer moves that pointer $i\cdot n$ bytes, where $n$ is the size, in bytes, of the pointer's type. This results in an interesting parallel - adding or subtracting from a pointer is the same as treating the pointer as an array and accessing it via an index. int* kitties = new int[14]; int* a = &kitties[7]; int* b = kitties+7; //b is now the same as a int* c = &a[4]; int* d = b+4; //d is now the same as c int* e = &kitties[11]; int* f = kitties+11; //c,d,e, and f now all point to the same location So pointer arithmetic is identical to accessing a given index and taking the address. But what happens when you try to add two pointers together? Adding two pointers together is undefined because it tends to produce total nonsense. Subtracting two pointers, however, is defined, provided you subtract a smaller pointer from a larger one. The reason this is allowed is so you can do this: int* eggplants = new int[14]; int* a = &eggplants[7]; int* b = eggplants+10; int diff = b-a; // Diff is now equal to 3 a += (diff*2); // adds 6 to a, making it point to eggplants[13] diff = a-b; // diff is again equal to 3 diff = a-eggplants; //diff is now 13 ++a; //The increment operator is valid on pointers, and operates the same way a += 1 would // So now a points to eggplants[14], which is not a valid location, but this is still // where the "end" of the array technically is. diff = a-eggplants; // Diff now equals 14, the size of the array --b; // Decrement works too diff = a-b; // a is pointing to index 14, b is pointing to 9, so 14-9 = 5. Diff is now 5. There is a mistake in the code above, can you spot it? I used a signed integer to store the difference between the two pointers. What if one pointer was above 2147483647 and the other was at 0? The difference would overflow! Had I used an unsigned integer to store the difference, I'd have to be really damn sure that the left pointer was larger than the right pointer, or the negative value would also overflow. This complexity is why you have to goad windows into letting your program deal with pointer sizes over 2147483647. In addition to arithmetic, one can compare two pointers. We already know we can use == and !=, but we can also use < > <= and >=. While you can get away with comparing two completely unrelated pointers, these comparison operators are usually used in a context like the following: int* teapots = new int[15]; int* end = teapots+15; for(int* s = teapots; s<end; ++s) *s = 0; Here the for loop increments the pointer itself rather than an index, until the pointer reaches the end, at which point it terminates. But, what if you had a pointer that didn't have any type at all? void* is a legal pointer type, that any pointer type can be implicitly converted to. You can also explicitly cast void* to any pointer type you want, which is why you are allowed to explicitly cast any pointer type to another pointer type (int* p; short* q = (short*)p; is entirely legal). Doing so, however, is obviously dangerous. void* has its own problems, namely, how big is it? The answer is, you don't know. Any attempt to use any kind of pointer arithmetic with a void* pointer will cause a compiler error. 
It is most often used when copying generic chunks of memory that only care about size in bytes, and not what is actually contained in the memory, like memcpy(). void* p = (void*)teapots; p++; // compiler error unsigned short* d = (unsigned short*)p; d++; // No compiler error, but you end up pointing to half an integer d = (unsigned short*)teapots; // Still valid Now that we know all about pointer manipulation, we need to look at pointers to pointers, and to anchor this in a context that actually makes sense, we need to look at how C++ does multidimensional arrays. In C#, multidimensional arrays look like this: int[,] table = new int[4,5]; C++ has a different, but fairly reasonable stack-based syntax. When you want to declare a multidimensional array on the heap, however, things start getting weird: int unicorns[5][3]; // Well this seems perfectly reasonable, I wonder what- int (*cthulu)[50] = new int[10][50]; // OH GOD GET IT AWAY GET IT AWAAAAAY...! int c=5; int (*cthulu)[50] = new int[c][50]; // legal //int (*cthulu)[] = new int[10][c]; // Not legal. Only the leftmost parameter // can be variable //int (*cthulu)[] = new int[10][50]; // This is also illegal, the compiler is not allowed // to infer the constant length of the array. Why isn't the multidimensional array here just an int**? Clearly if int* x is equivalent to int x[], shouldn't int** x be equivalent to int x[][]? Well, it is - just look at the main() function, its got a multidimensional array in there that can be declared as just char** argv. The problem is that there are two kinds of multidimensional arrays - square and jagged. While both are accessed in identical ways, how they work is fundamentally different. Let's look at how one would go about allocating a 3x5 square array. We can't allocate a 3x5 chunk out of our computer's memory, because memory isn't 2-dimensional, its 1-dimensional. Its just freaking huge line of bytes. Here is how you squeeze a 2-dimensional array into a 1-dimensional line: As you can see, we just allocate each row right after the other to create a 15-element array ($5\cdot 3 = 15$). But then, how do we access it? Well, if it has a width of 5, to access another "row" we'd just skip forward by 5. In general, if we have an $n$ by $m$ multidimensional array being represented as a one-dimensional array, the proper index for a coordinate $(x,y)$ is given by: array[x + (y*n)]. This can be extended to 3D and beyond but it gets a little messy. This is all the compiler is really doing with multidimensional array syntax - just automating this for you. Now, if this is a square array (as evidenced by it being a square in 2D or a cube in 3D), a jagged array is one where each array is a different size, resulting in a "jagged" appearance: We can't possibly allocate this in a single block of memory unless we did a lot of crazy ridiculous stuff that is totally unnecessary. However, given that arrays in C++ are just pointers to a block of memory, what if you had a pointer to a block of memory that was an array of pointers to more blocks of memory? Suddenly we have our jagged array that can be accessed just like our previous arrays. It should be pointed out that with this format, each inner-array can be in a totally random chunk of memory, so the last element could be at position 200 and the first at position 5 billion. Consequently, pointer arithmetic only makes sense within each column. Because this is an array of arrays, we declare it by creating an array of pointers. 
This, however, does not initialize the entire array; all we have now is an array of illegal pointers. Since each array could be a different size than the other arrays (this being the entire point of having a jagged array in the first place), the only possible way of initializing these arrays is individually, often by using a for loop. Luckily, the syntax for accessing jagged arrays is the exact same as with square arrays. int** jagged = new int*[5]; //Creates an array of 5 pointers to integers. for(int i = 0; i < 5; ++i) jagged[i] = new int[3+i]; //Assigns each pointer to a new array of a unique size jagged[4][1]=0; //Now we can assign values directly, or... int* second = jagged[2]; //Pull out one column, and second[0]=0; //manipulate it as a single array // The double-access works because of the order of operations. Since [] is just an // operator, it is evaluated from left to right, like any other operator. Here it is // again, but with the respective types that each operator resolves to in parenthesis. ( (int&) ( (int*&) jagged[4] ) [1] ) = 0; As you can see above, just like we can have pointers to pointers, we can also have references to pointers, since pointers are just another data type. This allows you to re-assign pointer values inside jagged arrays, like so: jagged[2] = (int*)kitty. However, until C++0x, those references didn't have any meaningful data type, so even though the compiler was using int*&, using that in your code will throw a compiler error in older compilers. If you need to make your code work in non-C++0x compilers, you can simply avoid using references to pointers and instead use a pointer to a pointer. int* bunny; int* value = new int[5]; int*& bunnyref = bunny; // Throws an error in old compilers int** pbunny = &bunny; // Will always work bunnyref = value; // This does the same exact thing as below. *pbunny = value; // bunny is now equal to value This also demonstrates the other use of a pointer-to-pointer data type, allowing you to remotely manipulate a pointer just like a pointer allows you to remotely manipulate an integer or other value type. So obviously you can do pointers to pointers to pointers to pointers to an absurd degree of lunacy, but this is exceedingly rare so you shouldn't need to worry about it. Now you should be strong in the art of pointer-fu, so our next tutorial will finally get into object-oriented techniques in C++ in comparison to C#. Part 3: Classes and Structs and Inheritance OH MY! Signed Integers Considered Stupid (Like This Title... C# to C++ Tutorial - Part 3: Classes and Structs a...
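Since the post shows how to allocate a jagged array but not how to free it, here is a short hypothetical recap sketch covering both layouts: a square array packed into a single block using the index formula from above, and a jagged array that has to be deleted row by row (remember that every new [] needs its own delete []):

    void ArrayDemo()
    {
      // Square: a 3x5 "2D" array packed into one 1D block of 15 ints.
      int n = 5, m = 3;
      int* square = new int[n * m];
      for (int y = 0; y < m; ++y)
        for (int x = 0; x < n; ++x)
          square[x + (y * n)] = 0;   // same math the compiler does for int square[3][5]
      delete [] square;              // one allocation, one delete []

      // Jagged: an array of pointers, each row allocated separately.
      int** jagged = new int*[5];
      for (int i = 0; i < 5; ++i)
        jagged[i] = new int[3 + i];
      // ... use jagged[row][column] here ...
      for (int i = 0; i < 5; ++i)    // free each row first,
        delete [] jagged[i];
      delete [] jagged;              // then the array of pointers itself
    }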
Integration is the branch of mathematics that deals with integrals of functions. Integrals appear in a wide variety of formulas, particularly in mathematics and physics, and are written with the integral sign, as in $\int \sin(x)\,dx$, the integral of the sine function with respect to the variable $x$. The definite integral of a function $f(x)$ with respect to $x$ over an interval $[a,b]$ is denoted by $\int_{a}^{b} f(x)\,dx$. Integrals can be classified into two areas: the area called elementary calculus is devoted to the integration of elementary functions such as polynomials, rational functions, and trigonometric and exponential functions, while the area called advanced calculus is devoted to more complicated cases such as improper integrals, special functions, and functions of several variables. The remainder of this section discusses elementary functions. The indefinite integral and the definite integral are two different forms of integration. The definite integral $\int_{a}^{b} f(x)\,dx$ is a number, defined for fixed limits of integration $a$ and $b$. The indefinite integral $\int f(x)\,dx$ is the integral of a function in which the limits of integration are not fixed; it denotes the family of antiderivatives of $f$, and since any two antiderivatives of $f$ on an interval differ only by a constant, an arbitrary constant of integration is added. If $f$ is continuous on a closed interval $[a,b]$, then its definite integral over $[a,b]$ exists and $f$ has an antiderivative there; if $f$ misbehaves on the interval (for example, is unbounded), the integral may fail to exist. The fundamental theorem of calculus connects the two forms: if $F$ is an antiderivative of a continuous function $f$ on $[a,b]$, then $\int_{a}^{b} f(x)\,dx = F(b)-F(a)$. The result can be extended, with care, to unbounded intervals and unbounded integrands (improper integrals). The definite integral is an integral over a closed interval of the form $\int_{a}^{b} f(x)\,dx$, where $a$ and $b$ are real numbers and the function $f$ is continuous on the closed interval; it can be interpreted as the signed area between the curve $y=f(x)$ and the $x$-axis over that interval. The definite integral of a function of a single variable can also be defined directly as the limit of a sequence of approximations, the Riemann sums: the interval is divided into subintervals, the function is sampled in each subinterval, and the total area of the resulting rectangles approaches the value of the integral as the subdivision is refined.
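Before turning to approximating sums, here is one small worked example of the fundamental theorem in action (added for concreteness): since $\frac{x^{2}}{2}$ is an antiderivative of $x$,

$$\int_{0}^{2} x \, d x=\left[\frac{x^{2}}{2}\right]_{0}^{2}=2-0=2.$$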
The following table shows an infinite sequence of approximations of the integral of sin ("x"): Each of these terms is a quotient of the previous term by the difference between the limits of the interval. The limit of a finite sequence of approximations, if finite, is the same as the integral of the function. The limit of an infinite sequence of approximations is a function of the limits of the sequence. The limit of the sequence is therefore the original function. As an example, let the function formula_9 be defined for all real numbers "x" as: and let us evaluate the following sequence of approximations of the integral: If we let "x" approach infinity, we get the following sequence: The limit of this infinite sequence of approximations is therefore the function formula_10. A function of two variables may be integrated over a closed interval containing the interval of integration only if one of the functions is differentiable at every point in the closed interval. The indefinite integral of a function of two variables over a closed interval containing the interval of integration is the limit of the sequence of indefinite integrals of the function of the function. The limit of an infinite sequence of indefinite integrals

Areas and Distances 160 Practice Problems

A plane flew due north at 500 miles per hour for 3 hours. A second plane, starting at the same point and at the same time, flew southeast at an angle $150^{\circ}$ clockwise from due north at 435 miles per hour for 3 hours. At the end of the 3 hours, how far apart were the two planes? Round to the nearest mile.
Trigonometric Functions of Angles / The Law of Cosines

University Calculus: Early Transcendentals
Find the distance from the point to the plane. $$(0,1,1), \quad 4 y+3 z=-12$$
Vectors and the Geometry of Space / Lines and Planes in Space

Area of a triangle: Find the area of the triangle with vertices on the coordinate axes at the points $(a, 0,0),(0, b, 0),$ and $(0,0, c)$ in terms of $a, b,$ and $c$.
Cross Products / Definite Integrals

Calculus: Early Transcendental Functions
Prove Green's first identity in three dimensions (see exercise 43 in section 14.5 for Green's first identity in two dimensions): $\iiint_{Q} f \nabla^{2} g d V=\iint_{\partial Q} f(\nabla g) \cdot \mathbf{n} d S-\iiint_{Q}(\nabla f \cdot \nabla g) d V$ (Hint: Use the Divergence Theorem applied to $\mathbf{F}=f \nabla g$.)
Vector Calculus / The Divergence Theorem

Find the flux of $\mathbf{F}$ over $\partial Q$. $Q$ is bounded by $x^{2}+z^{2}=1, y=0$ and $y=1$; $\mathbf{F}=\left\langle z-y^{3}, 2 y-\sin z, x^{2}-z\right\rangle$

Find the value of the constant $k$ to make each of the following pdf's on the interval $[0, \infty) . \text { (See exercise } 61 .)$ (a) $k x e^{-2 x}$ (b) $k x e^{-4 x}$ (c) $k x e^{-r x}$
Integration Techniques / Improper Integrals / Fundamental Theorem of Calculus

21st Century Astronomy
An empirical science is one that is based on a. hypothesis. b. calculus. c. computer models. d. observed data.
Motion of Astronomical Bodies

Use the Fundamental Theorem of Calculus to find an antiderivative of $e^{-x^{2}}$
The Fundamental Theorem of Calculus

Use Part I of the Fundamental Theorem to compute each integral exactly. $$\int_{0}^{4} x(x-2) d x$$
Indefinite Integrals

Find the volume of the solid formed by revolving the region bounded by $y=x \sqrt{\sin x}$ and $y=0(0 \leq x \leq \pi)$ about the $x$ -axis.
Integration by Parts

Determine whether the integral converges or diverges. Find the value of the integral if it converges. $$\int_{-\infty}^{\infty} \frac{1}{x^{2}} d x$$

Evaluate the integral using integration by parts and substitution. (As we recommended in the text, "Try something!") $$\int \sin (\ln x) d x$$

Introductory and Intermediate Algebra for College Students
Solve the systems in Exercises $79-80$. $$\left\{\begin{array}{l} \log _{y} x=3 \\ \log _{y}(4 x)=5 \end{array}\right.$$
Conic Sections and Systems of Nonlinear Equations / Systems of Nonlinear Equations in Two Variables

Make Sense? Determine whether each statement "makes sense" or "does not make sense" and explain your reasoning. I think that the nonlinear system consisting of $x^{2}+y^{2}=36$ and $y=(x-2)^{2}-3$ is easier to solve graphically than by using the substitution method or the addition method.

Explain how to solve a nonlinear system using the substitution method. Use $x^{2}+y^{2}=9$ and $2 x-y=3$ to illustrate your explanation.

Riemann Sums
Evaluate $\int_{0}^{2}\left[\tan ^{-1}(4-x)-\tan ^{-1} x\right] d x$ by rewriting it as a double integral and switching the order of integration.
Multiple Integrals / Double Integrals

Compute the Riemann sum for the given function and region, a partition with $n$ equal-sized rectangles and the given evaluation rule. $f(x, y)=x+2 y^{2}, 0 \leq x \leq 2,-1 \leq y \leq 1, n=4,$ evaluate at midpoint

Use the Fundamental Theorem if possible or estimate the integral using Riemann sums. $$\int_{1}^{4} \frac{x^{2}}{x^{2}+4} d x$$

71 Practice Problems
Calculus for Scientists and Engineers: Early Transcendental

Air flow in the lungs: A reasonable model (with different parameters for different people) for the flow of air in and out of the lungs is $$V^{\prime}(t)=-\frac{\pi V_{0}}{10} \sin \left(\frac{\pi t}{5}\right)$$, where $V(t)$ is the volume of air in the lungs at time $t \geq 0,$ measured in liters, $t$ is measured in seconds, and $V_{0}$ is the capacity of the lungs. The time $t=0$ corresponds to a time at which the lungs are full and exhalation begins. a. Graph the flow rate function with $V_{0}=10 \mathrm{L}$ b. Find and graph the function $V$, assuming that $V(0)=V_{0}=10 \mathrm{L}$ c. What is the breathing rate in breaths/minute?
Applications of Integration / Velocity and Net Change

Filling a tank: A $200-\mathrm{L}$ cistern is empty when water begins flowing into it (at $t=0$ ) at a rate (in liters/minute) given by $Q^{\prime}(t)=3 \sqrt{t}.$ a. How much water flows into the cistern in 1 hour? b. Find and graph the function that gives the amount of water in the tank at any time $t \geq 0.$ c. When will the tank be full?

Where do they meet? Kelly started at noon $(t=0)$ riding a bike from Niwot to Berthoud, a distance of $20 \mathrm{km},$ with velocity $v(t)=15 /(t+1)^{2}$ (decreasing because of fatigue). Sandy started at noon $(t=0)$ riding a bike in the opposite direction from Berthoud to Niwot with velocity $u(t)=20 /(t+1)^{2}$ (also decreasing because of fatigue). Assume distance is measured in kilometers and time is measured in hours. a. Make a graph of Kelly's distance from Niwot as a function of time. b. Make a graph of Sandy's distance from Berthoud as a function of time. c. How far has each person traveled when they meet? When do they meet? d. More generally, if the riders' speeds are $v(t)=A /(t+1)^{2}$ and $u(t)=B /(t+1)^{2}$ and the distance between the towns is $D,$ what conditions on $A, B,$ and $D$ must be met to ensure that the riders will pass each other? e. Looking ahead: With the velocity functions given in part (d), make a conjecture about the maximum distance each person can ride (given unlimited time).
CommonCrawl
Rus. J. Nonlin. Dyn.: Нелинейная динам., 2018, Volume 14, Number 4, Pages 473–494 (Mi nd626)
Nonlinear physics and mechanics
Dynamics of a Body with a Sharp Edge in a Viscous Fluid
I. S. Mamaev (a, b), V. A. Tenenev (b), E. V. Vetchanin (b, c)
a Institute of Mathematics and Mechanics of the Ural Branch of RAS, ul. S. Kovalevskoi 16, Ekaterinburg, 620990 Russia
b Kalashnikov Izhevsk State Technical University, ul. Studencheskaya 7, Izhevsk, 426069 Russia
c Udmurt State University, ul. Universitetskaya 1, Izhevsk, 426034 Russia
Abstract: This paper addresses the problem of plane-parallel motion of the Zhukovskii foil in a viscous fluid. Various motion regimes of the foil are simulated on the basis of a joint numerical solution of the equations of body motion and the Navier–Stokes equations. From the simulations of longitudinal, transverse and rotational motions, the average drag coefficients and added masses are calculated. The values of the added masses agree with results published previously and obtained within the framework of the model of an ideal fluid. It is shown that the circulation determined from the numerical experiments correlates with that determined according to the model of an ideal fluid, with correlation coefficient $\mathcal{R}=0.722$. Approximations for the lift force and the moment of the lift force are constructed as functions of the translational and angular velocities of the foil. The equations of motion of the Zhukovskii foil in a viscous fluid are written taking into account the found approximations and the drag coefficients. The results calculated with the proposed mathematical model are in qualitative agreement with the results of the joint numerical solution of the equations of body motion and the Navier–Stokes equations.
Keywords: Zhukovskii foil, Navier–Stokes equations, joint solution of equations, finite-dimensional model, viscous fluid, circulation, sharp edge
Funding (grant numbers): Ministry of Education and Science of the Russian Federation, 1.2405.2017/4.6; Russian Foundation for Basic Research, 15-08-09093-a and 18-08-00995-a. The work of V. A. Tenenev (Section 2 and Conclusion) was carried out within the framework of the state assignment given to the Izhevsk State Technical University, 1.2405.2017/4.6. The work of E. V. Vetchanin (Introduction and Section 1) and I. S. Mamaev (Section 3) was supported by the Russian Foundation for Basic Research under grants Nos. 15-08-09093-a and 18-08-00995-a, respectively.
DOI: https://doi.org/10.20537/nd180404
MSC: 37Mxx, 65Nxx, 70Exx, 76Dxx
Accepted: 07.09.2018
Publication language: English
Citation: I. S. Mamaev, V. A. Tenenev, E. V. Vetchanin, "Dynamics of a Body with a Sharp Edge in a Viscous Fluid", Нелинейная динам., 14:4 (2018), 473–494
http://mi.mathnet.ru/nd626
http://mi.mathnet.ru/rus/nd/v14/i4/p473
Ivan S. Mamaev, Evgeny V. Vetchanin, "The Self-propulsion of a Foil with a Sharp Edge in a Viscous Fluid Under the Action of a Periodically Oscillating Rotor", Regul. Chaotic Dyn., 23:7-8 (2018), 875–886
V. A. Klekovkin, "Simulation of the motion of a propellerless mobile robot controlled by rotation of the internal rotor", Vestn. Udmurt. Univ.-Mat. Mekh. Kompyuternye Nauk., 30:4 (2020), 645–656
CommonCrawl
A quantum annealing approach to ionic diffusion in solids
Keishu Utimula1, Tom Ichibha2, Genki I. Prayogo2, Kenta Hongo3, Kousuke Nakano2,4 & Ryo Maezono2
Scientific Reports volume 11, Article number: 7261 (2021)
We have developed a framework for using quantum annealing computation to evaluate a key quantity in ionic diffusion in solids, the correlation factor. Existing methods can only calculate the correlation factor analytically in the case of physically unrealistic models, making it difficult to relate microstructural information about diffusion path networks obtainable by current ab initio techniques to macroscopic quantities such as diffusion coefficients. We have mapped the problem into a quantum spin system described by the Ising Hamiltonian. By applying our framework in combination with ab initio technique, it is possible to understand how diffusion coefficients are controlled by temperatures, pressures, atomic substitutions, and other factors. We have calculated the correlation factor in a simple case with a known exact result by a variety of computational methods, including simulated quantum annealing on the spin models, the classical random walk, the matrix description, and quantum annealing on D-Wave with hybrid solver. This comparison shows that all the evaluations give consistent results with each other, but that many of the conventional approaches require infeasible computational costs. Quantum annealing is also currently infeasible because of the cost and scarcity of qubits, but we argue that when technological advances alter this situation, quantum annealing will easily outperform all existing methods.
The quantum annealing technique1,2 has been widely and successfully applied to challenging combinatorial optimizations3, including NP(Non-deterministic Polynomial time)-hard4 and NP-complete problems3,5,6,7. Realistic problems such as the capacitated vehicle routing problem (CVRP), optimization of traffic quantity8,9,10,11, investment portfolio design12, scheduling problems13, and digital marketing14 have recently been addressed by quantum annealing. The technique has also been applied to improve the performance of machine learning15,16. In the chemistry and materials science domain, however, relatively few applications have been found, other than investigation of the molecular similarity problem17 or the search for protein conformations18.
This contrasts with the many applications of quantum gate computing to this field19, e.g., in quantum phase estimation. This imbalance is self-perpetuating: chemists and materials scientists are unfamiliar with quantum annealing, and so do not think to use it. Finding additional applications of the technique is therefore important not only for the sake of the applications themselves, but also for the sake of increasing recognition of quantum annealing as a useful method in this domain. In the quantum annealing framework, an optimization problem is mapped into a quantum spin system described by the Ising Hamiltonian1,2. The problem is then solved by searching for optimal spin configurations minimizing the energy of the Hamiltonian. In this framework, the problem of finding an optimum in the presence of many local minima is solved by using quantum tunneling (i.e. virtual hopping) to cross high energy barriers. The quantum framework is an increasingly popular tool for the solution of optimization problems in the everyday, classical world. However, its application to problems in the quantum world17 seems to be surprisingly rare. In the present study, we applied it to ionic diffusion in solids20. This quantum-mechanical topic, which is of great interest in both pure and applied materials science, originally attracted attention in connection with the microscopic analysis of mechanical strengths21, and more recently has been connected to the efficiency of batteries, systems where charge-carrying ions diffusing in the solid electrolyte are clearly of central importance22,23,24. Among the various mechanisms20 of ionic diffusion, we concentrate here on the vacancy mechanism20, in which ions hop only between lattice sites. Although many ab initio works have provided insight into microscopically 'easier paths' for the ion to hop along, it remains difficult to get practically useful knowledge of the diffusion coefficient D as a macroscopic quantity. To connect the microscopic knowledge with the macroscopic quantity, we must cope with the difficult problem of counting all possible processes by which an ion is pulled back toward a vacancy25 (while also being pulled in other directions, as explained in the next section). This process is described by the correlation factor20,25 f. The evaluation of f, which involves identifying the optimum routing as a vacancy hops around on lattice sites for a given anisotropic easiness, is essential for connecting the microscopic analysis with the evaluation of practically useful macroscopic quantities25. Such a routing problem is analogous to classical ones that have been successfully treated in the annealing framework. Otherwise, the evaluation is far too difficult to solve in the general case; so far, only very limited cases and simple models (e.g., the simple cubic lattice) have been solved20. In the present work, we provide a way to formulate the evaluation in the annealing framework, and show that the method successfully overcomes difficulties unsolved by conventional approaches. Correlation factor in diffusion mechanism We consider a form of atomic diffusion where the atom to be considered (the 'tracer') hops onto a neighboring vacancy site ('hole') generated by thermal processes. Let the tracer be located on a site \(\alpha\). At the initial step (\(i=0\)), we will write \(\alpha = S\) (Start). Any hopping of the tracer onto neighboring vacant sites generates a hole on \(\alpha = S\) at the \(i=1\) step. 
This hole then becomes a possible vacant site through which the tracer may return to \(\alpha = S\), a process described as 'the hole pulls the tracer back with a certain probability'. This probability manifests itself as a reduction of the effective stride of the tracer by a factor f, the correlation factor of the diffusion.
Fig. 1. Examples of snapshots for the vacancy (white circles, initially at site S) attracting a tracer (white crosses at site T) to the vacancy's position. The horizontal direction to the right is defined to be identical to that of the diffusion flow under consideration. The vacancy is located at one of the Z sites neighboring site T (Z = 6 as an example in the panels) right before exchanging positions with the tracer. The vacancy site is denoted by k. The attraction angles from site k are \(\theta _1 = \pi\), \(\theta _2 = \pi - \varphi _2\), \(\theta _3 = \varphi _3\), \(\theta _4 = 0\), \(\ldots\). Panel (a) shows the most likely case, in which the vacancy pulls the tracer backward; panel (b) shows the vacancy pulling the tracer forward after detour movements.
While the simplest picture would be an immediate 'pull-back' by a vacancy at \(\alpha = S\) when \(i=2\), we must take into account further ways in which a wandering vacancy can attract the tracer when \(i\ge 3\). We shall therefore consider the final state (where the vacancy is about to attract the tracer). Let the site \(\alpha = T\) be where the tracer is located at step \(i=(N-1)\), immediately before it is finally attracted back to the neighboring vacancy. Because this is an exchange process, the vacancy will be located at \(\alpha = T\) when \(i=N\). To specify the geometry, let \(\theta =0\) be the direction of the diffusion flow, with a radius vector centered at \(\alpha = T\) (Fig. 1). Let the number of sites neighboring \(\alpha = T\) be Z, with locations specified by \(\theta _k\). A pull-back by a vacancy at \(\theta _k\) then contributes to the diffusion through its projection, \(\cos \theta _k\). Letting \(P_k\) be the probability of finding the vacancy at a specific \(\theta _k\) among the Z neighbors when \(i=(N-1)\), the 'average cosine',
$$\begin{aligned} \left\langle {\cos \theta } \right\rangle = \sum \limits _{k=1}^Z {{P_k}\cos {\theta _k}}, \end{aligned}$$
is what matters for the correlation factor. Further consideration is required to take into account the fact that a pulling-back process is itself also subject to pulling-back. Such multiple processes are finally convoluted into the form25,26
$$\begin{aligned} f = 1 + 2\sum \limits _{n = 1}^\infty {\left\langle {\cos \theta } \right\rangle ^n} = \frac{ {1+\left\langle {\cos \theta } \right\rangle } }{1-{\left\langle {\cos \theta } \right\rangle }} \ . \end{aligned}$$
With \(\theta\) as in Fig. 1, this factor ranges from \(f = 0\) (\(\theta = \pi\)) through \(f = 1\) (\(\theta = \pi /2\)) to \(f\rightarrow \infty\) (\(\theta \rightarrow 0\)).
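As an illustrative aside (not part of the original paper), the short Python sketch below evaluates Eqs. (1) and (2) for a square lattice with Z = 4 neighbors; the probabilities P_k are invented placeholders, chosen only to show the mechanics of the calculation.

```python
# Minimal sketch of Eqs. (1)-(2): average cosine and correlation factor.
# The angles are those of the Z = 4 neighbours of the tracer site; the
# probabilities P are hypothetical and would come from the trajectory counting.
import math

theta = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]   # neighbour directions
P = [0.10, 0.20, 0.50, 0.20]                           # assumed P_k, sums to 1

mean_cos = sum(p * math.cos(t) for p, t in zip(P, theta))   # Eq. (1)
f = (1 + mean_cos) / (1 - mean_cos)                         # Eq. (2), summed geometric series

print(f"<cos theta> = {mean_cos:.3f}, correlation factor f = {f:.3f}")
```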
The problem of getting the optimum trajectories is well formulated as a routing problem solved by quantum annealing, as described in the 'Introduction' section. To facilitate this approach, we shall introduce Ising spins to describe the time evolution of the position of the vacancy as follows: Let \(q_{\alpha ,i}\) take the value 1 when a vacancy is located at the site \(\alpha\) at the step i, and otherwise take the value 0. The initial (final) condition is then described as \(q_{S,1}=1\) (\(q_{T,N}=1\)). Under these conditions, the annealing framework is capable of providing optimum trajectories for \(i=2,\ldots ,(N-1)\). The probability that \(q_{k,N-1}=1\) corresponds to \(P_k\) in Eq. (1). A trajectory is expressed by a spin alignment \(\left\{ q_{\alpha ,i}\right\}\) governed by an Ising Hamiltonian8,9,10,11:
$$\begin{aligned} {\hat{H}}_N= & {} \sum \limits _{\alpha ,\beta } {\sum \limits _{i = 1}^{N - 1} {\left( {{t_{\alpha \rightarrow \beta }}\cdot {q_{\alpha ,i}} {q_{\beta ,i + 1}}} \right) } } + {\lambda _2}\sum \limits _{i = 1}^N {{{\left( {\sum \limits _\alpha {{q_{\alpha ,i}} - 1} } \right) }^2}} \nonumber \\&+ {\lambda _3}{\left( {{q_{S,1}} - 1} \right) ^2} + {\lambda _4}{\left( {\sum \limits _{i = 2}^{N - 1} {{q_{\mathrm{{T}},i}}} } \right) ^2} + {\lambda _5}{\left( {{q_{T,N}} - 1} \right) ^2}.\end{aligned}$$
The first term describes the hopping of a vacancy between sites, \(\alpha \rightarrow \beta\). The hopping amplitude \(t_{\alpha \rightarrow \beta }\) corresponds to the hopping probability p, which scales with a temperature (T) dependence \(p_{\alpha \rightarrow \beta }\sim \exp {\left( - \Delta E_{\alpha \rightarrow \beta }/T \right) } \sim \exp {\left( - t_{\alpha \rightarrow \beta }/T \right) }\). Here \(\Delta E_{\alpha \rightarrow \beta }\) is the barrier energy for the hopping, which can be evaluated by ab initio calculations25. The amplitude t is therefore related to p by \(t\propto -\ln {p}\). The terms with \(\lambda _3\) and \(\lambda _5\) impose the initial and final conditions as constraints. The term with \(\lambda _2\) expresses the condition that only one vacancy exists over all the sites, i.e., the assumption that a single vacancy contributes to the pulling-back as the primary contribution to f, ignoring multiple-vacancy processes as secondary. This assumption is reasonable except for some cases; note that most of the exceptions are face-centered metallic crystals, where the bi-vacancy process contributes significantly to self-diffusion when the temperature is higher than 2/3 of the melting temperature20. The term with \(\lambda _4\) means that the vacancy never exchanges its position with the tracer until \(i=N\), as the problem assumes.
Evaluation of the correlation factor
As a concrete example, consider a \(5 \times 5\) lattice in two dimensions:
$$\begin{aligned} \left( \begin{array}{l} {{(0,0) \quad (0,1) \quad (0,2) \quad (0,3) \quad (0,4) }}\\ {{(1,0) \quad (1,1) \quad (1,2) \quad (1,3) \quad (1,4) }}\\ {{(2,0) \quad (2,1) \quad (2,2) \quad (2,3) \quad (2,4) }}\\ {{(3,0) \quad (3,1) \quad (3,2) \quad (3,3) \quad (3,4) }}\\ {{(4,0) \quad (4,1) \quad (4,2) \quad (4,3) \quad (4,4) }} \end{array} \right) , \end{aligned}$$
where the entries in the matrix are the site indices. Suppose that a tracer located initially at (2,1) hops onto (2,2), where initially there was a vacancy.
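To make the structure of Eq. (3) concrete, here is a hedged sketch of how the Hamiltonian can be assembled as a QUBO dictionary over the binary variables q_{(site, step)}. The function name, the penalty values lam2 to lam5, and the inputs `neighbours` and `t` are illustrative assumptions, not the authors' code; the constraint squares are expanded using q squared equal to q, and constant offsets are dropped.

```python
# Sketch: build Eq. (3) as a QUBO dictionary Q[(u, v)] -> coefficient.
# A dictionary of this form is what typical QUBO samplers accept.
from collections import defaultdict
from itertools import combinations, product

def build_qubo(sites, neighbours, t, S, T, N,
               lam2=4.0, lam3=8.0, lam4=8.0, lam5=8.0):
    Q = defaultdict(float)
    var = {(a, i): idx for idx, (a, i) in enumerate(product(sites, range(1, N + 1)))}

    # hopping term: sum_i sum_{a,b} t[a->b] * q_{a,i} * q_{b,i+1}
    for i in range(1, N):
        for a in sites:
            for b in neighbours[a]:
                u, v = var[(a, i)], var[(b, i + 1)]
                Q[(min(u, v), max(u, v))] += t[(a, b)]

    # one vacancy per step: lam2 * (sum_a q_{a,i} - 1)^2, expanded with q^2 = q
    for i in range(1, N + 1):
        vs = [var[(a, i)] for a in sites]
        for u in vs:
            Q[(u, u)] += -lam2
        for u, v in combinations(vs, 2):
            Q[(u, v)] += 2 * lam2

    # boundary conditions (q_{S,1} - 1)^2 and (q_{T,N} - 1)^2 reduce to -q
    Q[(var[(S, 1)], var[(S, 1)])] += -lam3
    Q[(var[(T, N)], var[(T, N)])] += -lam5

    # forbid early coalescence: lam4 * (sum_{i=2..N-1} q_{T,i})^2
    vs = [var[(T, i)] for i in range(2, N)]
    for u in vs:
        Q[(u, u)] += lam4
    for u, v in combinations(vs, 2):
        Q[(u, v)] += 2 * lam4

    return Q, var
```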
We then consider the process by which the tracer is pulled 'back' by the vacancy at an angle \(\theta _k\) with probability \(P_k\), in order to evaluate the average given by Eq. (1). The process is complete when the vacancy coalesces with the tracer again (\(q_{T,N}=1\)). Contributions to the summation come not only from the direct 'pulling back' (\(\theta _k = \pi , N=2\)) from (2,1) [the site where a new vacancy appears due to the tracer's hopping], but also from other possible sites at which the vacancy arrives after strolling around for several time steps, as shown in Table 1.
Table 1 Possible trajectories for a vacancy generated at (2,1) due to hopping by a tracer.
Let us denote the contributions from the trajectories obtained by the simulation with the Hamiltonian \(H_N\) as
$$\begin{aligned} P_k^{(N)} =\sum _{l\in {\Omega };~\mathrm{trajectories}}{\pi _l} \ , \end{aligned}$$
where l indexes each trajectory and \(\Omega\) is the space formed by all the contributing trajectories. Each contribution from a trajectory with energy \(E_l^{(N)}\) is expressed as \(\pi _l\sim \exp {\left( -E_l^{(N)} /T \right) }\). For example, in the case of \(N=4\) in Table 1, \(\pi _l\sim \exp {\left( -3t/T \right) }\sim p^3\). Noticing that trajectories with different N values (numbers of steps to arrive at coalescence with the tracer) are mutually exclusive, the probability \(P_k\) can be expressed as a sum of these exclusive contributions with different N:
$$\begin{aligned} {P_k} = \sum \limits _{N = 2}^{{N_{\max }}} {P_k^{(N)}} \ , \end{aligned}$$
where \({P_k^{(N)}}\) is the probability of finding a vacancy at a neighboring site with \({\theta _k}\), obtained from the simulation with \({\hat{H}}_N\). \({P_k^{(N)}}\) is obtained as the number of trajectories with \({\theta _k}\) divided by the total number of trajectories within the simulation using \({\hat{H}}_N\). In this procedure, quantum annealing computers (QACs) are used only to identify the optimal trajectories, while the evaluation of Eq. (1) is made by classical counting over a table like Table 1. To build such a table, the annealing simulations have to be repeated even for a fixed \({\hat{H}}_N\): since a single annealing run returns one optimal trajectory, enough repetitions are required to find all the possible trajectories, which are likely to be degenerate even for a given \({\hat{H}}_N\). After all the possible trajectories have been tabulated, Eq. (1) can be evaluated by classical counting on the table. One might wonder whether it is possible to perform an on-the-fly evaluation of Eq. (1) during the search for optimal trajectories; for example, if '\(\theta =0\)' were obtained 5 times among the sampled solutions, one might be tempted to use the frequency of appearance of a particular angle for an 'on-the-fly' calculation of \(P_k\). However, this cannot be justified, at least for QAC, as we note later in the first paragraph of the 'Discussion' section.
Verification of benchmark
For some selected cases with simple lattices, it is possible to describe the multi-scattering processes contributing to the correlation factor in terms of recursion equations, and thus to find analytical solutions20; some examples are shown in Table 2.
Table 2 Correlation factors f, obtained analytically for simple lattice model systems20.
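The sketch below is a purely classical, Monte-Carlo version of the counting just described (essentially the classical-random-walk benchmark referred to as method (c) further on), applied to the 5x5 example with the tracer at T = (2, 2) and the new vacancy at S = (2, 1). Uniform hopping probabilities, hard-wall boundaries, and the cutoff MAX_HOPS are assumptions made here for illustration; they are not specified in the paper.

```python
# Classical random-walk estimate of P_k, <cos theta> and f for the 5x5 example.
import math
import random

SIZE, S, T = 5, (2, 1), (2, 2)
FLOW = (T[0] - S[0], T[1] - S[1])          # direction of the diffusion flow
MAX_HOPS, N_WALKS = 12, 200_000            # truncation and sample size (assumed)

def neighbours(site):
    r, c = site
    cand = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return [(a, b) for a, b in cand if 0 <= a < SIZE and 0 <= b < SIZE]

counts = {}                                 # neighbour of T from which it arrives
for _ in range(N_WALKS):
    pos = S
    for _ in range(MAX_HOPS):
        nxt = random.choice(neighbours(pos))
        if nxt == T:                        # vacancy coalesces with the tracer
            counts[pos] = counts.get(pos, 0) + 1
            break
        pos = nxt

total = sum(counts.values())
mean_cos = 0.0
for site, n in counts.items():
    v = (site[0] - T[0], site[1] - T[1])    # direction the tracer is pulled in
    cos_t = (v[0] * FLOW[0] + v[1] * FLOW[1]) / (math.hypot(*v) * math.hypot(*FLOW))
    mean_cos += (n / total) * cos_t         # Eq. (1) with P_k = n / total
print("estimated f =", (1 + mean_cos) / (1 - mean_cos))
```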
Table 3 The convergence of the correlation factors evaluated by '(a) Quantum Annealing with D-wave (QA)', '(b) Simulated Quantum Annealing (SQA)', '(c) Classical Random Walk (CRW)', and '(d) Matrix Updating method (MU)', depending on the system size N. The values given in Table 2 can be used to test our formulation and its implementation. We are able to reproduce the value f = 0.46727 for a two-dimensional tetragonal lattice by our procedure, as described below. Note that the analytical solution is obtained only for a quite limited case in which the initial and the final positions of the tracer are within one step of each other, (\(T=S+1\))28, while our treatment is never limited by such toy-model assumption. The present approach is therefore capable of providing interesting knowledge going beyond what can be learned by existing methods. Though '(a) Quantum annealing computers (QAC)' are ultimately the preferred technology for counting up trajectories to get \(P_k\), the availability of such devices is still limited, not only by financial considerations, but also by the total number of qubits technically achieved. As explained later, current availability enables us to try up to \(N_{\mathrm{max}}\sim 5\): far too few to verify the calibration of the two-dimensional tetragonal lattice (f = 0.46727). As possible substitutes, we can list (b) simulated quantum annealing (SQA)29,30/ path integral monte carlo (PIMC)31,32', '(c) classical random walk (CRW)', and '(d) matrix updating (MU)', in order of their closeness to (a). Unfortunately, for larger \(N_{\mathrm{max}}\), the feasibility of (b) and (c) proved limited. For '(b) SQA', the required computational cost is dominated by the annealing time, i.e., the time to decrease the transverse magnetic field. To achieve the equilibrium Boltzmann distribution, this time increases with system size N as \(\sim \exp (N)\)32. This limits the possible number of trajectories obtainable at an affordable cost, leading to larger error bars in Eq. (6), as shown in Table 3. For '(c) CRW', feasibility is assured up to \(N_{\mathrm{max}}\)=12 in the present case. In this method, the computational time is dominated by the number of stochastic trials. For a step there are Z possible ways of hopping to nearest neighboring sites (\(Z=4\) in this benchmark case). The total number of possibilities for an N-step trajectory amounts to \(Z^N\), which easily becomes very large as N increases. By using '(d) MU', we can successfully verify the calibration by going up to \(N_{\mathrm{max}}\)=500, as described below (Table 3). We introduce the vacancy hopping operator $$\begin{aligned} {\hat{T}} = \sum \limits _{i,j} {{t_{ij}}\cdot a_i^\dag a_j}, \end{aligned}$$ Consider a field described by the matrix $$\begin{aligned} {F_0} = \left( {\begin{array}{*{20}{c}} 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0\\ 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0\\ 0&{}\quad 0&{}\quad 1&{}\quad 0&{}\quad 0\\ 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0\\ 0&{}\quad 0&{}\quad 0&{}\quad 0&{}\quad 0 \end{array}} \right), \end{aligned}$$ where each element \((F_{0})_{i,j}\) corresponds to the location of a hopping site. The value '1' in \(F_0\) indicates the (initial) location of a vacancy, whereas '0' means the vacancy is absent. We update the field at step K to \(F_K\), by $$\begin{aligned} {F_K} = {\hat{T}}\cdot {F_{K - 1}} \ . \end{aligned}$$ In the present case (two-dimensional tetragonal lattice), we assume \(t_{ij}\) is isotropic and only connects between the nearest neighboring sites. 
This drives the field matrix to
$$\begin{aligned} {\left( {{F_K}} \right) _{i,j}} = {\left( {{F_{K - 1}}} \right) _{i - 1,j}} + {\left( {{F_{K - 1}}} \right) _{i + 1,j}} + {\left( {{F_{K - 1}}} \right) _{i,j + 1}} + {\left( {{F_{K - 1}}} \right) _{i,j - 1}} \ . \end{aligned}$$
The constraint that the vacancy not coalesce with the tracer until the given final step N can be expressed as \({\left( {{F_K}} \right) _{i',j'}}=0\) for \(K < N\), where \(\left( i',j'\right)\) is the location of the tracer site. After updating the field matrix until step N, each matrix element gives the number of possible trajectories that place the vacancy at that site after N steps, from which we can evaluate \(P_k\) and thus f. As shown in Table 3, f falls as N increases. It is at 0.468 when \(N = 500\), and the rate of decline has become very small. Thus, it appears to be asymptotically approaching the value from the analytical solution, 0.467. The feasibility of (a) quantum annealing computers (QAC) is determined in large part by the available number of qubits, \(N_{\mathrm{Qubit}}^{(\mathrm available)}\), currently 204833. The required number of qubits scales in the present case as the product of \(N_{\mathrm{max}}\) and the size of the lattice (\(M\times M\) in the two-dimensional case; \(5\times 5\) in the example). Therefore, the maximum possible \(N_{\mathrm{max}}^{(\mathrm possible)}\) may be estimated as 81 (= 2048/25); for a user with a realistic budget, it is probably closer to five. We note, however, that a computational limitation that is directly and linearly proportional to \(N_{\mathrm{Qubit}}^{(\mathrm available)}\) still renders (a) more promising than other methods like (b) and (c). For (a) QAC, we used D-Wave34 applied to lattices of size \((N_{\mathrm{max}}+1) \times (N_{\mathrm{max}}+1)\) for \(N_{\mathrm{max}}=2,4,6\ldots\) in order. Since the implemented topology of qubit interconnections (the chimera graph) cannot in general represent the Ising spin couplings exactly as they appear in the Hamiltonian, some of the couplings (spins that couple directly in the Hamiltonian, say \(J_{12}\sigma _1\sigma _2\)) are realized equivalently by chains of synchronized qubits (i.e., \(\sigma _1\)–\(\sigma _2\) in the Hamiltonian is realized as \(\sigma _1\)–\(\tau _1\)...\(\tau _2\)–\(\sigma _2\), where \(\tau _{1}\) and \(\tau _{2}\) are distant but synchronized)35. This technique consumes more qubits than the number nominally required by the model Hamiltonian. Even using 2000 qubits, we could embed our problem only up to \(N_{\mathrm{max}}=2\) on the D-Wave with this technique. In such a case, we can use the 'hybrid solver' to handle the problem36. The solver itself runs on a classical computer, decomposing the original problem into a set of smaller chimera graphs that the D-Wave hardware can handle. The set of D-Wave results is then post-processed by the solver on the classical computer to obtain the answer to the original problem. By using the solver, we have confirmed that proper trajectories are obtained up to at least \(N_{\mathrm{max}}=12\). However, to finally obtain the correlation factor we have to count over all the trajectories, which we could achieve only up to \(N_{\mathrm{max}}=6\) because of the D-Wave resource limitation. For \(N_{\mathrm{max}}\)=2, 4, and 6, we sampled 1, 15, and 240 solutions, covering 100%, 100%, and 94.54% of the trajectories, respectively.
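As a hedged illustration of the matrix-updating (MU) scheme of Eqs. (7)-(9) sketched above (not the authors' implementation), the following Python snippet propagates the trajectory-count field on the 5x5 lattice, zeroing the tracer site until the final step and reading off the counts at the four neighbours of the tracer; these counts are what feed P_k^(N) in Eq. (6). Python integers are arbitrary precision, so the overflow issue discussed in the text for fixed-width types does not arise here, at the cost of speed.

```python
# Matrix-updating sketch: F_K[i, j] = number of K-step vacancy trajectories
# reaching site (i, j), with the tracer site excluded before the final step.
import numpy as np

SIZE, START, TRACER = 5, (2, 1), (2, 2)     # 5x5 example from the text

def mu_step(F):
    """One application of Eq. (9): sum over the four nearest neighbours."""
    G = np.zeros_like(F)
    G[1:, :] += F[:-1, :]
    G[:-1, :] += F[1:, :]
    G[:, 1:] += F[:, :-1]
    G[:, :-1] += F[:, 1:]
    return G

def neighbour_counts(N):
    """Trajectory counts at the neighbours of the tracer at step N-1."""
    F = np.zeros((SIZE, SIZE), dtype=object)   # object dtype -> exact big ints
    F[START] = 1
    for _ in range(N - 2):                     # hops taking step 1 to step N-1
        F[TRACER] = 0                          # no early coalescence (K < N)
        F = mu_step(F)
    F[TRACER] = 0
    r, c = TRACER
    return {(r - 1, c): F[r - 1, c], (r + 1, c): F[r + 1, c],
            (r, c - 1): F[r, c - 1], (r, c + 1): F[r, c + 1]}

print(neighbour_counts(4))   # counts that feed P_k^{(4)} in Eq. (6)
```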
All the above limitations, however, come purely from technical and implementational aspects of quantum annealing machines. It should be straightforward to push these limits further, assisted by the intensive development of the hardware, such as the increasing \(N_{\mathrm{Qubit}}^{\mathrm{(available)}}\) and improved interconnection topologies (e.g., the Pegasus graph37). We note that the intrinsic computational cost for the trajectory sampling is just several \(\mu\)s, as we confirmed. In the procedure explained above, it is assumed that all the degenerate ground-state spin configurations (i.e., the optimal trajectories) can be found after a sufficiently large (but finite) number of trials of the annealing simulation. We should note, however, that there seems to be no firm theoretical basis for this assumption. In SQA, by contrast, it is guaranteed that all degenerate states will be realized under the Boltzmann distribution if the transverse magnetic field is decreased by the correct procedure32. For QAC, we could not find such a clear foundation, but the literature seems to support our assumption. It has been reported that a D-Wave machine can realize the optimal states dominated by the Boltzmann distribution under ideal operation38. There is also a report that, in the setting of quadratic unconstrained binary optimization, Gaussian noise intentionally added to the coefficients improves the reproducibility of simulations35. If the unsatisfactory reproducibility was due to a bias in the frequency with which equivalent degenerate solutions are obtained, then this improvement suggests a promising procedure for ensuring our assumption here. It is interesting to estimate how much error occurs in the correlation factor f when some degenerate trajectories are missing from the count. Larger multiplicities of degeneracy occur in the large-N region, for which MU (\(N_{\mathrm{max}} = 501\)) is currently the only means of access. We intentionally dropped some of the degenerate trajectories at random (at most 10%). The bias in the estimated f was then found to be \(\sim 0.4\)%. Given the present value of \(N_{\mathrm{Qubit}}^{(\mathrm available)}\), MU is still superior to QAC. It is therefore important to discuss what restricts further scalability of MU, and what will make QAC inherently superior when \(N_{\mathrm{Qubit}}^{(\mathrm available)}\) is larger. In the space \(\Omega\) of all trajectories [mentioned in Eq. (5)], the weight, \(\exp {\left( - E_l^{(N)} /kT \right) }\), dominates only for those trajectories with the most stable energy \(E_0^{(N)}\) at lower temperature. Denoting the space formed by such (possibly degenerate) trajectories with the lowest energy as \({\mathcal {A}}\subset \Omega\), we have
$$\begin{aligned} P_k^{(N)}\sim \sum _{l\in {{\mathcal {A}}}}{\pi _l}, \end{aligned}$$
in this temperature range. The advantage of QAC in optimization problems in general is its quite efficient ability to extract \({\mathcal {A}}\) from \(\Omega\). MU, on the other hand, is a scheme which surveys all the elements of \(\Omega\), since it accumulates the number of visits \(N_{\mathrm{visits}}\) by the vacancy to every lattice site. When the system size is very large, \(|{\mathcal {A}}| \ll |\Omega |\), and hence QAC will perform more efficiently than MU in evaluating \(P_k^{(N)}\).
From this viewpoint, the present benchmark, the two-dimensional tetragonal lattice, is ill-suited to demonstrating the superiority of QAC, for the following reason: in the simplified case (\(t_{\alpha \rightarrow \beta }=t\)), all the trajectories having the same N have the same energy and are elements of \({\mathcal {A}}\). Hence \({\mathcal {A}}= \Omega\) and the advantage of QAC disappears. MU can easily be generalized to higher-dimensional lattices with general shapes and with anisotropic hopping. The temperature dependence of the hopping can be parameterized via the factor \(\exp {\left( - E_l^{(N)} /kT \right) }\), and then the scheme would be useful for analyzing temperature-dependent diffusion (as would QAC). In the case of the two-dimensional tetragonal lattice, however, the success of MU with \(N_{\mathrm{max}}\sim 500\) is in fact just a lucky accident due to the presence of an especially efficient data structure valid only for this case. The factor dominating \(N_{\mathrm{max}}\) in MU comes from the upper limit of the largest possible exponent of \(N_{\mathrm{visits}}\) that can be represented by various numeric data types. \(N_{\mathrm{visits}}\) increases explosively (factorially) as N increases, and (using an integer type) easily overflows. In the present work, we instead use the double-precision type with mantissa/exponent representation, and find that the upper limit of the exponent corresponds to \(N_{\mathrm{max}}\sim 500\) even using the simplest possible data structure to store \(N_{\mathrm{visits}}\). When we try more general cases, such as three-dimensional lattices, we cannot use such a simple data structure but instead must use a 'struct' type to store \(N_{\mathrm{visits}}\), leading to a much reduced \(N_{\mathrm{max}}\sim 20\) (for the three-dimensional cubic lattice). The difficulty of accommodating \(N_{\mathrm{visits}}\) in a practical amount of storage comes from the fact that MU has to treat all the trajectories in \(\Omega\). QAC, on the other hand, has no such inherent problem, because it only deals with \({\mathcal {A}}\). The method is therefore potentially feasible in the future, when \(N_{\mathrm{Qubit}}^{\mathrm{available}}\) increases. Unavoidably, because of the presently limited number of available qubits, the benchmark verification with smaller N does not fully display the benefits of quantum annealing. The cost function of our formalism is to search for the path achieving the highest cumulative hopping probability, \(\prod _{\mathrm{path}} \exp { \left( - \Delta E_{\alpha \rightarrow \beta } /kT \right) }\), but in the above verification benchmark it reduces to a search for the shortest path, a much less interesting case arising from the special condition that all hopping probabilities are identical. The framework demonstrates its true power when the hopping probabilities are inhomogeneous, and especially for larger N. Under such conditions, the optimal path achieving the highest cumulative hopping probability can be a geometrically longer one, which is difficult to find without quantum annealing. Since the problem is not only to find the optimal path for each fixed N but to integrate over solutions with different N, it is critical to identify each optimal path as quickly as possible, which makes the use of quantum annealing indispensable.
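A small numerical illustration of the last point (invented numbers, not from the paper): with inhomogeneous barriers, the path with the highest cumulative hopping probability, prod(exp(-dE/kT)), need not be the shortest one, which is why the general problem is harder than a shortest-path search.

```python
# Toy comparison of cumulative hopping probabilities for two paths.
import math

kT = 0.05                                     # assumed temperature scale (eV)
short_path = [0.60, 0.60]                     # two hops over high barriers (eV)
long_path = [0.20, 0.20, 0.20, 0.20, 0.20]    # five hops over low barriers (eV)

def cumulative_probability(barriers):
    return math.prod(math.exp(-dE / kT) for dE in barriers)

print(cumulative_probability(short_path))     # ~3.8e-11
print(cumulative_probability(long_path))      # ~2.1e-9: the longer path wins
```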
In practical applications, the temperature dependence of the hopping probability generates huge varieties of path networks, which provides further applications of the quantum annealing technique to interesting problems. We developed a framework to evaluate the correlation factor, a key quantity used to derive the macroscopic diffusion coefficient for ions in solid materials. The coefficient describes the process by which a vacancy attracts back a tracer even after repeated scattering events. Direct counting of the possible processes is not feasible with conventional computational tools, so the coefficient has previously only been evaluated in limited model cases where simple assumptions allowing the process to be described in terms of recursion formulae can be justified. This has hampered the utilization of microscopic information obtained by ab initio approaches (vacancy rate, formation energy for a defect, energy barrier to hopping, etc.) in macroscopic calculations. By using our framework, we verified as a calibration that direct counting reliably reproduces the results obtained previously by the recursion model. The framework promises to be especially valuable when implemented on quantum computers with the increased number of available qubits made possible by recent technological advances. The applicability of the direct counting approach is never restricted to special cases, so we can investigate how the diffusion coefficient is affected by nano-level tuning of materials and other factors evaluated by ab initio calculations, factors not previously considered applicable to practical ionic hopping networks in realistic materials. Kadowaki, T. & Nishimori, H. Quantum annealing in the transverse ising model. Phys. Rev. E 58, 5355–5363. https://doi.org/10.1103/PhysRevE.58.5355 (1998). Kumar, V., Bass, G., Tomlin, C. & Dulny, J. Quantum annealing for combinatorial clustering. Quantum Inf. Process. 17, 39. https://doi.org/10.1007/s11128-017-1809-2 (2018). Article ADS MathSciNet MATH Google Scholar quantum annealing through adiabatic evolution. Santoro, GE. & Tosatti, E. Optimization using quantum mechanics. J. Phys. A Math. Gen. 39, R393–R431. https://doi.org/10.1088/0305-4470/39/36/r01 (2006). Peng, W. C. et al. Factoring larger integers with fewer qubits via quantum annealing with optimized parameters. Sci. China Phys. Mech. Astron. 62, 60311. https://doi.org/10.1007/s11433-018-9307-1 (2019). Lucas, A. Ising formulations of many np problems. Front. Phys. 2, 5. https://doi.org/10.3389/fphy.2014.00005 (2014). Das, A. & Chakrabarti, B. K. Colloquium Quantum annealing and analog quantum computation. Rev. Mod. Phys. 80, 1061–1081. https://doi.org/10.1103/RevModPhys.80.1061 (2008). Farhi, E. et al. A quantum adiabatic evolution algorithm applied to random instances of an np-complete problem. Science 292, 472–475. https://doi.org/10.1126/science.1057726 (2001). Article ADS MathSciNet CAS PubMed MATH Google Scholar Neukart, F. et al. Traffic flow optimization using a quantum annealer. Front. ICT 4, 29. https://doi.org/10.3389/fict.2017.00029 (2017). Syrichas, A. & Crispin, A. Large-scale vehicle routing problems: quantum annealing, tunings and results. Comput. Oper. Res. 87, 52–62. https://doi.org/10.1016/j.cor.2017.05.014 (2017). Crispin, A. & Syrichas, A. Quantum annealing algorithm for vehicle scheduling. In 2013 IEEE International Conference on Systems, Man, and Cybernetics, 3523–3528 (2013). Martoňák, R., Santoro, G. E. & Tosatti, E. Quantum annealing of the traveling-salesman problem. Phys. 
Rev. E 70, 057701. https://doi.org/10.1103/PhysRevE.70.057701 (2004). Rosenberg, G., Haghnegahdar, P., Goddard, P., Carr, P., Wu, K. & de Prado, M. L.. Solving the optimal trading trajectory problem using a quantum annealer. In Proceedings of the 8th Workshop on High Performance Computational Finance, WHPCF '15 (ACM, New York, NY, USA, 2015) 7:1–7:7. https://doi.org/10.1145/2830556.2830563. Venturelli, D., Marchand, D.J.J., & Rojo, G.: Quantum annealing implementation of job-shop scheduling (2015). arXiv:1506.08479 [quant-ph]. Takayanagi, S. Display advertising optimization by quantum annealing processor. In Adiabatic Computation Conference 2017 (2017) Hu, F., Wang, B.-N., Wang, N. & Wang, C. Quantum machine learning with d-wave quantum computer. Quantum Eng. 1, e12. https://doi.org/10.1002/que2.12 (2019). Zhang, Y. & Ni, Q. Recent advances in quantum machine learning. Quantum Eng. 2, e34. https://doi.org/10.1002/que2.34 (2020). Hernandez, M. & Aramon, M. Enhancing quantum annealing performance for the molecular similarity problem. Quantum Inf. Process. 16, 133. https://doi.org/10.1007/s11128-017-1586-y (2017). Perdomo-Ortiz, A., Dickson, N., Drew-Brook, M., Rose, G. & Aspuru-Guzik, A. Finding low-energy conformations of lattice protein models by quantum annealing. Sci. Rep. 2, 571. https://doi.org/10.1038/srep00571 (2012). Article ADS CAS PubMed PubMed Central Google Scholar Cao, Y. et al. Quantum chemistry in the age of quantum computing. Chem. Rev. 119, 10856–10915. https://doi.org/10.1021/acs.chemrev.8b00803 (2019). Mehrer, H. Diffusion in Solids: Fundamentals, Methods, Materials, Diffusion-Controlled Processes, Springer Series in Solid-State Sciences ( Springer, 2007). Kumar, S., Handwerker, C. & Dayananda, M. Intrinsic and interdiffusion in cu-sn system. J. Phase Equilib. Diffus.https://doi.org/10.1007/s11669-011-9907-9 (2011). Shi, S., Qi, Y., Li, H. & Hector, L. G. Defect thermodynamics and diffusion mechanisms in Li\(_{2}\)CO\(_{3}\) and implications for the solid electrolyte interphase in li-ion batteries. J. Phys. Chem. C 117, 8579–8593 (2013). Bachman, J. C. et al. Inorganic solid-state electrolytes for lithium batteries. Mechanisms and properties governing ion conduction. Chem. Rev. 116, 140–162. https://doi.org/10.1021/acs.chemrev.5b00563 (2016). Levi, E., Levi, M. D., Chasid, O. & Aurbach, D. A review on the problems of the solid state ions diffusion in cathodes for rechargeable Mg batteries. J. Electroceram. 22, 13–19. https://doi.org/10.1007/s10832-007-9370-5 (2009). Ichibha, T., Prayogo, G., Hongo, K. & Maezono, R. A new ab initio modeling scheme for the ion self-diffusion coefficient applied to the \(\varepsilon\)-Cu\(_{3}\)Sn phase of the Cu-Sn alloy. Phys. Chem. Chem. Phys. 21, 5158–5164. https://doi.org/10.1039/C8CP06271D (2019). Compaan, K. & Haven, Y. Correlation factors for diffusion in solids. Trans. Faraday Soc. 52, 786–801. https://doi.org/10.1039/TF9565200786 (1956). Montet, G. L. Integral methods in the calculation of correlation factors in diffusion. Phys. Rev. B 7, 650–662. https://doi.org/10.1103/PhysRevB.7.650 (1973). Mantina, M., Shang, S. L., Wang, Y. & Chen, L. Q. & Liu, Z. K. Phys. Rev. B 80, https://doi.org/10.1103/PhysRevB.80.184111 (2009). Giuseppe, E. S., Roman, M., Erio, T. & Roberto, C. Theory of quantum annealing of an ising spin glass. Science 295, 2427–2430. https://doi.org/10.1126/science.1068774 (2002). Martoňák, R., Santoro, G. E. & Tosatti, E. 
Quantum annealing by the path-integral monte carlo method: the two-dimensional random ising model. Phys. Rev. B 66, 094203. https://doi.org/10.1103/PhysRevB.66.094203 (2002). Ceperley, D. M. Path integrals in the theory of condensed helium. Rev. Mod. Phys. 67, 279–355. https://doi.org/10.1103/RevModPhys.67.279 (1995). Morita, S. & Nishimori, H. Convergence theorems for quantum annealing. J. Phys. A Math. Gen. 39, 13903 (2006). Article ADS MathSciNet Google Scholar D-Wave Systems Inc., D-wave technology overview. online: https://www.dwavesys.com/sites/default/files/D-Wave%202000Q%20Tech%20Collateral_0117F.pdf (2020a). Accessed on 15 July2020. D-Wave Systems Inc., D-wave technology overview. online: https://www.dwavesys.com/sites/default/files/Dwave_Tech%20Overview2_F.pdf (2020b). Accessed on 17 May2020. Foster, R. C. Brian, W. & James, G. Applications of quantum annealing in statistics (2019a). arXiv:1904.06819. D-Wave Systems Inc., D-wave hybrid solver service: an overview. online: https://www.dwavesys.com/sites/default/files/14-1039A-B_D-Wave_Hybrid_Solver_Service_An_Overview.pdf (2020c). Accessed on 17 May 2020. Boothby, K., Bunyk, P., Raymond, J., & Roy, A. Next-generation topology of d-wave quantum processors (2020). arXiv:2003.00133. Foster, R., Weaver, B. & Gattiker, J. Applications of quantum annealing in statistics (2019b) The computation in this work has been performed using the facilities of the Research Center for Advanced Computing Infrastructure (RCACI) at JAIST. T.I. is grateful for financial support from Grant-in-Aid for JSPS Research Fellow (18J12653). K.H. is grateful for financial support from the HPCI System Research Project (Project ID: hp190169) and MEXT-KAKENHI (JP16H06439, JP17K17762, JP19K05029, and JP19H05169). R.M. is grateful for financial support from MEXT-KAKENHI (JP19H04692 and JP16KK0097), FLAGSHIP2020 (project nos. hp190169 and hp190167 at K-computer), Toyota Motor Corporation, the Air Force Office of Scientific Research (AFOSR-AOARD/FA2386-17-1-4049;FA2386-19-1-4015), and JSPS Bilateral Joint Projects (with India DST). School of Materials Science, JAIST, Asahidai 1-1, Nomi, Ishikawa, 923-1292, Japan Keishu Utimula School of Information Science, JAIST, Asahidai 1-1, Nomi, Ishikawa, 923-1292, Japan Tom Ichibha, Genki I. Prayogo, Kousuke Nakano & Ryo Maezono Research Center for Advanced Computing Infrastructure, JAIST, Asahidai 1-1, Nomi, Ishikawa, 923-1292, Japan Kenta Hongo International School for Advanced Studies (SISSA), Via Bonomea 265, 34136 Trieste, Italy Kousuke Nakano Tom Ichibha Genki I. Prayogo Ryo Maezono K.H. and R.M. initiated the idea. R.M. supervised the research. K.U. and K.N. carried out the calculations on a classical computer. T.I. and G.I.P. carried out the calculations on a D-wave. K.U. prepared the initial draft of the manuscript. All authors contributed to the discussions and revisions of the manuscript. Correspondence to Keishu Utimula. Utimula, K., Ichibha, T., Prayogo, G.I. et al. A quantum annealing approach to ionic diffusion in solids. Sci Rep 11, 7261 (2021). https://doi.org/10.1038/s41598-021-86274-3 Editor's choice: quantum computing
CommonCrawl
Monitoring of seasonal glacier mass balance over the European Alps using low-resolution optical satellite images
2. DATA 2.1. Satellite data 2.2. Glacier MB data 3.1. Cloud filtering and temporal interpolation of the snow cover maps 3.2. Calibration 3.3. Validation 3.3.1. Period 1998/1999–2008 3.3.2. Period 2009–2013/2014 4.1. Performance over the calibration period 1998/1999–2008 4.1.1. Individual glaciers 4.1.2. Performance for the average of all observed glaciers 4.2. Cross-validation over the calibration period 1998/1999–2008 4.3. Performance over the evaluation period 2009–2013/2014 5. DISCUSSION AND PERSPECTIVES 5.1. Method performance 5.2. Perspectives
Saunier, Sebastien, Northrop, Amy, Lavender, Samantha, Galli, Luca, Ferrara, Riccardo, Mica, Stefano, Biasutti, Roberto, Goryl, Philippe, Gascon, Ferran, Meloni, Marco, Desclee, Baudouin and Altena, Bas 2017. European Space Agency (ESA) Landsat MSS/TM/ETM+/OLI archive: 42 years of our history. p. 1.
Hu, Zhongyang, Kuenzer, Claudia, Dietz, Andreas J. and Dech, Stefan 2017. The Potential of Earth Observation for the Analysis of Cold Region Land Surface Dynamics in Europe—A Review. Remote Sensing, Vol. 9, Issue. 10, p. 1067.
Rabatel, Antoine, Sirguey, Pascal, Drolon, Vanessa, Maisongrande, Philippe, Arnaud, Yves, Berthier, Etienne, Davaze, Lucas, Dedieu, Jean-Pierre and Dumont, Marie 2017. Annual and Seasonal Glacier-Wide Surface Mass Balance Quantified from Changes in Glacier Surface State: A Review on Existing Methods Using Optical Satellite Imagery. Remote Sensing, Vol. 9, Issue. 5, p. 507.
Davaze, Lucas, Rabatel, Antoine, Arnaud, Yves, Sirguey, Pascal, Six, Delphine, Letreguilly, Anne and Dumont, Marie 2018. Monitoring glacier albedo as a proxy to derive summer and annual surface mass balances from optical remote-sensing data. The Cryosphere, Vol. 12, Issue. 1, p. 271.
Journal of Glaciology, Volume 62, Issue 235, October 2016, pp. 912-927
VANESSA DROLON (a1), PHILIPPE MAISONGRANDE (a1), ETIENNE BERTHIER (a1), ELSE SWINNEN (a2) and MATTHIAS HUSS (a3) (a4)
1 LEGOS, Université de Toulouse, CNRS, CNES, IRD, UPS, 14 av Edouard Belin, 31400 Toulouse, France
2 Remote Sensing Unit, Flemish Institute for Technological Research (VITO), Boeretang 200, 2400 MOL, Belgium
3 Department of Geosciences, University of Fribourg, Chemin du Musée 4, CH-1700 Fribourg, Switzerland
4 Laboratory of Hydraulics, Hydrology and Glaciology (VAW), ETH Zurich, Hönggerbergring 26, CH-8093 Zurich, Switzerland
Copyright: © The Author(s) 2016 This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
DOI: https://doi.org/10.1017/jog.2016.78
Published online by Cambridge University Press: 13 July 2016
Figures:
Fig. 1. Map of the mean NDSI for the period 1998–2014, over the European Alps, and location of the studied glaciers with MB measurements (sources 1–3; Table S1). (a) Entire study area. (b) Enlargement of the studied area (red rectangle in (a)).
Fig. 2. Altitudinal distribution of NDSI for each year since 1998. The NDSI has been averaged in a square window centred on Griesgletscher, central Swiss Alps. The red horizontal line represents the NDSI value from which the mean regional snow altitude Z (represented by the red vertical line) is inferred for each year.
(a) Winter NDSI over 1999–2014 (WOSM size: 367 × 367 km²; NDSI value: 0.43). (b) Summer NDSI over 1998–2014 (WOSM size: 117 × 117 km²; NDSI value: 0.54).
Fig. 3. Observed (a) winter and (b) summer MB of Griesgletscher, central Swiss Alps, as a function of the mean regional snow altitude Z for each year of the calibration period represented by coloured dots. Dashed thin lines represent the 95% confidence intervals for linear regression (solid line).
Fig. 4. Regression results for the 55 glaciers in terms of R 2, RMSE and seasonal MB. (a) RMSE (black curve) and R 2 (red curve) over the winter calibration period 1999–2008, for the 55 studied glaciers ranked by increasing RMSE. Time series of observed winter MB (b) and winter MB estimated with VGT (c) over 1999–2008. For (b) and (c), each horizontal row of coloured rectangles represents the MB time series of a glacier (the rectangle colour indicating the MB value). Lower panels (d)–(f): respective summer equivalents of graphs (a)–(c), over 1998–2008, for the 55 glaciers ranked by increasing RMSE. The glacier ranking is not the same for the two seasons (winter and summer glacier ranking in Table S1).
Table 1. Summary of the linear regression results over the calibration period 1998/1999–2008 according to individual data sources and averaged for the 55 glaciers
Table 2. Mean mass balance error MBE averaged for all glaciers and standard deviation σ of the observed MBs, per year and for the overall calibration period 1998/1999–2008
Table 3. Summary of the cross-validation results over the calibration period 1998/1999–2008 according to individual data sources and for the 55 glaciers
Table 4. MB error MBE and RMSE per year and for the overall evaluation period 2009–2013/2014
Fig. 8. Distribution of glaciers (%) as a function of their WOSM* side length (a) in winter and (b) in summer.
We explore a new method to retrieve seasonal glacier mass balances (MBs) from low-resolution optical remote sensing. We derive annual winter and summer snow maps of the Alps during 1998–2014 using SPOT/VEGETATION 1 km resolution imagery.
We combine these seasonal snow maps with a DEM to calculate a 'mean regional' altitude of snow (Z) in a region surrounding a glacier. Then, we compare the interannual variation of Z with the observed winter/summer glacier MB for 55 Alpine glaciers over 1998–2008, our calibration period. We find strong linear relationships in winter (mean R 2 = 0.84) and small errors for the reconstructed winter MB (mean RMSE = 158 mm (w.e.) a−1). This is lower than errors generally assumed for the glaciological MB measurements (200–400 mm w.e. a−1). Results for summer MB are also satisfying (mean R 2 and RMSE, respectively, 0.74 and 314 mm w.e. a−1). Comparison with observed seasonal MB available over 2009–2014 (our evaluation period) for 19 glaciers in winter and 13 in summer shows good agreement in winter (RMSE = 405 mm w.e. a−1) and slightly larger errors in summer (RMSE = 561 mm w.e. a−1). These results indicate that our approach might be valuable for remotely determining the seasonal MB of glaciers over large regions. Glacier mass balance (MB) is highly sensitive to atmospheric conditions and constitutes a direct and perceptible indicator of climate change (Haeberli and Beniston, 1998). On a global scale, the mass loss of glaciers (apart from the Greenland and Antarctica ice sheets) constitutes the major contribution to present sea-level rise (IPCC, 2013). For example, over the well-observed 2003–2009 period, glacier mass loss was 259 ± 28 Gt a−1, representing about 30 ± 13% of total sea-level rise (Gardner and others, 2013). Between 1902 and 2005, glacier mass loss, reconstructed by Marzeion and others (2015), corresponds to 63.2 ± 7.9 mm sea-level equivalent (SLE). Glacier mass losses are projected to increase in the future and are expected to range between 155 ± 41 mm SLE (for the emission scenario RCP4.5) and 216 ± 44 mm SLE (for RCP8.5) between 2006 and 2100 (Radić and others, 2014). Moreover, on continental surfaces, glacier behaviour strongly determines watershed hydrology: glacier meltwater impacts on the runoff regime of mountainous drainage basins and consequently on the availability of the water resource for populations living downstream (Huss, 2011). Many scientific, economic and societal stakes are thus related to glaciers (Arnell, 2004; Kaser and others, 2010), hence justifying regular monitoring. Traditionally, the annual and seasonal glacier MBs are measured using glaciological in situ measurements from repeated snow accumulation surveys and ice ablation stake readings on individual glaciers, that are subsequently extrapolated over the entire glacier area (Østrem and Brugman, 1991; Kaser and others, 2003). Nevertheless, this laborious technique requires heavy logistics, is time consuming and moreover is subject to potential systematic errors (e.g. Thibert and others, 2008). Glaciological MB measurements are thus limited to a few easily accessible glaciers, unequally distributed around the globe: only about 150 glaciers among the world's ~200 000 glaciers are regularly monitored in the field and present at least ten consecutive years of MB measurements (Pfeffer and others, 2014; Zemp and others, 2015). Several remote sensing techniques are used to increase data coverage of glacier mass change at a global scale. One of them is the geodetic method, which consists of monitoring glacier elevation changes by differencing multitemporal DEMs. 
These DEMs can be derived from aerial photos and/or airborne Light Detection and Ranging (LiDAR) (Soruco and others, 2009; Abermann and others, 2010; Jóhannesson and others, 2013; Zemp and others, 2013). However, the main limitation of airborne sensors is their geographic coverage, restricted to areas accessible with airplanes; some large remote areas such as High Mountain Asia can hardly be monitored. LiDAR data from the Ice Cloud and Elevation Satellite (ICESat) altimeter can also provide information on glacier elevation changes, but these data are too sparse to allow reliable monitoring of the mass budget of individual glaciers (Kropáček and others, 2014; Kääb and others, 2015). DEMs can also be derived from medium-to-high resolution optical satellite images such as Satellite pour l'Observation de la Terre-High Resolution Stereoscopic (SPOT5/HRS) images or radar satellite images as done during the Shuttle Radar Topographic Mission (SRTM). However, the vertical precision of these DEMs (~5 m) is such that it remains challenging to accurately estimate the MB of individual small to medium-sized glaciers (glaciers < 10 km2) (Paul and others, 2011; Gardelle and others, 2013). More recently, Worldview and Pléiades sub-meter stereo images have been used to provide accurate glacier topography with a vertical precision of ±1 m and even ±0.5 m on gently sloping glacier tongues (Berthier and others, 2014; Willis and others, 2015). Nevertheless, repeatedly covering all glaciers on Earth with Worldview and Pléiades stereo images would be very expensive given their limited footprint for a single scene (e.g. 20 km × 20 km for Pléiades). Due to DEM errors and uncertainties in the density value for the conversion from volume change to mass change (Huss, 2013), this method currently provides MB at a temporal resolution typically, of 5–10 a. This does not provide an understanding of glacier response to climatic variations at seasonal and annual timescales (Ohmura, 2011). An alternative method to estimate the MB change of a glacier is based on the fluctuations of the glacier equilibrium line altitude (ELA) (Braithwaite, 1984; Kuhn, 1989; Leonard and Fountain, 2003; Rabatel and others, 2005). The ELA is the altitude on a glacier that separates its accumulation zone, where the annual MB is positive, from its ablation zone, where the annual MB is negative (Ohmura, 2011; Rabatel and others, 2012). The ELA and annual glacier-wide MB are strongly correlated. Early studies have shown that the ELA can be efficiently approximated by the snowline altitude at the end of the ablation season (i.e. at the end of the hydrological year) on mid-latitude glaciers (LaChapelle, 1962; Lliboutry, 1965). The snowline altitude measured at the end of the ablation season thus constitutes a proxy for estimating the annual MB. This method has been applied to glaciers located in the Himalayas, Alaska, Western North America, Patagonia and the European Alps amongst others, using aerial photographs or optical remote sensing images from Landsat and MODIS (Ostrem, 1973; Mernild and others, 2013; Rabatel and others, 2013; Shea and others, 2013). 
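As a rough, hedged illustration of the geodetic method described above (not from the paper), the snippet below computes a glacier-wide mass balance from two DEMs by averaging the elevation change over a glacier mask and converting volume change to mass change with an assumed density factor; Huss (2013) is cited in the text as the reference for that conversion, and a value of 850 kg m-3 is often used for multi-year periods. The arrays here are random placeholders.

```python
# Geodetic mass-balance sketch: mean elevation change between two DEMs,
# converted to mm w.e. per year with an assumed ice/firn density factor.
import numpy as np

dem_2003 = np.random.rand(100, 100) * 50 + 3000    # placeholder DEM (m a.s.l.)
dem_2009 = dem_2003 - np.random.rand(100, 100) * 5 # placeholder later DEM
glacier_mask = np.ones((100, 100), dtype=bool)     # placeholder glacier outline

dh = (dem_2009 - dem_2003)[glacier_mask].mean()    # mean elevation change (m)
rho = 850.0                                        # assumed conversion density (kg m-3)
years = 6.0                                        # length of the period (a)
mb = dh * rho / years                              # kg m-2 a-1 = mm w.e. a-1
print(f"geodetic MB ~ {mb:.0f} mm w.e. a-1")
```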
However, the Landsat temporal resolution of 16 d can be a strong limitation for accurately studying the dynamics of snow depletion, especially in mountainous regions with widespread cloud coverage (Shea and others, 2013); and the methodologies used with MODIS (available since 2000) to detect ELA often require complex algorithms implying heavy classification (Pelto, 2011; Shea and others, 2012; Rabatel and others, 2013). Unlike the above-mentioned techniques, only a few recent studies have focused on retrieving seasonal glacier MBs (Hulth and others, 2013; Huss and others, 2013). Seasons are in fact, more relevant timescales to better understand the glacier response to climate fluctuations, allowing the partitioning of recent glacier mass loss between changes in accumulation (occurring mainly in winter) and changes in ablation (occurring mainly in summer), at least in the mid- and polar latitudes (Ohmura, 2011). Hulth and others (2013) combined melt modelling using meteorological data and snowline tracking measured in the field with GPS to determine winter glacier snow accumulation along snowlines. Huss and others (2013) calculated the evolution of glacier-wide MB throughout the ablation period also by combining simple accumulation and melt modelling with the fraction of snow-covered surface mapped from repeated oblique photography. Both methods have been applied for only one or two glaciers, and a few years as they are time-consuming and require various datasets (meteorological data, remotely sensed data) that are not widely available. In this context, the consistent archive of optical SPOT/VEGETATION (SPOT/VGT) images, spanning 1998–2014 with a daily and nearly global coverage at 1 km resolution (Sylvander and others, 2000; Maisongrande and others, 2004; Deronde and others, 2014), remains an unexploited dataset to estimate the year-to-year seasonal MB variations of glaciers. Instead of detecting the exact physical snowline altitude of a glacier as done in earlier studies, our new method focuses on monitoring a 'mean regional' seasonal altitude of snow (Z), in a region surrounding a glacier, during summer (from early May to the end of September) and winter (from early October to the end of April). The interannual dynamics of this altitude Z, estimated from SPOT/VGT images, is then compared with the interannual variation of observed seasonal MBs (derived from direct glaciological measurements), available continuously every year over 1998–2008, for 55 glaciers in the Alps (WGMS, 2008, 2012, 2013; Huss and others, 2010a, b, 2015). The performance of the 55 individual linear regressions between Z and MB are analysed in terms of the linear determination coefficient R 2 and the RMSE. For each of the 55 regressions, we then perform a cross-validation of the temporal robustness and skill of the regression coefficients. For all 55 glaciers, seasonal MBs are then calculated outside of the calibration period, i.e. each year of the 2009–2014 interval. A validation is realized for some glaciers with observed seasonal MB over 2009–2014. Finally, we discuss the possible causes of differences in the results between seasons, between glaciers and present the limits and perspectives of our methodology. This study uses optical SPOT/VGT satellite images provided by two instruments: VEGETATION 1 from April 1998 to the end of January 2003 (VGT 1 aboard the SPOT 4 satellite launched in March 1998), and VEGETATION 2 from February 2003 to the end of May 2014 (VGT 2 aboard the SPOT 5 satellite launched in May 2002). 
The instruments provide a long-term dataset (16 a) with accurate calibration and positioning, continuity and temporal consistency. The SPOT/VGT sensors have four spectral bands: the blue B0 (0.43–0.47 µm), the red B2 (0.61–0.68 µm), the Near Infra-Red B3 (0.78–0.89 µm) and the Short Wave Infra-Red SWIR (1.58–1.75 µm). Both sensors provide daily images of almost the entire global land surface with a spatial resolution of 1 km, in a 'plate carrée' projection and a WGS84 datum (Deronde and others, 2014). In this study, we used the SPOT/VGT-S10 products (10-daily syntheses) freely available between April 1998 and May 2014 (http://www.vito-eodata.be/PDF/portal/Application.html#Home). The SPOT/VGT-S10 products, processed by Vlaamse Instelling voor Technologisch Onderzoek (VITO), result from the merging of data strips from ten consecutive days (three 10-d syntheses are made per month) through a classical Maximum Value Composite (MVC) criterion (Tarpley and others, 1984; Holben, 1986). The MVC technique consists of a pixel-by-pixel comparison between the daily Normalized Difference Vegetation Index (NDVI) images of the 10-d window. For each pixel, the maximum NDVI value at top-of-atmosphere is picked. This selection minimizes cloud cover and aerosol contamination. To account for atmospheric effects, an atmospheric correction is applied to the VGT-S10 products (Duchemin and Maisongrande, 2002; Maisongrande and others, 2004) based on a modified version of the SMAC (Simplified Method for the Atmospheric Correction) code (Rahman and Dedieu, 1994). The SPOT/VGT-S10 Blue, Red and SWIR spectral bands have been gathered from 1 April 1998 to 31 May 2014, over a region including the Alps and stretching from 43 to 48.5°N and from 4 to 17°E (Fig. 1). The terrain elevation over the studied region has been extracted from the SRTM30 DEM, derived from a combination of data from the SRTM DEM (acquired in February 2000) and the U.S. Geological Survey's GTOPO30 dataset (Farr and Kobrick, 2000; Werner, 2001; Rabus and others, 2003). The SRTM30 spatial resolution is 30 arcsec (~900 m) and the geodetic reference is the WGS84 EGM96 geoid. The DEM has been resampled using the nearest-neighbour method to a 1 km spatial resolution in order to match the resolution of the S10 SPOT/VGT images. Glacier-wide winter and summer MB data of 55 different glaciers are used in this study (Table S1 in the Supplementary Material; Fig. 1), covering a total area of ~400 km2, corresponding to 19% of the 2063 km2 glacier area in the Alps (Pfeffer and others, 2014). Winter MB (B w) is measured through to the end of April. Summer MB (B s) is the difference between the annual MB determined at the end of the ablation period (end of September) and the winter MB (Thibert and others, 2013). Winter MB corresponds to the snow/ice mass accumulated during the winter season (from October of the previous year to April of the year of measurement); hereafter, for clarity, B w_obs refers to the winter MB measured in April of the year of measurement and B s_obs to the summer MB measured in September of the year of measurement. The different MB datasets all start before 1998. However, as we later compare these MB data with SPOT/VGT snow maps available since April 1998, we used seasonal MB only after 1998. The first MB dataset is composed of direct glaciological measurements provided by the World Glacier Monitoring Service (WGMS, 2013). 
Seasonal MB of 44 Austrian, French, Italian and Swiss glaciers of the European Alps are available between 1950 and 2012 (for the longest series) but with discontinuous data. At the time of access to the database, continuous seasonal MB time series were only available for seven glaciers since 1998 (labelled 'source 1' in Table S1). The second MB dataset provided by Huss and others (2015) is derived from recently re-evaluated glaciological measurements. For seven Swiss glaciers (labelled 'source 2' in Table S1), long-term continuous seasonal MB series from 1966 (at least) to 2014 are inferred from point measurements extrapolated to the entire glacier using distributed modelling, with year-to-year variability directly given by the in situ measurements. The third MB dataset, composed of 41 Swiss glaciers, is a comprehensive set of field data homogenized using distributed modelling, with year-to-year variation constrained by meteorological data (Huss and others, 2010a, b). The seasonal MB time series of 21 glaciers covering 30% of the total glacier area of Switzerland (labelled 'source 3a' in Table S1), are available for each year of the 1908–2008 period (Huss and others, 2010a). For 20 other glaciers located in the southeastern Swiss Alps (labelled 'source 3b' in Table S1), seasonal MB time series are provided each year from 1900 to 2008 (Huss and others, 2010b). The continuous seasonal MB series of these 41 glaciers are derived from three different data sources: (1) Geodetic data of volume change for periods of four to ~ 40 a; three to nine DEMs have been used for each glacier since 1930, mostly originating from aerial photogrammetry or terrestrial topographic surveys for the first DEM. (2) (Partly isolated) in situ measurements of accumulation and ablation for about half of the glaciers; they were used to constrain MB gradients and winter accumulation. (3) Distributed accumulation and temperature-index melt modelling (Hock, 1999; Huss and others, 2008) forced with meteorological data of daily mean air temperature and precipitation, recorded at various weather stations close to each of the glaciers. The model is constrained by ice volume changes obtained from DEM differencing and calibrated with in situ measurements when available. Huss and others (2008) provide more details on the model and calibration procedure. In order to have the most complete set of continuous interannual MB data to calibrate our method, we chose to combine MB time series from all sources (1, 2, 3a, b) over the same period. Since 41 glaciers (annotated 3a, b in Table S1) provide continuous seasonal MB until 2008, the calibration period extends until 2008. As we later compare MB data with SPOT/VGT images provided since April 1998, 1999–2008 constitutes the winter calibration period and 1998–2008 the summer calibration period. In total, 55 glaciers with 10 a of winter MB (B w_obs) over 1999–2008, and 11 a of summer MB (B s_obs) over 1998–2008 have thus been used to calibrate our approach. Coordinates, median elevation and glacier area are available for all 55 glaciers (Table S1). Furthermore, seasonal MBs are also available for some of these 55 glaciers outside the calibration period (2009–2014, i.e. the end of the period covered by SPOT/VGT data). In winter, 66 additional B w _ obs data are available in total for 19 glaciers over 2009–2014 and in summer, 49 B s_obs measurements are available for 13 glaciers over 2009–2013 (summer 2014 is not covered by SPOT/VGT) (WGMS, 2013; Huss and others, 2015). 
Errors associated with B w_obs and B s_obs are not provided individually per glacier. Zemp and others (2013) find an average annual random error of 340 mm w.e. a−1 for 12 glaciers with long-term MB measurements. Huss and others (2010a) have quantified the uncertainty in glacier-wide winter balance (see Supplementary Information in their paper) as ±250 mm w.e. a−1. This number is not directly applicable to all glaciers (depending on the sampling), but to most of them. Systematic errors associated with in situ glaciological MBs are expected to be within 90–280 mm w.e. a−1 if cautious measurements are realized with sufficient stake density (Dyurgerov and Meier, 2002), but they are generally assumed to range between 200 and 400 mm w.e. a−1 for one glacier (Braithwaite and others, 1998; Cogley and Adams, 1998; Cox and March, 2004). In this study, we consider an error (denoted E obs) of ±200–400 mm w.e. a−1 for observed MB. Snow discrimination is based on the spectral signature of snow, which is characterized by a high reflectance in the visible and a high absorption in the SWIR wavelengths. Consequently, the Normalized Difference Snow Index (NDSI), first introduced by Crane and Anderson (1984) and Dozier (1989) for the Landsat sensor and defined as a band ratio combining the visible (green) and SWIR Landsat bands, constitutes a good and efficient proxy to map the snow cover with optical remote sensing. The NDSI has since been widely used with different sensors (e.g. Fortin and others, 2001; Hall and others, 2002; Salomonson and Appel, 2004). The NDSI value is proportional to the snow cover rate of the pixel and allows monitoring of the spatial and temporal snow cover variations (Chaponnière and others, 2005). In this study, we build an NDSI adapted to the SPOT/VGT sensor (which has no green channel), inspired by Chaponnière and others (2005). This modified NDSI is computed from the mean of the blue B0 and red B2 channels (to recreate an artificial green band) and from the SWIR: (1) $$NDSI = \displaystyle{{(B0 + B2)/2 - SWIR} \over {(B0 + B2)/2 + SWIR}}.$$ A different (and more standard) formulation of the NDSI is used for the MODIS sensor and calculated only with the red B2 channel and the SWIR (Xiao and others, 2001). Our entire methodology was also tested with this standard NDSI but led to results of inferior quality (not shown). Clouds have a spectral signature similar to snow and they can be misclassified as snow, especially at the edge of the snow pack. We apply a cloud mask proposed by Dubertret (2012) (referred to as the D-12 cloud mask below) in order to flag cloudy pixels and to avoid overestimating snow coverage. This algorithm, based on threshold tests applied to various reflectance bands, has been adapted to SPOT/VGT images from different cloud masks initially created for higher resolution imagery. It corresponds to the crossing of Sirguey's cloud mask (developed for the MODIS sensor; Sirguey and others, 2009) and of Irish's and Zhu's cloud masks, both developed for the Landsat sensor (Irish, 2000; Zhu and Woodcock, 2012). When compared with other classical cloud detection algorithms (e.g. Berthelot, 2004), the D-12 cloud mask performs the best cloud identification, along with the Lissens and others (2000) cloud mask. However, Dubertret (2012) concluded that the D-12 cloud mask is more conservative than that of Lissens and others (2000) and allows detection of more snow-covered pixels (by flagging fewer clouds). 
We therefore choose to apply the cross-cloud mask on the S10 syntheses, which are composites of the daily S1 image pixels over 10 d. A temporal interpolation is then computed for the 'cloudy' pixels when possible. If a pixel is detected as cloudy in an S10 synthesis, its value is replaced by the mean of the same pixel value in the previous synthesis (S10 t−1) and in the next one (S10 t+1), if these pixels are not cloudy. If they are, we compute the mean of the S10 t−2 and the S10 t+2 synthesis pixel values. In order to produce maps of winter/summer NDSI, we average all 10-d NDSI syntheses included between 1 October and 30 April for each winter of the 1999–2014 period, and between 1 May and 30 September for each summer of the 1998–2013 period. We first superimpose each interannual mean seasonal NDSI map derived from SPOT/VGT for the period 1998–2014 on the SRTM30 DEM. Then, considering square windows of P × P pixels of different sizes, with side lengths varying from 5 to 401 km (in steps of 2 km) and centred on each glacier, we derive the altitudinal distribution of the mean seasonal NDSI. For each glacier, we thus obtain the interannual dynamics of the NDSI altitudinal distribution within different Windows of Snow Monitoring (WOSM) sizes surrounding the glacier (16 curves; Fig. 2). The WOSM side lengths tested are always odd numbers so that the glacier is situated in the central pixel of the square window. Then, for each WOSM size, from the intersection between an NDSI value (varying from 0.2 to 0.65, with a step of 0.01) and each seasonal curve of the NDSI altitudinal distribution, a 'mean regional' altitude of snow (Z) can be deduced each year, from 1998 to 2014. We do not focus on detecting the exact physical snowline elevation of the glacier at the end of the melt season to estimate annual MB, as done in previous studies (e.g. Rabatel and others, 2005). In fact, the SPOT/VGT 1-km resolution is not adapted to monitor the snowline elevation and its high spatial variability. Moreover, the snowline approach does not allow us to retrieve seasonal MB, because at the end of the accumulation period (end of April/beginning of May in the Alps), the entire glacier is generally covered by snow. For these reasons, we aim to estimate, for winter and for summer, a statistical mean regional altitude of snow in a region surrounding a glacier. Finally, for each WOSM size and each NDSI value that are tested, we compute a linear regression between the mean regional snow altitudes Z and the observed MBs (Fig. 3), over the calibration period (1999–2008 for winter, 1998–2008 for summer). This linear regression allows an interannual estimation of the glacier's seasonal MB, as a function of Z inferred from SPOT/VGT, as written in the linear equation (generalized for both seasons): (2) $$B_{{\rm w/s\_VGT}} = \alpha _{{\rm w/s}} \times Z_{{\rm w/s}}\; + \; \beta _{{\rm w/s}}.$$ B w/s_VGT is the winter/summer MB estimated for the year y. Z w/s is the winter/summer 'mean regional' altitude of snow; the slope coefficient α w/s, expressed in mm w.e. a−1 m−1, represents the sensitivity of a glacier winter/summer MB towards Z w/s, and β w/s is the winter/summer intercept term expressed in mm w.e. a−1. Coefficients of determination R 2 w/s and RMSE w/s_cal are computed to assess the quality of the regression over the calibration period for each season. To summarize, for each glacier, a linear regression is computed for each plausible NDSI value and all WOSM sizes. 
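The core of this processing chain (the modified NDSI of Eq. (1), its altitudinal distribution within a WOSM, the resulting 'mean regional' snow altitude Z and the calibration of Eq. (2)) can be summarized in a few lines of code. The following minimal Python/NumPy sketch is only illustrative: the array names are assumptions, the 100 m altitudinal step matches the step quoted for the NDSI curves, and the intersection with the NDSI value is obtained here by linear interpolation, a detail the text above does not prescribe.

import numpy as np

def modified_ndsi(b0_blue, b2_red, swir):
    # Eq. (1): an artificial 'green' band is approximated by the mean of blue (B0) and red (B2)
    green_like = (b0_blue + b2_red) / 2.0
    return (green_like - swir) / (green_like + swir)

def ndsi_altitudinal_curve(ndsi_map, dem, bin_width=100.0):
    # Mean seasonal NDSI per 100 m elevation band inside one WOSM window
    edges = np.arange(np.nanmin(dem), np.nanmax(dem) + bin_width, bin_width)
    centres = 0.5 * (edges[:-1] + edges[1:])
    curve = np.array([np.nanmean(ndsi_map[(dem >= lo) & (dem < hi)])
                      for lo, hi in zip(edges[:-1], edges[1:])])
    return centres, curve

def snow_altitude(centres, curve, ndsi_value):
    # 'Mean regional' altitude of snow Z: elevation at which the altitudinal NDSI
    # curve crosses the chosen NDSI value (NDSI generally increases with elevation)
    valid = ~np.isnan(curve)
    order = np.argsort(curve[valid])               # np.interp needs increasing abscissae
    return np.interp(ndsi_value, curve[valid][order], centres[valid][order])

def calibrate(z_years, mb_obs_years):
    # Least-squares fit of Eq. (2): B_w/s = alpha * Z_w/s + beta
    alpha, beta = np.polyfit(z_years, mb_obs_years, 1)
    return alpha, beta

With yearly values of Z produced by snow_altitude for a fixed NDSI value and window size, calibrate returns the pair (α, β) of Eq. (2); the quality of the fit can then be summarized by R 2 and RMSE as described in the text.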
Following some initial tests on a subset of the 55 glaciers, the interval of plausible NDSI values is set to [0.2–0.65]. This NDSI range has also been chosen to include the reference NDSI value of 0.4 commonly accepted to classify a pixel as snow-covered (and associated with a pixel snow cover rate of 50%) (Hall and others, 1995, 1998; Salomonson and Appel, 2004; Hall and Riggs, 2007; Sirguey and others, 2009). Then, for each glacier, and each size of WOSM tested, the seasonal NDSI value optimizing the RMSE w/s_cal is selected. This NDSI value adjustment for each individual site (and each season) allows a better adaptation to the glacier-specific environment (e.g. local land cover type, local topography, etc.). After adjusting the NDSI value, the size of the P × P pixels WOSM surrounding each glacier is also adjusted for each glacier and each season. Here again, the WOSM size minimizing the mean RMSE w/s_cal is selected (Table S1 in Supplementary Material). The WOSM side lengths tested are always odd numbers for the glacier to be situated in the central pixel of the window. A mandatory condition for the WOSM size selection is the continuity of the NDSI altitudinal curves, given the 100 m altitudinal step. The cost function f optimizing the RMSE w/s_cal for each glacier is thus: (3) $$\eqalign{& f\,(NDSI{^\ast},WOSM{^\ast},\alpha _{{\rm w/s}},\beta _{{\rm w/s}}) = \cr & \quad \min \left( {\sqrt {\displaystyle{{\mathop \sum \nolimits_{y = 1}^N {(B_{{\rm w/s\_VGT}\;{\rm y}} - B_{{\rm w/s\_ref\;} \;{\rm y}})}^2} \over N}}} \right).} $$ NDSI* and WOSM* are respectively the NDSI value and the WOSM side length minimizing the RMSE w/s_cal. Allowing the adjustment of both the NDSI value and the WOSM size is a means to select an optimized quantity of snow-covered pixels (where snow dynamics occurs) that are less affected by artefacts such as residual clouds, aerosols and/or directional effects. Changing NDSI value is a means to scan the terrain in altitude, while changing the windows size is a means to scan the terrain in planimetry. Consequently, for each glacier, a unique 'optimized' linear regression (regarding the parameters α and β) allows us to estimate the seasonal MB from the mean regional snow altitude deduced with SPOT/VGT images. To validate our results, we performed two types of evaluation: (1) cross-validation over the period 1998/1999–2008 and (2) evaluation against recent glaciological MB measurements (not used in the calibration) over the period 2009–2013/2014. In order to validate the 55 individual optimized linear regressions, we use a classical leave-one-out cross-validation method based on the reconstruction of MB time series where each MB value estimated for the year y is independent of the observed MB for the same year y (Michaelsen, 1987; Hofer and others, 2010; Marzeion and others, 2012). The cross-validation constitutes an efficient validation mechanism for short time series. For each glacier, we first determine the decorrelation time lag t lag (a), after which the autocorrelation function of the observed MB drops below the 90% significance interval, i.e. for which the serial correlation in the observed MB data is close to zero (for each glacier, t lag = 1). After that, for each glacier with N available observed MBs (N = 10 in winter and N = 11 in summer) we perform N linear regressions between the regional mean snow altitude Z w/s and the observed MB B w/s_obs, leaving each time a moving window of 1 a ± t lag (i.e. 3 a) out of the data used for the regression. 
The removed value (Z w/s,y and B w/s_obs,y ) for the year y has to be at the centre of the moving window such that the remaining values used for the regression are independent of the removed value. We then obtain N values for the regression coefficients α and β (termed α w/s_cross and β w/s_cross) and N values of reconstructed MB (B w/s_VGT_cross). Standard deviations of the N regression coefficients, σ (α w/s_cross) and σ (β w/s_cross), are computed to assess the temporal stability and the robustness of the parameters α w/s and β w/s. The mean regression coefficients α w/s_cross_best and β w/s_cross_best, representing the average of the N α w/s_cross,y and β w/s_cross,y , are also calculated. For each glacier, we compute R 2 w/s_cross (as the mean of the N R 2 w/s obtained for the N regressions) and an estimate of the error RMSE w/s_cross defined as: (4) $$RMSE_{{\rm w/s\_cross}} = \left( {\sqrt {\displaystyle{{\mathop \sum \nolimits_{y = 1}^N {(B_{{\rm w/s\_VGT\_cross,}\;y} - B_{{\rm w/s\_obs,}\;y})}^2} \over N}}} \right).$$ We then estimate the skill score of the linear regression as (5) $$SS_{{\rm w/s}} = 1 - \displaystyle{{RMSE_{{\rm w/s\_cross}}^2} \over {RMSE_{{\rm w/s\_ref\_cross}}^2}}, $$ where RMSE w/s_ref_cross is the root-mean-square error of a reference model. As the reference model, we determine B w/s_ref_cross,y for each year y by averaging the observed MB values leaving out B w/s_obs,y (so that B w/s_ref_cross,y is independent of the observed B w/s_obs,y ). The skill score can be interpreted as a parameter that measures the correlation between reconstructed and observed values, with penalties for bias and under (over) estimation of the variance (Wilks, 2011; Marzeion and others, 2012). A negative skill score means that the relationship computed has no skill over the reference model to estimate the seasonal MB (see the illustrative sketch below). As the period covered by SPOT/VGT data stretches until June 2014, it is possible to calculate the seasonal MB after 2008, outside our calibration period, for each glacier. From the intersection between the optimized NDSI value fixed for each glacier and the altitudinal distribution of the NDSI, we deduce a value of seasonal mean regional snow altitude for each year during 2009–2014 in winter and during 2009–2013 in summer. Therefore, with the individual 'optimized' relations computed over the calibration period, seasonal MB inferred from SPOT/VGT can be estimated over 2009–2013/2014. In winter, 66 additional B w_obs data are available for 19 glaciers over 2009–2014 and in summer, 49 B s_obs measurements are available for 13 glaciers over 2009–2013. Therefore, the annual and global RMSE (RMSE w/s_eval) and Mass Balance Error (MBE w/s_eval) can be calculated over each evaluation period. We analysed the performance of the linear regression model for the entire glacier dataset during winter and summer. The analysis was first performed individually for each glacier before considering the average results of the 55 glaciers. Figure 4 presents individual performances for the 55 glaciers ranked by increasing RMSE over the calibration period. In winter, correlations between Z w and B w_obs are high (Fig. 4a): the mean R 2 w for the 55 glaciers is 0.84. R 2 w ranges between 0.31 (for Aletsch, #54 in Table S1) and 0.97 (for Seewjinen, #4), with high first quartile (0.79) and median values (0.88). Except for Aletsch, all R 2 w are greater than 0.5. Glaciers from source 3 result in higher mean R 2 w (0.88) than glaciers from source 1 (0.76) and 2 (0.72). 
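As a complement, the calibration search of Eq. (3) and the leave-one-out cross-validation of Eqs (4) and (5) described above can be sketched as follows. This is an illustrative outline only: get_z(ndsi, side) is a hypothetical helper returning the yearly Z series for a given NDSI value and WOSM side length (or None when the altitudinal curve does not intersect that value), and t lag = 1 a as stated above.

import numpy as np

def rmse(pred, obs):
    return np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2))

def optimise_ndsi_wosm(mb_obs, get_z,
                       ndsi_values=np.round(np.arange(0.20, 0.66, 0.01), 2),
                       wosm_sides=range(5, 402, 2)):
    # Grid search of Eq. (3): keep the (NDSI*, WOSM*) pair, and the associated
    # regression coefficients, that minimise the calibration RMSE
    best = None
    for side in wosm_sides:                        # odd side lengths, 5 to 401 km
        for ndsi in ndsi_values:                   # plausible NDSI values, step 0.01
            z = get_z(ndsi, side)
            if z is None or np.any(np.isnan(z)):
                continue                           # curve does not intersect this NDSI value
            alpha, beta = np.polyfit(z, mb_obs, 1)
            err = rmse(alpha * np.asarray(z) + beta, mb_obs)
            if best is None or err < best[0]:
                best = (err, ndsi, side, alpha, beta)
    return best                                    # (RMSE_cal, NDSI*, WOSM*, alpha, beta)

def cross_validate(z, mb_obs, t_lag=1):
    # Leave-one-out cross-validation of Eqs (4)-(5): for each year y the regression is
    # refitted with the moving window y +/- t_lag left out, and the reconstruction is
    # scored against a reference model (mean of the observations excluding year y)
    z, mb_obs = np.asarray(z, float), np.asarray(mb_obs, float)
    n, idx = len(mb_obs), np.arange(len(mb_obs))
    mb_cross, mb_ref = np.empty(n), np.empty(n)
    alphas, betas = np.empty(n), np.empty(n)
    for y in range(n):
        keep = np.abs(idx - y) > t_lag             # drop the 3-a moving window
        alphas[y], betas[y] = np.polyfit(z[keep], mb_obs[keep], 1)
        mb_cross[y] = alphas[y] * z[y] + betas[y]
        mb_ref[y] = mb_obs[idx != y].mean()        # reference model for year y
    rmse_cross = rmse(mb_cross, mb_obs)            # Eq. (4)
    skill_score = 1.0 - rmse_cross ** 2 / rmse(mb_ref, mb_obs) ** 2   # Eq. (5)
    return skill_score, rmse_cross, alphas.std(), betas.std()

The two standard deviations returned by cross_validate correspond to σ(α w/s_cross) and σ(β w/s_cross), the quantities used to judge the temporal stability of the regression coefficients.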
Furthermore, the mean RMSE w_cal of the estimated B w_VGT for the 55 glaciers is smaller (158 mm w.e. a−1) than the range of annual errors acknowledged for the glaciological MB measurements (E obs = ±200–400 mm w.e. a−1). RMSE w_cal ranges between 50 mm w.e. a−1 (for Gorner, #1) and 260 mm w.e. a−1 (for Ciardoney, #55), with third quartile and median values of 189 and 159 mm w.e. a−1, respectively. Even the highest RMSE w_cal are in the same range as the lower limit for E obs. Glaciers less well-ranked in terms of RMSE w_cal (Ciardoney, Aletsch, Cantun, respectively #55, #54, #53) are not necessarily the least well-ranked in terms of R 2 w. Cantun for example presents a relatively high RMSE w_cal (237 mm w.e. a−1) but also a high R 2 w of 0.89 because average winter MBs are higher for this glacier. Thus, the two metrics (R 2 and RMSE) are not redundant for characterizing the linear relationships between Z and observed MB for all the glaciers. As shown in Figure 4d, the correlation between Z s and B s_obs is also high in summer but not as much as in winter: the mean R 2 s for the 55 glaciers is 0.74, and the difference is also significant for the first quartile and median values (respectively 0.70 and 0.77 in summer instead of 0.79 and 0.89 in winter). R 2 s is between 0.52 (Clariden, #55) and 0.88 (Sarennes, #23). The difference between the minimum and maximum of R 2 s (0.36) is inferior to the same value in winter (0.66), indicating a lower spread of the correlation for summer. The mean RMSE s_cal of the estimated B s_VGT for the 55 glaciers (314 mm w.e. a−1) is twice as large as in winter (158 mm w.e. a−1) but still acceptable compared with the error range of glaciological MB measurements. RMSE s_cal ranges between 134 and 528 mm w.e. a−1 (with third quartile and median values of respectively 355 and 299 mm w.e. a−1). Glaciers from source 2 (Table 1) present higher mean RMSE s_cal and lower R 2 s (391 mm w.e. a−1 and 0.70 respectively) than glaciers from source 1 (325 mm w.e. a−1 and 0.77 respectively) and source 3 (300 mm w.e. a−1 and 0.74 respectively). The summer RMSE s_cal range (394 mm w.e. a−1) is larger than in winter (232 mm w.e. a−1) indicating a wider spread in RMSE s_cal during this season. Moreover, we observe that glaciers with a poor performance in summer do not necessarily present the same poor performance in winter (Table S1). Figure 4b, c show B w_obs and B w_VGT time series estimated with SPOT/VGT, over the calibration period 1999–2008, for winter and for the 55 glaciers. By comparing Figure 4b with c, it is seen that for all glaciers the B w _ obs and B w_VGT time series are similar, illustrating that the 'optimized' relationships computed from SPOT/VGT allow a good estimation of the interannual variations in winter MB over 1999–2008. We note that Aletsch (the biggest glacier in the European Alps), ranked 54th in winter (and with the lowest R 2), presents a 'flat' winter observed MB time series, with low interannual variability. According to Figures 4e, f, B s_VGT time series are in agreement with B s_obs time series over 1998–2008. As in winter, the mean differences between B s_obs and B s_VGT are the highest for 2 a with extreme summer MBs: 2003 (with strongly negative MB values) and 2007 (a year with above average MB). The very negative and atypical summer MB time series of Sarennes (#23) is well estimated with SPOT/VGT (R 2 s of 0.88 and RMSE s_cal of 282 mm w.e. a−1). 
By comparing Figures 6c, d (blue dots), we observe that the regression coefficient α values of all glaciers are much more scattered in summer than in winter. In winter, α w ranges between −2.7 and −0.6 mm w.e. a−1 m−1, whereas α s ranges between −15 and −1 mm w.e. a−1 m−1 in summer. For both seasons, we observe that the lower the absolute α value, the lower the RMSE over the calibration period. The mean MB time series averaged for all glaciers is also interesting to study as it illustrates glacier behaviour at the scale of the entire European Alps. In winter (Fig. 5a), 〈B w_VGT〉 globally fits well with 〈B w_obs〉. Absolute mean MB errors |MBE w_cal | (i.e. the mean difference between B w_VGT and B w_obs for all the glaciers) are maximal in 2005 (175 mm w.e. a−1), 2001 (110 mm w.e. a−1), and to a lesser extent, in 2007 (95 mm w.e. a−1) (Table 2). Nevertheless, these errors are small compared with the standard deviation of the observed winter MB of all glaciers σ w_obs computed for 2001, 2005 and 2007, respectively equal to 615, 373 and 272 mm w.e. a−1.We also note that the two contrasting winter balance years in 2000 and 2001 are well captured by the model. In summer (Fig. 5b), 〈B s_VGT 〉 fits less well with 〈B s_obs〉 than in winter although the largest interannual variations are captured. In 2003, we observe the lowest MB values reflecting the exceptional summer heat wave of 2003. Absolute mean MB errors |MBE s_cal | are the highest for 2007 (448 mm w.e. a−1) and 2003 (356 mm w.e. a−1). These summer errors are higher than winter errors but still inferior to σ s_obs of 2007 and 2003 (respectively 594 and 559 mm w.e. a−1) (Table 2). Larger errors in summer can be partly explained by higher interannual variability in summer observed MBs. If we consider the annual MB (sum of winter and summer MB; Fig. 5c), the largest errors occur for 2007 and 2003 (respectively 582 and 411 mm w.e. a−1), as in summer. Fig. 5. Time series of mean observed MB (red) over the calibration period and of mean VGT MB (blue) over the period covered by SPOT/VGT, averaged for the 55 glaciers. The dashed red curves represent the time series of observed MB±the standard deviation for all glaciers. (a) Winter MB for the calibration period, 1999–2008 and the period covered by SPOT/VGT, 1999–2014. (b) Summer MB for the calibration period, 1998–2008 and the period covered by SPOT/VGT, 1998–2013. (c) Annual MB (sum of winter and summer MBs) over 1999–2013. The agreement between VGT MB estimations and observed MB over the calibration period presented above is satisfying. In order to test the robustness of our approach, we first perform a cross-validation of the 55 regressions to assess the temporal robustness and skills of the regression coefficients. We then compare seasonal VGT MB time series and independent observed MB data (not used for calibration) over the evaluation period 2009–2014. Cross-validation results for the 55 optimized relations initially calibrated over 1998/1999–2008 indicate no negative skill score SS, except for Seewjinen in summer (SS s = −0.08; Fig. 6b). This is consistent with the low R 2 s and high RMSE s_cal calculated for this glacier over the calibration period (Table S1). For most glaciers, the optimized individual relationships thus have skills to estimate the seasonal MB over a simple average of the observed MB. In winter, SS values are higher (〈SS w〉 = 0.76) than in summer (〈SS s〉 = 0.55) (Table 3, Figs 6a, b). 
In winter, glaciers from Source 3 perform better in terms of 〈SS w〉 and 〈R 2 w_cross〉 than others. This is consistent with their best performance for the calibration (〈R 2 w_cal〉 = 0.88, Table 1). RMSE w_cross are slightly higher than RMSE w_cal (Fig. 6a), but they remain satisfactory with regard to E ref. In summer, RMSE s_cross are superior to RMSE s_cal (Fig. 6b) and also on average slightly superior to E ref (〈RMSE s_cross〉 = 440 mm w.e. a−1; Table 3). Moreover, for both seasons, the higher the SS, the lower the RMSE and the lower the difference between the two RMSEs (derived from both calibration and test-cross) (Figs 6a, b). Only Aletsch Glacier in winter and Cengal Glacier in summer present satisfactory RMSEs (as regards to E ref) and low SS. Therefore, for these two glaciers, despite their RMSE, their calibrated relationships present no particular skill over a simple average of their observed MBs. Fig. 6. Results of the cross-validation for all glaciers. Skill score as a function of RMSE computed for the calibration (blue) and for the test cross (red), in (a) winter and (b) summer. Alpha derived from the calibration (blue) and from the cross-validation (red) as a function of RMSE computed for the calibration, in (c) winter and (d) summer. Standard deviation of the alpha derived from the test-cross as a function of RMSE computed for the calibration, in (e) winter and (f) summer. In winter, we observe high consistency between α w_cross_best (mean of the N α w_cross) and the calibrated α w (Fig. 6c). Moreover, the standard deviations σ(α w_cross) in winter are low and close to an order of magnitude smaller than for α w_cross_best (Fig. 6e; Table 3); the same applies for σ(β w_cross). These results allow us to conclude that the calibrated relationships are robust and temporally stable in winter (except for Aletsch), despite the shortness of the time series used for the linear regression. The outcomes show the potential skill of the individual relations to accurately estimate the winter MB outside the calibration period. In summer, the consistency between α s_cross_best and α s is also quite high, as in winter (Fig. 6d). Nevertheless, unlike in winter, the standard deviations σ(α s_cross) in summer are high, especially for glaciers with high RMSE s_cal (superior to 350–400 mm w.e. a−1) (Fig. 6f; Table 3). Therefore, in summer, we can conclude on the robustness and the temporal stability of about 60% of the 55 calibrated relationships (presenting an RMSE s_cal < 400 mm w.e. a−1 and a skill score >0.65). In order to test the robustness of our approach outside the calibration period 1998–2008, we then compare seasonal VGT MB time series and independent observed MB data (not used for calibration) over the evaluation period 2009–2014. The winter and summer MB averaged for all 55 glaciers estimated with SPOT/VGT (in blue) after 2008 are shown in Figure 5. In winter (Fig. 5a), 〈B w_VGT〉 values are generally above average over 2009–2014, except for 2012, where we note a strong decrease in 〈B w_VGT〉. In summer (Fig. 5b), 〈B s_VGT〉 is close to the average of 1998–2008, except for a notably negative MB during summer 2012. Table S2 (Supplementary Material) presents for each glacier, estimates of both winter and summer MB over 2009–2014 and 2009–2013 respectively. 
In order to evaluate these VGT MB calculations (over 2009–2014 for winter and 2009–2013 for summer), we now compare them with observed MBs available for a subset of the 55 glaciers used for the calibration (19 glaciers in winter and 13 in summer). For winter, 70% of the computed MB present an error smaller than the estimated uncertainty in the direct measurements E obs max (Fig. 7a). We find the highest errors MBE w_eval for Sarennes and Ciardoney (−1015 mm w.e. in 2009 and −850 mm w.e. in 2011). This result is not surprising as these glaciers are subject to a high RMSE w_cal over the calibration period (Table S1). 2011 and 2013 are the most poorly estimated years (RMSE w_eval of 587 and 501 mm w.e.), whereas 2010 and 2014 are best represented (RMSE w_eval of 252 and 245 mm w.e.; Table 4). RMSE w_eval calculated from all data in the evaluation period (n = 66) is 411 mm w.e. a−1 and thus nearly three times larger than the average RMSE w_cal (150 mm w.e. a−1) computed for the same subset of 19 glaciers over the calibration period. However, RMSE w_eval is still comparable with E obs max and the mean MB error MBE w_eval is close to zero (16 mm w.e. a−1), suggesting that B w_VGT estimations are unbiased. To sum it up, our approach is able to estimate the winter MB out of the calibration period for glaciers in the Alps with an acceptable mean overall error and without bias. Fig. 7. Seasonal mass balances estimated with SPOT/VGT as a function of observed (a) winter and (b) summer MBs for glaciers over the evaluation period 2009–2014. The 1:1 agreement is plotted (bold line). The uncertainty in each MB measurement E obs max (±400 mm w.e. a−1) is not represented for the sake of clarity. In summer, 60% of the computed MB present an error inferior or equal to E obs max (Fig. 7b). Largest errors MBE s_eval are about two (to three) times superior to E obs max (e.g. Gietro with an error of 1695 mm w.e. a−1 in 2011) (Table 4). 2009 and 2011 are poorly estimated (resp. RMSE s_eval of 756 and 660 mm w.e.), but summer 2013 is correctly reproduced (low RMSE s_eval of 230 mm w.e.). RMSE s_eval calculated for the 49 evaluation points (561 mm w.e. a−1) is ~1.5 times larger than the RMSE calculated over the calibration period for the subset of 13 glaciers used for evaluation (RMSE s_cal = 368 mm w.e. a−1). RMSE s_eval is also higher than winter RMSE w_eval and than E obs max. The mean MBE s_eval for the 49 points is not negligible (162 mm w.e. a−1), which indicates that B s_VGT estimations are slightly positively biased during 2009–2013. This positive bias is mainly due to a strong bias (664 mm w.e. a−1) during summer 2009. No obvious explanation was found for this anomalous year. Excluding summer 2009, the MBE s_eval is reduced to 52 mm w.e. a−1. However, validation is made with fewer points in summer than in winter. Furthermore, glaciers used for evaluation in summer are not the best performers over the calibration period: the mean RMSE s_cal of the 13 evaluation glaciers (368 mm w.e. a−1) is higher than the mean RMSE s_cal of the entire (55 glaciers) dataset (318 mm w.e. a−1). Observed MB measurements for more glaciers and more years will be welcome to improve the robustness of the linear regressions between MB and Z in summer and to provide a more representative error RMSE s_eval on the MB estimated with SPOT/VGT. Our method performs better in winter than in summer because it is based on the interannual dynamics of the altitudinal snow cover distribution to retrieve the interannual variation of seasonal MBs. 
In summer, the interannual dynamics are more difficult to capture as there is less snow. The lower number of snow-covered pixels in summer than in winter constitutes in fact a limit to retrieve smooth curves of altitudinal NDSI distribution. To justify this hypothesis, we estimate for each season, the mean number of pixels N p with a NDSI higher than 0.2 in each optimized WOSM sizes centred on Clariden Glacier. This glacier was chosen for its much lower performance in summer (R 2 s = 0.52; RMSE s_cal = 528 mm w.e. a−1) than in winter (R 2 w = 0.92; RMSE w_cal = 130 mm w.e. a−1). At Clariden, the optimized WOSM for winter (29 km × 29 km), gives a value of N p that is ~6 times higher in winter (840) than in summer (144). With the optimized summer WOSM (225 km × 225 km), N p is ~12 times higher for winter (19 971) than for summer (1541). Thus, N p increases with WOSM size for both seasons. This justifies the enlargement of the optimized WOSM* in summer in order to integrate more snow cover variability (Fig. 8). The distribution of WOSM* sizes is more spread towards larger windows: the median WOSM* side length is 119 km in summer against 43 km in winter. The need to increase the window size in summer suggests relatively homogeneous snow variations in summer across the Alps. This result is in agreement with the homogeneous summer ablation observed on glaciers across the entire Alpine Arc (Vincent and others, 2004). Pelto and Brown (2012) also noted a similarity of summer ablation across glaciers in the North Cascades. Another reason for the difference in results between the seasons might be the cloud interpolation. For each season, we calculate the percentage of interpolated pixels for glaciers with low RMSE (inferior to the first quartile value) and high RMSE (superior to the third quartile), averaged over the calibration period. In winter, the mean percentage of interpolated pixels is large (4%) and the difference between high and low RMSE glaciers is small: the mean difference over the 1999–2014 period is 0.13%. In summer, a striking difference is observed between glaciers with high and low RMSE: the mean difference of the interpolated pixels percentages (0.64%) is more than four times larger than in winter. Therefore, the temporal interpolation seems to have more impact in summer than in winter. Our hypothesis to explain this observation is that the cloud mask performs less well for some glaciers in summer, but this needs to be further explored. The effect of pixels contaminated by undetected clouds can also be another reason explaining the reduced performance for summer. The spectral signatures in the available bands of clouds and snow are close, indicating that cloudy and snow-covered pixels have similar NDSI values. The error in a pixel NDSI value caused by a cloud is smaller for a snow-covered pixel than for a snow-free pixel. As there is less snow in summer than in winter, undetected clouds may impact more NDSI and snow detection in summer. Despite the difference in performances between the seasons, we can highlight the ability of the approach to capture the extreme summer balance in 2003 and the two contrasting winter balance years in 2000 and 2001. The performance of our method during the calibration (in terms of R 2 and RMSE) also varies with the source of the MB data. The best results are obtained with glaciers from source 3 (particularly for winter). 
The year-to-year variability in this dataset is based on meteorological time series (temperature, precipitation) and distributed modelling. As NDSI and interannual snow cover dynamics are also driven by meteorological parameters, we expected to find a good agreement, but this alone does not allow us to conclude on the robustness of the method to estimate MB. However, a satisfying mean error for the evaluation period, assessed only with MB data from sources 1 and 2, strengthens the robustness of the method. In fact, these MBs are closer to 'reality' as they are composed of in situ glaciological measurements. Nevertheless, MB data from sources 1 and 2 are also subject to uncertainties. In particular, for Aletsch Glacier (from source 2), the largest glacier of the Alps (83 km2), the lowest R 2 w (0.31) for winter was found. A possible explanation for the poor performance of our approach for Aletsch is that its ablation area reaches down to relatively low elevations. It is thus snow-free for a considerable part of the period used for the winter balance estimate (October to April). Winter MB data for Aletsch Glacier thus include both accumulation and melting, whereas the NDSI approach for winter is optimized to represent snow accumulation, as for the other glaciers. A limitation of the proposed methodology is the impossibility (in some rare cases) of estimating the seasonal MB after the calibration period. This happens only in summer, specifically in 2009 and 2011 for Sarennes, and for Vernagt in 2012. In those particular cases, the NDSI value optimizing the linear relationships between Z and MB over the calibration period 1998–2008 does not intersect the NDSI altitudinal distribution curves: in both cases the curves are below the NDSI value. Thus, no MB can be retrieved. We note that for these two glaciers the optimized WOSM side lengths are among the smallest (5 km for Sarennes and 21 km for Vernagt). If we increase the WOSM sizes, we can recover summer MB estimations for these years and these two glaciers but at the cost of degrading the regression quality (R 2 s and RMSE s_cal) over the calibration period, mainly for Vernagt (R 2 s decreases from 0.75 to 0.60 and RMSE s_cal increases from 262 to 332 mm w.e. a−1). An objective criterion is being analysed to determine a WOSM side length threshold >5 km in order to get NDSI altitudinal curves representative enough of the altitudinal snow cover distribution. An important outlook of this study is to achieve better cloud detection, as under/overestimation of cloudy pixels impacts snow-cover monitoring, especially in summer. Another perspective to improve our summer results could be to couple our methodology with a melt model that is able to provide information on the short-term dynamics of melting. One of this study's aims is to perform real-time seasonal MB monitoring with daily low-resolution optical imagery for a large sample (here 55) of Alpine glaciers. Low-resolution optical satellite images are available 2–3 d after the acquisition dates, implying that seasonal snow cover and seasonal MB could be estimated a few weeks after the end of each season. Moreover, the method could also allow the MB series of glaciers with recently initiated monitoring to be extended backwards in time, for example by using the Advanced Very High Resolution Radiometer satellite data available since 1978. Interrupted MB time series could be reconstructed as well. The ultimate and long-term goal of this work is to apply the method at large scales (i.e. 
thousands of glaciers in a mountain range for which no or very few direct MB data are available). Therefore, the next step is to assess how accurately the approach can estimate the MB of an unmeasured glacier. For that purpose, it is necessary to build and apply a generic transposable relation between Z and MB with fixed parameters. The sensitivity of the coefficients of the linear regressions towards explicative factors (in particular topographic factors such as glacier size, aspect, etc.) or the variations of the optimized NDSI value and WOSM size need to be investigated in order to determine explicit and objective criteria to build a generic relation. Currently, the variability of alpha and beta remains a little too large (even in winter) to easily and in a satisfactory way obtain a simple generic relationship. In this study, we have described an empirical method to estimate seasonal MBs of Alpine glaciers from kilometric resolution optical SPOT/VGT images. From seasonal snow cover maps of the Alps derived for each year over 1998–2014, a regional mean snow altitude of a region surrounding each glacier is derived for 55 glaciers with MB data. Promising linear relationships between this regional mean snow altitude and observed seasonal MBs have been found over 1998–2008. The explained variance in winter is high for all glaciers (R 2 = 0.84 on average) and the mean RMSE is low (161 mm w.e. a−1). Results are not as good for summer but the explained variance is acceptable (R 2 = 0.73) and the mean RMSE (318 mm w.e. a−1) is still in the range of errors associated with glaciological MB measurements (typically from ±200 to 400 mm w.e. a−1). Cross-validation of the 55 individual linear regressions allows assessment of the temporal stability and the robustness of all the derived relationships in winter and of ~60% in summer. Estimations of seasonal MB over 2009–2014 is also performed for these 55 glaciers, and a mean global error is calculated for some estimates, based on a more limited dataset of observed MBs. We are able to estimate winter MB with an acceptable mean global error (405 mm w.e. a−1) and without bias. In summer, the mean error of MB estimation over 2009–2014 is higher (561 mm w.e. a−1) but less observed MB measurements are available for validating the regressions during this season. Moreover, the subset of glaciers used for evaluation in summer tends to perform more poorly during calibration. Still, these results are promising and estimates of the seasonal MB could be performed as soon as the SPOT/VGT data (or data from similar satellites) are available and processed. A real-time seasonal MB 'monitoring' is thus conceivable, a few weeks after the end of each season for a large sample (here 55) of Alpine glaciers. Our method performs better in winter than in summer. This is mainly explained by the fact that interannual dynamics of altitudinal snow cover distribution is more difficult to capture in summer as there is less snow. We also highlight the fact that interpolation of cloudy pixels has more impact in summer than in winter. However, an in-depth analysis needs to be carried out. The greater accuracy in winter emphasizes the value of this method over the classical snowline approach that does not provide any estimate in winter when accumulation area ratio is 100%. The SPOT/VGT mission ended in May 2014 but the PROBA-V satellite, launched in May 2013, ensures continuity and has been providing data since October 2013, with images at 1 km, 300 m and 100 m resolution. 
A comparison of snow cover and MB estimates derived from SPOT/VGT and from PROBA-V 1 km for the period of overlap (winter 2013–2014) is underway in order to extend MB data after 2015. The supplementary material for this article can be found at http://dx.doi.org/10.1017/jog.2016.78. We acknowledge the WGMS for the mass-balance data. We also thank all the observers who contributed to collect the seasonal mass balances and shared them with the community through the WGMS database. We are grateful to VITO and Belspo for the SPOT/VGT satellite images distribution, and the financial and scientific support to carry out this study. We acknowledge support from the CNES/TOSCA programme (in particular Juliette Lambin) as well as funding of a Ph.D. fellowship by VITO/CLS (and especially Eric Gontier, VITO and Estelle Obligis, CLS). We thank the Scientific Editor, David Rippin and two anonymous reviewers for their comments and suggestions which significantly improved the manuscript. This article is also a tribute to Gilbert Saint and his vision during the early 90s, of what an operational satellite mission should be. Abermann, J, Fischer, A, Lambrecht, A and Geist, T (2010) On the potential of very high-resolution repeat DEMs in glacial and periglacial environments. Cryosphere, 4, 53–65 (doi: 10.5194/tc-4-53-2010) Arnell, NW (2004) Climate change and global water resources: SRES emissions and socio-economic scenarios. Glob. Environ. Chang., 14(1), 31–52 (doi: 10.1016/j.gloenvcha.2003.10.006) Berthelot, B (2004) Snow detection on VEGETATION data. Improvement of cloud screening Berthier, E and 10 others (2014) Glacier topography and elevation changes from Pléiades very high resolution stereo images. Cryosph. Discuss., 8(5), 4849–4883 (doi: 10.5194/tcd-8-4849-2014) Braithwaite, RJ (1984) Can the mass balance of a glacier be estimated from its equilibrium line altitude. J. Glaciol., 30(106), 364–368 Braithwaite, RJ, Konzelmann, TC, Marty, C and Olesen, OB (1998) Errors in daily ablation measurements in northern Greenland, 1993–94, and their implications for glacier climate studies. J. Glaciol., 44(148), 583–588 Chaponnière, A and 6 others (2005) International Journal of Remote A combined high and low spatial resolution approach for mapping snow covered areas in the Atlas mountains. Int. J. Remote Sens., 26(13), 2755–2777 Cogley, JG and Adams, WP (1998) Mass balance of glaciers other than the ice sheets. J. Glaciol., 44(147), 315–325 Cox, LH and March, RS (2004) Comparison of geodetic and glaciological mass-balance techniques, Gulkana Glacier, Alaska, U.S.A. J. Glaciol., 50(170), 363–370 (doi: 10.3189/172756504781829855) Crane, RG and Anderson, M (1984) Satellite discrimination of snow/cloud surfaces. Int. J. Remote Sens., 5(1), 213–223 Deronde, B and 6 others (2014) 15 years of processing and dissemination of SPOT-VEGETATION products. Int. J. Remote Sens., 35(7), 2402–2420 (doi: 10.1080/01431161.2014.883102) Dozier, J (1989) Spectral signature of alpine snow cover from the Landsat Thematic Mapper. Remote Sens. Environ., 28, 9–22 (doi: 10.1016/0034-4257(89)90101-6) Dubertret, F (2012) Following snow cover dynamics over Mediterranean mountains: combining high and low resolution satellite imagery too assess snow cover on a daily basis Duchemin, B and Maisongrande, P (2002) Normalisation of directional effects in 10-day global syntheses derived from VEGETATION/SPOT: I. Investigation of concepts based on simulation. Remote Sens. 
Genome-wide association analysis unveils novel QTLs for seminal root system architecture traits in Ethiopian durum wheat
Admas Alemu (ORCID: orcid.org/0000-0001-7056-2699), Tileye Feyissa, Marco Maccaferri, Giuseppe Sciara, Roberto Tuberosa, Karim Ammar, Ayele Badebo, Maricelis Acevedo, Tesfaye Letta & Bekele Abeyo
BMC Genomics volume 22, Article number: 20 (2021)
Genetic improvement of root system architecture is essential to enhance the water and nutrient use efficiency of crops or to boost their productivity under stress or non-optimal soil conditions. One hundred ninety-two Ethiopian durum wheat accessions comprising 167 historical landraces and 25 modern cultivars were assembled for GWAS analysis to identify QTLs for root system architecture (RSA) traits and genotyped with a high-density 90 K wheat SNP array by Illumina. Using a non-roll, paper-based root phenotyping platform, a total of 2880 seedlings and 14,947 seminal roots were measured at the three-leaf stage to collect data for total root length (TRL), total root number (TRN), root growth angle (RGA), average root length (ARL), bulk root dry weight (RDW), individual root dry weight (IRW), bulk shoot dry weight (SDW), presence of six seminal roots per seedling (RT6) and root shoot ratio (RSR). Analysis of variance revealed highly significant differences between accessions for all RSA traits. Four major (− log10P ≥ 4) and 34 nominal (− log10P ≥ 3) QTLs were identified and grouped into 16 RSA QTL clusters across chromosomes. A higher number of significant RSA QTL were identified on chromosome 4B, particularly for root vigor traits (root length, number and/or weight). After projecting the identified QTLs onto a high-density tetraploid consensus map along with previously reported RSA QTL in both durum and bread wheat, fourteen nominal QTLs were found to be novel and could potentially be used to tailor RSA in elite lines. The major RGA QTL on chromosome 6AL, detected in the current study and reported in previous studies, is a good candidate for cloning the causative underlying sequence and identifying the beneficial haplotypes able to positively affect yield under water- or nutrient-limited conditions.
Ethiopian farmers have grown tetraploid wheat (Triticum turgidum ssp. durum) since its introduction in the northern highlands of the country around 3000 BC [1]. Cultivation was mostly under adverse environmental conditions that likely favored the development of a broad gene pool of durum wheat landraces adapted to various environmental conditions. Ethiopian durum wheat landraces provide a rich and yet untapped native biodiversity [2]. Vavilov [3] and Zohary [4] reported the presence of high genetic diversity in cultivated tetraploid wheat, and recent studies highlighted the uniqueness of Ethiopian durum landraces relative to the Fertile Crescent collections (the primary center of domestication) and considered Ethiopia as a possible second domestication center for the crop [5]. Previous studies, carried out with phenotypic [2, 6,7,8] and molecular approaches [9,10,11,12], have indicated Ethiopian durum germplasm to be a highly diverse and potentially unique source of valuable traits [13,14,15]. This is largely due to the wide range of agro-ecological conditions (altitudes in a range of 1600 to 3000 masl) coupled with diverse farmers' cultures [9]. Notably, more than 7000 Ethiopian durum wheat landrace accessions are conserved in the Ethiopian Biodiversity Institute (EBI) gene bank [16].
However, in recent times, durum wheat cultivation has been largely replaced by bread wheat varieties developed by international and national breeding programs throughout the country [17].
Roots play a key role in nutrient and water uptake, soil anchoring and mechanical support, and storage functions, and serve as the major interface between the plant and various biotic and abiotic factors in the soil environment. Root system architecture (RSA) describes the shape and structure of the root system, both of which have great functional importance [18, 19], and plays a pivotal role in crop performance, especially for cultivation under non-optimal nutritional and water conditions [20,21,22]. Due to recurrent climate change and declining soil fertility and water availability, enhancing the genetic capacity to capture the available soil resources is considered a primary target for breeding resource-use efficient crops [20, 23, 24]. Hence, RSA has been an active research topic for the last couple of decades, and different RSA ideotypes have been proposed and investigated in crops [25,26,27]. The narrow-and-deep and wide-and-shallow root ideotypes have been studied for their effects on nutrient acquisition and drought resistance in crops [28,29,30,31]. Deep, narrow-angled roots could allow plants to exploit more effectively water and nitrogen that are often found in deeper soil layers [29, 30, 32], while shallow, wide-angled roots enable plants to take up more effectively nutrients such as phosphorus that are abundant at shallower depths in the soil [33]. The genetic basis of RSA traits in durum wheat has been investigated with both linkage and association mapping using durum wheat recombinant inbred line (RIL) populations and/or elite durum wheat panels suitable for association mapping [19, 21, 34,35,36,37]. This notwithstanding, besides the recent studies by Roselló et al. [38] and Ruiz et al. [39], durum wheat landraces have not been extensively studied so far. Ethiopian durum wheat landraces are particularly rich in genetic diversity and thus are very valuable for dissecting the genetic basis governing the variability of RSA traits. Hence, this study aimed to conduct a genome-wide association analysis for root system architecture traits in Ethiopian durum wheat comprising historical landraces (167) and modern cultivars (25) to identify RSA quantitative trait loci (QTLs) of potential interest for marker-assisted selection.
Phenotypic variation among RSA traits
A total of 2880 seedlings and 14,947 seminal roots were processed and measured for various RSA traits (Additional file 2: Table S2). Analysis of variance (ANOVA) for the studied RSA traits is presented in Table 1.
Table 1 ANOVA and heritability results for the root system architecture traits measured in 12-day-old seedlings of 192 Ethiopian durum wheat accessions
The ANOVA results indicate the presence of highly significant variation among accessions for all RSA traits. In particular, the seminal root angle ranged from 45.7 to 130.5° with a mean value of 97.3°, while the total and average root length and the number of roots ranged from minimum values of 66.2 cm, 16.5 cm and 3.4 to maximum values of 195.4 cm, 36.9 cm and 6.7, respectively. The root and shoot dry weight varied from minimum values of 27.7 and 34.7 mg to maximum values of 115.0 and 116.6 mg, respectively. The coefficient of variation (CV) of RSA traits ranged from 8.38% for average root length (ARL) to 14.63% for root growth angle (RGA).
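The ANOVA mean squares behind Table 1 also feed the broad-sense heritability estimates reported below. A minimal R sketch of that calculation is given here; the simulated long-format table and its column names (trait, accession, rep) are hypothetical placeholders rather than the authors' actual data layout.

```r
set.seed(1)
# Simulated stand-in for the real phenotype table: 192 accessions x 3 replications
d <- expand.grid(accession = paste0("acc", 1:192), rep = 1:3)
d$accession <- factor(d$accession)
d$rep       <- factor(d$rep)
d$trait <- 120 + rnorm(192, sd = 15)[as.integer(d$accession)] + rnorm(nrow(d), sd = 8)

fit <- lm(trait ~ rep + accession, data = d)            # RCBD-style ANOVA
av  <- anova(fit)

r        <- nlevels(d$rep)                              # number of replications
ms_geno  <- av["accession", "Mean Sq"]
ms_res   <- av["Residuals", "Mean Sq"]
sigma2_g <- (ms_geno - ms_res) / r                      # genetic variance component
H2       <- sigma2_g / (sigma2_g + ms_res / r)          # broad-sense heritability
cv       <- 100 * sd(d$trait) / mean(d$trait)           # coefficient of variation (%)
```

With the real phenotype table in place of the simulated one, this reproduces the H2 formula given in the Methods.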
Individual root dry weight (IRW) and bulk root dry weight (RDW) also scored high CVs, with values of 14.55 and 14.22%, respectively. The frequency distribution of most RSA traits was normal, except for RT6, which showed a bi-modal distribution (Fig. 1).
Distribution frequency for RSA traits measured from 12-day-old seedlings in 192 Ethiopian durum wheat accessions. See Table 6 for trait abbreviations
Most RSA traits showed a high level of broad-sense heritability (H2). Bulk root dry weight (RDW), average root length (ARL) and bulk shoot dry weight (SDW) showed the top three values (91.3, 91.0 and 90.4%, respectively), while the presence of the 6th root showed the lowest value (67.0%).
Correlation among RSA traits
Several strong correlations were observed between RSA traits (Fig. 2). Highly significant positive correlations were detected for RDW vs. IRW (0.93), RDW vs. SDW (0.92) and IRW vs. SDW (0.84). Strong correlations were also recorded between TRN and RT6, and between TRL and ARL, with correlation coefficients of 0.84 and 0.82, respectively. The initial thousand grain weight showed no significant correlation with any RSA trait, suggesting that variation in RSA traits did not have a maternal etiology caused by variation in seed size.
Correlation coefficients and levels of significance for the initial thousand grain weight and RSA traits measured in 12-day-old seedlings of 192 Ethiopian durum wheat accessions
Landraces showed a wider range of variability than cultivars in most RSA traits, although the latter outperformed the former for some traits (Table 2 and Additional file 9: Figure S2). For instance, the cultivars' mean values for root and shoot dry weight were 90.3 and 92.8 mg, while landraces scored only 56.9 and 66.5 mg for the same traits, respectively. Cultivars also performed better than landraces for TRL and ARL, while TRN and RT6 were the only two RSA traits for which landraces showed slightly higher mean values than cultivars.
Table 2 Mean and range values of 25 cultivars and 167 landraces for RSA traits
Population structure and linkage disequilibrium decay analysis
According to population structure analysis, the panel was subdivided into three subpopulations of 75, 27 and 90 accessions (Fig. 3a, b and Additional file 3: Table S3). All 25 cultivars clustered into subpopulation 2 except for 'Selam', which grouped in subpopulation 1. Clustering analysis indicated that SNP data failed to group landraces clearly based on their geographical backgrounds, and accessions were admixed into the three subpopulations irrespective of their geographic origin. A box plot of the three sub-populations inferred from STRUCTURE analysis for the mean values of RSA traits is reported in Additional file 9: Figure S3.
Population structure and kinship-matrix similarity analysis for 192 Ethiopian durum wheat accessions. Heat-map clustering results based on the kinship matrix from tag-SNPs (r2 = 1) by the identity-by-state (IBS) algorithm (a). Population structure plot, where K1, K2 and K3 represent subpopulations 1, 2 and 3, respectively (b). The black dashed lines separate the panel into three subpopulations. The arrangement of accessions follows the order of the heat-map kinship result. The color represents the membership of each accession in the STRUCTURE-inferred subpopulations. The color of the legend indicates the level of kinship similarity of the heat-map
The mean genome-wide r2 value was 0.12, with 55% of the pair-wise linkage disequilibrium comparisons showing significant association at P < 0.01.
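A minimal R sketch of how such an LD decay distance can be located is shown below; the ld table (columns dist_cM and r2) is a simulated stand-in for the exported pairwise LD estimates, and the smoothing spline only approximates the Hill and Weir curve used in the Methods.

```r
set.seed(2)
# Simulated pairwise LD estimates: genetic distance (cM) and r2 for intra-chromosomal SNP pairs
ld <- data.frame(dist_cM = runif(5000, 0, 50))
ld$r2 <- pmin(1, pmax(0, 0.45 * exp(-ld$dist_cM / 3) + rnorm(nrow(ld), sd = 0.05)))

fit    <- smooth.spline(ld$dist_cM, ld$r2)              # smoothed decay curve
grid   <- seq(0, 50, by = 0.01)
r2_hat <- predict(fit, x = grid)$y

threshold <- 0.3                                        # conventional threshold to detect linkage
decay_cM  <- grid[which(r2_hat < threshold)[1]]         # first distance where the curve drops below r2 = 0.3
```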
Chromosome 3B scored the highest mean value (r2 = 0.19) with 64% significant pair-wise LD comparisons. On the other hand, 7A scored the lowest mean r2 value (0.11) and 48% of pairwise LD comparisons were significant. The genome-wide LD decayed below r2 = 0.3 (the standard critical threshold) at 2.25 cM. This defines the ±2.25 cM as the genome-wide critical distance to detect linkage and, therefore, as the QTL confidence interval around the QTL-tag SNP, i.e. the SNP found at the peak of the corresponding QTL. The specific critical r2 value beyond which LD is due to true physical linkage was 0.15 and the intersect of the threshold with the LD decay curve was at 5.75 cM. GWAS analysis of RSA traits After filtering SNP data and following imputation, a total of 10,789 polymorphic SNP markers (4591 and 6198 SNPs from A and B genomes, respectively) were used for marker-traits association (MTA) analyses. The mixed linear model with population structure and kinship matrix was chosen for MTA analysis, as the quantile-quantile (Q-Q) plot showed that the observed MTA P-values were close to the expected distribution (Additional file 9: Figure S4). A total of 275 QTLs with various significant values were identified for the tested RSA traits. The only four major QTLs above the experiment-wise threshold (− log10P ≥ 4) were EPdwRGA-6A, EPdwRDW-4A, EPdwiTGW-3B.1 and EPdwIRW-5A with values of 6.85, 4.34, 4.15 and 4.06 which accounted for 16.08, 8.41, 8.71 and 8.03% of the phenotypic variation, respectively. Thirty-four QTLs reached the marker-wise threshold of – log10P ≥ 3 in which the highest number was identified for TRN with eight QTLs followed by SDW and IRW each with six nominal QTLs. Additionally, three nominal QTLs were identified for TRL, iTGW and RT6, two for RDW and only one for RGA, ARL and RSR. The other 237 QTLs with a marker-wise threshold of – log10P ≥ 2 were identified as suggestive QTLs. The major and nominal QTLs are reported in Table 3 while the complete list of identified QTLs with the marker-wise threshold value of –log10P ≥ 2 are reported in Additional file 4: Table S4. Thirteen markers showed significant associations for more than one RSA trait that could be due to either a pleiotropic effect or tight linkage, hence considered as separate QTLs for corresponding traits (Table 4). Notably, the root growth angle QTL showed limited overlap with QTLs of other RSA traits. Table 3 List of major and nominal QTLs for RSA traits identified in 192 Ethiopian durum wheat accessions Table 4 Markers with a significant association/concurrent effect on more than one RSA trait QTL clusters for RSA traits The identified QTLs were further grouped into 15 RSA QTL clusters plus one distinct RGA QTL cluster on chromosome 6AL. Clustering was based on the significance of each QTL and its effects on various traits in this study and overlapping with QTLs from previously reported studies in bread and/or durum wheat (Table 5). Based on these criteria, a total of 103 QTLs were included in 16 QTL clusters. Cluster pairs were identified on chromosomes 1A, 3B and 7A while chromosomes 1B, 2A, 2B, 3A, 4A, 4B, 5A, 5B, 6A and 6B each harbored a single QTL cluster (Fig. 4a, b and Additional file 9: Figure S5). Table 5 Main RSA QTL clusters identified in 192 Ethiopian durum wheat accessions and other studies Genetic map of RSA QTLs identified in 192 Ethiopian durum wheat accessions along with previously published studies projected onto SNP-based tetraploid consensus map published in Maccaferri et al. (2015). 
RSA QTL identified in the present study are listed on the left side of the chromosomes with their significance level: ** = marker-wise significance of P ≤ 0.01 (− log10P ≥ 2); *** = marker-wise significance of P ≤ 0.001 (− log10P ≥ 3); **** = experiment-wise significance of P ≤ 0.05/marker-wise significance of P ≤ 0.0001 (− log10P ≥ 4). RSA QTLs identified in previous studies, orange-filled bars for durum wheat and blue-filled bars for bread wheat, are listed on the right side with references given in parentheses. Grey-filled bands are for RSA QTL clusters on chromosomes 1A and 1B (a) and a distinct root growth angle (RGA) QTL cluster identified on chromosome 6A from 105 to 125 cM (b). Black-filled bars are for QTLs with R2 < 5%; red bars for R2 values from 5 to 10% and yellow bars for R2 > 10%. The length of bars indicates the confidence interval of each QTL or QTL cluster. Manhattan plot for the major RGA QTL identified on chromosome 6AL (c)
QTL for seminal root length and number
EPdwTRL-1B, EPdwTRL-4B and EPdwTRL-5A were the three nominal QTLs identified for TRL on chromosomes 1B (Fig. 4a), 4B and 5A, respectively. Other suggestive TRL QTLs were identified on all chromosomes except for chromosome 6A. For ARL, only one nominal QTL (EPdwARL-2A) was detected on chromosome 2A, while other suggestive QTLs were detected across all chromosomes. Seven nominal QTLs were detected for TRN: three (EPdwTRN-4B.1, EPdwTRN-4B.2 and EPdwTRN-4B.3) were mapped on chromosome 4B, two (EPdwTRN-1A.1 and EPdwTRN-1A.2) on chromosome 1A (Fig. 4a) and the other two (EPdwTRN-1B and EPdwTRN-7A) on chromosomes 1B and 7A, respectively. For the presence of the sixth seminal root, three nominal QTLs (EPdwRT6-4B.1, EPdwRT6-4B.2 and EPdwRT6-4B.3) were mapped on chromosome 4B (Table 3). The allelic distribution and frequency of TRN and TRL QTL-tagging SNPs with phenotypic effect (R2) > 5% are reported in Additional file 6: Table S6 and Additional file 7: Table S7, respectively.
QTL for seminal root growth angle
The QTL with the largest effect (R2 = 0.16) on RGA (EPdwRGA-6A) was identified on chromosome 6A. Within the confidence interval of this QTL, six SNPs (IWB35245, IWB71122, IWB24306, IWB57413, IWB10077 and IWB74235) showed significant effects for the trait (Fig. 4c; Additional file 4: Table S4). The confidence interval of this major RGA QTL (from 105 to 125 cM) overlapped with the confidence interval of RSA QTLs previously reported in the same region (Fig. 4b). Other suggestive RGA QTLs were identified on chromosomes 1A, 2B, 3A, 3B, 4A, 5B, 6B, 7A and 7B (Additional file 4: Table S4). Notably, RGA QTLs showed no clustering with other RSA QTLs. The allelic distribution and frequency of RGA QTL-tagging SNPs with phenotypic effect > 5% is reported in Additional file 5: Table S5.
QTL for root and shoot dry weight
Two major QTLs (EPdwRDW-4A and EPdwIRW-5A) were identified for bulk and individual root dry weight on chromosomes 4A and 5A, respectively. Two nominal QTLs were identified for RDW (EPdwRDW-1B and EPdwRDW-3A) on chromosomes 1B and 3A. As to individual root weight, six nominal QTLs (EPdwIRW-1B, EPdwIRW-2B, EPdwIRW-5B.1, EPdwIRW-5B.2, EPdwIRW-6B and EPdwIRW-7A) were identified on chromosomes 1B, 2B, 5B (two QTLs), 6B and 7A, respectively. Six nominal QTLs (EPdwSDW-1A, EPdwSDW-1B, EPdwSDW-3B, EPdwSDW-4A, EPdwSDW-4B and EPdwSDW-5B) were identified for SDW. The QTLs for these three traits repeatedly clustered nearby or in single QTLs (Table 5).
The allelic distribution and frequency of IRW QTL-tagging SNPs with phenotypic effect > 5% is reported in Additional file 8: Table S8.
In the present study, 12-day-old seedlings of 192 Ethiopian durum wheat accessions, predominantly landraces, were phenotyped under controlled conditions to identify root system architecture (RSA) QTL through GWAS analysis. Moderate to high heritability values, ranging from 67 to 91%, were recorded for all RSA traits, confirming them as potential targets for wheat improvement. The linkage disequilibrium analyzed from 10,789 polymorphic SNPs indicated that LD decays to the threshold value of r2 = 0.3 (the generally accepted limit to detect association with a QTL) at 2.25 cM, which was in agreement with the LD decay value previously detected by Liu et al. [15]. Maccaferri et al. [40, 41] reported LD decay at 2.20 cM for a panel comprising 183 elite durum wheat cultivars and lines from Mediterranean countries, the Southwestern USA and Mexico. The RSA QTL-clusters included either single loci with concurrent effects on different RSA traits or tightly linked loci not resolved by recombination [42], most of which overlapped with previously identified RSA QTL clusters. QTL mapping for RSA traits of wheat based on designed bi-parental populations was recently reviewed by Soriano and Alvaro [43], compiling the results of 27 bread and three durum wheat studies for a total of 754 QTLs. Root length and number at the seedling stage are potential candidates for marker-assisted breeding applications aimed at enhancing early rooting capacity [21]. One novel QTL for TRN, EPdwTRN-4A, was discovered in the present study on the short arm of chromosome 4A. The other TRN QTL identified on the short arm of chromosome 1A overlaps with the TRN QTL reported by Maccaferri et al. [21]. The confidence interval of the TRN QTL on the short arm of chromosome 1B overlapped with the confidence interval of the TRN QTL identified by Christopher et al. [44] and fell under the 8th root metaQTL (Root_MQTL_8) reported by Soriano and Alvaro [43]. Another nominal TRN QTL, identified on the short arm of chromosome 4B, overlapped with a TRN QTL reported by Ren et al. [45]. The other two TRN QTLs, detected on the long arm of chromosome 4B and the short arm of chromosome 7A, both overlapped with a TRN QTL reported in Maccaferri et al. [21]. Chromosome 4B showed three strong QTLs (EPdwRT6-4B.1, EPdwRT6-4B.2 and EPdwRT6-4B.3) for the development of more than five seminal roots per plantlet. For root length, the other important trait, three nominal QTLs were identified for TRL and one for ARL. One novel QTL for TRL, EPdwTRL-4B, was mapped on the long arm of chromosome 4B. The TRL QTL identified on the short arm of chromosome 1B overlaps with TRL QTLs reported by Petrarulo et al. [36] and Liu et al. [46], and the other one, detected on the telomeric region of chromosome 5A, overlapped with a TRL QTL reported by Maccaferri et al. [21]. The nominal ARL QTL (EPdwARL-2A) identified on chromosome 2A, with a concurrent effect on TRL, SDW, RDW and IRW, is novel since it was not reported in any of the previous studies considered for this meta-analysis based on the tetraploid consensus map. Among the other essential RSA traits, for root growth angle (RGA), a pivotal trait influencing RSA and its functions, the most notable QTL (EPdwRGA-6A) was identified on the long arm of chromosome 6A, similarly to that reported by Maccaferri et al.
[21], QRga.ubo-6A.2, using 183 elite cultivars and lines representing the main breeding pools from Mediterranean countries (particularly ICARDA and Italy), the Southwestern USA and CIMMYT. Additionally, Alahmad et al. [47] recently reported sizeable and highly significant effects on RGA of the same region of chromosome 6AL. The concomitant effects of the chromosome 6AL on RGA observed in widely different germplasm pool underline the importance of further studies to better characterize the effects of the different haplotypes present at this major QTL. Notably, a novel nominal RGA QTL (EPdwRGA-4A) was detected on the long arm of chromosome 4A. An additional novel major RDW QTL (EPdwRDW-4A) with concurrent effects on SDW, TRN and TR6 was mapped on the short arm of chromosome 4A. A novel RDW QTL (EPdwRDW-3A) was also identified on the long arm of chromosome 3A. EPdwSDW-3B and EPdwSDW-4A were the two newly discovered nominal SDW QTLs on the short arm of chromosome 3B and long arm of chromosome 4A, respectively. Four novel IRW QTLs (EPdwIRW-5B.1, EPdwIRW-5B.2, EPdwIRW-6B, EPdwIRW-7A) were discovered on the short arm of chromosome 5B (the first two), long arm of chromosome 6B and short arm of chromosome 7A, respectively. Iannucci et al. [37] noted the absence of a clear relationship between plant height and root development and added diverse and controversial speculations from a number of previous studies which are probably due to the different conditions and growth stages in which the root traits were evaluated. Some authors reported different genetic control between shoot and root growth [35, 48, 49] while others have reported a negative correlation [50]. Bai et al. [51] investigated a set of NILs for a number of Rht loci/alleles and showed clear effects on both shoot and root traits. Among the four major and 34 nominal RSA QTLs identified in the current study, 14 are novel, hence showing the suitability of Ethiopian landraces for studies aimed at the dissection of the QTL and the identification of novel haplotypes. The remaining 20 RSA QTLs concomitantly identified in this and previous studies provide valuable information on their role across diverse genepools, an important prerequisite to prioritize QTLs for marker-assisted selection aimed at enhancing crop productivity based on the use of RSA traits as proxies. A cluster of RGA QTLs was identified on the long arm of chromosome 6A with a major QTL (EPdwRGA-6A) with a notable phenotypic effect on RGA (R2 = 0.16). This result coupled with those reported in previous RSA studies [21, 47] highlights and reinforces EPdwRGA-6A as a strong candidate for further studies aimed at cloning the causative sequences and identifying the beneficial haplotypes able to positively affect yield under water- or nutrient-limited conditions. One hundred ninety-two Ethiopian durum wheat accessions were used to assemble the GWA mapping panel. The collection included 167 landraces and 25 cultivars collected and maintained as single seed descent (SSD) progenies at the Debre Zeit Agricultural Research Center (DZARC) and Sinana Agricultural Research Center (SARC) in Ethiopia. Landrace collections were originally collected from major wheat-producing areas of Ethiopia, including Bale, Gondar, Gojjam, Shewa, Tigray and Wollo. Twelve Ethiopian durum wheat landraces currently cultivated in the USA are included in the panel. Cultivars were released in the years between 1994 and 2010 from DZARC and SARC and have been/are being cultivated in Ethiopia. 
Details of accessions used for the current study are summarized in Additional file 1: Table S1. Root system architecture phenotyping Seminal RSA traits were characterized using the protocol described by Canè et al. [19] and later used by Maccaferri et al. [21] with minor adjustments in the present work. Seeds were first weighed to measure thousand grain weight that was later used as a covariate in order to account for maternal effects on RSA traits due to seed size. Twenty seeds per accession were treated in 0.15% Panoctine solution and dried before pre-germinating them in Petri dishes on wet-filter-paper at 28 °C for 24 h. Then, five similar seeds with homogenous seminal root emission were positioned 7-cm apart on a wet-filter-paper sheet moistened with distilled water and placed on a vertical black rectangular (42.5 × 38.5 cm) polycarbonate plate for root obscuration. Root traits were then measured in plantlets grown in a growth chamber for 12 days at 22 °C (day)/18 °C (night) under a 16-h photoperiod and light intensity of 400 μmol m− 2 s− 1 photosynthetically active radiation (PAR). The experiment was conducted adopting a randomized complete block design (RCBD) with three independent replications grown in the growth chamber. The experimental unit included five homogenous seedlings of each accession and hence one screening plate corresponded to one genotype. Blocking was introduced to control for possible differences in growth rate and normalization of the blocking effect (linear adjustment, whenever significant) was undertaken. Due to the high number of genotypes under evaluation and the time required for root preparation and root image acquisition, genotypes were divided into sets of 25–30 accessions that were considered as blocks. Blocks included accessions phenotyped at the same date and kept on shelves in the growth chamber that are positioned at the same distance from the floor under uniform light conditions (see Additional file 9: Figure S1). Data for the following RSA traits were taken based on single-plantlet basis (Table 6): root growth angle (RGA) measured as the linear distance between the two most external seminal roots of each plantlet at 3.5 cm from the seed tip and then converted to degrees (Fig. 5a, b); total root length (TRL); average root length (ARL); total root number (TRN); presence of six seminal roots (RT6). Total root length and root growth angle were measured on plantlet images (Fig. 5c) using GIMP (GNU Image Manipulation Program) and ImageJ [52]. Average root length was estimated as total root length divided by total root number. Bulked roots and shoots from each experiment were cut and dried in an oven for 48 h to measure root dry weight (RDW) and shoot dry weight (SDW), respectively. Individual root dry weight (IRW) was derived from the result of the bulk root dry weight divided by the total root number that could be used as a proxy to measure root thickness. Table 6 Summary of acronyms used for root system architecture (RSA) traits and their measuring unit Root growth angle of seminal roots in 12-day-old seedlings of 'Gondar' landrace with narrow growth angle (a) and 'Obsa' cultivar with wide growth angle (b) measured as the linear distance (red segment) of the two most external roots (green segments) at 3.5 cm from the tip of the seed and later converted into degrees. 
Example of a root sample ready for image capturing for further root length and root growth angle measurement (c)
Phenotypic data analysis
Analysis of variance (ANOVA) was conducted including replications, blocks and accessions. The block effect was controlled using the mean of each set of genotypes included in the same block, which was used to correct the corresponding single values, whenever significant, with a linear regression method. The weight of each individual seed was used as a covariate to correct for any possible variation caused by maternal effects. In addition, the initial thousand grain weight itself was subjected to GWA analysis along with the other RSA traits. Broad-sense heritability (H2) of RSA traits was calculated with the mean values of each experiment among the three replications according to the formula:
$$ H^2=\frac{\sigma_g^2}{\sigma_g^2+\sigma_e^2/r} $$
where σ²g (the genetic variance) was calculated as (MSgenotypes − MSresidual)/r, σ²e (the residual variance) equals MSresidual, r is the number of replications and MS is the mean square value. The coefficient of variation (CV) was calculated for all RSA traits except for the presence of the 6th root, the only trait with discrete values.
Genotypic data and imputation
A pooled tissue sample of 25 one-week-old plantlets, from the same seed source used to phenotype RSA traits, was used for genomic DNA extraction for each accession. The DNeasy 96 Plant Kit (Qiagen GmbH, Hilden, Germany) was used to extract the genomic DNA. Genotyping was done with the high-density Infinium® iSelect® Illumina 90 K wheat SNP array [53], and SNP calling and clustering were made with the GenomeStudio v2011.1 software (Illumina, San Diego, CA, USA). Calls showing residual heterozygosity were assigned as missing values. SNP markers with minor allele frequency (MAF) < 0.05 and markers with > 0.1 missing values per accession were excluded. After filtering, imputation of the missing data was computed using Beagle 4.0 [54]. Owing to the high level of homozygosity, imputation disregarded any phased reference populations. Twenty-five markers were considered in the imputation rolling window (twice the average number of markers present in a 5 cM interval), with an overlap of a single marker, the typical number of markers included in a 0.5 cM interval. Since imputation accuracy was not improved by using other parameters, default values were kept. The high-density consensus map of tetraploid wheat generated by Maccaferri et al. [41] was used to identify the chromosome positions of SNPs, and markers with unknown positions were removed.
Population structure and kinship analysis
For population structure analysis, a Bayesian model-based (Markov Chain Monte Carlo) clustering approach was used in STRUCTURE v.2.3 [55]. The Haploview v4.2 [56] "Tagger" function (based on analysis of marker pairwise r2 values) was used to select tag-SNPs for population structure analysis with a tagger filter set at r2 = 0.5, and 1496 tag-SNPs were selected. To infer the optimal number of sub-populations, an ad hoc quantity (∆K) was calculated based on the second-order rate of change of the likelihood (Evanno et al., 2005); in this approach, ∆K shows a clear peak at the ideal number of sub-populations. To perform this, up to 10 sub-populations were tested with 20 independent runs each, adopting an admixture model of population structure with correlated allele frequencies, a burn-in period of 50,000 iterations and 100,000 Markov Chain Monte Carlo (MCMC) replications after burn-in for each run.
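A minimal R sketch of the Evanno ∆K computation described above is given below; the lnP matrix of STRUCTURE log-likelihoods (rows K = 1–10, columns = 20 runs) is simulated here as a placeholder for the real STRUCTURE output.

```r
set.seed(3)
# Simulated ln P(X|K) values: rows K = 1..10, columns = 20 independent STRUCTURE runs
K   <- 1:10
lnP <- -50000 + 800 * (log(K) %o% rep(1, 20)) + matrix(rnorm(10 * 20, sd = 30), 10, 20)

mean_L <- rowMeans(lnP)                     # L(K): mean log-likelihood at each K
sd_L   <- apply(lnP, 1, sd)                 # s[L(K)]: spread across the 20 runs

# |L''(K)| = |L(K+1) - 2 L(K) + L(K-1)|, defined for K = 2..9
# (computed here from the mean L(K); a simplification of Evanno et al., who average |L''(K)| over runs)
abs_L2 <- abs(mean_L[3:10] - 2 * mean_L[2:9] + mean_L[1:8])
deltaK <- abs_L2 / sd_L[2:9]                # Evanno et al. (2005) Delta K
best_K <- K[2:9][which.max(deltaK)]         # K at the Delta K peak
```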
Additionally, the Haploview "Tagger" function was used to select tag-SNPs for kinship matrix (K) analysis with a tagger filter set at r2 = 1 and 4842 tag-SNPs were selected, calculated in TASSEL v.5.2 [57] and incorporated in the mixed linear model (MLM) along with the population structure (Q) value for GWAS analysis. Linkage disequilibrium (LD) and GWAS analysis The LD r2 values between pairwise intra-chromosomal SNPs were calculated with TASSEL v.5.2 and LD decay curve was fitted by a smoothing spline regression line at the genome level according to Hill and Weir function [58] in r environment [59]. The specific critical r2 value beyond which LD is due to true physical linkage was determined by taking the 95th percentile of r2 data of unlinked marker pairs [60]. In order to control the rate of false-positive associations, a MLM model [61] with population structure and kinship covariates was applied for the GWAS analyses. Hence, all SNP markers and the phenotypic data generated for the nine RSA traits were used to conduct the MTA analysis. Three levels of significance were introduced according to Maccaferri et al. [21] for reporting the GWAS-QTLs: (i) experiment-wise P ≤ 0.05 (marker-wise P ≤ 0.0001, − log10P ≥ 4) for "major QTLs"; (ii) marker-wise P ≤ 0.001 (− log10P ≥ 3) for "nominal QTLs"; (iii) marker-wise P ≤ 0.01, (− log10P ≥ 2) for "suggestive QTLs". The experiment-wise threshold was established according to the number of 'independent SNP tests' that was estimated in Haploview using the tagger function of r2 = 0.3 [62] and the total number (816) of tag-SNPs. Bonferroni test adjusted for multiple marker tests (P ≤ 0.05) was equal to – log10P = 4.21 (rounded to 4.00). Hence the experiment-wise, Bonferroni-corrected significance threshold at P = 0.05 matched to a marker-wise threshold of – log10P ≥ 4. Significance intervals of identified QTLs were reported as the intervals after including all SNPs associated with the trait with P ≤ 0.01 (marker-wise) and in LD of r2 ≥ 0.3. Confidence intervals were defined based on the GWAS-QTL peak ±2.25 cM on both map sides. The relative positions of RSA QTLs identified in this study along with other previous studies [14, 21, 34, 36, 37, 44,45,46, 51, 63,64,65,66,67] were compared based on the projected QTL peaks and confidence intervals on the tetraploid wheat consensus map [41]. The data sets supporting the results of this article are included in this manuscript and its additional information files. The SNP markers used for the GWAS analysis can be found online at: https://bmcgenet.biomedcentral.com/articles/10.1186/s12863-020-0825-x: Additional file 2. ANOVA: ARL: Average root length DZARC: Debre Zeit Agricultural Research Center EBI: Ethiopian biodiversity institute GWAS: IRW: Individual root dry weight iTGW: Initial thousand grain weight MAF: Minor allele frequency MCMC: Markov chain Monte Carlo MTA: Marker-trait association RDW: Bulk root dry weight RGA: Root growth angle RSA: Root system architecture RSR: Root to shoot ratio RT6: Presence of six seminal roots per seedling SARC: Sinana Agricultural Research Center SDW: Bulk shoot dry weight SNP: Single nucleotide polymorphism TRL: Total root length TRN: Total root number Badebo A, Gelalcha S, Ammar K, Nachit M, Abdalla O, Mcintosh R. Overview of durum wheat research in Ethiopia: challenges and prospects. In: McIntosh R, editor. Proceedings, oral papers and posters, 2009 Technical Workshop, Borlaug Global Rust Initiative, Cd. Obregón, Sonora, Mexico, 17–20 March, 2009. 
Obregón: Borlaug Global Rust Initiative, Cd; 2009. p. 143–9. http://www.globalrust.org/db/attachme. Mengistu DK, Kiros AY, Pè ME. Phenotypic diversity in Ethiopian durum wheat (Triticum turgidum var. durum) landraces. Crop J. 2015;3:190–9. https://doi.org/10.1016/j.cj.2015.04.003. Vavilov NI. The origin, variation, immunity, and breeding of cultivated plants. Soil Sci. 1951;72:482. https://doi.org/10.1097/00010694-195112000-00018. Zohary D. Centers of diversity and centers of origin. In: Frankel OH, Bennett E, editors. Genetic resources of plants- their exploration and conservation. Oxford & Edinburgh: Blackwell Scientific Publications; 1970. p. 33–42. Kabbaj H, Sall AT, Al-Abdallat A, Geleta M, Amri A, Filali-Maltouf A, et al. Genetic diversity within a global panel of durum wheat (Triticum durum) landraces and modern Germplasm reveals the history of alleles exchange. Front Plant Sci. 2017;8. https://doi.org/10.3389/fpls.2017.01277. Bechere E, Belay G, Mitiku D, Merker A. Phenotypic diversity of tetraploid wheat landraces from northern and north-central regions of Ethiopia. Hereditas. 2004;124:165–72. https://doi.org/10.1111/j.1601-5223.1996.00165.x. Tesemma T, Bechere E. Developing elite durum wheat landrace selections (composites) for Ethiopian peasant farm use: raising productivity while keeping diversity alive. Euphytica. 1998;102:323–8. Teklu Y, Hammer K. Diversity of Ethiopian tetraploid wheat germplasm: breeding opportunities for improving grain yield potential and quality traits. Plant Genet Resour. 2009;7:1–8. https://doi.org/10.1017/S1479262108994223. Alamerew S, Chebotar S, Huang X, Röder M, Börner A. Genetic diversity in Ethiopian hexaploid and tetraploid wheat germplasm assessed by microsatellite markers. Genet Resour Crop Evol. 2004;51:559–67. https://doi.org/10.1023/B:GRES.0000024164.80444.f0. Teklu Y, Hammer K, Huang XQ, Röder MS. Analysis of microsatellite diversity in Ethiopian Tetraploid wheat landraces. Genet Resour Crop Evol. 2006;53:1115–26. https://doi.org/10.1007/s10722-005-1146-7. Haile JK, Hammer K, Badebo A, Nachit MM, Röder MS. Genetic diversity assessment of Ethiopian tetraploid wheat landraces and improved durum wheat varieties using microsatellites and markers linked with stem rust resistance. Genet Resour Crop Evol. 2013;60:513–27. https://doi.org/10.1007/s10722-012-9855-1. Mengistu DK, Kidane YG, Catellani M, Frascaroli E, Fadda C, Pè ME, et al. High-density molecular characterization and association mapping in Ethiopian durum wheat landraces reveals high diversity and potential for wheat breeding. Plant Biotechnol J. 2016;14:1800–12. https://doi.org/10.1111/pbi.12538. Amri A, Hatchett JH, Cox TS, El Bouhssini M, Sears RG. Resistance to hessian Fly from north African durum wheat Germplasm. Crop Sci. 1990;30:378. https://doi.org/10.2135/cropsci1990.0011183X003000020027x. Kubo K, Elouafi I, Watanabe N, Nachit MM, Inagaki MN, Iwama K, et al. Quantitative trait loci for soil-penetrating ability of roots in durum wheat. Plant Breed. 2007;126:375–8. https://doi.org/10.1111/j.1439-0523.2007.01368.x. Liu W, Maccaferri M, Rynearson S, Letta T, Zegeye H, Tuberosa R, et al. Novel Sources of Stripe Rust Resistance Identified by Genome-Wide Association Mapping in Ethiopian Durum Wheat (Triticum turgidum ssp. durum). Front Plant Sci. 2017;8. https://doi.org/10.3389/fpls.2017.00774. Mengistu DK, Kidane YG, Fadda C, Pè ME. Genetic diversity in Ethiopian durum wheat ( Triticum turgidum var durum ) inferred from phenotypic variations. Plant Genet Resour Charact Util. 
2018;16:39–49. https://doi.org/10.1017/S1479262116000393. Negassa A, Koo J, Sonder K, Shiferaw B, Smale M, Braun H, et al. The Potential for Wheat Production in Sub-Saharan Africa: Analysis of Biophysical Suitability and Economic Profitability. In: Wheat for food security in Africa: Science and policy dialogue about the future of wheat in Africa. Mexico: CIMMYT; 2012. p. 64. https://repository.cimmyt.org/handle/10883/4015. Lynch J. Root architecture and plant productivity. Plant Physiol. 1995;109:7–13. https://doi.org/10.1104/pp.109.1.7. Canè MA, Maccaferri M, Nazemi G, Salvi S, Francia R, Colalongo C, et al. Association mapping for root architectural traits in durum wheat seedlings as related to agronomic performance. Mol Breed. 2014;34:1629–45. https://doi.org/10.1007/s11032-014-0177-1. Mickelbart MV, Hasegawa PM, Bailey-Serres J. Genetic mechanisms of abiotic stress tolerance that translate to crop yield stability. Nat Rev Genet. 2015;16:237–51. https://doi.org/10.1038/nrg3901. Maccaferri M, El-Feki W, Nazemi G, Salvi S, Canè MA, Colalongo MC, et al. Prioritizing quantitative trait loci for root system architecture in tetraploid wheat. J Exp Bot. 2016;67:1161–78. https://doi.org/10.1093/jxb/erw039. Xie Q, Fernando KMC, Mayes S, Sparkes DL. Identifying seedling root architectural traits associated with yield and yield components in wheat. Ann Bot. 2017;119:1115–29. https://doi.org/10.1093/aob/mcx001. Reynolds M, Tuberosa R. Translational research impacting on crop productivity in drought-prone environments. Curr Opin Plant Biol. 2008;11:171–9. https://doi.org/10.1016/j.pbi.2008.02.005. Hawkesford MJ. Reducing the reliance on nitrogen fertilizer for wheat production. J Cereal Sci. 2014;59:276–83. https://doi.org/10.1016/j.jcs.2013.12.001. King J. Modelling cereal root Systems for Water and Nitrogen Capture: towards an economic optimum. Ann Bot. 2003;91:383–90. https://doi.org/10.1093/aob/mcg033. Lynch JP. Steep, cheap and deep: an ideotype to optimize water and N acquisition by maize root systems. Ann Bot. 2013;112:347–57. https://doi.org/10.1093/aob/mcs293. Meister R, Rajani MS, Ruzicka D, Schachtman DP. Challenges of modifying root traits in crops for agriculture. Trends Plant Sci. 2014;19:779–88. https://doi.org/10.1016/j.tplants.2014.08.005. Steele KA, Price AH, Witcombe JR, Shrestha R, Singh BN, Gibbons JM, et al. QTLs associated with root traits increase yield in upland rice when transferred through marker-assisted selection. Theor Appl Genet. 2013;126:101–8. https://doi.org/10.1007/s00122-012-1963-y. Uga Y, Sugimoto K, Ogawa S, Rane J, Ishitani M, Hara N, et al. Control of root system architecture by DEEPER ROOTING 1 increases rice yield under drought conditions. Nat Genet. 2013;45:1097–102. https://doi.org/10.1038/ng.2725. Borrell AK, Mullet JE, George-Jaeggli B, van Oosterom EJ, Hammer GL, Klein PE, et al. Drought adaptation of stay-green sorghum is associated with canopy development, leaf anatomy, root growth, and water uptake. J Exp Bot. 2014;65:6251–63. https://doi.org/10.1093/jxb/eru232. Kitomi Y, Kanno N, Kawai S, Mizubayashi T, Fukuoka S, Uga Y. QTLs underlying natural variation of root growth angle among rice cultivars with the same functional allele of DEEPER ROOTING 1. Rice. 2015;8:16. https://doi.org/10.1186/s12284-015-0049-2. Manschadi AM, Hammer GL, Christopher JT, DeVoil P. Genotypic variation in seedling root architectural traits and implications for drought adaptation in wheat (Triticum aestivum L.). Plant Soil. 2008;303:115–29. 
https://doi.org/10.1007/s11104-007-9492-1. Miguel MA, Postma JA, Lynch JP. Phene synergism between root hair length and basal root growth angle for phosphorus acquisition. Plant Physiol. 2015;167:1430–9. https://doi.org/10.1104/pp.15.00145. An D, Su J, Liu Q, Zhu Y, Tong Y, Li J, et al. Mapping QTLs for nitrogen uptake in relation to the early growth of wheat (Triticum aestivum L.). Plant Soil. 2006;284:73–84. https://doi.org/10.1007/s11104-006-0030-3. Sanguineti MC, Li S, Maccaferri M, Corneti S, Rotondo F, Chiari T, et al. Genetic dissection of seminal root architecture in elite durum wheat germplasm. Ann Appl Biol. 2007;151:291–305. https://doi.org/10.1111/j.1744-7348.2007.00198.x. Petrarulo M, Marone D, Ferragonio P, Cattivelli L, Rubiales D, De Vita P, et al. Genetic analysis of root morphological traits in wheat. Mol Gen Genomics. 2015;290:785–806. https://doi.org/10.1007/s00438-014-0957-7. Iannucci A, Marone D, Russo MA, De Vita P, Miullo V, Ferragonio P, et al. Mapping QTL for root and shoot morphological traits in a durum wheat × T. dicoccum segregating population at seedling stage. Int J Genomics. 2017;2017:1–17. https://doi.org/10.1155/2017/6876393. Roselló M, Royo C, Sanchez-Garcia M, Soriano JM. Genetic dissection of the seminal root system architecture in Mediterranean durum wheat landraces by genome-wide association study. Agronomy. 2019;9:364. https://doi.org/10.3390/agronomy9070364. Ruiz M, Giraldo P, González JM. Phenotypic variation in root architecture traits and their relationship with eco-geographical and agronomic features in a core collection of tetraploid wheat landraces (Triticum turgidum L.). Euphytica. 2018;214:54. https://doi.org/10.1007/s10681-018-2133-3. Maccaferri M, Cane' M, Sanguineti MC, Salvi S, Colalongo MC, Massi A, et al. A consensus framework map of durum wheat (Triticum durum Desf.) suitable for linkage disequilibrium analysis and genome-wide association mapping. BMC Genomics. 2014;15:873. https://doi.org/10.1186/1471-2164-15-873. Maccaferri M, Ricci A, Salvi S, Milner SG, Noli E, Martelli PL, et al. A high-density, SNP-based consensus map of tetraploid wheat as a bridge to integrate durum and bread wheat genomics and breeding. Plant Biotechnol J. 2015;13:648–63. https://doi.org/10.1111/pbi.12288. Tuberosa R, Sanguineti MC, Landi P, Giuliani MM, Salvi S, Conti S. Identification of QTLs for root characteristics in maize grown in hydroponics and analysis of their overlap with QTLs for grain yield in the field at two water regimes. Plant Mol Biol. 2002;48:697–712. https://doi.org/10.1023/a:1014897607670. Soriano JM, Alvaro F. Discovering consensus genomic regions in wheat for root-related traits by QTL meta-analysis. Sci Rep. 2019;9:10537. https://doi.org/10.1038/s41598-019-47038-2. Christopher J, Christopher M, Jennings R, Jones S, Fletcher S, Borrell A, et al. QTL for root angle and number in a population developed from bread wheats (Triticum aestivum) with contrasting adaptation to water-limited environments. Theor Appl Genet. 2013;126:1563–74. https://doi.org/10.1007/s00122-013-2074-0. Ren Y, He X, Liu D, Li J, Zhao X, Li B, et al. Major quantitative trait loci for seminal root morphology of wheat seedlings. Mol Breed. 2012;30:139–48. https://doi.org/10.1007/s11032-011-9605-7. Liu X, Li R, Chang X, Jing R. Mapping QTLs for seedling root traits in a doubled haploid wheat population under different water regimes. Euphytica. 2013;189:51–66. https://doi.org/10.1007/s10681-012-0690-4. 
Alahmad S, El Hassouni K, Bassi FM, Dinglasan E, Youssef C, Quarry G, et al. A major root architecture QTL responding to water limitation in durum wheat. Front Plant Sci. 2019;10. https://doi.org/10.3389/fpls.2019.00436. Wojciechowski T, Gooding MJ, Ramsay L, Gregory PJ. The effects of dwarfing genes on seedling root growth of wheat. J Exp Bot. 2009;60:2565–73. https://doi.org/10.1093/jxb/erp107. Narayanan S, Mohan A, Gill KS, Prasad PVV. Variability of root traits in spring wheat Germplasm. PLoS One. 2014;9:e100317. https://doi.org/10.1371/journal.pone.0100317. Kabir MR, Liu G, Guan P, Wang F, Khan AA, Ni Z, et al. Mapping QTLs associated with root traits using two different populations in wheat (Triticum aestivum L.). Euphytica. 2015;206:175–90. https://doi.org/10.1007/s10681-015-1495-z. Bai C, Liang Y, Hawkesford MJ. Identification of QTLs associated with seedling root traits and their correlation with plant height in wheat. J Exp Bot. 2013;64:1745–53. https://doi.org/10.1093/jxb/ert041. Collins TJ. ImageJ for microscopy. Biotechniques. 2007;43:S25–30. https://doi.org/10.2144/000112517. Wang S, Wong D, Forrest K, Allen A, Chao S, Huang BE, et al. Characterization of polyploid wheat genomic diversity using a high-density 90 000 single nucleotide polymorphism array. Plant Biotechnol J. 2014;12:787–96. https://doi.org/10.1111/pbi.12183. Browning SR, Browning BL. Rapid and accurate haplotype phasing and missing-data inference for whole-genome association studies by use of localized haplotype clustering. Am J Hum Genet. 2007;81:1084–97. https://doi.org/10.1086/521987. Pritchard JK, Stephens M, Donnelly P. Inference of population structure using multilocus genotype data. Genetics. 2000;155:945–59. CAS PubMed PubMed Central Google Scholar Barrett JC, Fry B, Maller J, Daly MJ. Haploview: analysis and visualization of LD and haplotype maps. Bioinformatics. 2005;21:263–5. https://doi.org/10.1093/bioinformatics/bth457. Bradbury PJ, Zhang Z, Kroon DE, Casstevens TM, Ramdoss Y, Buckler ES. TASSEL: software for association mapping of complex traits in diverse samples. Bioinformatics. 2007;23:2633–5. https://doi.org/10.1093/bioinformatics/btm308. Hill WG, Weir BS. Variances and covariances of squared linkage disequilibria in finite populations. Theor Popul Biol. 1988;33:54–78 http://www.ncbi.nlm.nih.gov/pubmed/3376052. R Development Core team. R: a language and environment for statistical computing. Vienna: R Foundation for statistical Computing; 2013. http://www.r-project.org/. Breseghello F, Sorrells ME. Association mapping of kernel size and milling quality in wheat ( Triticum aestivum L.) cultivars. Genetics. 2006;172:1165–77. https://doi.org/10.1534/genetics.105.044586. Yu J, Pressoir G, Briggs WH, Vroh Bi I, Yamasaki M, Doebley JF, et al. A unified mixed-model method for association mapping that accounts for multiple levels of relatedness. Nat Genet. 2006;38:203–8. https://doi.org/10.1038/ng1702. Carlson CS, Eberle MA, Rieder MJ, Yi Q, Kruglyak L, Nickerson DA. Selecting a maximally informative set of single-nucleotide polymorphisms for association analyses using linkage disequilibrium. Am J Hum Genet. 2004;74:106–20. https://doi.org/10.1086/381000. Laperche A, Devienne-Barret F, Maury O, Le Gouis J, Ney B. A simplified conceptual model of carbon/nitrogen functioning for QTL analysis of winter wheat adaptation to nitrogen deficiency. Theor Appl Genet. 2006;113:1131–46. https://doi.org/10.1007/s00122-006-0373-4. Guo Y, Kong F, Xu Y, Zhao Y, Liang X, Wang Y, et al. 
QTL mapping for seedling traits in wheat grown under varying concentrations of N, P and K nutrients. Theor Appl Genet. 2012;124:851–65. https://doi.org/10.1007/s00122-011-1749-7. Hamada A, Nitta M, Nasuda S, Kato K, Fujita M, Matsunaka H, et al. Novel QTLs for growth angle of seminal roots in wheat (Triticum aestivum L.). Plant Soil. 2012;354:395–405. https://doi.org/10.1007/s11104-011-1075-5. Cao P, Ren Y, Zhang K, Teng W, Zhao X, Dong Z, et al. Further genetic analysis of a major quantitative trait locus controlling root length and related traits in common wheat. Mol Breed. 2014;33:975–85. https://doi.org/10.1007/s11032-013-0013-z. Atkinson JA, Wingen LU, Griffiths M, Pound MP, Gaju O, Foulkes MJ, et al. Phenotyping pipeline reveals major seedling root growth QTL in hexaploid wheat. J Exp Bot. 2015;66:2283–92. https://doi.org/10.1093/jxb/erv006. Delivering Genetic Gain in Wheat Project and SIDA are greatly acknowledged for their financial support of the first author while conducting the phenotyping and data analysis at University of Bologna. Fingerprinting of accessions was made possible with the financial support from Bill and Melinda Gates Foundation, the Department for International Development of the United Kingdom, and the AGER Project "From Seed to Pasta - Multidisciplinary approaches for a more sustainable and high quality durum wheat production". The role of the funding bodies is limited to direct funding of the fingerprinting of genotypes evaluated in this manuscript. Department of Microbial, Cellular and Molecular Biology, Addis Ababa University, P.O.Box 1176, Addis Ababa, Ethiopia Admas Alemu & Tileye Feyissa Department of Biology, Debre Tabor University, Debra Tabor, Ethiopia Admas Alemu Department of Agricultural and Food Sciences, University of Bologna, Bologna, Italy Marco Maccaferri, Giuseppe Sciara & Roberto Tuberosa International Maize and Wheat Improvement Center (CIMMYT), Texcoco, Mexico Karim Ammar International Maize and Wheat Improvement Center (CIMMYT), Addis Ababa, Ethiopia Ayele Badebo & Bekele Abeyo International Programs, College of Agriculture and Life Sciences, Cornell University, New York City, NY, USA Maricelis Acevedo Oromia Agricultural Research Institute, Addis Ababa, Ethiopia Tesfaye Letta Tileye Feyissa Marco Maccaferri Giuseppe Sciara Roberto Tuberosa Ayele Badebo Bekele Abeyo AA, MM, RT and TF conceived and designed the study. MM, TL and KA involved in genotyping of the durum wheat accessions. AA, MM and GS conducted root phenotyping and data analysis. AA prepared the manuscript. MA, BA and AB involved in assembling and fingerprinting of the Ethiopian durum wheat panel, and providing all the necessary laboratory equipment during the root phenotyping. TF, RT and GS edited the manuscript. All authors read and approved the final manuscript. Correspondence to Admas Alemu. Additional file 1: Table S1. Accession names and types, cultivated areas, seed sources and population structure of 192 Ethiopian durum wheat accessions. Phenotypic mean values of RSA traits measured for 12-day-old seedlings in Ethiopian durum wheat accessions. Inference of the true numbers of subpopulations in Ethiopian durum wheat panel. List of QTLs identified for RSA traits in Ethiopian durum wheat. Allelic distribution for root growth angle QTL-tagging SNPs in the Ethiopian durum wheat panel. Accessions are listed in ascending order for RGA. Allelic distribution for total root number QTL-tagging SNPs in the Ethiopian durum wheat panel. 
Accessions are listed in ascending order for TRN. Allelic distribution for total root length QTL-tagging SNPs in the Ethiopian durum wheat panel. Accessions are listed in ascending order for TRL. Allelic distribution for individual root weight QTL-tagging SNPs in the Ethiopian durum wheat panel. Accessions are listed in ascending order for IRW. Additional file 9: Figure S1. Introduced blocks during the root experiment in the growth chamber including accessions phenotyped at the same date and positioned shelves at the same distance from the floor under uniform light conditions. Figure S2. Bar chart with error bars of Ethiopian durum wheat cultivars and landraces for means of RSA traits. Figure S3. Box plot of the three sub-populations inferred from population structure for the mean values of RSA traits. The top and bottom of each box represent the 25th and 75th percentiles of the samples, respectively. The line in the middle of each box is the sample median. The whiskers, lines extending above and below each box, are drawn from the ends of the interquartile ranges to the farthest observations. The stars above or below the lines are outliers. Figure S4. Q-Q (quantile-quantile) plot results of the GWAS analysis for RSA traits using different models: General Linear Model with population structure (GLM + Q); Mixed Linear Model with population structure and kinship matrix (MLM + Q + K). Figure S5. Genetic map of identified RSA QTLs in Ethiopian durum wheat and previously published studies in both bread and durum wheat projected onto SNP-based tetraploid consensus map published in Maccaferri et al. (2015). RSA QTL identified in the present study are listed at the left of chromosomes with their significance level: ** = marker-wise significance of P ≤ 0.01 (− log10P ≥ 2); *** = marker-wise significance of P ≤ 0.001 (− log10P ≥ 3); and **** = experiment-wise significance of P ≤ 0.05/ marker-wise significance of P ≤ 0.0001 (− log10P ≥ 4). Black bars are for QTLs with R2 < 5%; red bars for R2 values between 5 and 10% and yellow bars for r2 > 10%. The length of bars indicates the confidence interval of each QTL and QTL cluster. The significance and colour of bars indicated is for the QTL with higher values of significance and r2 in the case of QTL clusters. RSA QTL from previously published studies in wheat have been projected on the consensus map and reported at the right side of chromosome bars in parentheses as orange-filled for durum wheat and blue-filled for bread wheat. The length of the bars represents the confidence interval of single QTL/cluster of QTL. Major RSA QTL-clusters of the present study are stated as grey-banded intervals. Alemu, A., Feyissa, T., Maccaferri, M. et al. Genome-wide association analysis unveils novel QTLs for seminal root system architecture traits in Ethiopian durum wheat. BMC Genomics 22, 20 (2021). https://doi.org/10.1186/s12864-020-07320-4 Received: 03 December 2020 Ethiopian durum wheat Plant genomics
Breast Cancer Multi-classification from Histopathological Images with Structured Deep Learning Model Zhongyi Han1, Benzheng Wei1,2, Yuanjie Zheng3, Yilong Yin4, Kejian Li2 & Shuo Li5 Scientific Reports volume 7, Article number: 4172 (2017)
Automated breast cancer multi-classification from histopathological images plays a key role in computer-aided breast cancer diagnosis and prognosis. Breast cancer multi-classification aims to identify the subordinate classes of breast cancer (ductal carcinoma, fibroadenoma, lobular carcinoma, etc.). However, breast cancer multi-classification from histopathological images faces two main challenges: (1) multi-classification is considerably more difficult than the classification of binary classes (benign and malignant), and (2) the differences among the multiple classes are subtle, owing to the broad variability of high-resolution image appearances, the high coherency of cancerous cells, and the extensive inhomogeneity of color distribution. Automated breast cancer multi-classification from histopathological images is therefore of great clinical significance, yet it has never been explored. Existing works in the literature focus only on binary classification and do not support further quantitative assessment of breast cancer. In this study, we propose a breast cancer multi-classification method using a newly proposed deep learning model. The structured deep learning model has achieved remarkable performance (average 93.2% accuracy) on a large-scale dataset, which demonstrates the strength of our method in providing an efficient tool for breast cancer multi-classification in clinical settings.
Automated breast cancer multi-classification from histopathological images is significant for clinical diagnosis and prognosis, especially with the launch of the precision medicine initiative1, 2. According to the World Cancer Report3 from the World Health Organization (WHO), breast cancer is the most common cancer among women worldwide, with high morbidity and mortality. Breast cancer accounts for 25.2% of cancer cases among women, ranking first, and for 14.7% of cancer deaths in recent years, second only to lung cancer. About half a million breast cancer patients die and nearly 1.7 million new cases arise per year, and these numbers are expected to increase significantly. Furthermore, the histopathological image is the gold standard for identifying breast cancer compared with other medical imaging modalities, e.g., mammography, magnetic resonance (MR) imaging, and computed tomography (CT). Noticeably, the choice of an optimal therapeutic schedule for breast cancer rests upon refined multi-classification. One main reason is that doctors who know the subordinate class of a breast cancer can control the metastasis of tumor cells early and design suitable therapeutic schedules according to the specific clinical behavior and prognosis of the different breast cancers. Nevertheless, manual multi-classification of breast cancer histopathological images is a major challenge, for three main reasons: (1) the professional background and rich experience of pathologists are difficult to pass on or reproduce, so primary-level hospitals and clinics suffer from an absence of skilled pathologists; (2) the tedious task is expensive and time-consuming; and (3) fatigue of pathologists may lead to misdiagnosis.
Hence, it is extremely urgent and important for the use of computer-aided breast cancer multi-classification, which can reduce the heavy workloads of pathologists and help avoid misdiagnosis4,5,6. However, automated breast cancer multi-classification still faces serious obstacles. The first obstacle is that the supervised feature engineering is inefficient and laborious with great computational burden. The initialization and processing steps of supervised feature engineering are also tedious and time-consuming. Meaningful and representative features lie at the heart of its success to multi-classify breast cancer. Nevertheless, feature engineering is an independent domain, task-related features are mostly designed by medical specialists who use their knowledge for histopathological image processing7. E.g., Zhang et al.8 applied a one class kernel principal component analysis (PCA) method based on hand-crafted features to classify benign and malignant of breast cancer histopathological images, the accuracy reached 92%. Recent years, general feature descriptors used for feature extraction have been invented, e.g., scale-invariant feature transform (SIFT)9, gray-level co-occurrence matrix (GLCM)10, histogram of oriented gradient (HOG)11, etc. However, feature descriptors extract merely insufficient features for describing histopathological images, such as low-level and unrepresentative surface features, which are not suitable for classifiers with discriminant analysis ability. There are several applications that use general feature descriptors on binary classification for histopathological images of breast cancer. Spanhol et al.12 used a breast cancer histopathological images dataset (BreaKHis), then provided a baseline of binary classification recognition rates by means of different feature descriptors and different traditional machine learning classifiers, the range of the accuracy is 80% to 85%. Based on four shape and 138 textual feature descriptors, Wang et al.13 realized accurate binary classification using a support vector machine(SVM)14 classifier. The second obstacle is that breast cancer histopathological images have huge limitations. Eight classes histopathological images of breast cancer are presented in Fig. 1. These are fine-grained high-resolution images from breast tissue biopsy slides stained with hematoxylin and eosin (H&E). Noticeably, different classes have subtle differences and cancerous cells have high coherency15, 16. The differences of same class images' resolution, contrast, and appearances are always in greater compared to different classes. In addition, histopathological fine-grained images have large variations which always result in difficulties for distinguishing breast cancers. Finally, despite such effective performance in the medical imaging analysis domain by deep learning7, existing related methods only studied on binary classification for breast cancer8, 12, 13, 17, 18; however, multi-classification has more clinical values. Eight classes of breast cancer histopathological images from BreaKHis12 dataset. There are great challenging histopathological images due to the broad variability of high-resolution image appearances, high coherency of cancerous cells, and extensive inhomogeneity of color distribution. These histopathological images were all acquired at a magnification factor of 400. 
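To make the hand-crafted baseline concrete, here is a minimal sketch of the kind of pipeline the cited binary-classification works describe: simple gray-level co-occurrence matrix (GLCM) texture statistics fed to an SVM classifier. This is an illustration only; the random images, the choice of three texture statistics, and the NumPy/scikit-learn calls are placeholder assumptions, not the exact descriptors or settings used in the cited studies.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def glcm_features(gray, levels=16):
    """Contrast, energy and homogeneity of the horizontal gray-level co-occurrence matrix."""
    # Quantize the 8-bit image to a small number of gray levels.
    q = np.clip((gray.astype(np.float64) / 256.0 * levels).astype(int), 0, levels - 1)
    left, right = q[:, :-1].ravel(), q[:, 1:].ravel()
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (left, right), 1.0)      # count co-occurrences of adjacent pixel pairs
    glcm /= glcm.sum()
    i, j = np.indices((levels, levels))
    contrast = np.sum(glcm * (i - j) ** 2)
    energy = np.sum(glcm ** 2)
    homogeneity = np.sum(glcm / (1.0 + np.abs(i - j)))
    return np.array([contrast, energy, homogeneity])

# Toy stand-ins for grayscale histopathology crops with benign/malignant labels.
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(40, 64, 64))
labels = rng.integers(0, 2, size=40)

X = np.stack([glcm_features(img) for img in images])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("toy accuracy:", clf.score(X_test, y_test))
```

The point of the sketch is the division of labor it illustrates: the features are fixed by hand before any learning happens, which is exactly the limitation the feature-learning approach described next is meant to remove.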
To provide an accurate and reliable solution for breast cancer multi-classification, we propose a comprehensive recognition method with a newly proposed class structure-based deep convolutional neural network (CSDCNN). The CSDCNN has broken through the above mentioned barriers by leveraging hierarchical feature representation, which plays a key role for accurate breast cancer multi-classification. The CSDCNN is a non-linear representation learning model that abandons feature extraction steps into feature learning, it also bypasses feature engineering that requires a hand-designed manner. The CSDCNN adopts the end-to-end training manner that can automatically learn semantic and discriminative hierarchical features from low-level to high-level. The CSDCNN is carefully designed to fully take into account the relation of feature space among intra-class and inter-class for overcoming the obstacles from various histopathological images. Particularly, the distance of feature space is a standard for measuring the similarities of images; however, the feature space distance of samples from the same class may be larger than the samples from different classes. Therefore, we formulated some feature space distance constraints integrated into CSDCNN for controlling the feature similarities of different classes of the histopathological images. The major contributions of this work can be summarized in the following aspects: An end-to-end recognition method by a novel CSDCNN model, as shown in Fig. 2, is proposed for the multi-class breast cancer classification. The model has high accuracy and can reduce the heavy workloads of pathologists and assist in the development of optimal therapeutic schedules. Automated multi-class breast cancer classification has more clinical values than binary classification and would play a key role in breast cancer diagnosis or prognosis; however, it has never been explored in literature. Overview of the integrated workflow. The overall approach of our method is composed of three stages: training, validation, and testing. The goal of the training stage is to learn the sufficient feature representation and optimize the distance of different classes' feature space. The validation stage aims to fine-tune parameters and select models of each epoch. The testing stage is designed to evaluate the performance of the CSDCNN. An efficient distance constraint of feature space is proposed to formulate the feature space similarities of histopathological images by leveraging intra-class and inter-class labels of breast cancer as prior knowledge. Therefore, the CSDCNN has excellent feature learning capabilities that can acquire more depicting features under histopathological images. To evaluate the performance of our method, two datasets that include BreaKHis12 and BreaKHis with augmentation of breast cancer histopathological images with ground truth are used. Firstly, our method is evaluated by extensive experiments on a challenging large-scale dataset - BreaKHis. Secondly, in order to evaluate the multi-classification performance more qualitatively, we utilize an augmentation method for oversampling imbalanced classes. The augmentation is done on the training set, then validation and a testing phase are used for the real world data in patient-wise. The details about the two datasets are as follows: BreaKHis BreaKHis is a challenging large-scale dataset that includes 7909 images and eight sub-classes of breast cancers. 
The source data comes from 82 anonymous patients of the Pathological Anatomy and Cytopathology (P&D) Lab, Brazil. BreaKHis is divided into benign and malignant tumors, with images at four magnification factors: 40X, 100X, 200X, and 400X. Both benign and malignant breast tumors can be sorted into different types by pathologists based on the appearance of the tumor cells under the microscope. Hence, the dataset currently contains four histopathologically distinct types of benign breast tumors: adenosis (A), fibroadenoma (F), phyllodes tumor (PT), and tubular adenoma (TA); and four malignant tumors: ductal carcinoma (DC), lobular carcinoma (LC), mucinous carcinoma (MC), and papillary carcinoma (PC)12. Images are three-channel RGB with eight-bit depth per channel and a size of 700 × 460 pixels. Table 1 shows the histopathological image distribution of the eight classes of breast cancer. Table 1 Histopathological image distribution of BreaKHis divided by magnification and class before data augmentation.
BreaKHis with augmentation In this study, BreaKHis is augmented by a data augmentation method to boost the multi-classification performance and resolve the imbalanced-class problem. Following standard practice in machine learning19, augmentation is applied only to the training set; the validation and testing phases use the real-world data, split patient-wise. In detail, we first split the whole dataset patient-wise into training/validation/testing sets, and then augmented the training examples according to the ratios of the imbalanced classes.
Reliability and generalization First, to make the results more reliable, we split the datasets patient-wise into three groups: a training set, a validation set, and a testing set. This results in 61 train/validation subjects and 21 test subjects. The training set, which accounts for 50% of the two datasets, is used for training the CSDCNN model and optimizing the connection parameters between neurons. The validation set is used for model selection, while the testing set is used to measure multi-classification accuracy and model reliability. The patients in the three subsets are non-overlapping, and all experimental results are average accuracies over five cross-validation runs. Second, to test generalization, the CSDCNN is compared with other existing works on breast cancer binary classification experiments.
Recognition rates To assess the multi-classification performance of machine learning algorithms on a medical image dataset, there are two ways of computing the results17. First, the decision is made at the patient level. Let \(N_p\) be the total number of patients, and \(N_{np}\) be the number of cancer images of patient \(P\). If \(N_{rp}\) images are correctly classified, the patient score can be defined as $$Patient\,Score=\frac{N_{rp}}{N_{np}}$$ Then the global patient recognition rate is $$Patient\,Recognition\,Rate=\frac{\sum Patient\,Score}{N_{p}}$$ Second, we evaluate the recognition rate at the image level, without considering the patient level. Let \(N_{all}\) be the number of cancer images in the validation or testing set. If \(N_r\) histopathological images are correctly classified, then the recognition rate at the image level is $$Image\,Recognition\,Rate=\frac{N_{r}}{N_{all}}$$ The overall multi-classification accuracy of our method is very high and reliable, as shown in Fig. 3.
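As a concrete illustration of the two protocols just defined, the following sketch computes the patient score, the patient-level recognition rate, and the image-level recognition rate from per-image predictions. The tiny label and prediction lists are made-up placeholders, not data from BreaKHis.

```python
from collections import defaultdict

def image_level_rate(y_true, y_pred):
    """Image recognition rate, Eq. (3): fraction of correctly classified images."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def patient_level_rate(y_true, y_pred, patient_ids):
    """Patient recognition rate, Eq. (2): mean of the per-patient scores of Eq. (1)."""
    per_patient = defaultdict(lambda: [0, 0])            # patient id -> [correct, total]
    for t, p, pid in zip(y_true, y_pred, patient_ids):
        per_patient[pid][0] += int(t == p)
        per_patient[pid][1] += 1
    scores = [c / n for c, n in per_patient.values()]     # patient score, Eq. (1)
    return sum(scores) / len(scores)

# Tiny made-up example: patient 1 has three images, patient 2 has two.
y_true      = ["DC", "DC", "DC", "LC", "LC"]
y_pred      = ["DC", "DC", "LC", "LC", "LC"]
patient_ids = [1, 1, 1, 2, 2]
print(image_level_rate(y_true, y_pred))                   # 4/5 = 0.80
print(patient_level_rate(y_true, y_pred, patient_ids))    # (2/3 + 2/2)/2 ≈ 0.83
```

The small example also shows why the two numbers can differ: the image-level rate weights every image equally, while the patient-level rate weights every patient equally regardless of how many images that patient contributed.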
The average accuracy at the patient level is 93.2%, while at the image level it is 93.8%, over all magnification factors. The validation set and testing set have almost the same accuracy, which indicates that the CSDCNN model generalizes well and avoids overfitting. The performance of the two training strategies, CSDCNN trained from scratch and CSDCNN trained with transfer learning, is shown in Fig. 4, which demonstrates that transfer learning gives better accuracy than training from scratch. Multi-classification performance with recognition rates of the CSDCNN at patient level (PL) and image level (IL). Our method takes advantage of a new network structure, fast convergence, and strong generalization capability; this is demonstrated by the validation set and testing set having almost the same accuracy. The comparison between CSDCNN training from transfer learning (TL) and from scratch (FC) at patient level (PL) and image level (IL).
The CSDCNN with the data augmentation method achieves enhanced and remarkable performance in the different comparison experiments, as shown in Table 2. In comparison with several popular CNNs, the CSDCNN achieves the best results. AlexNet20, proposed by Alex Krizhevsky, won first prize in classification and detection at the ImageNet Large-Scale Visual Recognition Challenge 2012 (ILSVRC12), and achieved about 83% accuracy in the binary classification of breast cancer histopathological images17. LeNet21 is a traditional CNN proposed by Yann LeCun that has been used for handwritten character recognition with high accuracy. Comparing the two datasets, our augmentation method improved accuracy by about 3–6% at the different magnification factors, which demonstrates that the raw available histopathological images cannot meet the requirements of CNNs. Besides, the early layers merely learn low-level features that contain only simple and obvious information, such as colors, textures, and edges. As the model goes deeper, our CSDCNN can learn high-level features that are rich in discriminative information, as shown in the feature learning process of the testing block in Fig. 2. Table 2 Multi-classification results of comparison experiments based on the raw dataset (Raw) and augmented dataset (Aug).
Even in binary classification, the CSDCNN outperforms the state-of-the-art results of existing works, as shown in Table 3. The accuracy of our method is about 10% and 7% higher than the best results of the prior methods at the patient level and image level, respectively. In particular, the average recognition rate at the patient level is enhanced to 97%. Meanwhile, the experimental results also show that the feature learning ability of our model is better than that of traditional feature descriptors, such as parameter-free threshold adjacency statistics (PFTAS)22 and the gray-level co-occurrence matrix (GLCM)10. Table 3 Our model achieves the state-of-the-art accuracy (%) in the binary classification task.
Experimental tools and time consumption The CNN models are trained on a Lenovo ThinkStation with an Intel i7 CPU and an NVIDIA Quadro K2200 GPU, using the Caffe23 framework. The training phase took about one hour and thirteen minutes on BreaKHis and about ten hours and thirteen minutes on BreaKHis with augmentation, respectively. The test phase for a single mini-batch took about 0.044 s. The training of binary classification took about 50 minutes and 10 hours and 16 minutes under the binary dataset, and the testing of a single mini-batch took about 0.053 s.
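Returning to the evaluation protocol described in the Reliability and generalization section above, the patient-wise split (61 train/validation subjects versus 21 test subjects, with no patient appearing on both sides) can be sketched with a grouped split, for example scikit-learn's GroupShuffleSplit. This is an illustrative reconstruction, not the authors' original splitting code; the random assignment of images to 82 patients is a placeholder.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
n_images = 500
patient_of_image = rng.integers(0, 82, size=n_images)   # which patient each image comes from
image_ids = np.arange(n_images)

# GroupShuffleSplit holds out whole groups (patients), never individual images.
splitter = GroupShuffleSplit(n_splits=1, test_size=21 / 82, random_state=0)
trainval_idx, test_idx = next(splitter.split(image_ids, groups=patient_of_image))

# No patient should appear on both sides of the split.
assert set(patient_of_image[trainval_idx]).isdisjoint(patient_of_image[test_idx])
print(len(set(patient_of_image[trainval_idx])), "train/val patients,",
      len(set(patient_of_image[test_idx])), "test patients")
```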
Data augmentation algorithms were executed in Matlab 2016a. To our knowledge, this is the first time that automated multi-class classification of breast cancer has been investigated on histopathological images, and the first time the CSDCNN model is proposed; it achieved reliable and accurate recognition rates. The performance reported in the section above, obtained on a challenging dataset, confirms that our method is capable of learning higher-level discriminating features and has the best accuracy in multi-class breast cancer classification. Although high-resolution breast cancer histopathological images have fine-grained appearances that bring great difficulty to the multi-classification task, the discriminative power of the CSDCNN is better than that of traditional models. Furthermore, the performance of the CSDCNN is very stable across the multi-magnification image groups. The model therefore has great applicable value in the clinical diagnosis and prognosis of breast cancer. Since primary-level hospitals and clinics face a desperate shortage of professional pathologists, our work could be extended into an automated breast cancer multi-classification system that provides scientific, objective, and concrete indexes.
It is a great advantage that the CSDCNN classifies whole slide images (WSI). The CSDCNN fully preserves the global information of breast cancer histopathological images and avoids the limitations of patch extraction methods. Although patch-based methods are common17, 24, 25, they bring an obvious disadvantage: pathologists have to mark the cancerous region, because the region of cancerization is only a fraction of a breast cancer histopathological image. For example, Fig. 5 shows high-resolution breast cancer histopathological images; the areas delimited by the yellow boxes represent the regions of interest (RoI), which are usually solely the cancerous regions. However, when the patches are smaller than the WSI, non-cancerous patches will bias the parameter learning, that is, deep models will treat the non-cancerous region as a cancerous region during training. Hence, only the areas delimited by the yellow boxes meet the needs of deep learning models. On a large-scale medical image dataset, pathologists would waste much time and effort, and the labeling errors would increase the noise in the training sets. Therefore, we deliberately use the WSI as the model input, which reduces the workload of pathologists and improves the efficiency of clinical diagnosis. High-resolution breast cancer histopathological images labeled by pathologists. In practice, the region of cancerization is only a fraction of the histopathological image. The area delimited by the yellow boxes represents the region of interest labeled by pathologists, which is usually solely the region of cancerization.
Multi-classification has more clinical value than binary classification because it provides more details about the patient's health condition, which relieves the workload of pathologists and also assists doctors in making more optimal therapeutic schedules. Furthermore, although CNNs, inspired by Kunihiko Fukushima26, 27, have been used for medical image analysis, e.g., image segmentation28, 29 and image fusion and registration30,31,32, there is still a lot of room for improvement on medical data in comparison with the computer vision domain7, 33,34,35,36.
Therefore, in this study, an optimal training strategy based on transfer learning from natural images is used to fine-tune the multi-classification model, which is a common manner for deep learning model used in medical imaging analysis. The overall approach of our method is designed in a learning-based and data-driven multi-classification manner. The CSDCNN is achieving learning-based manner by structured formulation and prior knowledge of class structure, which can automatically learn hierarchical feature representations. The CSDCNN is achieving data-driven manner by the augmentation method, which reinforces the multi-classification method to obtain more reliable and efficient performance. Therefore, the overall method develops an end-to-end recognition framework. The CSDCNN architecture The CSDCNN is carefully designed as a deep model with multiple hidden layers that learn inherent rules and features of multi-class breast cancer. The CSDCNN is layer-by-layer designed as follows: Input layer: this layer loads whole breast cancer histopathological images and produces outputs that feed to the first convolutional layer. The input layer is designed to resize the histopathological images as 256 × 256 with mean subtraction. The input images are composed of three 2D arrays in the 8-bit depth of red-green-blue channels. Convolutional layer: this layer extracts features by computing the output of neurons that connect to local regions of the input layer or previous layer. The set of weights which is convolved with the input is called filter or kernel. The size of every filter is 3 × 3, 5 × 5 or 7 × 7. Each neuron is sparsely connected to the area in the previous layer. The distance between the applications of filters is called stride. The hyperparameter of stride is set to 2 that is smaller than the filter size. The convolution kernel is applied in overlapping windows and initializes from a Gaussian distribution with a standard deviation of 0.01. The last convolutional layer is composed of 64 filters that initialize from Gaussian distributions with a standard deviation of 0.0001. The values of all local weights are passed through ReLU (rectified linear activation). Pooling layer: the role of the pooling layer is to down-sample feature map by reducing similar feature points into one. The purposes of the pooling layers are dimension reduction, noise drop, and receptive field amplification. The outputs of pooling layers keep scale-invariance and reduce the number of parameters. Because the relative positions of each feature are coarse-graining, the last pooling layer uses the mean-pooling strategy with a 7 × 7 receptive fields and a stride of 1. The other pooling layers use the max-pooling strategy with a 3 × 3 receptive fields and a stride of 2. Specifically, in comparison with various off-the-shelf" network, GoogLeNet35 is picked out as our basis network. GoogLeNet is the first prize of multi-classification and detection in ILSVRC14. GoogLeNet has significantly improved the classification performance with 22 layers deep network and novel inception modules. Constraint formulation High precision multi-classifier with loss is the last and crucial step in this study. Softmax with loss is used as a multi-class classifier that is extended from the logistic regression algorithm in the task of binary classification to multi-classification. Mathematically, the training set includes N histopathological images: \({\{{x}_{i},{y}_{i}\}}_{i=1}^{N}\). 
x i is the first i image, y i is the label of x i , and \({y}_{i}\in \mathrm{\{1,2,}\cdots ,k\}\), k ≥ 2. In this study, the class k of breast cancer is eight. For a concrete x i , we use the hypothesis function to estimate the probability of the x i belonging to class j, the probability value is p(y i = j|x i ). Then, the hypothesis function h θ (x i ) is $${h}_{\theta }({x}_{i})=(\begin{array}{c}p({y}_{i}=\mathrm{1|}{x}_{i};\theta )\\ p({y}_{i}=\mathrm{2|}{x}_{i};\theta )\\ \vdots \\ p({y}_{i}=k|{x}_{i};\theta )\end{array})=\frac{1}{\sum _{j=1}^{k}{e}^{{\theta }_{j}^{T}{x}_{i}}}(\begin{array}{c}{e}^{{\theta }_{1}^{T}{x}_{i}}\\ {e}^{{\theta }_{2}^{T}{x}_{i}}\\ \vdots \\ {e}^{{\theta }_{k}^{T}{x}_{i}}\end{array})$$ \(\frac{1}{{\sum }_{j=1}^{k}{e}^{{\theta }_{j}^{T}{x}_{i}}}\) represents the normalization computation for the probability distribution, the sum of all probabilities is 1. Besides, θ is the parameter of the softmax classifier. Finally, The loss function is defined as follows: $$J(x,y,\theta )=-\frac{1}{N}[\sum _{i=1}^{N}\sum _{j=1}^{k}1\{{y}_{i}=j\}\mathrm{log}\,\frac{{e}^{{\theta }_{j}^{T}{x}_{i}}}{\sum _{j=1}^{k}{e}^{{\theta }_{j}^{T}{x}_{i}}}]$$ Where 1{y i = j} is a indicator function, and 1{y i = j} is defined as $$1\{{y}_{i}=j\}=\{\begin{array}{cc}0 & \,{y}_{i}\notin j\,,\\ 1 & \,{y}_{i}\in j\,\mathrm{.}\end{array}$$ The loss function in equation (5) measures the degree of classification error. During training, in order to converge the error to zero, the model continues to adjust network parameters. However, in fine-grained multi-classification, equation (5) aims to squeeze the images from the class into a corner in the feature space. Therefore, the intra-class variance is not preserved15. To address this limitation, we improve the loss function of softmax classifier by formulating a novel distance constraint for feature space15. Theoretically, given four different classes of breast cancer histopathological images: x i , \({p}_{i}^{+}\), \({p}_{i}^{-}\), and n i as input, where x i is a specific class image, \({p}_{i}^{+}\) is the same sub-class as x i , \({p}_{i}^{-}\) represent the same intra-class as x i , and n i represents the inter-class. Ideally, hierarchical relation among the four images can be described as follows: $$D({x}_{i},{p}_{i}^{+})+{m}_{1} < D({x}_{i},{p}_{i}^{-})+{m}_{2} < D({x}_{i},{n}_{i})$$ Where D is the Euclidean distance of two classes in the feature space. m 1 and m 2 are hyperparameters, which control the margin of feature spaces. Then the loss function is composed with the hinge loss function: $$\begin{array}{rcl}{E}_{t}({x}_{i},{p}_{i}^{+},{p}_{i}^{-},{n}_{i},{m}_{1},{m}_{2}) & = & \frac{1}{2N}\sum _{i=1}^{N}max\{\mathrm{0,}D({x}_{i},{p}_{i}^{+})-D({x}_{i},{p}_{i}^{-})+{m}_{1}-{m}_{2}\}\\ & & +\frac{1}{2N}\sum _{i=1}^{N}max\{\mathrm{0,}D({x}_{i},{p}_{i}^{-})-D({x}_{i},{n}_{i})+{m}_{2}\}\end{array}$$ Where m 1 < m 2. Meanwhile, the output of CSDCNN is inserted into the softmax loss layer to compute the classification error J(x, y, θ). Finally, we can rewrite the novel loss function by combining equation (5) and equation (8) as follows: $$E=\lambda J(x,y,\theta )+\mathrm{(1}-\lambda ){E}_{t}({x}_{i},{p}_{i}^{+},{p}_{i}^{-},{n}_{i},{m}_{1},{m}_{2})$$ Where λ is the weight factor controlling the trade-off between two types of losses, we control 0 < λ < 1, and the weight term λ is finally set to 0.5 which achieved optimal performance by cross validation. We optimize equation (9) by a standard stochastic gradient descent with momentum. 
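The combined objective of equation (9) can be sketched as follows. This is a hedged illustration in PyTorch, not the authors' implementation: the original model is built in Caffe on a GoogLeNet-style backbone, whereas the tiny embedding network, the concrete margin values for \(m_1 < m_2\), and the random mini-batch below are placeholder assumptions. The weight \(\lambda = 0.5\) follows the value stated in the text, and the cross-entropy term is computed on the anchor branch for simplicity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbedNet(nn.Module):
    """Stand-in backbone: maps an image to an L2-normalised embedding plus class scores."""
    def __init__(self, n_classes=8, dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim),
        )
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, x):
        emb = F.normalize(self.features(x), dim=1)   # the \ell_2 normalisation step
        return emb, self.classifier(emb)

def hierarchical_loss(anchor, pos_same_sub, pos_same_super, neg, logits, labels,
                      m1=0.1, m2=0.2, lam=0.5):
    """Eq. (9): lam * softmax cross-entropy + (1 - lam) * hinge constraint of Eq. (8)."""
    d = lambda a, b: (a - b).pow(2).sum(dim=1).sqrt()       # Euclidean distance D
    e_t = (0.5 * F.relu(d(anchor, pos_same_sub) - d(anchor, pos_same_super) + m1 - m2).mean()
           + 0.5 * F.relu(d(anchor, pos_same_super) - d(anchor, neg) + m2).mean())
    ce = F.cross_entropy(logits, labels)                    # Eq. (5) softmax loss
    return lam * ce + (1.0 - lam) * e_t

# Toy forward/backward pass over a random mini-batch of 256x256 RGB inputs:
# the four branches share the same network weights.
net = EmbedNet()
xs = [torch.randn(4, 3, 256, 256) for _ in range(4)]       # x_i, p_i^+, p_i^-, n_i
(anchor, logits), (pp, _), (pm, _), (ng, _) = [net(x) for x in xs]
labels = torch.randint(0, 8, (4,))
loss = hierarchical_loss(anchor, pp, pm, ng, logits, labels)
loss.backward()
print(float(loss))
```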
Workflow overview Our overall workflow can be understood as three top-down multi-classification stages, as shown in Fig. 2. We describe the steps as follows: Training stage: the goal of the training stage is to learn the sufficient feature representation and optimize the distance of different classes' feature space. After importing four breast cancer histopathological images (\({x}_{i},{p}_{i}^{+},{p}_{i}^{-},{n}_{i}\)) at the same time, the CSDCNN first learns the hierarchical feature representation during training and share the same parameters of weights and biases. The high-level feature maps then enter into \({\ell }_{2}\) normalizations. The outputs of the four branches are transmitted to maximize the Euclidean distance of inter-class and minimize the distance of intra-class. Finally, the two types losses are optimized jointly by a stochastic gradient descent method. Validation stage: the validation stage aims to fine-tune hyperparameters, avoid overfitting, and select the best model between each epoch for testing. The validation process presented the optimal multi-classification model of the breast cancer histopathological images, as illustrated in the validation block of Fig. 2. Testing stage: the testing stage aims to evaluate the performance of the CSDCNN. Feature learning process of CSDCNN is shown in the testing block of Fig. 2. After the first step of the input layer, low-level features that include colors, textures, shape can be learned by the former layers. Via repeated iterations of high-level layers, discriminative semantic features can be extracted and inserted into a trainable classifier. Finally, We tried two training strategies. The first one is training the "CSDCNN from scratch", that is, directly train CSDCNN on BreakHis dataset. Another one is based on transfer learning that initially pre-trains CSDCNN on imagenet37, then fine-tunes it on BreakHis. The "CSDCNN from scratch" performed worse on recognition rates, so we chose valuable transfer learning as the final strategy. In addition, the base learning rate of CSDCNN was set to 0.01 and the number of training iterations was 5K, which had the best accuracy from the validation and test set. Data augmentation We utilize multi-scale data augmentation and over-sampling methods to avoid overfitting and unbalanced classes problem. The training set is augmented by 1) intensity variation between −0.1 to 0.1, 2) rotation with −90° to 90°, 3) flip with level and vertical direction, and 4) translation with ±20 pixels. We also adopt a random combination of intensity variation, rotation, flip, and translation. Since the classes of breast cancer are imbalanced due to a large amount of ductal carcinoma, which meets the Gaussian distribution and clinical regularity, we use an over-sampling manner by the above augmentation methods to control the number of breast cancer histopathological images of each class. Collins, F. S. & Varmus, H. A new initiative on precision medicine. The New England journal of medicine 372 9, 793–5 (2015). Reardon, S. Precision-medicine plan raises hopes. Nature 517 7536, 540 (2015). Stewart, B. W. & Wild, C. World cancer report 2014. international agency for research on cancer. World Health Organization 505 (2014). Zheng, Y. et al. De-enhancing the dynamic contrast-enhanced breast mri for robust registration. In International Conference on Medical Image Computing and Computer-Assisted Intervention, 933–941 (Springer, 2007). Cai, Y. et al. 
Multi-modal vertebrae recognition using transformed deep convolution network. Computerized Medical Imaging and Graphics 51, 11–19 (2016). Zheng, Y., Wei, B., Liu, H., Xiao, R. & Gee, J. C. Measuring sparse temporal-variation for accurate registration of dynamic contrast-enhanced breast mr images. Computerized Medical Imaging and Graphics 46, 73–80 (2015). Shen, D., Wu, G. & Suk, H.-I. Deep learning in medical image analysis. Annual Review of Biomedical Engineering 19 (2016). Zhang, Y., Zhang, B., Coenen, F., Xiao, J. & Lu, W. One-class kernel subspace ensemble for medical image classification. EURASIP Journal on Advances in Signal Processing 2014, 1–13 (2014). Lowe, D. G. Object recognition from local scale-invariant features. In Computer vision, 1999. The proceedings of the seventh IEEE international conference on, vol. 2, 1150–1157 (IEEE, 1999). Haralick, R. M., Shanmugam, K. et al. Textural features for image classification. IEEE Transactions on systems, man, and cybernetics 610–621 (1973). Dalal, N. & Triggs, B. Histograms of oriented gradients for human detection. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), vol. 1, 886–893 (IEEE, 2005). Spanhol, F., Oliveira, L., Petitjean, C. & Heutte, L. A dataset for breast cancer histopathological image classification. IEEE Transactions on Biomedical Engineering (TBME) 63(7), 1455–1462 (2016). Wang, P., Hu, X., Li, Y., Liu, Q. & Zhu, X. Automatic cell nuclei segmentation and classification of breast cancer histopathology images. Signal Processing 122, 1–13 (2016). Suykens, J. A. & Vandewalle, J. Least squares support vector machine classifiers. Neural processing letters 9, 293–300 (1999). Zhang, X., Zhou, F., Lin, Y. & Zhang, S. Embedding label structures for fine-grained feature representation. arXiv preprint arXiv:1512.02895 (2015). Wang, J. et al. Learning fine-grained image similarity with deep ranking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 1386–1393 (2014). Spanhol, F. A., Oliveira, L. S., Petitjean, C. & Heutte, L. Breast cancer histopathological image classification using convolutional neural networks. In International Joint Conference on Neural Networks (2016). Bayramoglu, N., Kannala, J. & Heikkilä, J. Deep learning for magnification independent breast cancer histopathology image classification. In 2016 International Conference on Pattern Recognition (ICPR), 2441–2446 (2016). Wong, S. C., Gatt, A., Stamatescu, V. & McDonnell, M. D. Understanding data augmentation for classification: when to warp? In Digital Image Computing: Techniques and Applications (DICTA), 2016 International Conference on, 1–6 (IEEE, 2016). Krizhevsky, A., Sutskever, I. & Hinton, G. E. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, 1097–1105 (2012). LeCun, Y. et al. Comparison of learning algorithms for handwritten digit recognition. In International conference on artificial neural networks. vol. 60, 53–60 (1995). Hamilton, N. A., Pantelic, R. S., Hanson, K. & Teasdale, R. D. Fast automated cell phenotype image classification. BMC bioinformatics 8, 1 (2007). Jia, Y. et al. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the 22nd ACM international conference on Multimedia, 675–678 (ACM, 2014). Litjens, G. et al. Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis. Scientific reports 6 (2016). 
Wang, D., Khosla, A., Gargeya, R., Irshad, H. & Beck, A. H. Deep learning for identifying metastatic breast cancer. arXiv preprint arXiv:1606.05718 (2016). Hubel, D. H. & Wiesel, T. N. Receptive fields of single neurones in the cat's striate cortex. The Journal of physiology 148, 574–591 (1959). Fukushima, K. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological cybernetics 36, 193–202 (1980). Zhang, W. et al. Deep convolutional neural networks for multi-modality isointense infant brain image segmentation. NeuroImage 108, 214–224 (2015). Kleesiek, J. et al. Deep mri brain extraction: a 3d convolutional neural network for skull stripping. NeuroImage 129, 460–469 (2016). Suk, H.-I., Lee, S.-W., Shen, D. & Initiative, A. D. N. et al. Hierarchical feature representation and multimodal fusion with deep learning for ad/mci diagnosis. NeuroImage 101, 569–582 (2014). Wu, G., Kim, M., Wang, Q., Munsell, B. C. & Shen, D. Scalable high-performance image registration framework by unsupervised deep feature representations learning. IEEE Trans. Biomed. Engineering 63, 1505–1516 (2016). Chen, H., Dou, Q., Wang, X., Qin, J. & Heng, P. A. Mitosis detection in breast cancer histology images via deep cascaded networks. In Thirtieth AAAI Conference on Artificial Intelligence (2016). Hinton, G. E. & Salakhutdinov, R. R. Reducing the dimensionality of data with neural networks. Science 313, 504–507 (2006). LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436–444 (2015). Szegedy, C. et al. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 1–9 (2015). He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385 (2015). Deng, J. et al. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, 248–255 (IEEE, 2009). Boyd, S. P., El Ghaoui, L., Feron, E. & Balakrishnan, V. Linear matrix inequalities in system and control theory, vol. 15 (SIAM, 1994). Weinberger, K. Q., Blitzer, J. & Saul, L. K. Distance metric learning for large margin nearest neighbor classification. In Advances in neural information processing systems, 1473–1480 (2005). Breiman, L. Random forests. Machine learning 45, 5–32 (2001). This work was made possible through support from Natural Science Foundation of China (NSFC) (No.61572300, U1201258), Natural Science Foundation of Shandong Province in China (ZR2015FM010, ZR2014FM001) and Taishan Scholar Program of Shandong Province in China (TSHW201502038), Project of Shandong Province Higher Educational Science and Technology Program in China (No. J15LN20), Project of Shandong Province Medical and Health Technology Development Program in China (No. 2016WS0577). 
College of Science and Technology, Shandong University of Traditional Chinese Medicine, Jinan, 250355, China: Zhongyi Han & Benzheng Wei. Institute of Evidence Based Traditional Chinese Medicine, Shandong University of Traditional Chinese Medicine, Jinan, 250355, China: Benzheng Wei & Kejian Li. School of Information Science and Engineering, Shandong Normal University, Jinan, 250014, China: Yuanjie Zheng. School of Computer Science and Technology, Shandong University, Jinan, 250100, China: Yilong Yin. Department of Medical Imaging, Western University, London, N6A 4V2, Canada: Shuo Li.
Z.H. developed the methods for image processing and data processing, built the multi-classification deep models and wrote this manuscript. B.W. supervised the work and was involved in setting up the experimental design. Y.Z., Y.Y., K.L. and S.L. gave suggestions for this research and revised the manuscript. All authors reviewed the manuscript. Correspondence to Benzheng Wei.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Han, Z., Wei, B., Zheng, Y. et al. Breast Cancer Multi-classification from Histopathological Images with Structured Deep Learning Model. Sci Rep 7, 4172 (2017). https://doi.org/10.1038/s41598-017-04075-z. Received: 01 February 2017.
The automorphism group of a minimal shift of stretched exponential growth JMD Home This Volume Boundary unitary representations—right-angled hyperbolic buildings 2016, 10: 439-481. doi: 10.3934/jmd.2016.10.439 Smooth diffeomorphisms with homogeneous spectrum and disjointness of convolutions Philipp Kunde 1, Department of Mathematics, University of Hamburg, Bundesstraße 55, 20146 Hamburg, Germany Received March 2015 Revised July 2016 Published October 2016 On any smooth compact connected manifold $M$ of dimension $m\geq 2$ admitting a smooth non-trivial circle action $\mathcal S = \left\{S_t\right\}_{t\in \mathbb{S}^1}$ and for every Liouville number $\alpha \in \mathbb{S}^1$ we prove the existence of a $C^\infty$-diffeomorphism $f \in \mathcal{A}_{\alpha} = \overline{\left\{h \circ S_{\alpha} \circ h^{-1} \;:\;h \in \text{Diff}^{\,\,\infty}\left(M,\nu\right)\right\}}^{C^\infty}$ with a good approximation of type $\left(h,h+1\right)$, a maximal spectral type disjoint with its convolutions and a homogeneous spectrum of multiplicity two for the Cartesian square $f\times f$. This answers a question of Fayad and Katok (10,[Problem 7.11]). The proof is based on a quantitative version of the approximation by conjugation-method with explicitly defined conjugation maps and tower elements. Keywords: Smooth ergodic theory, homogeneous spectrum, disjointness of convolutions, periodic approximation.. Mathematics Subject Classification: Primary: 37A05, 37A30, 37C40; Secondary: 37C0. Citation: Philipp Kunde. Smooth diffeomorphisms with homogeneous spectrum and disjointness of convolutions. Journal of Modern Dynamics, 2016, 10: 439-481. doi: 10.3934/jmd.2016.10.439 O. N. Ageev, On ergodic transformations with homogeneous spectrum, J. Dynam. Control Systems, 5 (1999), 149-152. doi: 10.1023/A:1021701019156. Google Scholar O. N. Ageev, The homogeneous spectrum problem in ergodic theory, Invent. Math., 160 (2005), 417-446. doi: 10.1007/s00222-004-0422-z. Google Scholar D. V. Anosov and A. Katok, New examples in smooth ergodic theory. Ergodic diffeomorphisms, Trudy Moskov. Mat. Obšč., 23 (1970), 3-36. Google Scholar M. Benhenda, Non-standard smooth realization of shifts on the torus, J. Modern Dynamics, 7 (2013), 329-367. Google Scholar R. Berndt, Einführung in die symplektische Geometrie, Friedr. Vieweg & Sohn, Braunschweig, 1998. doi: 10.1007/978-3-322-80215-6. Google Scholar F. Blanchard and M. Lemańczyk, Measure-preserving diffeomorphisms with an arbitrary spectral multiplicity, Topol. Methods Nonlinear Anal., 1 (1993), 275-294. Google Scholar I. P. Cornfeld, S. V. Fomin and Ya. G. Sinaĭ, Ergodic Theory, Springer-Verlag, New York, 1982. doi: 10.1007/978-1-4615-6927-5. Google Scholar G. M. Constantine and T. H. Savits, A multivariate Faà di Bruno formula with applications, Trans. Amer. Math. Soc., 348 (1996), 503-520. doi: 10.1090/S0002-9947-96-01501-2. Google Scholar A. Danilenko, A survey on spectral multiplicities of ergodic actions, Ergodic Theory Dynam. Systems, 33 (2013), 81-117. doi: 10.1017/S0143385711000800. Google Scholar B. Fayad and A. Katok, Constructions in elliptic dynamics, Ergodic Theory Dynam. Systems, 24 (2004), 1477-1520. doi: 10.1017/S0143385703000798. Google Scholar B. Fayad and M. Saprykina, Weak mixing disc and annulus diffeomorphisms with arbitrary Liouville rotation number on the boundary, Ann. Sci. École Norm. Sup. (4), 38 (2005), 339-364. doi: 10.1016/j.ansens.2005.03.004. Google Scholar B. Fayad, M. Saprykina and A. 
Windsor, Non-standard smooth realizations of Liouville rotations, Ergodic Theory Dynam. Systems, 27 (2007), 1803-1818. doi: 10.1017/S0143385707000314. Google Scholar R. Gunesch and A. Katok, Construction of weakly mixing diffeomorphisms preserving measurable Riemannian metric and smooth measure, Discrete Contin. Dynam. Systems, 6 (2000), 61-88. doi: 10.3934/dcds.2000.6.61. Google Scholar G. R. Goodson, A survey of recent results in the spectral theory of ergodic dynamical systems, J. Dynam. Control Systems, 5 (1999), 173-226. doi: 10.1023/A:1021726902801. Google Scholar B. Hasselblatt and A. Katok, Introduction to the Modern Theory of Dynamical Systems, Cambridge University Press, Cambridge, 1995. doi: 10.1017/CBO9780511809187. Google Scholar A. Katok, Bernoulli diffeomorphisms on surfaces, Ann. of Math. (2), 110 (1979), 529-547. doi: 10.2307/1971237. Google Scholar A. Katok, Combinatorical Constructions in Ergodic Theory and Dynamics, American Mathematical Society, Providence, RI, 2003. doi: 10.1090/ulect/030. Google Scholar J. Kwiatkowski and M. Lemańczyk, On the multiplicity function of ergodic group extensions. II, Studia Math., 116 (1995), 207-214. Google Scholar A. Kriegl and P. Michor, The Convenient Setting of Global Analysis, American Mathematical Society, Providence, RI, 1997. doi: 10.1090/surv/053. Google Scholar A. Katok and A. Stepin, Approximations in ergodic theory, Russ. Math. Surveys, 22 (1967), 77-102. doi: 10.1070/RM1967v022n05ABEH001227. Google Scholar A. Katok and A. Stepin, Metric properties of measure preserving homeomorphisms, Russ. Math. Surveys, 25 (1970), 191-220. doi: 10.1070/RM1970v025n02ABEH003793. Google Scholar M. G. Nadkarni, Spectral Theory of Dynamical Systems, Birkhäuser Verlag, Basel, 1998. doi: 10.1007/978-3-0348-8841-7. Google Scholar H. Omori, Infinite Dimensional Lie Transformation Groups, Springer-Verlag, Berlin-New York, 1974. Google Scholar V. I. Oseledets, An automorphism with simple continuous spectrum not having the group property, Mat. Zametki, 5 (1969), 323-326. Google Scholar V. V. Ryzhikov, Transformations having homogeneous spectra, J. Dynam. Control Systems, 5 (1999), 145-148. doi: 10.1023/A:1021748902318. Google Scholar V. V. Ryzhikov, Homogeneous spectrum, disjointness of convolutions and mixing properties of dynamical systems, Selected Russian Math., 1 (1999), 13-24. Google Scholar V. V. Ryzhikov, On the spectral and mixing properties of rank-1 constructions in ergodic theory, Doklady Mathematics, 74 (2006), 545-547. Google Scholar A. M. Stepin, Properties of spectra of ergodic dynamical systems with locally compact time, Dokl. Akad. Nauk SSSR, 169 (1966), 773-776. Google Scholar A. M. Stepin, Spectral properties of generic dynamical systems, Math. USSR Izv., 29 (1987), 159-192. doi: 10.1070/IM1987v029n01ABEH000965. Google Scholar
Imagine having to calculate the average of something that is constantly changing, like the price of gas. Normally, when calculating the average of a set of numbers, you add them all up and divide by the total amount of numbers. But how can you do this when prices change every month, week, day, or at numerous points throughout the day? How can you choose which prices are included in calculating the average? If you have a function for the price of gas and how it changes over time, this is a situation where the Average Value of a Function can be very helpful. Definition of the Average Value of a Function You might be familiar with the concept of average. Typically, an average is calculated by adding up numbers and dividing by the total amount of numbers. The average value of a function in Calculus is a similar idea. The average value of a function is the height of the rectangle that has an area that is equivalent to the area under the curve of the function. If you look at the picture below, you know already that the integral of the function is all of the area between the function and the \(x\)-axis. The rectangle has the same area as the area below the curve This idea might sound arbitrary at first. How is this rectangle related to an average? The average involves dividing by the number of values, and how do you tell how many values are involved here? Average Value of a Function Over an Interval When talking about the average value of a function you need to state over which interval. This is because of two reasons: You need to find the definite integral over the given interval. You need to divide the above integral by the length of the interval. To find the average value of a function, instead of adding up numbers you need to integrate, and rather than dividing by the number of values you divide by the length of the interval. \[ \begin{align} \text{Adding values} \quad &\rightarrow \quad \text{Integration} \\ \text{Number of values} \quad &\rightarrow \quad \text{Length of the interval} \end{align} \] Using the length of the interval makes sense because intervals have an infinite number of values, so it is more appropriate to use the length of the interval instead. Formula for the Average Value of a Function As stated before, the average value of a function \(f(x)\) over the interval \([a,b]\) is obtained by dividing the definite integral \[ \int_a^b f(x)\,\mathrm{d}x\] by the length of the interval. The average value of the function is often written \(f_{\text{avg}} \). So \[ f_{\text{avg}} = \frac{1}{b-a}\int_a^b f(x)\, \mathrm{d}x.\] Please read our Evaluating Definite Integrals if you need a refresher on integration! Calculus Behind the Average Value of a Function Where does the formula for the average value of a function come from? Recall the Mean Value Theorem for integrals, which states that if a function \(f(x)\) is continuous on the closed interval \([a,b]\), then there is a number \(c\) such that \[ \int_a^b f(x) \, \mathrm{d}x = f(c)(b-a).\] You can see the derivation for the Mean Value Theorem for Integrals in the article! If you simply divide each side of the equation by \(b-a\) to solve for \(f(c)\), you obtain the formula for the average value of a function: \[ f(c)=\frac{1}{b-a} \int_a^b f(x) \, \mathrm{d}x.\] Examples of the Average Value of a Function An economist finds that the gas prices from 2017 to 2022 can be described by the function \[f(x) = 1.4^x.\] Here, \( f \) is measured in dollars per gallon, and \(x\) represents the number of years since 2017. 
Find the average price of gas per gallon between 2017 and 2022. In order to use the formula for the average value of a function you first need to identify the interval. Since the function measures the years since 2017, then the interval becomes \( [0,5],\) where 0 represents 2017 and 5 represents 2022. Next, you will need to find the definite integral \[\int_0^5 1.4^x\,\mathrm{d}x.\] Begin by finding its antiderivative: \[ \int 1.4^x\,\mathrm{d}x= \frac{1}{\ln{1.4}} 1.4^x,\] and then use the Fundamental Theorem of Calculus to evaluate the definite integral, giving you \[ \begin{align} \int_0^5 1.4^x\,\mathrm{d}x &=\left( \frac{1}{\ln{1.4}} 1.4^5 \right) - \left( \frac{1}{\ln{1.4}} 1.4^0 \right) \\ &= \frac{1.4^5-1}{\ln{1.4}} \\ &= 13.012188. \end{align} \] Now that you found the value of the definite integral, you divide by the length of the interval, so \[ \begin{align} f_{\text{avg}} &= \frac{13.012188}{5} \\ &= 2.6024376. \end{align}\] This means that the average price of gas between 2017 and 2022 is $2.60 per gallon. Take a look at a graphical representation of the problem: Graphical representation of the average value of the price of the gas The rectangle represents the total area under the curve of \(f(x)\). The rectangle has a width of \(5\), which is the interval of integration, and a height equal to the average value of the function, \(2.6\). Sometimes the average value of a function will be negative. Find the average value of \[ g(x) = x^3 \] in the interval \( [-2,1].\) This time the interval is given in a straightforward way, so begin by finding the indefinite integral \[ \int x^3 \, \mathrm{d}x, \] which you can do by using the Power Rule, to find that \[ \int x^3 \, \mathrm{d}x = \frac{1}{4}x^4.\] Next, use the Fundamental Theorem of Calculus to evaluate the definite integral. This gives you \[ \begin{align} \int_{-2}^1 x^3 \, \mathrm{d}x &= \left( \frac{1}{4}(1)^4 \right) - \left( \frac{1}{4} (-2)^4 \right) \\ &= \frac{1}{4} - 4 \\ &= -\frac{15}{4}. \end{align} \] Finally, divide the value of the definite integral by the length of the interval, so \[ \begin{align} g_{\text{avg}} &= \frac{1}{1-(-2)}\left(-\frac{15}{4} \right) \\ &= -\frac{15}{12} \\ &= - \frac{5}{4}. \end{align}\] Therefore, the average value of \( g(x) \) in the interval \( [-2,1] \) is \( -\frac{5}{4}.\) It is also possible that the average value of a function is zero! Find the average value of \(h(x) = x \) on the interval \( [-3,3].\) Begin by using the Power Rule to find the indefinite integral, that is \[ \int x \, \mathrm{d}x = \frac{1}{2}x^2.\] Knowing this, you can evaluate the definite integral, so \[ \begin{align} \int_{-3}^3 x\, \mathrm{d}x &= \left( \frac{1}{2}(3)^2\right)-\left(\frac{1}{2}(-3)^2\right) \\ &= \frac{9}{2}-\frac{9}{2} \\ &= 0. \end{align}\] Since the definite integral is equal to 0, you will also get 0 after dividing by the length of the interval, so \[ h_{\text{avg}}=0.\] You can also find the average value of a trigonometric function. Please check out our article about Trigonometric Integrals if you need a refresher. 
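Before moving on, here is a quick numerical check of the gas-price and cubic examples above. This is only a sketch: it assumes SciPy is available, and the helper name average_value is mine, not part of the original text.

from scipy.integrate import quad

def average_value(f, a, b):
    # f_avg = 1/(b - a) times the definite integral of f from a to b
    integral, _ = quad(f, a, b)
    return integral / (b - a)

# Gas-price example: f(x) = 1.4**x on [0, 5]
print(average_value(lambda x: 1.4**x, 0, 5))   # about 2.6024
# Cubic example: g(x) = x**3 on [-2, 1]
print(average_value(lambda x: x**3, -2, 1))    # -1.25, that is -5/4

Running it reproduces the values \(2.60\) and \(-\frac{5}{4}\) found by hand above.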
Find the average value of \[f(x) = \sin(x)\] over the interval \( \left[ 0, \frac{\pi}{2} \right].\) You will first need to find the definite integral \[ \int_0^{\frac{\pi}{2}} \sin{x} \, \mathrm{d}x,\] so find its antiderivative \[ \int \sin{x} \, \mathrm{d}x = -\cos{x},\] and use the Fundamental Theorem of Calculus to evaluate the definite integral, that is \[ \begin{align} \int_0^{\frac{\pi}{2}} \sin{x} \, \mathrm{d}x &= \left(-\cos{\frac{\pi}{2}} \right) - \left(-\cos{0} \right) \\ &= -0-\left( -1 \right) \\ &= 1. \end{align}\] Finally, divide by the length of the interval, so \[ \begin{align} f_{\text{avg}} &= \frac{1}{\frac{\pi}{2}}\\ &= \frac{2}{\pi}. \end{align}\] This means that the average value of the sine function over the interval \( \left[ 0, \frac{\pi}{2} \right]\) is \(\frac{2}{\pi},\) which is about \(0.63.\) Graphical representation of the average value of the sine function in the interval \( [0,\frac{\pi}{2}].\) Average Value of a Function - Key takeaways The average value of a function \(f(x)\) over the interval \( [a,b]\) is given by \[ f_{\text{avg}} = \frac{1}{b-a}\int_a^b f(x)\, \mathrm{d}x.\] The average value of a function equation is derived from the Mean Value Theorem for integrals. Frequently Asked Questions about Average Value of a Function What is the meaning of the average value of a function? The average value of a function over an interval is the height of the rectangle whose area equals the area under the curve of the function on that interval. What is the formula for the average value of a function over an interval? The average value of a function is the integral of the function over an interval [a, b] divided by b - a. What is an example for the average value of a function? We can use the average value of a function to find the average value of an infinite set of numbers. Consider the gas prices between 2017 and 2022, which can change almost every second. We can find the average price per gallon over the 5-year period with the average value of a function equation. How to find the average value of a function? To find the average value of a function, take the integral of the function over an interval [a, b] and divide by b - a. What is the average value of a function for an integral? Final Average Value of a Function Quiz What is the average value of a function? The average value of a function is the height of the rectangle that has the same area as the area under the curve of the function. Where is the average value of a function formula derived from? The average value of a function formula is derived from the Mean Value Theorem for integrals. Which of the following is the formula for the average value of a function? \[f_{\text{avg}} = \frac{1}{b-a} \int_a^b f(x) \, \mathrm{d}x.\] What does the Mean Value Theorem for integrals say? The Mean Value Theorem for integrals states that if a function f(x) is continuous on the closed interval [a, b], then there is a number c such that \[ \int_a^b f(x) \, \mathrm{d}x = f(c)(b-a),\] where f(c) is the average value of the function over the interval [a, b]. To find the average value of a function in a given interval you need to divide its definite integral by the ____. length of the interval. The average value of a function can be negative. Suppose you are asked to find the average value of the exponential function \( e^x.\) What information is missing? The interval on which the average is to be calculated. Can you find the average value of a non-integrable function? The average value of a function can be equal to zero. The average value of a function depends on the interval over which it is calculated.
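As a final check on the sine example and on the Mean Value Theorem connection in the key takeaways, the same computation can be done symbolically. A sketch, assuming SymPy; the variable names are illustrative only.

import sympy as sp

x = sp.symbols('x')
a, b = 0, sp.pi / 2

f_avg = sp.integrate(sp.sin(x), (x, a, b)) / (b - a)
print(f_avg)        # 2/pi
print(sp.N(f_avg))  # about 0.6366

# Mean Value Theorem for integrals: some c in [a, b] satisfies sin(c) = 2/pi
c = sp.asin(f_avg)
print(sp.N(c))      # about 0.690, which indeed lies in [0, pi/2]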
CommonCrawl
Search Results: 1 - 10 of 51177 matches for " Seung Jin Lee " Page 1 /51177 Pieri rule for the affine flag variety Seung Jin Lee Abstract: We prove the affine Pieri rule for the cohomology of the affine flag variety conjectured by Lam, Lapointe, Morse and Shimozono. We study the cap operator on the affine nilHecke ring that is motivated by Kostant and Kumar's work on the equivariant cohomology of the affine flag variety. We show that the cap operators for Pieri elements are the same as Pieri operators defined by Berg, Saliola and Serrano. This establishes the affine Pieri rule. Combinatorial description of the cohomology of the affine flag variety Abstract: We construct the affine version of the Fomin-Kirillov algebra, called the affine FK algebra, to investigate the combinatorics of affine Schubert calculus for type $A$. We introduce Murnaghan-Nakayama elements and Dunkl elements in the affine FK algebra. We show that they are commutative as Bruhat operators, and the commutative algebra generated by these operators is isomorphic to the cohomology of the affine flag variety. We show that the cohomology of the affine flag variety is product of the cohomology of an affine Grassmannian and a flag variety, which are generated by MN elements and Dunkl elements respectively. The Schubert classes in cohomology of the affine Grassmannian (resp. the flag variety) can be identified with affine Schur functions (resp. Schubert polynomials) in a quotient of the polynomial ring. Affine Schubert polynomials, polynomial representatives of the Schubert class in the cohomology of the affine flag variety, can be defined in the product of two quotient rings using the Bernstein-Gelfand-Gelfand operators interpreted as divided difference operators acting on the affine Fomin-Kirillov algebra. As for other applications, we obtain Murnaghan-Nakayama rules both for the affine Schubert polynomials and affine Stanley symmetric functions. We also define $k$-strong-ribbon tableaux from Murnaghan-Nakayama elements to provide a new formula of $k$-Schur functions. This formula gives the character table of the representation of the symmetric group whose Frobenius characteristic image is the $k$-Schur function. Local neighborliness of the symmetric moment curve Abstract: A centrally symmetric analogue of the cyclic polytope, the bicyclic polytope, was defined in [BN08]. The bicyclic polytope is defined by the convex hull of finitely many points on the symmetric moment curve where the set of points has a symmetry about the origin. In this paper, we study the Barvinok-Novik orbitope, the convex hull of the symmetric moment curve. It was proven in [BN08] that the orbitope is locally $k$-neighborly, that is, the convex hull of any set of $k$ distinct points on an arc of length not exceeding $\phi_k$ in $\mathbb{S}^1$ is a $(k-1)$-dimensional face of the orbitope for some positive constant $\phi_k$. We prove that we can choose $\phi_k $ bigger than $\gamma k^{-3/2} $ for some positive constant $\gamma$. Explicit constructions of centrally symmetric k-neighborly polytopes and large strictly antipodal sets Alexander Barvinok,Seung Jin Lee,Isabella Novik Abstract: We present explicit constructions of centrally symmetric 2-neighborly d-dimensional polytopes with about 3^{d/2} = (1.73)^d vertices and of centrally symmetric k-neighborly d-polytopes with about 2^{c_k d} vertices where c_k=3/20 k^2 2^k. 
Using this result, we construct for a fixed k > 1 and arbitrarily large d and N, a centrally symmetric d-polytope with N vertices that has at least (1-k^2 (gamma_k)^d) binom(N, k) faces of dimension k-1, where gamma_2=1/\sqrt{3} = 0.58 and gamma_k = 2^{-3/{20k^2 2^k}} for k > 2. Another application is a construction of a set of 3^{d/2 -1}-1 points in R^d every two of which are strictly antipodal as well as a construction of an n-point set (for an arbitrarily large n) in R^d with many pairs of strictly antipodal points. The two latter results significantly improve the previous bounds by Talata, and Makai and Martini, respectively. Centrally symmetric polytopes with many faces Abstract: We present explicit constructions of centrally symmetric polytopes with many faces: first, we construct a d-dimensional centrally symmetric polytope P with about (1.316)^d vertices such that every pair of non-antipodal vertices of P spans an edge of P, second, for an integer k>1, we construct a d-dimensional centrally symmetric polytope P of an arbitrarily high dimension d and with an arbitrarily large number N of vertices such that for some 0 < delta_k < 1 at least (1-delta_k^d) {N choose k} k-subsets of the set of vertices span faces of P, and third, for an integer k>1 and a>0, we construct a centrally symmetric polytope Q with an arbitrary large number N of vertices and of dimension d=k^{1+o(1)} such that least (1 - k^{-a}){N choose k} k-subsets of the set of vertices span faces of Q. Neighborliness of the symmetric moment curve Mathematics , 2011, DOI: 10.1112/S0025579312000010 Abstract: We consider the convex hull B_k of the symmetric moment curve U(t)=(cos t, sin t, cos 3t, sin 3t, ..., cos (2k-1)t, sin (2k-1)t) in R^{2k}, where t ranges over the unit circle S= R/2pi Z. The curve U(t) is locally neighborly: as long as t_1, ..., t_k lie in an open arc of S of a certain length phi_k>0, the convex hull of the points U(t_1), ..., U(t_k) is a face of B_k. We characterize the maximum possible length phi_k, proving, in particular, that phi_k > pi/2 for all k and that the limit of phi_k is pi/2 as k grows. This allows us to construct centrally symmetric polytopes with a record number of faces. Alpha-Synuclein Stimulation of Astrocytes: Potential Role for Neuroinflammation and Neuroprotection He-Jin Lee,Changyoun Kim,Seung-Jae Lee Oxidative Medicine and Cellular Longevity , 2010, DOI: 10.4161/oxim.3.4.12809 Abstract: Selective loss of neurons, abnormal protein deposition and neuroinflammation are the common pathological features of neurodegenerative diseases, and these features are closely related to one another. In Parkinson's disease, abnormal aggregation and deposition of α-synuclein is known as a critical event in pathogenesis of the disease, as well as in other related neurodegenerative disorders, such as dementia with Lewy bodies and multiple system atrophy. Increasing evidence suggests that α-synuclein aggregates can activate glial cells to induce neuroinflammation. However, how an inflammatory microenvironment is established and maintained by this protein remains unknown. Findings from our recent study suggest that neuronal α-synuclein can be directly transferred to astrocytes through sequential exocytosis and endocytosis and induce inflammatory responses from astrocytes. Here we discuss potential roles of astrocytes in a cascade of events leading to α-synuclein-induced neuroinflammation. 
A Retroperitoneal Inflammatory Myofibroblastic Tumor Mimicking a Germ Cell Tumor of the Undescended Testis: A Case Report and Literature Review [PDF] Seul-Bi Lee, Jung-Hee Yoon, Seung-Ho Kim, Yedaun Lee, Jin-Soo Lee, Jung-Wook Seo Advances in Computed Tomography (ACT) , 2016, DOI: 10.4236/act.2016.53004 Abstract: We report here a case of an inflammatory myofibroblastic tumor in the retroperitoneum, which mimicked a germ cell tumor of the undescended testis. A 75-year-old healthy man presented with a palpable abdominal mass. On the computed tomography image, there was large, well-defined soft tissue mass in the left side of the retroperitoneum, and there was no visible left testis or seminal vesicle. After contrast enhancement, the mass appeared to be relatively homogeneous, considering its large size. With ultrasonography, it appeared as a well-defined, hypoechoic mass with intratumoral vascularity. This solid mass was surgically diagnosed as an inflammatory myofibroblastic tumor. Bone-induced streak artifact suppression in sparse-view CT image reconstruction Jin Seung,Kim Jae,Lee Soo,Kwon Oh-Kyong BioMedical Engineering OnLine , 2012, DOI: 10.1186/1475-925x-11-44 Abstract: Background In sparse-view CT imaging, strong streak artifacts may appear around bony structures and they often compromise the image readability. Compressed sensing (CS) or total variation (TV) minimization-based image reconstruction method has reduced the streak artifacts to a great extent, but, sparse-view CT imaging still suffers from residual streak artifacts. We introduce a new bone-induced streak artifact reduction method in the CS-based image reconstruction. Methods We firstly identify the high-intensity bony regions from the image reconstructed by the filtered backprojection (FBP) method, and we calculate the sinogram stemming from the bony regions only. Then, we subtract the calculated sinogram, which stands for the bony regions, from the measured sinogram before performing the CS-based image reconstruction. The image reconstructed from the subtracted sinogram will stand for the soft tissues with little streak artifacts on it. To restore the original image intensity in the bony regions, we add the bony region image, which has been identified from the FBP image, to the soft tissue image to form a combined image. Then, we perform the CS-based image reconstruction again on the measured sinogram using the combined image as the initial condition of the iteration. For experimental validation of the proposed method, we take images of a contrast phantom and a rat using a micro-CT and we evaluate the reconstructed images based on two figures of merit, relative mean square error and total variation caused by the streak artifacts. Results The images reconstructed by the proposed method have been found to have smaller streak artifacts than the ones reconstructed by the original CS-based method when visually inspected. The quantitative image evaluation studies have also shown that the proposed method outperforms the conventional CS-based method. Conclusions The proposed method can effectively suppress streak artifacts stemming from bony structures in sparse-view CT imaging.
CommonCrawl
Periplasmic glucans isolated from Proteobacteria Lee, Sang-Hoo;Cho, Eun-Ae;Jung, Seun-Ho 769 https://doi.org/10.5483/BMBRep.2009.42.12.769 PDF Periplasmic glucans (PGs) are general constituents in the periplasmic space of Proteobacteria. PGs from bacterial strains are found in larger amounts during growth on medium with low osmolarity and thus are often been specified as osmoregulated periplasmic glucans (OPGs). Furthermore, they appear to play crucial roles in pathogenesis and symbiosis. PGs have been classified into four families based on the structural features of their backbones, and they can be modified by a variety of non-sugar substituents. It has also recently been confirmed that novel PGs with various degrees of polymerization (DPs) and/or different substituents are produced under different growth conditions among Proteobacteria. In addition to their biological functions as regulators of low osmolarity, PGs have a variety of physico-chemical properties due to their inherent three-dimensional structures, hydrogen-bonding and complex-forming abilities. Thus, much attention has recently been focused on their physico-chemical applications. In this review, we provide an updated classification of PGs, as well as a description of the occurrences of novel PGs with substituents under various bacterial growth environments, the genes involved in PG biosynthesis and the various physico-chemical properties of PGs. Th17 responses and host defense against microorganisms: an overview Van De Veerdonk, Frank L.;Gresnigt, Mark S.;Kullberg, Bart Jan;Van Der Meer, Jos W.M.;Joosten, Leo A.B.;Netea, Mihai G. 776 T helper (Th) 17 cells have recently been described as a third subset of T helper cells, and have provided new insights into the mechanisms that are important in the development of autoimmune diseases and the immune responses that are essential for effective antimicrobial host defense. Both protective and harmful effects of Th17 responses during infection have been described. In general, Th17 responses are critical for mucosal and epithelial host defense against extracellular bacteria and fungi. However, recent studies have reported that Th17 responses can also contribute to viral persistence and chronic inflammation associated with parasitic infection. It has become evident that the type of microorganisms and the setting in which they trigger the Th17 response determines the outcome of the delicate balancethat exists between Th17 induced protection and immunopathogenesis. Functional characterization of a minimal sequence essential for the expression of human TLX2 gene Borghini, Silvia;Bachetti, Tiziana;Fava, Monica;Duca, Marco Di;Ravazzolo, Roberto;Ceccherini, Isabella 788 TLX2 is an orphan homeodomain transcription factor whose expression is mainly associated with tissues derived from neural crest cells. Recently, we have demonstrated that PHOX2A and PHOX2B are able to enhance the neural cell-type specific expression of human TLX2 by binding distally the 5' -flanking region. In the present work, to deepen into the TLX2 transcription regulation, we have focused on the proximal 5'-flanking region of the gene, mapping the transcription start site and identifying a minimal promoter necessary and sufficient for the basal transcription in cell lines from different origin. 
Site-directed mutagenesis has allowed to demonstrate that the integrity of this sequence is crucial for gene expression, while electrophoretic mobility shift assays and chromatin immunoprecipitation experiments have revealed that such an activity is dependent on the binding of a PBX factor. Consistent with these findings, such a basal promoter activity has resulted to be enhanced by the previously reported PHOX2-responding sequence. Agrocybe chaxingu polysaccharide prevent inflammation through the inhibition of COX-2 and NO production Lee, Byung-Ryong;Kim, So-Young;Kim, Dae-Won;An, Jae-Jin;Song, Ha-Yong;Yoo, Ki-Yeon;Kang, Tae-Cheon;Won, Moo-Ho;Lee, Kwang-Jae;Kim, Kyung-Hee;Joo, Jin-Ho;Ham, Hun-Ju;Hur, Jang-Hyun;Cho, Sung-Woo;Han, Kyu-Hyung;Lee, Kil-Soo;Park, Jin-Seu;Choi, Soo-Young;Eum, Won-Sik 794 The inhibition of nitric oxide (NO) and cyclooxygenase-2 (COX-2) production is considered to be a promising approach to the treatment of various diseases, including inflammation and cancer. In this study, we examined the effects of the Agrocybe chaxingu $\beta$-glucan (polysaccharide) on lipopolysaccaride (LPS)-induced nitric oxide (NO) and cyclooxygenase-2 (COX-2) expression in murine macrophage Raw 264.7 cells as well as 12-O-tetradecanoylphorbol 13-acetate (TPA)-induced ear edema in mice. The polysaccharide significantly inhibited (P < 0.01) LPS-induced iNOS and COX-2 expression levels in the cells. Furthermore, topical application of polysaccharide resulted in markedly inhibited (P < 0.01) TPA-induced ear edema in mice. These results suggest that this polysaccharide may be used for NO- and COX-2-related disorders such as inflammation and cancer. Bevacizumab accelerates corneal wound healing by inhibiting TGF-βexpression in alkali-burned mouse cornea Lee, Sung-Ho;Leem, Hyun-Sung;Jeong, Seon-Mi;Lee, Koon-ja 800 This study investigated the effect of subconjunctival injections of bevacizumab, an anti-VEGF antibody, on processes involved in corneal wound healing after alkali burn injury. Mice were divided into three groups: Group 1 was the saline-treated control, group 2 received subconjunctival injection of bevacizumab 1hr after injury and group 3 received bevacizumab 1 hr and 4 days after injury. Cornea neovascularization and opacity were observed using a slit lamp microscope. Corneal repair was assessed through histological analysis and immunostaining for CD31, $\alpha$-SMA, collagen I, and TGF-$\beta$2 7 days post-injury. In group 3, injection of bevacizumab significantly lowered neovascularization and improved corneal transparency. Immunostaining analysis demonstrated a reduction in CD31, $\alpha$-SMA and TGF-$\beta$2 levels in stroma compared to group 1. These results indicate that bevacizumab may be useful in reducing neovascularization and improving corneal transparency following corneal alkali burn injury by accelerating regeneration of the basement membrane. Induction of caspase-dependent apoptosis in melanoma cells by the synthetic compound (E)-1-(3,4-dihydroxyphenethyl)-3-styrylurea Kim, Ji-Hae;Jang, Young-Oh;Kim, Beom-Tae;Hwang, Ki-Jun;Lee, Jeong-Chae 806 Recently, various phenolic acid phenethyl ureas (PAPUs) have been synthesized from phenolic acids by Curtius rearrangement for the development of more effective anti-oxidants. In this study, we examined the anti-tumor activity and cellular mechanism of the synthetic compound (E)-1-(3,4-dihydroxyphenethyl)-3-styrylurea (PAPU1) using melanoma B16/F10 and M-3 cells. 
Results showed that PAPU1 inhibited the cell proliferation and viability, but did not induce cytotoxic effects on primary cultured fibroblasts. PAPU1 induced apoptotic cell death rather than necrosis in melanoma cells, a result clearly proven by the shift of cells into sub-$G_1$ phase of the cell cycle and by the substantial increase in cells positively stained with TUNEL or Annexin V. Collectively, this study revealed that PAPU1 induced apoptosis in a caspase-dependent manner, suggesting a potential role as a cancer chemopreventive agent for melanoma cells. Aspartyl aminopeptidase of Schizosaccharomyces pombe has a molecular chaperone function Lee, Song-Mi;Kim, Ji-Sun;Yun, Chul-Ho;Chae, Ho-Zoon;Kim, Kang-Hwa 812 To screen chaperone proteins from Schizosaccharomyce pombe (S. pombe), we prepared recombinant citrate synthase of the fission yeast as a substrate of anti-aggregation assay. Purified recombinant citrate synthase showed citrate synthase activity and was suitable for the substrate of chaperone assay. Several heat stable proteins including aspartyl aminopeptidase (AAP) for candidates of chaperone were screened from the supernatant fraction of heat-treated crude extract of S. pombe. The purified AAP migrated as a single band of 47 kDa on SDS-polyacrylamide gel electrophoresis. The native size of AAP was estimated as 200 kDa by a HPLC gel permeation chromatography. This enzyme can remove the aspartyl residue at N-terminus of angiotensin I. In addition, AAP showed the heat stability and protected the aggregation of citrate synthase caused by thermal denaturation. This study showed that S. pombe AAP is a moonlight protein that has aspartyl aminopeptidase and chaperone activities. Regulation of type-1 protein phosphatase in a model of metabolic arrest Ramnanan, Christopher J.;Storey, Kenneth B. 817 Type-1 phosphatase (PP-1) was assessed in foot muscle (FM) and hepatopancreas (HP) of estivating (EST) Otala lactea. Snail PP-1 displayed several conserved traits, including sensitivity to inhibitors, substrate affinity, and reduction in size to a 39 kDa catalytic subunit (PP-1c). During EST, PP-1 activity in FM and HP crude extracts was reduced, though kinetics and protein levels of purified PP-1c isoforms were not altered. PP-1c protein levels increased and decreased in nuclear and glycogen-associated fractions, respectively, during EST. Gel filtration determined that a 257 kDa low $K_m$ PP-1$\alpha$ complex decreased during estivation whereas a 76 kDa high $K_m$ complex increased in EST. Western blotting confirmed that the 76 kDa protein consisted of PP-1$\alpha$ and nuclear inhibitor of PP-1 (NIPP-1). A suppression of PP-1 activity factors in the overall metabolic rate depression in estivating snails and the mechanism is mediated through altered cellular localization and interaction with binding partners. CONVIRT: A web-based tool for transcriptional regulatory site identification using a conserved virtual chromosome Ryu, Tae-Woo;Lee, Se-Joon;Hur, Cheol-Goo;Lee, Do-Heon 823 Techniques for analyzing protein-DNA interactions on a genome-wide scale have recently established regulatory roles for distal enhancers. However, the large sizes of higher eukaryotic genomes have made identification of these elements difficult. Information regarding sequence conservation, exon annotation and repetitive regions can be used to reduce the size of the search region. However, previously developed resources are inadequate for consolidating such information. 
CONVIRT is a web resource for the identification of transcription factor binding sites and also features comparative genomics. Genomic information on ortholog-independent conserved regions, exons, repeats and sequences is integrated into the virtual chromosome, and statistically over-represented single or combinations of transcription factor binding sites are sought. CONVIRT provides regulatory network analysis for several organisms with long promoter regions and permits inter-species genome alignments. CONVIRT is freely available at http://biosoft.kaist.ac.kr/convirt. Role of the surface loop on the structure and biological activity of angiogenin Jang, Seung-Hwan;Song, Hyang-Do;Kang, Dong-Ku;Chang, Soo-Ik;Kim, Min-Kyung;Cho, Kwang-Hwi;Scherga, Harold A.;Shin, Hang-Cheol 829 Angiogenin is a member of the ribonuclease superfamily that induces the formation of new blood vessels. It has been suggested that the surface loop of angiogenin defined by residues 59-71 plays a special role in angiogenic function (1); however, the mechanism of action is not clearly defined. To elucidate the role of the surface loop on the structure, function and stability of angiogenin, three surface loop mutants were produced in which 14 amino acids in the surface loop of RNase A were substituted for the 13 amino acids in the corresponding loop of angiogenin. The structure, stability and biological functions of the mutants were then investigated using biophysical and biological approaches. Even though the substitutions did not influence the overall structure of angiogenin, they affected the stability and angiogenic function of angiogenin, indicating that the surface loop of angiogenin plays a significant role in maintaining the stability and angiogenic function of angiogenin. Putative association of DNA methyltransferase 1 (DNMT1) polymorphisms with clearance of HBV infection Chun, Ji-Yong;Bae, Joon-Seol;Park, Tae-June;Kim, Jason-Y.;Park, Byung-Lae;Cheong, Hyun-Sub;Lee, Hyo-Suk;Kim, Yoon-Jun;Shin, Hyoung-Doo 834 DNA methyltransferase (DNMT) 1 is the key enzyme responsible for DNA methylation, which often occurs in CpG islands located near the regulatory regions of genes and affects transcription of specific genes. In this study, we examined the possible association of DNMT1 polymorphisms with HBV clearance and the risk of hepatocellular carcinoma (HCC). Seven common polymorphic sites were selected by considering their allele frequencies, haplotype-tagging status and LDs for genotyping in larger-scale subjects (n = 1,100). Statistical analysis demonstrated that two intron polymorphisms of DNMT1, +34542G > C and +38565G > T, showed significant association with HBV clearance in a co-dominant model (OR = 1.30, $P^{corr}$ = 0.03) and co- dominant/recessive model (OR = 1.34-1.74, $P^{corr}$ = 0.01-0.03), respectively. These results suggest that two intron polymorphisms of DNMT1, +34542G > C and +38565G > T, might affect HBV clearance. Casein Kinase 2 interacts with human mitogen- and stress-activated protein kinase MSK1 and phosphorylates it at Multiple sites Shi, Yan;Han, Guanghui;Wu, Huiling;Ye, Kan;Tian, Zhipeng;Wang, Jiaqi;Shi, Huili;Ye, Mingliang;Zou, Hanfa;Huo, Keke 840 Mitogen- and stress-activated protein kinase (MSK1) palys a crucial role in the regulation of transcription downstream of extracellular-signal-regulated kinase1/2 (ERK1/2) and mitogen-activated protein kinase p38. MSK1 can be phosphorylated and activated in cells by both ERK1/2 and p38$\alpha$. 
In this study, Casein Kinase 2 (CK2) was identified as a binding and regulatory partner for MSK1. Using the yeast two-hybrid system, MSK1 was found to interact with the CK2$\beta$ regulatory subunit of CK2. Interactions between MSK1 and the CK2$\alpha$ catalytic subunit and CK2$\beta$ subunit were demonstrated in vitro and in vivo. We further found that CK2$\alpha$ can only interact with the C-terminal kinase domain of MSK1. Using site-directed mutagenesis assay and mass spectrometry, we identified five sites in the MSK1 C-terminus that could be phosphorylated by CK2 in vitro: Ser757, Ser758, Ser759, Ser760 and Thr793. Of these, Ser757, Ser759, Ser760 and Thr793 were previously unknown.
CommonCrawl
The Joy of Barycentric Subdivision We shall be interested in what happens to the shapes of the triangles one gets by subdividing a large number of times. ... Bill Casselman University of British Columbia, Vancouver, Canada Email Bill Casselman Even nowadays, when so much mathematical territory has already been well explored, interesting and apparently new mathematical phenomena can arise in very simple circumstances. The barycenter of a triangle is its center of gravity. It can be found easily by a simple geometric construction. Draw a line from each vertex to the midpoint of the opposite side. Euclid knew that these three lines meet in a single point, and this turns out to be what we call the triangle's barycenter. A basic fact is that the length of the blue line is twice that of the red one. The construction of the barycenter produces a subdivision of the original triangle into six smaller triangles. I'll call the original triangle the parent, and these the children. What can we say about the children, compared to their parent? The only thing that is easy to prove is that they are definitely smaller. The diameter of a triangle is the length of its longest side. It is not too difficult to show that the diameter of any of the children is less than $2/3$ that of the parent. This bound is sharp, in the sense that one can find a sequence of parents degenerating into a line segment for which the diameters of some children approaches $2/3$ that of the parent. There may be interesting things to be said about how the sizes of children vary, but we are going to be interested only in the shapes, rather than the sizes, of children. More precisely, we shall be interested in what happens to the shapes of the triangles one gets by subdividing a large number of times. Original triangle After one subdivision After two subdivisions After three subdivisions One's first impression is that the triangles become more and more pinched---thinner, one might say---as subdivision proceeds. But a close look shows that this is not quite the case---most of the triangles after 3 subdivisions are fairly thin, but there are a few that are not so thin. How can we describe what is going on? The answer will be in statistical terms. But in order to analyze what happens statistically, we need a precise way to specify the shape of a triangle. What is the shape of a triangle? Before we continue, I have to explain more about what we mean by the shape of a triangle. When do two triangles have the same shape? Is it possible to say that two triangles have nearly the same shape, or very different shapes? Two triangles are said to be similar if they are the same up to a scale change. In this case, they certainly have the same shape. But I'll also say they have the same shape if one is the mirror image of the other. (This is a somewhat arbitrary convention, but common enough.) The following triangles have, in this sense, the same shape. But now we can find a standard model of any triangle, so as to be able to compare shapes. Given a triangle, first scale it so that its longest side has length one. Then rotate it so its longest side is horizontal. Rotate it again (by $180^{\circ}$) if necessary so the triangle is on top of its longest side. Flip it around a vertical axis, if necessary, so that its shortest side is at the left. 
The triangle we are now looking at is very special: $\bullet$ its bottom side is a horizontal segment of length $1$ (because we have scaled it suitably); $\bullet$ its top vertex lies at distance at most one from its right hand vertex (because the right side is at most as long as the bottom); and $\bullet$ the top vertex lies to the left of the center of the bottom (because of our convention regarding mirror images). If we are given a coordinate system, we may even place the bottom to be the segment from $(0,0)$ to $(1,0)$. In this way, every triangle is associated to a triangle of a very special type---one that looks like this: Such a triangle is completely determined by its top vertex, which lies in the region I'll call $\Sigma$ which is colored gray in the figure above. Thus we can say that two triangles have nearly the same shape if the corresponding points of $\Sigma$ are close. The top of the region $\Sigma$ corresponds to an equilateral triangle, points on the arc at the left or the vertical line at the right are all isosceles triangles, and points towards the bottom are all rather flat. Points of the bottom correspond to degenerate or flat triangles, all of whose vertices lie on a line. Eventually I'll want to refer to the triangles in a subdivision, and I assign labels in a somewhat arbitrary fashion as shown here: We can now visualize in an instructive way the process of barycentric subdivision. We start with a given triangle, and plot the point of the region $\Sigma$ corresponding to it. When we subdivide, we get $6$ new points of $\Sigma$. If we again subdivide each of these, we get all together $36$ points of $\Sigma$. In the following figures, I started with a triangle whose vertices were $(0,0)$, $(1,0)$, and $(1/4,1/2)$. The last figure records its $6^{8} = 1,679,616$ descendants after $8$ subdivisions. These figures exhibit a number of interesting features. First of all, it appears that as time goes on the descendants of the original triangle fill out the region $\Sigma$. In other words, the descendants approximate eventually any arbitrary shape. This has been proved rigorously, in what I believe to be the first mathematical paper to discuss barycentric subdivisions: Theorem. (Barany et al.) Successive barycentric subdivisions of a non-degenerate triangle can approximate any given triangular shape. But the density of descendants is not uniform, nor does it remain essentially the same as subdivision proceeds. Instead, there seems to be a kind of flow towards the bottom of $\Sigma$. We'll look at this phenomenon in detail later on. One of the more curious features are the patterns of arcs of circles along the bottom. These are particularly evident at lower left and right. These arcs are also apparent, although more weakly, along all of the bottom. This is, as far as I know, a kind of resonance about which nothing---absolutely nothing---is known. We seem to be entering completely new mathematical country. The bottom in each figure also shows a kind of dead zone in which no triangles are evident. This zone shrinks as the number of subdivisions increases. I do not know that anyone has remarked on this, but it ought not to be difficult to explain, and rather precisely. The drift to the south Let's look again at the apparent flow downwards in successive subdivisions. The images above don't show this very well, because the points recording subdivision triangles achieve saturation. But there are other ways to make it more apparent. 
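Before turning to the statistics, it is worth noting that the passage from a triangle to its point of $\Sigma$ is completely mechanical and easy to put into code. The following sketch (Python; the function names subdivide and shape are my own choices, not the author's) computes the six children of a triangle and the $\Sigma$-point of each one; iterating it is how one would reproduce scatter plots and bar graphs like the ones discussed here.

import numpy as np

def subdivide(A, B, C):
    # Six children of triangle ABC: each one is spanned by a vertex of ABC,
    # the midpoint of an adjacent side, and the barycenter G.
    A, B, C = np.asarray(A, float), np.asarray(B, float), np.asarray(C, float)
    G = (A + B + C) / 3.0
    MAB, MBC, MCA = (A + B) / 2, (B + C) / 2, (C + A) / 2
    return [(A, MAB, G), (MAB, B, G), (B, MBC, G),
            (MBC, C, G), (C, MCA, G), (MCA, A, G)]

def shape(A, B, C):
    # Point of the region Sigma representing the shape of triangle ABC:
    # scale the longest side to [0,1]; the apex (x, y), with x <= 1/2, is the shape.
    a = np.linalg.norm(np.asarray(B, float) - np.asarray(C, float))
    b = np.linalg.norm(np.asarray(C, float) - np.asarray(A, float))
    c = np.linalg.norm(np.asarray(A, float) - np.asarray(B, float))
    s, m, l = sorted((a, b, c))          # shortest, middle, longest side
    x = (l**2 + s**2 - m**2) / (2 * l**2)
    y = max(s**2 / l**2 - x**2, 0.0) ** 0.5
    return x, y

For instance, shape((0,0), (1,0), (0.25,0.5)) returns (0.25, 0.5), the starting point used in the figures, and applying shape to every triangle produced by repeated calls to subdivide gives the clouds of points in $\Sigma$.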
The following figure is almost the same as one we have seen previously, but now the bar graph on the right side records the proportion in corresponding horizontal slices. There ought to be no need to reproduce the point plots, so the following figures just exhibit the bar graphs: These should make it much clearer that as the number of subdivisions increases, there is a flow towards the bottom. The impression is reinforced by the following table showing the means of the distributions: $$ \eqalignno { \hbox{Number of subdivisions:} & & \; 0 & \quad 1 & \quad 2 & \quad 3 & \quad 4 & \quad 5 & \quad 6 & \quad 7 &\quad 8 & \quad 9 & \quad 10 \cr \hbox{Mean height:} & & 0.5 & 0.3683 & 0.3136 & 0.2656 & 0.2296 & 0.2013 & 0.1781 & 0.1588 & 0.1425 & 0.1285 & 0.1163 \cr } $$ These form an approximate geometric sequence with ratio $q \sim 0.906 = e^{-0.099}$. This flow, at least, is partly understood, although in terms we haven't yet seen. Corresponding to the barycentric subdivision of triangles is a kind of random walk among the shapes of triangles. Start with a triangle $T_{0}$, and suppose $S_{1}$, ... , $S_{6}$ are the triangles you get by subdividing it. Toss a die. If the face marked $i$ appears, let $T_{1} = S_{i}$. If we do this repeatedly, we get a sequence of triangles $T_{m}$ in which $T_{m+1}$ is a random choice among the subdivisions of $T_{m}$. If I plot for each the corresponding point of the region $\Sigma$, I get a path in $\Sigma$. Here, for example, is what I get if I start with the same triangle $T_{0}$ whose vertices are $(0,0)$, $(1,0)$, $(1/4, 1/2)$: There are occasional bounces up, but the trend is towards the bottom. This is the way things usually go, as has also been proved rigorously by Barany et al.: Theorem. (Barany et al.) A random walk among successive barycentric subdivisions of a triangle will almost certainly converge to flat triangles. Empirically, one sees that as subdivision proceeds the shapes pile up along the bottom of $\Sigma$. This observation can be made more precise. Barycentric subdivision can be applied even to flat triangles, and produces other flat triangles. In order to understand the shapes that barycentric subdivision generates it is important to understand how it works for flat triangles. There are therefore two reasons flat triangles are important. One is that under barycentric subdivision, triangles that are nearly flat behave approximately as if they were flat, and in particular all the triangles they divide into are also fairly flat. Another is that arbitrary triangles tend to become flatter and flatter after many subdivisions. Consistently with the last theorem, the amount a child can bounce above its parent is rather limited. I have already mentioned that if a triangle is nearly flat, its children are also nearly flat. Given this, one might wonder if the triangles you see eventually converge to a fixed flat triangle. This is not, however, what you'd guess from the figure. In fact, what happens is the opposite: as the shapes become flatter and flatter, they traverse somewhat randomly in a horizontal direction. This has been discussed in some detail in the work of Diaconis and Miclo. Not only does barycentric subdivision make sense for flat triangles, but---as we shall see in a later section---it is simpler to understand than for non-degenerate triangles. In addition, Diaconis and Miclo prove that triangles that are nearly flat behave much like flat ones. 
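The drift and the random walk just described are also easy to simulate. The following sketch reuses subdivide and shape from the previous snippet (so it is not self-contained on its own); it estimates, by sampling random walks, the mean height in $\Sigma$ after each subdivision. This is the same quantity as in the table above, since under the uniform choice of child every descendant at a given depth is equally likely to be visited.

import random

def mean_heights(A, B, C, depth=10, walks=20000):
    # Monte Carlo estimate of the mean height of the Sigma-points by depth.
    totals = [0.0] * (depth + 1)
    for _ in range(walks):
        tri = (A, B, C)
        totals[0] += shape(*tri)[1]
        for level in range(1, depth + 1):
            tri = random.choice(subdivide(*tri))   # pick one of the six children
            totals[level] += shape(*tri)[1]
    return [t / walks for t in totals]

for n, h in enumerate(mean_heights((0, 0), (1, 0), (0.25, 0.5))):
    print(n, round(h, 4))
# The successive ratios should settle near 0.906, as in the table above.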
The barycentric subdivision of flat triangles is all by itself an interesting business, and quite different from what we have seen so far. The basic fact is that subdivision of a flat triangle produces more flat triangles. So we can apply to it much the same process that we did to real triangles. We start with some initial value of $x$, then compute successively all children of current values of $x$. One could illustrate this as I illustrated the barycentric subdivisions of a given starting flat triangle. But this would not be so illuminating, since the interval $[0,1/2]$ simply fills up rapidly. There is no flow, but instead a gradual saturation. This is a proven fact: as time goes on, what we see is that the distribution of descendants approximates better and better a fixed density. (The technical way to phrase this is to say that the process is ergodic.) This also has been proved by Diaconis and Miclo, who prove in addition a few more properties of this density. In the following figure, one sees the approximate density after many subdivisions. It seems to be at least continuous, although this has not been verified rigorously. How smooth is it? More about flat triangles I have said that the analysis of Diaconis and Miclo is based upon a close examination of what happens in the subdivision of flat triangles. This is relatively simple from both a theoretical and a practical standpoint, and I will say something about that in this section. One convenient thing about flat triangles is that we can see extremely explicitly how subdivision works. Every flat triangle corresponds to a point $(x,0)$ on the bottom of the region $\Sigma$, with $0 \le x \le 1/2$. The following figure will suggest what the "triangles" in the subdivision are. A flat triangle is a set of three points, not all the same, in a line. One can transform a flat triangle so that its vertices are $(0,0)$, $(1,0)$, $(x,0)$ with $0 \le x \le 1/2$ - i.e. so that $x$ is on the bottom of $\Sigma$. Using the labeling introduced earlier, the figure should help you see that the side lengths of the six triangles in the barycentric subdivision are: $$ \begin{array}{llll} \Delta_{1} : & 1/2 & 1/2 - (x+1)/3 & (x+1)/3 \\ \Delta_{2} : & 1 -(x+1)/3 & 1/2 - (x+1)/3 & 1/2 \\ \Delta_{3} : & 1-(x+1)/3 & 1- (x+1)/2 & (x+1)/2 - (x+1)/3 \\ \Delta_{4} : & (x+1)/2 - x & (x+1)/3 - x & (x+1)/2 - (x+1)/3 \\ \Delta_{5} : & (x+1)/3 - x/2 & (x+1)/3 - x & x/2 \\ \Delta_{6} : & (x+1)/3 & (x+1)/3 - x/2 & x/2 \end{array} $$ For $0 \le x \le 1/2$ all of these are non-negative, and in every case the first is the longest "side" (which is therefore the sum of the other two). Which is the shortest side is not the same for all. The point of $\Sigma$ corresponding to each of these turns out to be $$ \begin{array}{ll} \Delta_{1} : & { 1 - 2x \over 3 } \\ \Delta_{2} : & { 1 - 2x \over 4 - 2x } \\ \Delta_{3} : & { 1 + x \over 4 - 2x } \\ \Delta_{4} : & { 1 + x \over 3 - 3x } \ (x < 1/5), \quad {2 - 4x \over 3 - 3x } \ (x \ge 1/5) \\ \Delta_{5} : & { 3x \over 2 - x } \ (x < 2/7), \quad {2 - 4x \over 2 - x } \ (x \ge 2/7) \\ \Delta_{6} : & { 3x \over 2 + 2x } \end{array} $$ We can see how this works out by plotting the quantities in these columns as a function of $x$ over the range $[0, 1/2]$. For example, let's look at $\Delta_{5}$.
Here are graphs of the "sides" of $\Delta_{5}$, plotted as a function of $x$ in the range $[0,1/2]$: Thus the third column is smallest in the range $[0, 2/7]$, and second in the rest. These explicit values make it relatively simple to see what is going on for flat triangles. To each $i = 1$, ... , $6$ we have a map from $[0,1/2]$ to itself, and these are graphed in the following figure: A number of things about the shapes one gets by barycentric subdivision of a triangle have been proven, but some of the most interesting questions do not seem even to have been approached. Here are a few that stand out: Are the circles one sees an artefact, or is there an interesting explanation of them? As far as I can tell, they are a completely new phenomenon. The shapes form a kind of wave flowing down to the bottom of $\Sigma$. Can one find explicitly the asymptotic form of this wave? I am not familiar with anything like this in the current mathematical literature. What one might hope is that what we are seeing here is a kind of universal process, a dynamic analogue of the central limit theorem, which explains why the normal curve is ubiquitous. Of course the answer will depend on the form of the stable distribution on the flat triangles. Are there other examples of similar probabilistic flows? I haven't said anything about how the known facts have been proved. The common technique in all cases is a clever transformation of the random walk to a question concerning the product of random $2 \times 2$ matrices, about which a lot has been known for a long time. I do not see, however, how this technique could tell us anything about the problems posed above. Reading further There is not much literature on this topic, and---as far as I can tell---nothing except research publications. Amie Wilkinson, `What are Lyapunov exponents, and why are they interesting?', Bulletin of the American Mathematical Society 54, pages 79-106. It was this article that brought to my attention the problems of barycentric subdivision. The Lyapunov exponent in play here is the logarithm specifying the geometric progression of means mentioned above. Bob Hough, `Tessellation of a triangle by repeated barycentric subdivision', Electronic Communications in Probability 14 (2015), 270-277. Imre Barany, Alan F. Beardon, T. K. Carne, `Barycentric subdivision of triangles and semigroups of Möbius maps', Mathematika 43 (1996), 165-171. This is, I believe, the first investigation of the dynamics of barycentric subdivision. Persi Diaconis and Laurent Miclo, `On barycentric subdivision, with simulations'. This is the most thorough investigation in the literature. Child plot animation. A file that I have made illustrating through `page turning animation' how the formation of children behaves for different points of $\Sigma$. Keep in mind when looking at it that the graph of any transformation from 2D to 2D sits in 4D, and hence illustration is intrinsically difficult for humans. I have benefited greatly from discussions with my colleague Gordon Slade.
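As a small computational addendum, the six explicit maps on $[0,1/2]$ written down earlier for flat triangles are easy to code and iterate, which gives a quick way to approximate the fixed density mentioned above. A sketch in Python, assuming the formulas as given; the names d1, ..., d6 are mine.

import random

def d1(x): return (1 - 2*x) / 3
def d2(x): return (1 - 2*x) / (4 - 2*x)
def d3(x): return (1 + x) / (4 - 2*x)
def d4(x): return (1 + x) / (3 - 3*x) if x < 1/5 else (2 - 4*x) / (3 - 3*x)
def d5(x): return 3*x / (2 - x) if x < 2/7 else (2 - 4*x) / (2 - x)
def d6(x): return 3*x / (2 + 2*x)

maps = [d1, d2, d3, d4, d5, d6]

# A long random orbit; since the process is ergodic, a histogram of the orbit
# approximates the fixed density on [0, 1/2].
x, samples = 0.3, []
for _ in range(200000):
    x = random.choice(maps)(x)
    samples.append(x)
# e.g. numpy.histogram(samples, bins=100, range=(0, 0.5)) then plot the counts.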
CommonCrawl
Computer Science Stack Exchange is a question and answer site for students, researchers and practitioners of computer science. It only takes a minute to sign up. Is there a more intuitive proof of the halting problem's undecidability than diagonalization? I understand the proof of the undecidability of the halting problem (given for example in Papadimitriou's textbook), based on diagonalization. While the proof is convincing (I understand each step of it), it is not intuitive to me in the sense that I don't see how someone would derive it, starting from the problem alone. In the book, the proof goes like this: "suppose $M_H$ solves the halting problem on an input $M;x$, that is, decides whether Turing machine $M$ halts for input $x$. Construct a Turing machine $D$ that takes a Turing machine $M$ as input, runs $M_H(M;M)$ and reverses the output." It then goes on to show that $D(D)$ cannot produce a satisfactory output. It is the seemingly arbitrary construction of $D$, particularly the idea of feeding $M$ to itself, and then $D$ to itself, that I would like to have an intuition for. What led people to define those constructs and steps in the first place? Does anyone have an explanation on how someone would reason their way into the diagonalization argument (or some other proof), if they did not know that type of argument to start with? Addendum given the first round of answers: So the first answers point out that proving the undecidability of the halting problem was something based on Cantor and Russell's previous work and development of the diagonalization problem, and that starting "from scratch" would simply mean having to rediscover that argument. Fair enough. However, even if we accept the diagonalization argument as a well-understood given, I still find there is an "intuition gap" from it to the halting problem. Cantor's proof of the real numbers uncountability I actually find fairly intuitive; Russell's paradox even more so. What I still don't see is what would motivate someone to define $D(M)$ based on $M$'s "self-application" $M;M$, and then again apply $D$ to itself. That seems to be less related to diagonalization (in the sense that Cantor's argument did not have something like it), although it obviously works well with diagonalization once you define them. @babou summarized what was troubling me better than myself: "The problem with many versions of the proof is that the constructions seem to be pulled from a magic hat." computability proof-techniques undecidability halting-problem intuition user118967user118967 $\begingroup$ Consider the possibility that any proof of the existence of uncountable sets will have to be somewhat counterintuitive, even if we get used to the fact that they are correct. Consider also the possibility that this question (if properly rephrased) belongs to math.stackexchange.com. $\endgroup$ – André Souza Lemos May 21 '15 at 0:01 $\begingroup$ Cantor found the diagonalization argument, and now we cannot unlearn it: Aus dem Paradies, das Cantor uns geschaffen, soll uns niemand vertreiben können. $\endgroup$ – Hendrik Jan May 21 '15 at 0:49 $\begingroup$ After further thought, I have to ask why you think this is so different from Russell's paradox. Russell's paradox even looks the same if we use the notation $S(X)$ to mean $X \in S$ (i.e. think of sets as being functions whose values are true or false). Then Russell's paradox is to define D(M) = not M(M), and then consider D(D). 
$\endgroup$ – user5386 May 21 '15 at 9:09 $\begingroup$ Diagonalization is a standard technique. Sure there was a time when it wasn't known but it's been standard for a lot of time now, so your argument is simply due to your ignorance (I don't want to be rude, is a fact: you didn't know all the other proofs that use such a technique and hence find it odd the first time you see it. When you've seen it 50 times you'll probably be able to understand how it can be applied in a new situation). $\endgroup$ – Bakuriu May 21 '15 at 11:25 $\begingroup$ Maybe you would read my exchange of comments with Luke Mathieson (following his answer). His answer explains historically why Turing used self-application (one thing you ask for in your question). That seems to be pretty-much how mathematicians perceived the issues at the time. My own answer tries to give a very simple proof that does not use it (or at least shows it is not essential) which is another thing you ask for, quite different. Possibly, I might make it even simpler than in my answer. Why teachers still use Turing's proof is a sociological and pedagogical (?!) issue. cc @HendrikJan $\endgroup$ – babou May 23 '15 at 13:57 In your edit, you write: A common "popular" summarization of Turing's proof goes something like this: "If we had a machine $M_H$ that could decide whether another Turing machine halts or not, we could use this to construct another machine $D$ that, given a Turing machine $M$, would halt if and only if $M$ did not halt. But then we could pass $D$ as input to itself, and thus obtain a paradox: this machine would halt if and only if it did not halt!" Now, it's easy to see that the summarization above glosses over an important detail — the halting of the Turing machine $M$ also depends on its input, which we have not specified! But this issue can be fixed easily enough: we just need to have $D$ pick some suitable input $x_M$ for each input machine $M$, before passing them both to $M_H$. What's a suitable choice for $x_M$, given that we ultimately want to derive a contradiction? Well, a natural choice is suggested directly by the "handwavy" proof above, where we ultimately obtain the contradiction by running the machine $D$ on itself. Thus, for the behavior of $D$ to really be paradoxical in this case, i.e. when invoked as $D(D)$, what we want is for the halting of $D(M)$ to depend on the behavior of $M$ when invoked as $M(M)$. This way, we'll obtain the contradiction we want by setting $M = D$. Mind you, this is not the only choice; we could also have derived the same contradiction by, say, constructing a machine $D'$ such that $D'(M)$ halts if and only if $M(D')$ (rather than $M(M)$) does not halt. But, whereas it's clear that the machine $D$ can easily duplicate its input before passing it to $M_H$, it's not quite so immediately obvious how to construct a machine $D'$ that would invoke $M_H$ with its own code as the input. Thus, using this $D'$ instead of $D$ would needlessly complicate the proof, and make it less intuitive. Ilmari KaronenIlmari Karonen $\begingroup$ Wow, you really grokked my question! That is exactly the type of story I was looking for! Still reading everything, but this looks like it would be the accepted answer. Thanks! $\endgroup$ – user118967 May 21 '15 at 20:16 It may be simply that it's mistaken to think that someone would reason their way to this argument without making a similar argument at some point prior, in a "simpler" context. 
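To make the construction in the answer above concrete, here is a Python-flavored sketch. It is purely illustrative: halts stands for the hypothetical decider $M_H$ that the proof shows cannot exist, and the names are mine.

# Suppose, for contradiction, that this always returned the right answer in finite time:
def halts(program_source, input_data):
    ...   # hypothetical decider M_H: True iff the program halts on input_data

# The machine D of the proof: give a program its own source as input, then do the opposite.
def D(program_source):
    if halts(program_source, program_source):   # would M halt on input M?
        while True:                             # then loop forever
            pass
    else:
        return                                  # otherwise halt immediately

# Now ask what D does on its own source: D(D) halts exactly when halts(D, D) is False,
# i.e. exactly when D(D) does not halt, and that contradiction rules out halts.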
Remember that Turing knew Cantor's diagonalisation proof of the uncountability of the reals. Moreover his work is part of a history of mathematics which includes Russell's paradox (which uses a diagonalisation argument) and Gödel's first incompleteness theorem (which uses a diagonalisation argument). In fact, Gödel's result is deeply related to the proof of undecidability of the Halting Problem (and hence the negative answer to Hilbert's Entscheidungsproblem). So my contention is that your question is in a sense badly founded and that you can't reach the Halting Problem without going past the rest (or something remarkably similar) first. While we show these things to students without going through the history, if you were a working mathematician it seems unlikely that you go from nothing to Turing Machines without anything in between - the whole point of them was to formalise computation, a problem many people had been working on for decades at that point. Cantor didn't even use diagonalisation in his first proof of the uncountability of the reals, if we take publication dates as an approximation of when he thought of the idea (not always a reliable thing), it took him about 17 years from already knowing that the reals were uncountable, to working out the diagonalisation argument. In reference to the "self-application" in the proof that you mention, this is also an integral part of Russell's paradox (which entirely depends upon self-reference), and Gödel's first incompleteness theorem is like the high-powered version of Russell's paradox. The proof of the undecidability of the Halting Problem is so heavily informed by Gödel's work that it's hard to imagine getting there without it, hence the idea of "self-application" is already part of the background knowledge you need to get to the Halting Problem. Similarly, Gödel's work is a reworking of Russell's paradox, so you don't get there without the other (note that Russell was not the first to observe a paradox like this, so prototypes of the diagonalisation argument has been around in formal logic since about 600BCE). Both Turing and Gödel's work (the bits we're talking about here that is) can be viewed as increasingly powerful demonstrations of the problems with self-reference, and how it is embedding in mathematics. So once again, it's very difficult to suggest that these ideas at the level Turing was dealing with them came a priori, they were the culmination of millennia's work in parts of philosophy, mathematics and logic. This self-reference is also part of Cantor's argument, it just isn't presented in such an unnatural language as Turing's more fundamentally logical work. Cantor's diagonalisation can be rephrased as a selection of elements from the power set of a set (essentially part of Cantor's Theorem). If we consider the set of (positive) reals as subsets of the naturals (note we don't really need the digits to be ordered for this to work, it just makes a simpler presentation) and claim there is a surjection from the naturals to the reals, then we can produce an element of the power set (i.e. a real) that is not in the image of the surjection (and hence derive a contradiction) by take this element to be the set of naturals who are not in their own image under the surjection. Once we phrase it this way, it's much easier to see that Russell's paradox is really the naïve set theory version of the same idea. 
Luke Mathieson $\begingroup$ Yes, it seems the whole point of Turing was to recreate circularity (from which comes diagonalization) using machines, for the sake of introducing some abstract idea of time, with which to talk about finiteness in a new way. $\endgroup$ – André Souza Lemos May 21 '15 at 2:00 $\begingroup$ Maybe you can enlighten me, as I am not familiar with some of these proofs. I can understand that these proofs can be conducted using self referencing. I can even believe (though it might need a proof) that there is always some self reference to be found in whatever structure is constructed for the purpose. But I do not see the need to use it explicitly to conduct the proof to its conclusion. You can rephrase Cantor's argument that way, but you do not have to. And I do not see why you have to do it for the halting problem. I may have missed a step, but which? $\endgroup$ – babou May 22 '15 at 15:27 $\begingroup$ To make my previous remark clearer, the original question is: "Is there a more intuitive proof of the halting problem's undecidability ...". I am omitting the end, since my feeling is that the OP complains mainly about the lack of intuition. I believe that there is indeed a more intuitive proof, not using self-reference. You may think that using that proof is pedagogically unwise (as not related to Russell's and Gödel's work), but if it answers the question asked, what is the point of rejecting it? You seem to be denying the question rather than answering it. $\endgroup$ – babou May 22 '15 at 17:18 $\begingroup$ @babou I think the problem here is that we're answering different questions. The OP was not well phrased in that regard I guess. The repeated question in the body of the OP seems to me to be "how did someone ever think of the diagonalisation argument to prove ..." (paraphrased of course), and that "the constructions seem to be pulled from a magic hat". $\endgroup$ – Luke Mathieson May 23 '15 at 0:23 $\begingroup$ @babou, also to elaborate a little, with a proper keyboard, I don't think one way or another is necessarily pedagogically useful (it would depend heavily on context). In fact, for most modern CS courses, it's probably better to do it without the diagonalisation argument, most CS students just aren't mathematically inclined enough any more to know the background that would make it easier to understand, but I was definitely answering the question that ended the original body text: ... $\endgroup$ – Luke Mathieson May 23 '15 at 1:23 Self application is not a necessary ingredient of the proof. If there is a Turing machine $H$ that solves the halting problem, then from that machine we can build another Turing machine $L$ with a halting behavior (halting characteristic function) that cannot be the halting behavior of any Turing machine. The paradox built on the self applied function $D$ (called $L$ in this answer - sorry about notation inconsistencies) is not a necessary ingredient of the proof, but a device usable with the construction of one specific contradiction, hiding what seems to be the "real purpose" of the construction. That is probably why it is not intuitive. It seems more direct to show that there is only a denumerable number of halting behaviors (no more than Turing machines), that can be defined as characteristic halting functions associated with each Turing machine.
One can define constructively a characteristic halting function not in the list, and build from it, and from a machine $H$ that solves the halting problem, a machine $L$ that has that new characteristic halting function. But since, by construction, it is not the characteristic halting function of a Turing machine, $L$ cannot be one. Since $L$ is built from $H$ using Turing machine building techniques, $H$ cannot be a Turing machine. The self-application of $L$ to itself, used in many proofs, is a way to show the contradiction. But it works only when the impossible characteristic halting function is built from the diagonal of the list of Turing permitted characteristic halting functions, by flipping this diagonal (exchanging $0$ and $1$). But there are infinitely many other ways of building a new characteristic halting function. Then non-Turing-ness can no longer be evidenced with a liar paradox (at least not simply). The self-application construction is not intuitive because it is not essential, but it looks slick when pulled out of the magic hat. Basically, $L$ is not a Turing machine because it is designed from the start to have a halting behavior that is not that of a Turing machine, and that can be shown more directly, hence more intuitively. Note: It may be that, for any constructive choice of the impossible characteristic halting function, there is a computable reordering of the Turing machine enumeration such that it becomes the diagonal ( I do not know). But, imho, this does not change the fact that self-application is an indirect proof technique that is hiding a more intuitive and interesting fact. Detailed analysis of the proofs I am not going to be historical (but thanks to those who are, I enjoy it), but I am only trying to work the intuitive side. I think that the presentation given @vzn, which I did encounter a long time ago (I had forgotten), is actually rather intuitive, and even explains the name diagonalization. I am repeating it in details only because I feel @vzn did not emphasize enough its simplicity. My purpose is to have an intuitive way to retrieve the proof, knowing that of Cantor. The problem with many versions of the proof is that the constructions seem to be pulled from a magic hat. The proof that I give is not exactly the same as in the question, but it is correct, as far as I can see. If I did not make a mistake, it is intuitive enough since I could retrieve it after more years than I care to count, working on very different issues. The case of the subsets of $\mathbb N$ (Cantor) The proof of Cantor assumes (it is only an hypothesis) that there is an enumeration of the subsets of the integers, so that all such subset $S_j$ can be described by its characteristic function $C_j(i)$ which is $1$ if $i\in S_j$ and is $0$ otherwise. This may be seen as a table $T$, such that $T[i,j]=C_j(i)$ Then, considering the diagonal, we build a characteristic function $D$ such that $D(i)=\overline{T[i,i]}$, i.e. it is identical to the diagonal of the table with every bit flipped to the other value. There is nothing special about the diagonal, except that it is an easy way to get a characteristic function $D$ that is different from all others, and that is all we need. Hence, the subset characterized by $D$ cannot be in the enumeration. Since that would be true of any enumeration, there cannot be an enumeration that enumerates all the subsets of $\mathbb N$. This is admittedly, according to the initial question, fairly intuitive. 
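The diagonal flip is easy to check on a small example. The following Python snippet is purely illustrative: it truncates the infinite table to a 4 by 4 one, with each row standing in for one of the listed characteristic functions $C_j$, and verifies that the flipped diagonal differs from every row.
# A finite stand-in for the table T[i][j] = C_j(i); the actual table is infinite.
table = [
    [0, 1, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [1, 0, 1, 0],
]
# D(i) is the complement of the diagonal entry T[i][i].
D = [1 - table[i][i] for i in range(len(table))]
# D differs from row j at position j, so it equals none of the listed rows.
for j, row in enumerate(table):
    assert D[j] != row[j]
print(D)  # prints [1, 0, 0, 1]
The same check succeeds no matter which rows are listed, which is the whole point: the flipped diagonal can never appear as a row of the table.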
Can we make the proof of the halting problem as intuitive? The case of the halting problem (Turing) We assume we have an enumeration of Turing machines (which we know is possible). The halting behavior of a Turing machine $M_j$ can be described by its characteristic halting function $H_j(i)$ which is $1$ if $M_j$ halts on input $i$ and is $0$ otherwise. This may be seen as a table $T$, such that $T[i,j]=H_j(i)$ Then, considering the diagonal, we build a characteristic halting function $D$ such that $D(i)=\overline{T[i,i]}$, i.e. it is identical to the diagonal of the table with every bit flipped to the other value. There is nothing special about the diagonal, except that it is an easy way to get a characteristic halting function $D$ that is different from all others, and that is all we need (see note at the bottom). Hence, the halting behavior characterized by $D$ cannot be that of a Turing machine in the enumeration. Since we enumerated them all, we conclude that there is no Turing machine with that behavior. No halting oracle so far, and no computability hypothesis: We know nothing of the computability of $T$ and of the functions $H_j$. Now suppose we have a Turing machine $H$ that can solve the halting problem, such that $H(i,j)$ always halts with $H_j(i)$ as result. We want to prove that, given $H$, we can build a machine $L$ that has the characteristic halting function $D$. The machine $L$ is nearly identical to $H$, so that $L(i)$ mimics $H(i,i)$, except that whenever $H(i,i)$ is about to terminate with value $1$, $L(i)$ goes into an infinite loop and does not terminate. It is quite clear that we can build such a machine $L$ if $H$ exists. Hence this machine should be in our initial enumeration of all machines (which we know is possible). But it cannot be since its halting behavior $D$ corresponds to none of the machines enumerated. Machine $L$ cannot exist, which implies that $H$ cannot exist. I deliberately mimicked the first proof and went into tiny details My feeling is that the steps come naturally in this way, especially when one considers Cantor's proof as reasonably intuitive. One first enumerates the litigious constructs. Then one takes and modifies the diagonal as a convenient way of touching all of them to get an unaccounted for behaviour, then gets a contradiction by exhibiting an object that has the unaccounted for behaviour ... if some hypothesis were to be true: existence of the enumeration for Cantor, and existence of a computable halting oracle for Turing. Note: To define the function $D$, we could replace the flipped diagonal by any other characteristic halting function, different from all the ones listed in $T$, that is computable (from the ones listed in $T$, for example) provided a halting oracle is available. Then the machine $L$ would have to be constructed accordingly, to have $D$ as characteristic halting function, and $L(i)$ would make use of the machine $H$, but not mimic so directly $H(i,i)$. The choice of the diagonal makes it much simpler. Comparison with the "other" proof The function $L$ defined here is apparently the analog of the function $D$ in the proof described in the question. We only build it in such a way that it has a characteristic halting function that corresponds to no Turing machine, and get directly a contradiction from that. This gives us the freedom of not using the diagonal (for what it is worth). The idea of the "usual" proof seems to try to kill what I see as a dead fish. 
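For concreteness, the machine $L$ built from $H$ above can be sketched in Python-like pseudocode. Here H is the assumed halting decider, with H(i, j) = 1 exactly when machine $M_j$ halts on input $i$; no such H can actually exist, which is what the argument ultimately shows, and machines are abstracted into integer indices.
def build_L(H):
    # H(i, i) is assumed to always halt and return 1 or 0.
    def L(i):
        if H(i, i) == 1:
            while True:   # H says M_i halts on input i, so L loops forever on i
                pass
        else:
            return 0      # H says M_i does not halt on i, so L halts on i
    return L
By construction the halting behavior of $L$ is the flipped diagonal $D$, so $L$ cannot be any machine in the enumeration, and since $L$ is obtained from $H$ by ordinary machine-building steps, $H$ cannot exist. The usual proof reaches the same contradiction with one extra step of self-application.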
It says: let's assume that $L$ is one of the machines that were listed (i.e., all of them). Then it has an index $j_L$ in that enumeration: $L=M_{j_L}$. Then if $L(j_L)$ halts, we have $T[j_L,j_L]=H(j_L,j_L)=1$, so that $L(j_L)$ will loop by construction. Conversely, if $L(j_L)$ does not halt, then $T[j_L,j_L]=H(j_L,j_L)=0$ so that $L(j_L)$ will halt by construction. Thus we have a contradiction. But the contradiction results from the way the characteristic halting function of $L$ was constructed, and it seems a lot simpler just to say that $L$ cannot be a Turing machine because it is constructed to have a characteristic halting function that is not that of a Turing machine. A side-point is that this usual proof would be a lot more painful if we did not choose the diagonal, while the direct approach used above has no problem with it. Whether that can be useful, I do not know. baboubabou $\begingroup$ Very nice, thank you! It seems that somehow you managed to go around the self-applying constructions that I found troublesome. Now I wonder why people found them necessary in the first place. $\endgroup$ – user118967 May 21 '15 at 20:15 $\begingroup$ @user118967 I tried to underscore that using the diagonal is not really important. All you want is to define a characteristic halting function that is different from all those listed in the table, and that is computable from those listed, provided we have a halting oracle. There are infinitely many such characteristic halting functions. Now that seems not so visible in the usual proof, and it may be that some constructs of that proof seem arbitrary simply because they are, like chosing the diagonal in the proof above. It is only simple, not essential. $\endgroup$ – babou May 21 '15 at 21:41 $\begingroup$ @user118967 I added and introduction that summarizes the analysis of the various proofs. It complement the comparison between proofs (with and without self application) that is given in the end. I do not know whether I did away with diagonalization as asked :) (I think it would be unfair to say so) but I do hint on how to do away with the obvious diagonal. And the proof does not use self-application, which seems an unnecessary, but slick looking, trick hiding what may seem a more important issue, the halting behavior. $\endgroup$ – babou May 22 '15 at 10:33 $\begingroup$ @user118967 To answer your first comment, and after reading the most upvoted answer, it seem that the main motivation is the link with the work of Russell and Gödel. Now I have no idea whether it is really essential for that purpose, and the self-applying constructions variant can certainly be studied for the purpose, but I don't see the point of imposing it on everyone. Furthermore, the more direct proof seems more intuitive, and does give the tool to further analyse the self-applying version. Why then? $\endgroup$ – babou May 22 '15 at 17:27 $\begingroup$ Yes, I tend to agree with you on that. $\endgroup$ – user118967 May 22 '15 at 18:01 There is also a proof of this fact that uses a different paradox, Berry's paradox, which I heard from Ran Raz. Suppose that the halting problem were computable. Let $B(n)$ be the smallest natural number that cannot be computed by a C program of length $n$. That is, if $S(n)$ is the set of natural numbers computed by C programs of length $n$, then $B(n)$ is the smallest natural number not in $S(n)$. Consider the following program: Go over all C programs of length at most $n$. 
For each such program, check if it halts; if it does, add it to a list $L$. Output the first natural number not in $L$. This is a program for computing $B(n)$. How large is this program? Encoding $n$ takes $O(\log n)$ characters, and the rest of the program doesn't depend on $n$, so in total the length is $O(\log n)$, say at most $C\log n$. Choose $N$ so that $C\log N \leq N$. Then our program, whose length is at most $N$, computes $B(N)$, contradicting the definition of $B(N)$. The same idea can be used to prove Gödel's incompleteness theorems, as shown by Kritchman and Raz. Yuval Filmus $\begingroup$ Perhaps it's in the paper I cite, or in the classic monograph Kolmogorov Complexity by Li and Vitányi. $\endgroup$ – Yuval Filmus Apr 9 '16 at 14:53 $\begingroup$ By the way, do you think that this method provides an attack on the NP vs CoNP problem? $\endgroup$ – Mohammad Al-Turkistany Apr 9 '16 at 14:59 $\begingroup$ No. Such problems are beyond us at the moment. $\endgroup$ – Yuval Filmus Apr 9 '16 at 15:00 $\begingroup$ "and the rest of the program doesn't depend on $n$" Why? $\endgroup$ – SK19 Mar 8 '18 at 22:41 $\begingroup$ The parameter $n$ only appears once in the program. The execution of the program depends on $n$, but $n$ itself only appears once in its source code. $\endgroup$ – Yuval Filmus Mar 8 '18 at 22:53 There's a more general idea involved here called the "recursion theorem" that may be more intuitive: Turing machines can use their own description (and thus run themselves). More precisely, there is a theorem: For any Turing machine T, there is a Turing machine R that computes R(x) = T(R;x). If we had a Turing machine that could solve the halting problem, then using the idea described above, we can easily construct a variety of "liar" Turing machines: e.g. in python-like notation,
def liar():
    if halts(liar):
        return not liar()  # or we could do an infinite loop
The more complicated argument is essentially just trying to do this directly without appealing to the recursion theorem. That is, it's repeating a recipe for constructing "self-referential" functions. e.g. given a Turing machine T, here is one such recipe for constructing an R satisfying R(x) = T(R; x). First, define S(M; x) = T(M(M; -); x), where by M(M; -), what I really mean is that we compute (using the description of M) and plug in a description of a Turing machine that, on input y, evaluates M(M; y). Now, we observe that if we plug S into itself S(S; x) = T(S(S; -); x) we get the duplication we want. So if we set R = S(S; -), then R(x) = S(S; x) = T(S(S; -); x) = T(R; x), as desired.
If so, why does is it ok for it to return not liar() in the first case? Shouldn't it be False (or infinite loop)? $\endgroup$ – user118967 May 21 '15 at 20:27 $\begingroup$ @user: Nope: you're got the quantifiers wrong. The theorem is "for every $T$, there exists a $R$ such that $R(x) = T(R; x)$". You are thinking about "There exists a $T$ such that for every $R$, $R(x) = T(R; x)$. $\endgroup$ – user5386 May 21 '15 at 20:36 the Turing proof is quite similar to Cantors proof that the cardinality of reals ("uncountable") is larger than the cardinality of the rationals ("countable") because they cannot be put into 1-1 correspondence but this is not noted/ emphasized in very many references (does anyone know any?). (iirc) a CS prof once showed this years ago in class (not sure where he got it himself). in Cantors proof one can imagine a grid with horizontal dimension the nth digit of the number and the vertical dimension the nth number of the set. the Turing halting proof construction is quite similar except that the contents of the table are Halt/ Nonhalt for 1/ 0 instead, and the horizontal axis is nth input, and the vertical axis is nth computer program. in other words the combination of computer programs and inputs are countable but the infinite table/ array is uncountable based on a universal machine simulator construction that can "flip" a halting to a nonhalting case assuming a halting detector machine exists (hence reductio ad absurdam). some evidence that Turing had Cantors construction partly in mind is that his same paper with the halting proof talks about computable numbers as (along the lines of) real numbers with computable digits. vznvzn $\begingroup$ addendum, there is indeed a very "intuitive" way to view undecidability but it requires a lot of higher math to grasp (ie intuition of a neophyte is much different than intuition of an expert). mathematicians do consider the halting problem and godels thm identical proofs via a Lawvere fixed point theorem, but this is an advanced fact not much accessible to undergraduates "yet". see halting problem, uncomputable sets, common math problem? Theoretical Computer Science & also linked post for refs $\endgroup$ – vzn Mar 18 '17 at 15:01 At this point it is worth noting the work by Emil Post who is (justly) credited with being a co-discoverer of the basic results of computability, though sadly was published too late to be considered a co-discoverer of the solution to the Entscheidungsproblem. He certainly participated in the elaboration of the so-called Church-Turing thesis. Post was motivated by very philosophical considerations, namely the theoretical limitations of the human ability to compute, or even get precise answers in a consistent manner. He devised a system, now called Post canonical systems, the details of which are unimportant, which he claimed could be used to solve any problem which can be solved soely by manipulation of symbols. Interestingly, he explicitly considered mental states to be part of the "memory" explicitly, so it is likely that he at least considered his model of computation to be a model of human thought in it's entirety. The Entscheidungsproblem considers the possibility of using such a means of computation to say, determine the theoremhood of any proposition expressible in the system of the Principia Mathematica. 
But the PM was a system explicitly designed to be able to represent all of mathematical reasoning, and, by extension (at least at the time, when Logicism was still in vogue) all of human reasoning! It's therefore very unsurprising then, to turn the attention of such a system to the Post canonical systems themselves, just as the human mind, via the works of Frege, Russel and logicians of the turn of the century had turned their attention to the reasoning faculty of the human mind itself. So it is clear at this point, that self-reference, or the ability of systems to describe themselves, was a rather natural subject in the early 1930s. In fact, David Hilbert was hoping to "bootstrap" mathematical reasoning itself, by providing a formal description of all of human mathematics, which then could be mathematically proven to be consistent itself! Once the step of using a formal system to reason about itself is obtained, it's a hop and a skip away from the usual self-referential paradoxes (which have a pretty old history). Since all the statements in Principia are presumed to be "true" in some metaphysical sense, and the Principia can express program p returns result true on input n if a program exists to decide all theorems in that system, it is quite simple to directly express the liar's paradox: this program always lies. can be expressed by The program p always returns the opposite of what the principia mathematica say p will return. The difficulty is building the program p. But at this point, it's rather natural to consider the more general sentence The program p always returns the opposite of what the PM say q will return. for some arbitrary q. But it's easy to build p(q) for any given q! Just compute what PM predicts it will output, and return the opposite answer. We can't just replace q by p at this point though, since p takes q as input, and q does not (it takes no input). Let's change our sentence so that p does take input: The program p returns the opposite of what PM says q(r) will return. Arg! But now p takes 2 pieces of input: q and r, whereas q only takes 1. But wait: we want p in both places anyways, so r is not a new piece of information, but just the same piece of data again, namely q! This is the critical observation. So we finally get The program p returns the opposite of what PM says q(q) will return. Let's forget about this silly "PM says" business, and we get The program p(q) returns the opposite of what q(q) will return. This is a legitimate program provided we have a program that always tells us what q(q) returns. But now that we have our program p(q), we can replace q by p and get our liar's paradox. codycody Not the answer you're looking for? Browse other questions tagged computability proof-techniques undecidability halting-problem intuition or ask your own question. Is there a proof for the halting problem that does not involve an infinite nest of functions? Proof of the undecidability of the Halting Problem Halting Problem without self-reference: why does this argument not suffice (or does it)? Is there any demonstrably uncomputable concrete problem which does not rely on diagonalization? The halting problem for laymen Is there any concrete relation between Gödel's incompleteness theorem, the halting problem and universal Turing machines? Is the undecidable function $UC$ well-defined for proving the undecidability of Halting Problem? Alternative proof for the undecidability of $A_{TM}$ Does the proof of undecidability of the Halting Problem cheat by reversing results? 
Could the Halting Problem be "resolved" by escaping to a higher-level description of computation? Is possible to prove undecidability of the halting problem in Coq? Can a weaker version of the Halting Problem be solved?
Task Running around a track II In a $400$ meter race, runners are staggered with those in the outermost lanes starting the furthest ahead on the track: this way they can all complete the race at a finishing line perpendicular to the track in the straightaway where they begin the race. The width of each lane is $1.22$ meters. Also important for this problem is the fact, as per Olympic guidelines, that the $400$ meter distance for lane $1$ is measured $30$ centimeters from the inside of the track, and $20$ centimeters from the inside of each other lane. This is pictured below: How does the perimeter of the track $20$ centimeters from the inside of lane $2$ compare to the perimeter $30$ centimeters from the inside of lane $1$? How far ahead should the runner in lane $2$ start, compared to the runner in lane $1$, if they are both to complete $400$ meters at the finishing line on the straightaway section? In a longer distance race where the runners are all toward the inside of the track, why is it more efficient for a runner wishing to pass others to do so in the straightaway section of the track instead of through the curves? The goal of this task is to model a familiar object, an Olympic track, using geometric shapes. Calculations of perimeters of these shapes explain the staggered start of runners in a $400$ meter race. The specifications for an Olympic track indicating that the distance around for lanes $2$ and greater is measured $20$ centimeters from the inside of the given lane are found on pages $35$ and $36$ of the following document: For lane $1$ this measurement is taken $30$ centimeters from the inside of the lane while for the other lanes it is taken $20$ centimeters from the inside of the lane. The teacher may wish to explain this to the students so that the values in the set-up of the problem make sense. In order to get the values in the solution below, an approximation of $3.1416$ for $\pi$ is sufficient. If students are using a scientific calculator to evaluate the expressions they can use the $\pi$ button but if not teachers may wish to share this value with students. This task addresses the ''staggered start'' of racers who stay in their lanes for one lap. Because the runners on the outside lanes have further to go through the curves, they start an appropriate distance ahead of runners inside of them on the track. The staggered starts can be calculated explicitly as is done here. This task is primarily intended for instruction as it builds on ''Running around a track I'' and some time is necessary to explain the different numbers occurring in the task (namely the distances of $30$ and $20$ centimeters where perimeters are measured). We are given that the perimeter of the track $30$ centimeters from the inside of lane 1 is $400$ meters. For the perimeter $20$ centimeters from the inside of lane 2, we can calculate as follows. The straightaway sections are still each $84.39$ meters long. However, the curved sections form a circle whose radius is now $$ 36.5 + 1.22 + 0.2 = 37.92 $$ meters. The diameter of the circle will be $2 \times 37.92 = 75.84$ meters. So the perimeter of the track $20$ centimeters from the inside of lane 2 is $$ 2 \times 84.39 + \pi \times 75.84 \approx 407.04. $$ So the perimeter of the track $20$ centimeters from the inside of lane 2 is approximately $7.04$ meters larger than the perimeter $30$ centimeters from the inside of lane 1. For the perimeter $20$ centimeters from the inside of lane 3, we can calculate as follows.
The straightaway sections are still each $84.39$ meters long. However, the curved sections form a circle whose radius is now $$ 36.5 + 2 \times 1.22 + 0.2 = 39.14 $$ meters. The diameter of the circle will be $2 \times 39.14 = 78.28$ meters. So the perimeter $20$ centimeters from the inside of lane 3 is $$ 2 \times 84.39 + \pi \times 78.28 \approx 414.70 $$ meters. So the perimeter of the track $20$ centimeters from the inside of lane 3 is approximately $7.66$ meters larger than the perimeter $20$ centimeters from the inside of lane 2. Note that to compute the difference in length between the perimeters in different lanes, the straightaway sections do not need to be considered because they are the same for all lanes. So we are only interested in comparing the circumferences of two circles with different diameters $d_1$ and $d_2$. This will be given by the formula $$ \pi \times (d_2 - d_1) $$ assuming $d_2$ is the larger of the diameters. So as we continue to move out along the track, comparing lane 3 to lane 4 and so on, these values will all be the same since the $d_2 - d_1$ term will always be $2 \times 1.22$ meters. If a runner wishes to pass in the straightaway, the only extra distance to be travelled is the distance to move outside of the other runner when passing. If a runner passes on the curved section of the track, however, then she is running around the circumference of a bigger circle and consequently is running further, in addition to the distance to move outside of the other runner and then back again.
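These calculations are easy to verify numerically. The short Python sketch below uses only the values quoted above (84.39 meter straightaways, a 36.5 meter radius for the inside of the track, 1.22 meter lanes, and the 30 cm and 20 cm measurement offsets) to reproduce the lane perimeters and the stagger distances.
import math

STRAIGHT = 84.39      # length of each straightaway, in meters
INNER_RADIUS = 36.5   # radius of the inside of lane 1, in meters
LANE_WIDTH = 1.22     # in meters

def measurement_radius(lane):
    # Lane 1 is measured 30 cm from the inside of the track,
    # every other lane 20 cm from the inside of that lane.
    offset = 0.30 if lane == 1 else 0.20
    return INNER_RADIUS + (lane - 1) * LANE_WIDTH + offset

def perimeter(lane):
    # Two straightaways plus a full circle at the measurement radius.
    return 2 * STRAIGHT + 2 * math.pi * measurement_radius(lane)

for lane in (1, 2, 3):
    print(lane, round(perimeter(lane), 2))    # 400.0, 407.04, 414.7
print(round(perimeter(2) - perimeter(1), 2))  # lane 2 stagger: about 7.04 m
The constant difference between successive outer lanes (about $7.66$ meters from lane $2$ onward) falls out of the same script by comparing consecutive perimeters.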
Synthesis and performance of a polymeric scale inhibitor for oilfield application Original Paper - Production Engineering Heming Luo1, Dejun Chen1, Xiaoping Yang1, Xia Zhao1, Huixia Feng1, Mingyang Li2 & Junqiang Wang3 Journal of Petroleum Exploration and Production Technology volume 5, pages 177–187 (2015) A potential polymeric scale inhibitor for oilfield use was synthesized by solution polymerization. The static inhibition rate on calcium carbonate, calcium sulfate, barium sulfate, and strontium sulfate can reach more than 95 %. The inhibition performance of the polymer on calcium fluoride was also evaluated. Effects including test temperature, NaCl concentration, pH value, and evaluation time on the performance of the polymer were researched. Scanning electron microscopy and X-ray diffraction analyses were used to investigate the influence of the antiscalant on the scale crystal. The impact of carbon dioxide on the calcium carbonate crystal was also found. The inhibitor has an inhibition rate of more than 90 % in actual oilfield water samples. Scale is often defined as the precipitation of inorganic sediments from aqueous solution, and scale deposition is a problem commonly encountered in many oilfield production processes, for instance water injection, oil extraction, gathering and transport, warming treatment, demulsification, and crude oil dehydration and desalting; the scale is prone to appear downhole and on oil well casing, oil pipelines and other production equipment (Liu et al. 2012; Dickson et al. 2011). The scale deposition products in oilfields mainly consist of calcium carbonate, calcium sulfate, barium sulfate and strontium sulfate, iron, silicon sediment and other insoluble solids (Senthilmurugan et al. 2011; Dickinson et al. 2012), and scaling often results from changes in thermodynamic conditions or from fluid incompatibility. Scale precipitation can cause various damages including blockage of pipelines and equipment, energy losses, accelerated corrosion, and severe accidents, which affect the safety of production and the economic benefit of the petroleum industry (El-Said et al. 2009) and thus should be avoided as far as possible in the oilfield industry. Adding a chemical antiscalant (also called a scale inhibitor) is an economical, simple and effective route for the prevention of scaling (Dickson et al. 2011). Numerous polymeric scale inhibitors with good scale inhibition performance have been reported in the literature, of which maleic polymers are widely used as scale inhibitors. For instance, copolymers of maleic acid-ortho toluidine, maleic acid-acrylic acid and maleic acid-acrylamide were adopted as inhibitors for calcium sulfate and calcium carbonate (Senthilmurugan et al. 2010), and exhibited good inhibition performance. Itaconic acid used as a monomer to synthesize a polymeric inhibitor was also studied by Shakkthivel and Vasudevan (2007). Recently, hyperbranched polymers (Jensen and Kelland 2012), modified polymers (Guo et al. 2012) and quadripolymers (Zhang et al. 2007) used as inhibitors have drawn more and more attention from industry and academia. However, most of the detailed studies focus on inhibitor performance for calcium carbonate and calcium sulfate, while few have studied barium sulfate and strontium sulfate simultaneously.
Calcium fluoride scale is also found in oilfields and on other industrial equipment, but few literature reports address it (Shen et al. 2013). A preliminary study of the inhibition performance of the synthesized polymer on calcium fluoride was carried out in this work. As many studies have noted, it is worthwhile to search for more suitable copolymers for scale prevention in oilfield applications. In this work, a copolymer scale inhibitor containing carboxyl, hydroxyl, amide, sulfonic acid and ester groups was synthesized, and its inhibition performance on calcium carbonate, calcium sulfate, barium sulfate, strontium sulfate and calcium fluoride was evaluated. Furthermore, the scale inhibition performance of the polymer in produced water from an oilfield was also evaluated. Because calcium carbonate is a very common deposit in oilfields, and barium sulfate is one of the most difficult to deal with, the effects of test temperature, NaCl concentration, pH value, and evaluation time on the inhibition performance of the as-prepared polymer were studied. By means of XRD and SEM, the influence of the synthesized copolymer on the crystal structure of calcium carbonate, strontium sulfate and barium sulfate was investigated. In particular, an impact of carbon dioxide on the calcium carbonate crystal was found, which has not been clearly reported in the existing literature. Experimental work Maleic anhydride, acrylamide, and hydroxypropyl acrylate were of analytical grade. 2-acrylamido-2-methyl-1-propanesulfonic acid was of purity of 99 %. All the chemical reagents used for the solution preparation were of analytical grade. Carbon dioxide was of purity of 99.9 %. Deionized water was used throughout the experiments. Synthesis of the copolymer The copolymer was synthesized in an aqueous medium through free-radical polymerization using ammonium persulfate as initiator in an air atmosphere according to the following optimized procedure. First, 60.0 g of maleic anhydride was added into 50 mL of deionized water, heated to 55 °C and dissolved; then 50.0 g of sodium hydroxide in 150 mL of deionized water was added dropwise with continuous stirring. After that the solution was heated to 75 °C, then 150 mL of deionized water combined with 30.0 g of acrylamide, 20.0 g of 2-acrylamido-2-methyl-1-propanesulfonic acid and 15.0 g of hydroxypropyl acrylate was added dropwise, with 18.2 g of ammonium persulfate aqueous solution (in 50 mL of deionized water) added dropwise at the same time, over about 60 min. Then the mixture was heated to 80 °C and maintained for 4 h with stirring. The pH value of the medium was adjusted to 7–8 with sodium hydroxide solution after cooling to 45 °C, and finally the polymeric scale inhibitor was synthesized. Characterization and thermal stability of copolymer The polymer was purified through precipitation using methanol for characterization. Then the polymer was dried in a vacuum oven at a pressure of 0.07 MPa at 70 °C for 16 h. The structure of the as-dried polymer sample was confirmed by FT-IR (Fourier transform infrared spectrometer, USA Nicolet AVTAR 360). The thermal stability was studied via thermogravimetric analysis (Germany STA-6400) in a nitrogen environment from room temperature to 600 °C at a ramping rate of 10 °C/min.
Evaluation of the inhibition efficiency Following the NACE Standard TM0374-2007 and Chinese Petroleum Industry Standard SY/T 5673-1993 procedures, the static evaluation test for qualifying the scale inhibiting properties of the polymer was conducted. The solutions for evaluation of the scale inhibition performance of the polymer were prepared as shown in Table 1. Solutions C and D for calcium carbonate deposition inhibition performance evaluation were saturated using CO2 at room temperature while the headspace of the test flask was 30–35 mL. In the first step, a certain quantity of scale inhibitor was weighed into the Erlenmeyer flask; in the second step, 50 mL of cation solution (e.g., brine A, C, E or G) was added and shaken uniformly; and in the third step, 50 mL of anion solution (e.g., B, D, F, or H) was added, and the flask was then plugged with a plastic stopper wrapped with polyethylene (PE) film. The entire flask was wrapped with PE film and then packed tightly using Sellotape. Finally, the Erlenmeyer flasks were placed in the oven for a predetermined time at a constant temperature (details in Table 1). For the calcium fluoride test, a similar procedure was adopted except that polyethylene terephthalate (PET) bottles were used in place of the Erlenmeyer flasks for the static evaluation test. Table 1 Brines and evaluation condition for inhibition experiments Inhibitor efficiency was calculated based on the remaining Ca2+, Ba2+, and Sr2+ ions in solution according to the following equation. $$\text{Percentage inhibition (\%)} = \frac{m_2 - m_0}{m_1 - m_0} \times 100\,\%$$ where $m_2$ is the mass concentration of Ca2+, Ba2+, or Sr2+ ions remaining in solution after the inhibitor has acted, $m_1$ is the mass concentration of Ca2+, Ba2+, or Sr2+ ions in the solution in which 50 mL of deionized water without anions was added in the third step, and $m_0$ is the mass concentration of Ca2+, Ba2+, or Sr2+ ions in the solution with no inhibitor. The measurement of the mass concentration of Ca2+, Ba2+, and Sr2+ ions also followed the method specified by the standards TM0374-2007 and SY/T 5673-1993. Effects on the inhibition performance of inhibitor As we know, the tendency of scale forming is affected by temperature (Dyer and Graham 2002), and the polymeric inhibitor has different inhibition ability at different temperatures. Generally, the performance of an inhibitor is challenged at higher temperatures. Thus, the inhibition performance of the inhibitor at 80 and 90 °C was also studied in this work. With the variation of acid gas concentration (such as CO2, H2S), or under the influence of added demulsifiers, corrosion inhibitors and other agents, the pH value of the oilfield formation water changes, and as a result the scaling tendency of the water changes as well. In oilfield production systems, the pH value of the oilfield produced water is generally between 6.5 and 9.0. The pH values of 7.0, 7.4, 8.0, 8.4, and 9.0 were selected here to study the inhibition performance of the polymer on calcium carbonate and barium sulfate. The pH of the solution was adjusted by borax–boric acid buffer solution. The solutions for calcium carbonate evaluation were saturated with CO2 at room temperature after adjusting to the required pH value. Concentration of NaCl High salinity is a significant feature of oilfield water. It is primarily dominated by sodium chloride, potassium chloride and magnesium chloride, among which sodium chloride accounts for more than 90 % in most cases.
Salts have an obvious impact on the scaling tendency by affecting the ionic strength of the oilfield produced water, and influence the properties of the inhibitor. The inhibition properties of the copolymer at different sodium chloride concentration were investigated. For the calcium carbonate inhibition performance evaluation, the concentration of NaCl was 5.5, 11, 22, 33, 66 and 99 g/L, respectively. For evaluating the inhibition performance on barium sulfate, the concentration of 3.75, 7.5, 15, 30 and 60 g/L was selected. Evaluation time Effective inhibition time is an important aspect for the evaluation of the performance of scale inhibitor, it is necessary to study the inhibition performance of inhibitor along with the time. That will determine the squeezing or injecting method of inhibitor (continuous or intermittent). Through the research of the effective time of the scale inhibitor, we can have a better understanding on the inhibition function. Different heat time from 8 to 72 h was chosen to evaluate the performance of the scale inhibitor. XRD and SEM characterization The calcium carbonate, barium sulfate, and strontium sulfate deposition were obtained at low/no inhibitor dosage; they were carefully collected and dried for characterization. The X-ray diffraction (XRD, CAD-SDPMH, Netherlands Enraf–Nonius) was used to study the crystal structure of the scale. The scanning electron microscope (SEM, HITACHI S-4800, Japan) was used to observe the surface morphology of the scale crystal. The inhibition performance of the inhibitor in oilfield water The water–cut crude oil was from an oilfield of Northwest area in China. Test brines (produced water) were obtained through chemical demulsification/high-speed centrifugal and membrane filtering. Table 2 shows the concentration of compositions in the produced water. The following evaluation conditions are listed as below: heated at 60 °C for 24 h without carbon dioxide as test 1; heated at 60 °C for 24 h, and then heated at 70 °C for 24 h without carbon dioxide as test 2; the evaluation experiments for produced water with saturated carbon dioxide heated at 60 °C for 24 h and then heated at 70 °C for 24 h were done as test 3. These tests were undertaken to simulate the actual situation of oilfield treatment as much as possible. The measurement of the inhibition performance of the as-prepared inhibitor was referred to the method specified by the standard TM0374-2007 and SY/T 5673-1993. Table 2 Composition of oilfield produced water (mg/L) Characterization of copolymer The FT-IR spectra presented in Fig. 1 were used for confirming the structure of copolymer. The possible carboxylic group (O–H) or amine hydrogen (N–H) in the polymer is indicated by the peak at 3,430 cm−1. The peak at 1,660 cm−1 represents the characteristic absorption of C=O of primary amide. The peaks at 1,580 and 1,402 cm−1 represent the C=O and band of C–O stretching vibration of –COONa (Su et al. 2002), and 1,402 cm−1 also represents the combination of C–O stretching vibration and O–H in plane deformation (Senthilmurugan et al. 2010). The peaks at 1,320 and 1,130 cm−1 represent the asymmetric and symmetric stretching vibrations of C–O–C in ester group. At peak of 1,050 cm−1, –OH in primary alcohol and S=O stretching vibration can be found. All these facts confirm the structure of the expected copolymer. The functional groups were responsible for the scale inhibition properties, the inhibition ability of the polymer could be revealed. 
FT-IR spectrum of the polymer The TG–DTA spectrum in Fig. 2 indicates that less than 15 % weight loss occurred at 270 °C, revealed that the polymer presents a good thermal stability. The heat absorption peak existed lower than 100 °C as can be seen from the DTA spectrum may caused by the evaporation of residual water. The DTA curve keeps relative smooth between 150 and 300 °C revealed that the polymer possesses favorable stability. The initial decomposition temperature of the polymer is about 330 °C through the DTA analysis and TG curve fitting. It is reasonably concluded that the polymer possesses a well anti-temperature capability. TG spectrum of the polymer Evaluation of scale inhibition performance Figure 3 shows the inhibition performance (at 70 °C) of the polymer on calcium sulfate (Fig. 3a), calcium carbonate (Fig. 3b), strontium sulfate (Fig. 3c), barium sulfate (Fig. 3d) with the dosage of copolymer. As can be seen from the figures, with the increase of the inhibitor dosage, the inhibition efficiency increased, when the dosage reaches a critical value, the inhibition rate is highest which is higher than 95 %, after that the inhibition rate exhibited a plateau or a slight decrease tendency. For calcium sulfate, the inhibitor exhibited high inhibition efficiency at a low dosage of 1.25 mg/L (ppm), the best inhibition efficiency of 97 % can be obtained at dosage of 5 ppm. For the calcium carbonate inhibition performance evaluation, the inhibition rate increased sharply higher than 50 % with the antiscalant dosage increased from 2.5 to 7.5 ppm. At the optimum dosage of 7.5 ppm, a inhibition efficiency of 99 % on calcium carbonate can be obtained. It can be seen from Fig. 3 that the optimum inhibitor dosage for barium sulfate is 40 ppm (inhibition efficiency of 99 %), for strontium sulfate is 80 ppm (inhibition efficiency of 97.5 %) under the evaluations. It is clearly demonstrated that the polymer exhibits a relatively good inhibition performance on calcium sulfate, calcium carbonate, strontium sulfate, and barium sulfate. Relation curve of dosage to inhibition efficiency (70 °C) on a calcium sulfate, b calcium carbonate, c barium sulfate, d strontium sulfate Figure 4 shows the inhibition rate of the polymer on calcium fluoride. At dosage lower than 100 ppm, very low inhibition rate can be observed, the inhibition rate increases with further addition of the polymer inhibitor. The opaque suspension occurred between 150 and 300 ppm inhibitor dosage, which means calcium fluoride was dispersed at the solution. The inhibition rate on calcium fluoride became 94 % at the inhibitor dosage of 400 ppm. Undoubtedly, the dosage is extremely high, so further study on dispersant and inhibitor on calcium fluoride is needed. Inhibition performance (70 °C) of the polymer on calcium fluoride Effects on the inhibition performance of the antiscalant As can be seen from the Fig. 5a, the scale inhibition efficiency on calcium carbonate achieves higher than 90 % with the dosage of 20 ppm at 80 °C while a dosage of 40 ppm is needed for 90 °C. This may be attributed to the fact that the high temperature makes the calcium carbonate easy to form, as a result, the additional antiscalant dosage is necessary to get better efficiency. When the dosage added up to a certain value (40 ppm), the inhibition rate at 90 °C is higher than 80 °C, the fact that more polymer adsorption on the calcium carbonate crystal nuclei at higher temperature may account for this phenomenon. For inhibition property on barium sulfate (Fig. 
5b), when the optimal dosage at 70 °C (40 ppm) was added, the nearly identical inhibition rate (96 %) can be obtained at 80 and 90 °C. On the whole, at higher temperatures, extra inhibitor dosage is needed to maintain a high inhibition rate for calcium carbonate control, while no further dosage is required for barium sulfate in the scope of this work. Effect of temperature on the inhibition efficiency on a calcium carbonate (inhibitor dosage of 10 ppm), b barium sulfate (inhibitor dosage of 40 ppm) Figure 6 shows the effect of the concentration of sodium chloride on the inhibition efficiency of the polymer. Scale inhibition rate is increasing with the increase of the NaCl content, the inhibition rate on calcium carbonate rises dramatically when the concentration is higher than 11 g/L, the inhibition rate reached more than 95 % as the NaCl concentration higher to 99 g/L. At the NaCl concentration of 3.75 g/L, the scale inhibitor possessed poor performance for barium sulfate inhibition, and at the salt content of 7.5 g/L or more, the scale inhibition rate was more than 90 %. And the scale inhibition rate on barium sulfate is better at the higher concentration of NaCl. Therefore, the scale inhibitor with high concentration of NaCl, i.e., higher than 7.5 g/L, is more suitable for the scale inhibition. Effect of concentration of NaCl on the inhibition efficiency on a calcium carbonate (inhibitor dosage of 10 ppm), b barium sulfate (inhibitor dosage of 40 ppm) Figure 7 shows the effect of pH value on the scale inhibitor performance. When the pH is lower than 8.5, the scale inhibitor presented a good ability on calcium carbonate, there is a slight decrease when the pH is higher than 8.5 (i.e., 9.0). The reason may be due to that the high concentration of hydroxyl at pH value of 9, reacted easily with bicarbonate ions and transformed to a large amount of carbonate ions (more than 2,600 mg/L) which generated calcium carbonate at identical dosage of antiscalant, so more scale precipitation appeared. For barium sulfate, it can be concluded that the better scale inhibition is shown at the higher pH though the scale inhibition rate of pH = 9.0 is slightly lower than that of 8.4. It should be attributed that the carboxyl group on the polymeric inhibitor has a stronger ionization and higher electrostatic repulsion at the higher pH value. Effect of pH value on the inhibition efficiency on a calcium carbonate (inhibitor dosage of 10 ppm), b barium sulfate (inhibitor dosage of 30 ppm) Effect of inhibition time on the evaluation of the performance of scale inhibitor is shown in Fig. 8. As shown in Fig. 8a, the inhibition efficiency on calcium carbonate at an evaluation time of 72 h still has a high inhibition rate higher than 95 %, which indicates that the polymer plays a continuous calcium carbonate inhibition performance. For barium sulfate deposition inhibition experiments (Fig. 8b), the scale inhibition rate is apparently decreased after 36 h at 70 °C. The reason may be that the combination of polymeric scale inhibitor with barium ions or fine crystal of barium sulfate is weak. The combination process was an adsorption–desorption equilibrium, larger extent of desorption could occur for longer time evaluation at a constant temperature which may lead to a reduced scale inhibition efficiency. So, it is necessary to assure a sufficient dosage for controlling the barium sulfate in long time. 
The method of continuous injecting of inhibitor is probably another route for improving the inhibition efficiency with the constant inhibitor dosage. Effect of evaluation time on the inhibition rate on a calcium carbonate (inhibitor dosage of 10 ppm), b barium sulfate (inhibitor dosage of 40 ppm) XRD and SEM analyses XRD patterns and SEM images of calcium carbonate, barium sulfate, strontium sulfate obtained from the absence or low dosage of inhibitor shown in Figs. 9 and 10, respectively. The crystal of calcium carbonate in the absence of carbon dioxide showed a cube shape (Fig. 9A-a, Fig. 9A-c), but there were many non-cube shaped particles under the presence of carbon dioxide, (Fig. 9A-b, Fig. 9A-d). Therefore, lattice distortion caused by scale inhibitor may play a weak role for calcium carbonate scale inhibition in the oilfield environment containing carbon dioxide. However, the scale inhibition mechanism may include the lattice distortion in the less/none-carbon dioxide system as many researches stated before. SEM graphs (Fig. 9B) and XRD patterns (Fig. 10a) of the calcium carbonate deposition (carbon dioxide condition) without and with synthesized polymer reveal that the added inhibitor lead to more amounts of the fine-size particles (Fig. 9B-b, d). The XRD peak intensity reduced significantly with the presence of synthesized polymer (Fig. 10a) shows that the preferential growth of crystals of calcium carbonate was strongly inhibited. SEM images of A were CaCO3 without (a, c) and with CO2 (b, d), B were CaCO3 (saturated with CO2) without (a, c) and with inhibitor (b, d), C were BaSO4 without (a, c) and with inhibitor (b, d), D were SrSO4 without (a, c) and with inhibitor (b, d) XRD patterns of a calcium carbonate, b barium sulfate, c strontium sulfate scale For barium sulfate, the rule dense-flake crystals changed to be loose, irregular polyhedron (Fig. 9c) which indicates that the crystals were suppressed. For strontium sulfate (Fig. 9d), similar phenomenon was observed, the rule polyhedral cone changed to loose taper lump. The loose material could be easily disturbed by fluid turbulence and suspended in the solution (Shen et al. 2012). It is also worth mentioning that the diameter of crystal of barium sulfate and strontium sulfate is larger with the presence of inhibitor, so the visual inhibition efficiency may be not intuitively dropped. This may be attributed that the polymer has some influence to surface phase of the barium sulfate and strontium sulfate crystal (Mavredaki et al. 2011). Therefore, maintaining a sufficient amount of scale inhibitor for prevention of the formation of barium sulfate and strontium sulfate is necessarily required. From XRD patterns (Fig. 10b, c), peak intensity and the peak position of the barium sulfate and strontium sulfate with the absence or presence of inhibitor are not changed distinctively, that is, the addition of inhibitor did not alter the crystal structure (Shakkthivel and Vasudevan 2007). It could be found that '2θ' values are not exactly same as Fig. 10 shown, indicating some degree of difference in crystal morphology between inhibited and uninhibited crystals (Senthilmurugan et al. 2010) which were also observed by SEM. Inhibition performance of the polymer in actual oilfield water The mass weight of scale deposition (calculated by calcium carbonate hardness) generated at Solution 1 and Solution 2 with the absence of inhibitor in test 1, test 2 and test 3 is shown in Table 3. 
The evaluation results of the inhibitor in actual produced water as shown in Fig. 11. The inhibitor possessed a good performance in Solution 1 after 60 °C/24 h and then 70 °C/24 h heating at the dosage of 30 ppm. The inhibition efficiency in test 2 was lower than test 1 which may be caused by the existed crystal formed at 60 °C continued to grow when the thermodynamic condition changed (70 °C). So increasing temperature makes the scale forming more serious in actual oilfield water, and a stable thermodynamic environment is beneficial to scale prevention. In addition in test 3, nearly no scales were generated in Solution 1 and Solution 2 without or with scale inhibitor, owing to the Solutions having no/low scale tendency in the saturated carbon dioxide condition while the concentration of scale-forming ions was also low, revealing that scale control is easier in the carbon dioxide environment if corrosion is neglected. Table 3 Mass of scale deposition generated in oilfield produced water absence of inhibitor after heating Inhibition efficiency of the polymer at the actual oilfield water a Solution 1 and b Solution 2 A potential copolymer scale inhibitor for oilfield was prepared, and its inhibition performance on calcium carbonate, calcium sulfate, barium sulfate, and strontium sulfate was evaluated, and the inhibition efficiency could achieve higher than 95 %. The inhibition rate on calcium fluoride could achieve higher than 94 %, however, the dosage is required extremely high, so further attempts are needed for the inhibitor on calcium fluoride. It is found that extra amount of antiscalant is necessary for achieving well-inhibition performance on calcium carbonate deposition at higher temperature and higher pH, while for the barium sulfate scale inhibition it is not necessary. Sodium chloride concentration affects the inhibition performance of the polymer on calcium carbonate and barium sulfate, but the polymer can play a good ability in the high NaCl content environment. The synthesized polymer reveals an excellent inhibition performance on calcium carbonate after heating for 72 h while the inhibition efficiency of the polymer on barium sulfate shows a sharp degradation along with the evaluation time. The diameter of barium sulfate and strontium sulfate formed at low dosage of inhibitor is larger than the ones under blank conditions, thus adequate dosage for the prevention for barium sulfate and strontium sulfate is extremely important. It is also found that carbon dioxide has lattice distortion effect on the crystal of calcium carbonate. Dickinson W, Sanders L, Kemira (2012) Novel barium sulfate scale inhibitor for use in high iron environments. SPE Latin American and Caribbean Petroleum Engineering Conference, Mexico City, Mexico, April 16–18 2012 Dickson W, Griffin R, Sanders L, Lowen C, Kemira (2011) Development and performance of biodegradable antiscalants for oilfield applications. In: Offshore Technology Conference, Houston, USA, 2–5 May 2011 Dyer SJ, Graham GM (2002) The effect of temperature and pressure on oilfield scale formation. J Petrol Sci Eng 35:95–107 El-Said M, Ramzi M, Abdel-Moghny T (2009) Analysis of oilfield waters by ion chromatography to determine the composition of scale deposition. Desalination 249:748–756 Guo XR, Qiu FX, Dong K, Zhou X, Qi J, Zhou Y, Yang DY (2012) Preparation, characterization and scale performance of scale inhibitor copolymer modification with chitosan. 
J Ind Eng Chem 18:2177–2183 Jensen MK, Kelland MA (2012) A new class of hyperbranched polymeric scale inhibitors. J Petrol Sci Eng 94–95:66–72 Liu X, Chen T, Chen P, Montgomerie H, Hagen T, Wang B, Yang X (2012) Understanding the co-deposition of calcium sulphate and barium sulphate and developing environmental acceptable scale inhibitors applied in HTHP wells. SPE international conference and exhibition on oilfield Scale, Aberdeen, UK, 30–31 May 2012 Mavredaki E, Neville A, Sorbie KS (2011) Initial Stages of Barium Sulfate Formation at Surfaces in the Presence of Inhibitors. Cryst Growth Des 11:4751–4758 Senthilmurugan B, Ghosh B, Kundu SS, Haroun M, Kameshwari B (2010) Maleic acid based scale inhibitors for calcium sulfate scale inhibition in high temperature application. J Petrol Sci Eng 75:189–195 Senthilmurugan B, Ghosh B, Sanker S (2011) High performance maleic acid based on oil well scale inhibitors-development and comparative evaluation. J Ind Eng Chem 17:415–420 Shakkthivel P, Vasudevan T (2007) Newly developed itaconic acid copolymers for gypsum and calcium carbonate scale control. J Appl Polym Sci 103:3026–3213 Shen ZH, Li JS, Xu K, Ding LL, Ren HQ (2012) The effect of synthesized hydrolyzed polymaleic anhydride (HPMA) on the crystal of calcium carbonate. Desalination 284:238–244 Shen XZ, Zhao RM, Gu HF (2013) Analysis of calcium fluoride scaling and treatment measures. Metall Power 4:67–70 (in Chinese) Su KM, Pan TY, Zhang YL (2002) Methods of spectral analysis. East China University of Science and Technology Press, Shanghai, pp 102–103 Zhang YX, Wu JH, Hao SC, Liu MH (2007) Synthesis and inhibition efficiency of a novel quadripolymer inhibitor. Chin J Chem Eng 15:600–605 This work was financially supported by the Funds for Creative Research Groups of China (Grant No. 51121062) and Excellent Young Teachers in Lanzhou University of Technology Training Project (Grant No. 1005ZCX016). School of Petrochemical Engineering, Lanzhou University of Technology, Lanzhou, 730050, Gansu, China Heming Luo, Dejun Chen, Xiaoping Yang, Xia Zhao & Huixia Feng Zhejiang Satellite Petrochemical Company, Jiaxing, 314201, Zhejiang, China Mingyang Li Sinohydro Group No. 4 Engineering Bureau, Xining, 810007, Qinghai, China Junqiang Wang Heming Luo Dejun Chen Xiaoping Yang Xia Zhao Huixia Feng Correspondence to Heming Luo. Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited. Luo, H., Chen, D., Yang, X. et al. Synthesis and performance of a polymeric scale inhibitor for oilfield application. J Petrol Explor Prod Technol 5, 177–187 (2015). https://doi.org/10.1007/s13202-014-0123-0 Copolymer Scale inhibitor Oilfield water
Relating nanoscale structure to optoelectronic functionality in multiphase donor–acceptor nanoparticles for printed electronics applications Mohammed F. Al-Mudhaffer, Natalie P. Holmes, Pankaj Kumar, Matthew G. Barr, Sophie Cottam, Rafael Crovador, Timothy W. Jones, Rebecca Lim, Xiaojing Zhou, John Holdsworth, Warwick J. Belcher, Paul C. Dastoor, Matthew J. Griffith Journal: MRS Communications / Volume 10 / Issue 4 / December 2020 This work investigated the photophysical pathways for light absorption, charge generation, and charge separation in donor–acceptor nanoparticle blends of poly(3-hexylthiophene) and indene-C60-bisadduct. Optical modeling combined with steady-state and time-resolved optoelectronic characterization revealed that the nanoparticle blends experience a photocurrent limited to 60% of a bulk solution mixture. This discrepancy resulted from imperfect free charge generation inside the nanoparticles. High-resolution transmission electron microscopy and chemically resolved X-ray mapping showed that enhanced miscibility of materials did improve the donor–acceptor blending at the center of the nanoparticles; however, a residual shell of almost pure donor still restricted energy generation from these nanoparticles.
Group differences in whole-brain white matter structural connectivity probability, static mean functional connectivity strength, dynamic functional connectivity variability and stability among 264 brain sub-regions of interests were investigated. We found that individuals with high schizotypy exhibited increased structural connectivity probability within the task control network and within the default mode network; increased variability and decreased stability of functional connectivity within the default mode network and between the auditory network and the subcortical network; and decreased static mean functional connectivity strength mainly associated with the sensorimotor network, the default mode network and the task control network. These findings highlight the specific changes in brain connectivity associated with schizotypy and indicate that both decompensatory and compensatory changes in structural connectivity within the default mode network and the task control network in the context of whole-brain functional disconnection may be an important neurobiological correlate in individuals with high schizotypy. Multi-scale dynamics of magnetic flux tubes and inverse magnetic energy transfer Fundamental Problems of Plasma Astrophysics Focus on Plasma Astrophysics Muni Zhou, Nuno F. Loureiro, Dmitri A. Uzdensky Journal: Journal of Plasma Physics / Volume 86 / Issue 4 / August 2020 Published online by Cambridge University Press: 08 July 2020, 535860401 We report on an analytical and numerical study of the dynamics of a three-dimensional array of identical magnetic flux tubes in the reduced-magnetohydrodynamic description of the plasma. We propose that the long-time evolution of this system is dictated by flux-tube mergers, and that such mergers are dynamically constrained by the conservation of the pertinent (ideal) invariants, viz. the magnetic potential and axial fluxes of each tube. We also propose that in the direction perpendicular to the merging plane, flux tubes evolve in a critically balanced fashion. These notions allow us to construct an analytical model for how quantities such as the magnetic energy and the energy-containing scale evolve as functions of time. Of particular importance is the conclusion that, like its two-dimensional counterpart, this system exhibits an inverse transfer of magnetic energy that terminates only at the system scale. We perform direct numerical simulations that confirm these predictions and reveal other interesting aspects of the evolution of the system. We find, for example, that the early time evolution is characterized by a sharp decay of the initial magnetic energy, which we attribute to the ubiquitous formation of current sheets. We also show that a quantitatively similar inverse transfer of magnetic energy is observed when the initial condition is a random, small-scale magnetic seed field. Selective amplification of the chirped attosecond pulses produced from relativistic electron mirrors F. Tan, S. Y. Wang, B. Zhang, Z. M. Zhang, B. Zhu, Y. C. Wu, M. H. Yu, Y. Yang, G. Li, T. K. Zhang, Y. H. Yan, F. Lu, W. Fan, W. M. Zhou, Y. Q. Gu Journal: Laser and Particle Beams / Volume 38 / Issue 2 / June 2020 In this paper, the generation of relativistic electron mirrors (REM) and the reflection of an ultra-short laser off the mirrors are discussed, applying two-dimension particle-in-cell simulations. 
REMs with ultra-high acceleration and expanding velocity can be produced from a solid nanofoil illuminated normally by an ultra-intense femtosecond laser pulse with a sharp rising edge. Chirped attosecond pulse can be produced through the reflection of a counter-propagating probe laser off the accelerating REM. In the electron moving frame, the plasma frequency of the REM keeps decreasing due to its rapid expansion. The laser frequency, on the contrary, keeps increasing due to the acceleration of REM and the relativistic Doppler shift from the lab frame to the electron moving frame. Within an ultra-short time interval, the two frequencies will be equal in the electron moving frame, which leads to the resonance between laser and REM. The reflected radiation near this interval and corresponding spectra will be amplified due to the resonance. Through adjusting the arriving time of the probe laser, a certain part of the reflected field could be selectively amplified or depressed, leading to the selective adjustment of the corresponding spectra. Repeated ketamine administration redeems the time lag for citalopram's antidepressant-like effects G.-F. Zhang, W.-X. Liu, L.-L. Qiu, J. Guo, X.-M. Wang, H.-L. Sun, J.-J. Yang, Z.-Q. Zhou Journal: European Psychiatry / Volume 30 / Issue 4 / June 2015 Current available antidepressants exhibit low remission rate with a long response lag time. Growing evidence has demonstrated acute sub-anesthetic dose of ketamine exerts rapid, robust, and lasting antidepressant effects. However, a long term use of ketamine tends to elicit its adverse reactions. The present study aimed to investigate the antidepressant-like effects of intermittent and consecutive administrations of ketamine on chronic unpredictable mild stress (CUMS) rats, and to determine whether ketamine can redeem the time lag for treatment response of classic antidepressants. The behavioral responses were assessed by the sucrose preference test, forced swimming test, and open field test. In the first stage of experiments, all the four treatment regimens of ketamine (10 mg/kg ip, once daily for 3 or 7 consecutive days, or once every 7 or 3 days, in a total 21 days) showed robust antidepressant-like effects, with no significant influence on locomotor activity and stereotype behavior in the CUMS rats. The intermittent administration regimens produced longer antidepressant-like effects than the consecutive administration regimens and the administration every 7 days presented similar antidepressant-like effects with less administration times compared with the administration every 3 days. In the second stage of experiments, the combination of ketamine (10 mg/kg ip, once every 7 days) and citalopram (20 mg/kg po, once daily) for 21 days caused more rapid and sustained antidepressant-like effects than citalopram administered alone. In summary, repeated sub-anesthestic doses of ketamine can redeem the time lag for the antidepressant-like effects of citalopram, suggesting the combination of ketamine and classic antidepressants is a promising regimen for depression with quick onset time and stable and lasting effects. Development and external-validation of a nomogram for predicting the survival of hospitalised HIV/AIDS patients based on a large study cohort in western China Z. Yuan, B. Zhou, S. Meng, J. Jiang, S. Huang, X. Lu, N. Wu, Z. Xie, J. Deng, X. Chen, J. Liu, J. Zhang, F. Wu, H. Liang, L. 
Ye Published online by Cambridge University Press: 01 April 2020, e84 The aim of this study was to develop and externally validate a simple-to-use nomogram for predicting the survival of hospitalised human immunodeficiency virus/acquired immunodeficiency syndrome (HIV/AIDS) patients (hospitalised person living with HIV/AIDS (PLWHAs)). Hospitalised PLWHAs (n = 3724) between January 2012 and December 2014 were enrolled in the training cohort. HIV-infected inpatients (n = 1987) admitted in 2015 were included as the external-validation cohort. The least absolute shrinkage and selection operator method was used to perform data dimension reduction and select the optimal predictors. The nomogram incorporated 11 independent predictors, including occupation, antiretroviral therapy, pneumonia, tuberculosis, Talaromyces marneffei, hypertension, septicemia, anaemia, respiratory failure, hypoproteinemia and electrolyte disturbances. The Likelihood χ2 statistic of the model was 516.30 (P = 0.000). Integrated Brier Score was 0.076 and Brier scores of the nomogram at the 10-day and 20-day time points were 0.046 and 0.071, respectively. The area under the curves for receiver operating characteristic were 0.819 and 0.828, and precision-recall curves were 0.242 and 0.378 at two time points. Calibration plots and decision curve analysis in the two sets showed good performance and a high net benefit of nomogram. In conclusion, the nomogram developed in the current study has relatively high calibration and is clinically useful. It provides a convenient and useful tool for timely clinical decision-making and the risk management of hospitalised PLWHAs. Investigation on Actuation Performance of Continuous Fiber Reinforced Piezoelectric Composite Actuator X. Ma, B. Zhou, S. F. Xue Journal: Journal of Mechanics / Volume 36 / Issue 3 / June 2020 In this paper, a novel continuous fiber reinforced piezoelectric composite (CFRPC) actuator is proposed to improve the stability and reliability of piezoelectric actuators. A piezoelectric driving structure composed of a cantilever beam and the CFRPC actuator is utilized to research the actuation performance of the CFRPC actuator. The expression of the equivalent moment for the CFRPC actuator is obtained using the equivalent load method and electro-mechanical coupling theory. Based on Euler-Bernoulli beam theory, the analytical expression of the deflection for the cantilever beam is derived. The accuracy of the obtained analytical expressions is demonstrated by finite element simulation as well as published experimental results. The actuation performance of the CFRPC actuator is investigated through the analytical expressions of the equivalent moment and deflection. The results show that the key parameters such as driving voltage, fiber volume fraction, cantilever beam height, actuator height, actuator length and actuator position have great influence on the actuation performance of the CFRPC actuator. The CFRPC actuator has good mechanical and electrical properties, and has a wide application prospect in the field of structural shape control. Galerkin Weighted Residual Method for Axially Functionally Graded Shape Memory Alloy Beams Z. T. Kang, Z. Y. Wang, B. Zhou, S. F. Xue This paper focus on the mechanical and martensitic transformation behaviors of axially functionally graded shape memory alloy (AFG SMA) beams. 
It is taken into consideration that material properties, such as austenitic elastic modulus, martensitic elastic modulus, critical transformation stresses and maximum transformation strain vary continuously along the longitudinal direction. According to the simplified linear SMA constitutive equations and Bernoulli-Euler beam theory, the formulations of stress, strain, martensitic volume fraction and governing equations of the deflection, height and length of transformed layers are derived. Employing the Galerkin's weighted residual method, the governing differential equation of the deflection is solved. As an example, the bending behaviors of an AFG SMA cantilever beam subjected to an end concentrated load are numerically analyzed using the developed model. Results show that the mechanical and martensitic transformation behaviors of the AFG SMA beam are complex after the martensitic transformation of SMA occurs. The influences of FG parameter on the mechanical behaviors and geometrical shape of transformed regions are obvious, and should be considered in the design and analysis of AFG SMA beams in the related regions. Effects of intravenous arginine infusion on inflammation and metabolic indices of dairy cows in early lactation L. Y. Ding, Y. F. Wang, Y. Z. Shen, G. Zhou, T. Y. Wu, X. Zhang, M. Z. Wang, J. J. Loor, J. Zhang Journal: animal / Volume 14 / Issue 2 / February 2020 Enhancing the supply of arginine (Arg), a semi-essential amino acid, has positive effects on immune function in dairy cattle experiencing metabolic stress during early lactation. Our objective was to determine the effects of Arg supplementation on biomarkers of liver damage and inflammation in cows during early lactation. Six Chinese Holstein lactating cows with similar BW (508 ± 14 kg), body condition score (3.0), parity (4.0 ± 0), milk yield (30.6 ± 1.8 kg) and days in milk (20 ± days) were randomly assigned to three treatments in a replicated 3 × 3 Latin square design balanced for carryover effects. Each period was 21 days with 7 days for infusion and 14 days for washout. Treatments were (1) Control: saline; (2) Arg group: saline + 0.216 mol/day l-Arg; and (3) Alanine (Ala) group: saline + 0.868 mol/day l-Ala (iso-nitrogenous to the Arg group). Blood and milk samples from the experimental cows were collected on the last day of each infusion period and analyzed for indices of liver damage and inflammation, and the count and composition of somatic cells in milk. Compared with the Control, the infusion of Arg led to greater concentrations of total protein, immunoglobulin M and high density lipoprotein cholesterol coupled with lower concentrations of haptoglobin and tumor necrosis factor-α, and activity of aspartate aminotransferase in serum. Infusion of Ala had no effect on those biomarkers compared with the Control. Although milk somatic cell count was not affected, the concentration of granulocytes was lower in response to Arg infusion compared with the Control or Ala group. Overall, the biomarker analyses indicated that the supplementation of Arg via the jugular vein during early lactation alleviated inflammation and metabolic stress. Effects of in ovo injection of vitamin C on heat shock protein and metabolic genes expression Y. F. Zhu, M. B. Bodinga, J. H. Zhou, L. Q. Zhu, Y. L. Cao, Z. Z. Ren, X. J. Yang Some studies have shown that the excessive metabolic heat production is the primary cause for dead chicken embryos during late embryonic development. 
Increasing heat shock protein (HSP) expression and adjusting metabolism are important ways to maintain body homeostasis under heat stress. This study was conducted to investigate the effects of in ovo injection (IOI) of vitamin C (VC) at embryonic age 11th day (E11) on HSP and metabolic genes expression. A total of 320 breeder eggs were randomly divided into normal saline and VC injection groups. We detected plasma VC content and rectal temperature at chick's age 1st day, and the mRNA levels of HSP and metabolic genes in embryonic livers at E14, 16 and 18, analysed the promoter methylation levels of differentially expressed genes and predicted transcription factors at the promoter regions. The results showed that IOI of VC significantly increased plasma VC content and decreased rectal temperature (P < 0.05). In ovo injection of VC significantly increased heat shock protein 60 (HSP60) and pyruvate dehydrogenase kinase 4 (PDK4) genes expression at E16 and PDK4 and secreted frizzled related protein 1 (SFRP1) at E18 (P < 0.05). At E16, IOI of VC significantly decreased the methylation levels of total CpG sites and −336 CpG site in HSP60 promoter and −1137 CpG site in PDK4 promoter (P < 0.05). Potential binding sites for nuclear factor-1 were found around −389 and −336 CpG sites in HSP60 promoter and potential binding site for specificity protein 1 was found around −1137 CpG site in PDK4 promoter. Our results suggested that IOI of VC increased HSP60, PDK4 and SFRP1 genes expression at E16 and 18, which may be associated with the demethylation in gene promoters. Whether IOI of VC could improve hatchability needs to be further verified by setting uninjection group. Electronic Structure and Coupling of Re Clusters In Monolayer MoS2 Shize Yang, Priyanka Manchanda, Yongji Gong, Sokrates T. Pantelides, Wu Zhou, Matthew F. Chisholm Transforming Samples into Data – Experimental Design and Sample Preparation for Electron Microscopy Alice F. Liang, Chris Petzold, Kristen Dancel-Manning, Yolande Grobler, Joseph Sall, Chuxuan Zhou, Patrick H. Ren, Ruth Lehmann Revealing the ductility of nanoceramic MgAl2O4 Bin Chen, Yuanjie Huang, Jianing Xu, Xiaoling Zhou, Zhiqiang Chen, Hengzhong Zhang, Jie Zhang, Jianqi Qi, Tiecheng Lu, Jillian F. Banfield, Jinyuan Yan, Selva Vennila Raju, Arianna E. Gleason, Simon Clark, Alastair A. MacDowell Journal: Journal of Materials Research / Volume 34 / Issue 9 / 14 May 2019 Print publication: 14 May 2019 Ceramics are strong but brittle. According to the classical theories, ceramics are brittle mainly because dislocations are suppressed by cracks. Here, the authors report the combined elastic and plastic deformation measurements of nanoceramics, in which dislocation-mediated stiff and ductile behaviors were detected at room temperature. In the synchrotron-based deformation experiments, a marked slope change is observed in the stress–strain relationship of MgAl2O4 nanoceramics at high pressures, indicating that a deformation mechanism shift occurs in the compression and that the nanoceramics sample is elastically stiffer than its bulk counterpart. The bulk-sized MgAl2O4 shows no texturing at pressures up to 37 GPa, which is compatible with the brittle behaviors of ceramics. Surprisingly, substantial texturing is seen in nanoceramic MgAl2O4 at pressures above 4 GPa. The observed stiffening and texturing indicate that dislocation-mediated mechanisms, usually suppressed in bulk-sized ceramics at low temperature, become operative in nanoceramics. 
This makes nanoceramics stiff and ductile. P146: Does a communications skills intervention improve emergency department staff coping skills and burnout? F. Zhou, M. Howlett, J. Talbot, J. Fraser, B. Robinson, P. Atkinson Journal: Canadian Journal of Emergency Medicine / Volume 21 / Issue S1 / May 2019 Published online by Cambridge University Press: 02 May 2019, p. S117 Introduction: Emergency department (ED) staff carry a high risk for the burnout syndrome of increased emotional exhaustion, depersonalization and decreased personal accomplishment. Previous research has shown that task-oriented coping skills were associated with reduced levels of burnout compared to emotion-oriented coping. ED staff at one hospital participated in an intervention to teach task-oriented coping skills. We hypothesized that the intervention would alter staff coping behaviors and ultimately reduce burnout. Methods: ED physicians, nurses and support staff at two regional hospitals were surveyed using the Maslach Burnout Inventory (MBI) and the Coping Inventory for Stressful Situations (CISS). Surveys were performed before and after the implementation of communication and conflict resolution skills training at the intervention facility (I) consisting of a one-day course and a small group refresher 6 to 15 months later. Descriptive statistics and multivariate analysis assessed differences in staff burnout and coping styles compared to the control facility (C) and over time. Results: 85/143 (I) and 42/110 (C) ED staff responded to the initial survey. Post intervention 46 (I) and 23(C) responded. During the two year study period there was no statistically significant difference in CISS or MBI scores between hospitals (CISS: (Pillai's trace = .02, F(3,63) = .47, p = .71, partial η2 = .02); MBI: (Pillai's trace = .01, F(3,63) = .11, p = .95, partial η2 = .01)) or between pre- and post-intervention groups (CISS: (Pillai's trace = .01, F(3,63) = .22, p = .88, partial η2 = .01); MBI: (Pillai's trace = .09, F(3,63) = 2.15, p = .10, partial η2 = .01)). Conclusion: We were not able to measure improvement in staff coping or burnout in ED staff receiving communication skills intervention over a two year period. Burnout is a multifactorial problem and environmental rather than individual factors may be more important to address. Alternatively, to demonstrate a measurable effect on burnout may require more robust or inclusive interventions. Optimizing the clinical utility of four proposed criteria for a persistent and impairing grief disorder by emphasizing core, rather than associated symptoms Stephen J. Cozza, M. Katherine Shear, Charles F. Reynolds, Joscelyn E. Fisher, Jing Zhou, Andreas Maercker, Naomi Simon, Christine Mauro, Natalia Skritskaya, Sidney Zisook, Barry Lebowitz, Colleen Gribbin Bloom, Carol S. Fullerton, Robert J. Ursano Journal: Psychological Medicine / Volume 50 / Issue 3 / February 2020 Distinguishing a disorder of persistent and impairing grief from normative grief allows clinicians to identify this often undetected and disabling condition. As four diagnostic criteria sets for a grief disorder have been proposed, their similarities and differences need to be elucidated. Participants were family members bereaved by US military service death (N = 1732). We conducted analyses to assess the accuracy of each criteria set in identifying threshold cases (participants who endorsed baseline Inventory of Complicated Grief ⩾30 and Work and Social Adjustment Scale ⩾20) and excluding those below this threshold. 
We also calculated agreement among criteria sets by varying numbers of required associated symptoms. All four criteria sets accurately excluded participants below our identified clinical threshold (i.e. correctly excluding 86–96% of those subthreshold), but they varied in identification of threshold cases (i.e. correctly identifying 47–82%). When the number of associated symptoms was held constant, criteria sets performed similarly. Accurate case identification was optimized when one or two associated symptoms were required. When employing optimized symptom numbers, pairwise agreements among criteria became correspondingly 'very good' (κ = 0.86–0.96). The four proposed criteria sets describe a similar condition of persistent and impairing grief, but differ primarily in criteria restrictiveness. Diagnostic guidance for prolonged grief disorder in International Classification of Diseases, 11th Edition (ICD-11) functions well, whereas the criteria put forth in Section III of Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) are unnecessarily restrictive. Guanidinium-Functionalized Photodynamic Antibacterial Oligo(Thiophene)s Zhe Zhou, Cansu Ergene, Edmund F. Palermo Journal: MRS Advances / Volume 4 / Issue 59-60 / 2019 We synthesized precision oligomers of thiophene with cationic and hydrophobic side chains to mimic the charge, hydrophobicity, and molecular size of antibacterial host defense peptides (HDPs). In this study, the source of cationic charge was a guanidinium salt moiety intended to reflect the structure of arginine-rich HDPs. Due to the pi-conjugated oligo(thiophene) backbone structure, these compounds absorb visible light in aqueous solution and react with dissolved oxygen to produce highly biocidal reactive oxygen species (ROS). Thus, the compounds exert bactericidal activity in the dark with dramatically enhanced potency upon visible light illumination. We find that guanylation of primary amine groups enhanced the activity of the oligomers in the dark but also mitigated their light-induced activity enhancement. In addition, we also quantified their toxicity to mammalian cell membranes using a hemolysis assay with red blood cells, in the light and dark conditions. Post-exposure prophylaxis vaccination rate and risk factors of human rabies in mainland China: a meta-analysis D. L. Wang, X. F. Zhang, H. Jin, X. Q. Cheng, C. X. Duan, X. C. Wang, C. J. Bao, M. H. Zhou, T. Ahmad Published online by Cambridge University Press: 04 December 2018, e64 Rabies is one of the major public health problems in China, and the mortality rate of rabies remains the highest among all notifiable infectious diseases. A meta-analysis was conducted to investigate the post-exposure prophylaxis (PEP) vaccination rate and risk factors for human rabies in mainland China. The PubMed, Web of Science, Chinese National Knowledge Infrastructure, Chinese Science and Technology Periodical and Wanfang databases were searched for articles on rabies vaccination status (published between 2007 and 2017). In total, 10 174 human rabies cases from 136 studies were included in this meta-analysis. Approximately 97.2% (95% confidence interval (CI) 95.1–98.7%) of rabies cases occurred in rural areas and 72.6% (95% CI 70.0–75.1%) occurred in farmers. Overall, the vaccination rate in the reported human rabies cases was 15.4% (95% CI 13.7–17.4%). However, among vaccinated individuals, 85.5% (95% CI 79.8%–83.4%) did not complete the vaccination regimen. 
In a subgroup analysis, the PEP vaccination rate in the eastern region (18.8%, 95% CI 15.9–22.1%) was higher than that in the western region (13.3%, 95% CI 11.1–15.8%) and this rate decreased after 2007. Approximately 68.9% (95% CI 63.6–73.8%) of rabies cases experienced category-III exposures, but their PEP vaccination rate was 27.0% (95% CI 14.4–44.9%) and only 6.1% (95% CI 4.4–8.4%) received rabies immunoglobulin. Together, these results suggested that the PEP vaccination rate among human rabies cases was low in mainland China. Therefore, standardised treatment and vaccination programs of dog bites need to be further strengthened, particularly in rural areas. Heavy-tailed distributions in branching process models of secondary cancerous tumors Markov processes Philip A. Ernst, Marek Kimmel, Monika Kurpas, Quan Zhou Journal: Advances in Applied Probability / Volume 50 / Issue A / December 2018 Published online by Cambridge University Press: 01 February 2019, pp. 99-114 Recent progress in microdissection and in DNA sequencing has facilitated the subsampling of multi-focal cancers in organs such as the liver in several hundred spots, helping to determine the pattern of mutations in each of these spots. This has led to the construction of genealogies of the primary, secondary, tertiary, and so forth, foci of the tumor. These studies have led to diverse conclusions concerning the Darwinian (selective) or neutral evolution in cancer. Mathematical models of the development of multi-focal tumors have been devised to support these claims. We offer a model for the development of a multi-focal tumor: it is a mathematically rigorous refinement of a model of Ling et al. (2015). Guided by numerical studies and simulations, we show that the rigorous model, in the form of an infinite-type branching process, displays distributions of tumor size which have heavy tails and moments that become infinite in finite time. To demonstrate these points, we obtain bounds on the tails of the distributions of the process and an infinite series expansion for the first moments. In addition to its inherent mathematical interest, the model is corroborated by recent literature on apparent super-exponential growth in cancer metastases. Anti-impact tension control strategy for the space-tethered combination after target capture B. Wang, J. F. Guo, L. Yi, W. H. Zhou Journal: The Aeronautical Journal / Volume 122 / Issue 1257 / November 2018 An electromechanical coupling model is established for the space-tethered combination (STC) under microgravity environment after target capture by the tethered robot system (TRS). A linearized dynamic model of the STC is put forward with its controllability and observability as a control system analyzed. A double closed-loop tension control strategy is proposed to mitigate the impact and suing longitudinal vibration caused by the velocity difference between the platform and target. Experiment setup is built on a ground-based flotation platform to investigate the impact of the STC. Results of simulation and experimental validation show that the proposed tension control strategy is responsive and rapid in tension tracking and effectively prevent impact. Active drag reduction of a high-drag Ahmed body based on steady blowing B. F. Zhang, K. Liu, Y. Zhou, S. To, J. Y. 
Tu Journal: Journal of Fluid Mechanics / Volume 856 / 10 December 2018 Print publication: 10 December 2018 Active drag reduction of an Ahmed body with a slant angle of $25^{\circ }$ , corresponding to the high-drag regime, has been experimentally investigated at Reynolds number $Re=1.7\times 10^{5}$ , based on the square root of the model cross-sectional area. Four individual actuations, produced by steady blowing, are applied separately around the edges of the rear window and vertical base, producing a drag reduction of up to 6–14 %. However, the combination of the individual actuations results in a drag reduction 29 %, higher than any previous drag reductions achieved experimentally and very close to the target (30 %) set by automotive industries. Extensive flow measurements are performed, with and without control, using force balance, pressure scanner, hot-wire, flow visualization and particle image velocimetry techniques. A marked change in the flow structure is captured in the wake of the body under control, including the flow separation bubbles, over the rear window or behind the vertical base, and the pair of C-pillar vortices at the two side edges of the rear window. The change is linked to the pressure rise on the slanted surface and the base. The mechanisms behind the effective control are proposed. The control efficiency is also estimated.
Welcome to the last installment in our mini-series on adjunctions in category theory. We motivated the discussion in Part 1 and walked through formal definitions in Part 2. Today I'll share some examples. In Mac Lane's well-known words, "adjoint functors arise everywhere," so this post contains only a tiny subset of examples. Even so, I hope they'll help give you an eye for adjunctions and enhance your vision to spot them elsewhere. An adjunction, you'll recall, consists of a pair of functors $F\dashv G$ between categories $\mathsf{C}$ and $\mathsf{D}$ together with a bijection of sets, as below, for all objects $X$ in $\mathsf{C}$ and $Y$ in $\mathsf{D}$. In Part 2, we illustrated this bijection using a free-forgetful adjunction in linear algebra as our guide. So let's put "free-forgetful adjuctions" first on today's list of examples. Free-Forgetful Adjunctions Whenever a functor $U\colon \mathsf{D}\to\mathsf{C}$ ignores some data or structure in $\mathsf{D}$ and has a left adjoint $F\colon \mathsf{C}\to\mathsf{D}$, the left adjoint will have a "free" flavor. Since the right adjoint is "forgetful" (this does not have an official definition), such an adjunction $F\dashv U$ is called a free-forgetful adjunction. Last week we saw this with sets and real vector spaces. Another illustration lies in the connection between directed graphs and categories. Both involve vertices/objects and edges/morphisms. So how exactly are they related? Every directed graph gives rise to a category, and every category is a directed graph (with extra data). More formally, there is an adjunction involving the category $\mathsf{DirGraph}$ of directed graphs and the category $\mathsf{Cat}$ of categories. In the picture above, the functor $F$ turns a graph $G$ into a category $FG$ by viewing vertices as objects and edges as morphisms. It also inserts identity arrows at each vertex, and declares the set of morphisms between two vertices to be the set of all finite paths between them. Composition is then concatenation of paths. On the other hand, the functor $U$ assigns to a category $\mathsf{C}$ its underlying graph $U\mathsf{C}$. It just forgets the identity and composition axioms, which aren't needed to specify a graph. The bijection enjoyed by this adjunction is which says something along the lines of, "If you'd like to view a graph $G$ as a diagram in some category $\mathsf{C}$, then you're in luck, because there's exactly one way to turn that graph into a category $FG$ so that the diagram $G$ in $\mathsf{C}$ is a functor $FG\to \mathsf{C}$." This calls to mind an idea we've seen before: diagrams are functors. Product-Hom Adjunction The next example gives a nice categorical relationship between multiplication and exponentiation. Early in life, one learns that $x^{y\times z}=(x^y)^z$ holds whenever $x,y,z$ are numbers. Later in life, one learns that this holds for sets, too: This is called the product-hom adjunction. To unravel it, let's use the notation $X^Y$ to mean the set of functions from $Y$ to $X$, that is $X^Y:=\text{hom}(Y,X)$. This is nice, since if $X$ has 2 elements and $Y$ has 3 elements then there are exactly 8 functions from $Y$ to $X$, i.e. $|\text{hom}(Y,X)|=2^3=|X|^{|Y|}$. Now, how is the above bijection an adjunction? For any set $Y$ there is a functor $Y\times -\colon\mathsf{Set}\to\mathsf{Set}$ that assigns to a set $Z$ the Cartesian product $Y\times Z$. There is another functor $\hom(Y,-)\colon \mathsf{Set}\to\mathsf{Set}$ that assigns to a set $Z$ the set of all functions $\text{hom}(Y,Z)$. 
Then $Y\times -$ is left adjoint to $\text{hom}(Y,-)$. In other words, the bijection below holds for all sets $X$ and $Z$. Indeed, every function $f\colon Y\times Z\to X$ gives rise to a function $\hat{f_z}\colon Y\to X$ by fixing a variable $z\in Z$, namely $\hat{f_z}(y):=f(y,z)\in X$. Likewise, any function $g\colon Z\to X^Y$ gives rise to a function $\hat{g}\colon Y\times Z\to X$ by $\hat{g}(y,z):=gz(y)$. In computer science, you'll recognize this as currying. Other areas of math have their own version of the product-hom adjunction. For instance, if $X,Y,Z$ are topological spaces with chosen basepoints, then there is a "based" version of the Cartesian product of spaces called the smash product, denoted by a wedge $\wedge$. For example, "multiplying" two circles with the Cartesian product results in a torus, $S^1\times S^1$. But if you further smash the two circles together, then you'll get a sphere. So the smash product of circles $S^1\wedge S^1$ is a sphere. Here's a nice gif from Wikipedia: So an analogue of the product $\times$ is the smash product $\wedge$, and the analogous adjunction $(Y\wedge-)\dashv \text{hom}(Y,-)$ is called the smash-hom adjunction. In the special case when $Y=S^1$ is the circle, the two functors $S^1\times -$ and $\text{hom}(S^1,-)$ are called the suspension and loop functors and the resulting adjunction is the suspension-loop adjunction. It appears in a nice one-line proof that the fundamental group of the circle is $\mathbb{Z}$. Galois Connections The next adjunction we'll consider is called a Galois connection. This is my favorite example because it subsumes so many phenomena in mathematics. A Galois connection is, simply put, an adjunction between functors on posets. I'll explain. First, know that every poset (partially ordered set) is a category. A poset is a set $P$ in which a partial order $\leq$ has been defined. As a category, the objects are the elements in $P$ and there is exactly one morphism $p\to p'$ whenever $p\leq p'$. In particular, there is at most one arrow between any two elements, that is, $\text{hom}(p,p')$ is always a set with either 0 or 1 elements. Using the definition of a partial order, you can verify that the axioms of a category are indeed satisfied. A function $f\colon P\to Q$ between posets that preserves the order—meaning it satisfies $fp\leq fp'$ whenever $p\leq p'$—is called a monotone function. Crucially, a monotone function is precisely a functor when we view the posets as categories. (Below we'll be interested in a function $f$ that's order-reversing, so that $fp\geq fp'$ whenever $p\leq p'$. It's still a functor—it's just a contravariant one.) In this general setting, an adjunction consists of opposing monotone functions $f\colon P\to Q$ and $g\colon Q\to P$ that satisfy Lots of things you might care about are posets, so there are numerous Galois connections throughout mathematics. Here's one example I especially enjoy: Formal Concept Analysis Given a set $X$ consider the power set $2^X$, i.e. the set of all subsets of $X$. It's a poset by inclusion: $A\leq B$ if and only if $A\subseteq B\subseteq X$. So in particular, it's a category. Now here's a nice fact I like: Any relation $R$ on $X\times Y$ defines a Galois connection. A relation is another name for a subset $R\subseteq X\times Y$. If, for example, $X$ is a set of animals and $Y$ is a set of features, then $R$ could be the set of all pairs $(x,y)$ such that animal $x$ possesses feature $y$. 
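Before those two functions are written down, here is a small computational sketch of the maps this kind of relation induces; they are precisely the $f$ and $g$ defined in the next paragraph. The animal and feature data below are invented purely for illustration.

```python
# A tiny, invented animals-vs-features relation R, and the two derivation
# operators it induces (the maps f and g described in the next paragraph).

X = {"cat", "eagle", "salmon"}                      # animals
Y = {"has_fur", "can_fly", "lays_eggs"}             # features
R = {("cat", "has_fur"), ("eagle", "can_fly"),
     ("eagle", "lays_eggs"), ("salmon", "lays_eggs")}

def f(A):
    """Features shared by every animal in A."""
    return {y for y in Y if all((x, y) in R for x in A)}

def g(B):
    """Animals possessing every feature in B."""
    return {x for x in X if all((x, y) in R for y in B)}

A = {"eagle", "salmon"}
B = f(A)                  # {'lays_eggs'}
print(B, g(B))            # g(B) contains A, and (g(B), f(g(B))) is a formal concept
# The Galois-connection property:  A ⊆ g(B)  if and only if  B ⊆ f(A).
```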
Naturally, we might be interested in subsets of animals possessing certain features, and vice versa. This motivates the following two functions, $f$ and $g$: These functions are order reversing (as you can check) and they satisfy the following: Right away, you'll notice this isn't quite the adjunction condition specified above: $f$ and $g$ both appear on the right-hand side of the subset containments! No worries. This is another flavor of adjunction: $f$ and $g$ are called mutually right adjoints, and this is an example of what's sometimes called an antitone Galois connection. As an aside, pairs of subsets $(A,B)\in 2^X\times 2^Y$ for which equality above holds—i.e. $fA=B$ and $A=gB$—have a special name: they're called formal concepts. They are the focal point of interest in formal concept analysis, a nice part of order theory dealing with hierarchy of concepts in data. For more on formal concepts and category theory, you might be interested in this blog series by Simon Willerton on the $n$-Category Café. Galois connections arising from relations are just one example. There are many more, including: the connection between fields and groups in Galois theory (from which this adjunction derives is name) the connection between the floor function $\lfloor -\rfloor$ and the ceiling function $\lceil -\rceil$ the connection between covering spaces and fundamental groups in topology the connection between polynomials and their roots in algebraic geometry the connection between syntax and semantics à la William Lawvere The list goes on, and the Wikipedia entry showcases most of these examples and more. Our last example of adjunctions comes from applied category theory, namely data migration. Today's post is already quite long, so I'll try to keep this brief. A database—tables of information—can be represented by a directed graph, $G$. The columns are vertices and an edge is a relationship between columns. In the airline example below, the column of "Economy Seats" corresponds to one vertex, which is connected to "Price" since every seat has a cost associated to it. The graph keeps track of the database's "syntax." To reinstate the actual data, we need to attach meaning. We need, for example, a principled way of "imagining the leftmost vertex represents the set of economy seats." To do this, we take the free category $\mathsf{G}:=FG$ on the graph (as in the free-forgetful adjunction above!) and define a functor from that category to the category of sets. In short, a database is encoded by functor $ \mathsf{G}\to \mathsf{Set}$. Now suppose we have another database, whose graph $H$ gives rise to a category $\mathsf{H}:=FH$, and suppose the two databases are related so that there is a functor $J\colon \mathsf{H}\to\mathsf{G}$. (Perhaps one database is a more detailed version of another.) Asking for a migration of data from $G$ to $H$ amounts to asking for a functor $\mathsf{H}\to\mathsf{Set}$ given a functor $\mathsf{G}\to\mathsf{Set}$. Is it possible? Sure! Just precompose with $J$. This defines a nice way to get from the category $\mathsf{Set}^\mathsf{G}$ of databases (functors) on $G$ to the category $\mathsf{Set}^\mathsf{H}$ of databases (functors) on $H$: But can we migrate data in the other direction? Given a functor $\mathsf{H}\to \mathsf{Set}$, can we use $J$ to extend it to a functor out of $\mathsf{G}$? For this, we turn to Kan extensions—the name given to solutions to this kind of extension problem. I use plural here because there are both left and right Kan extensions. 
(This is a consequence of the fact that morphisms have direction: left or right.) There's a lot of rich theory behind this (you'll need to know about limits and colimits—I've written an introduction here!), but the upshot is that the two Kan extensions provide two ways to migrate data from $H$ to $G$. Moreover, they define two adjunctions: I've gone through all this rather quickly, but I think it gives a nice glimpse of applied category theory. A thorough explanation can be found in David Spivak's Category Theory for the Sciences and in the newer An Invitation to Applied Category Theory with Brendan Fong, which is also free on the arXiv as Seven Sketches in Compositionality. (See Section 3.4, "Adjunctions and Data Migration," from which I borrowed the example above.) John Baez also explains the idea nicely in his free online course on applied category theory. Do check them out!
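A quick computational footnote to the product-hom adjunction from earlier in the post: the bijection between functions $Y\times Z\to X$ and functions $Z\to X^Y$ is exactly what programmers call currying and uncurrying. Here is a minimal sketch; the `pay` function and its arguments are made up purely for illustration.

```python
# Currying and uncurrying realize the product-hom bijection
#   hom(Y x Z, X)  ~  hom(Z, hom(Y, X))
# in ordinary code; the two translations are mutually inverse.

def curry(f):
    """hom(YxZ, X) -> hom(Z, hom(Y, X)); curry(f)(z)(y) == f(y, z)."""
    return lambda z: (lambda y: f(y, z))

def uncurry(g):
    """hom(Z, hom(Y, X)) -> hom(YxZ, X); uncurry(g)(y, z) == g(z)(y)."""
    return lambda y, z: g(z)(y)

def pay(seat, nights):        # an arbitrary function Y x Z -> X
    return seat * nights

per_night = curry(pay)
assert per_night(3)(2) == pay(2, 3)
assert uncurry(per_night)(2, 3) == pay(2, 3)
```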
Optimization of immobilized Lactobacillus pentosus cell fermentation for lactic acid production Jianfei Wang1, Jiaqi Huang1,2, Hannah Laffend1, Shaoming Jiang1, Jing Zhang1, Yuchen Ning1, Mudannan Fang1 & Shijie Liu1 Parametric optimization is an effective way in fermentation process to improve product yield and productivity in order to save time, space and financial resources. In this study, Box–Behnken design was applied to optimize the conditions for lactic acid production by immobilized Lactobacillus pentosus ATCC 8041 cell fermentation. Two quadratic models and response surface methodology were performed to illustrate the effect of each parameters and their interactions on the lactic acid yield and glucose consumption rate in immobilized L. pentosus ATCC 8041 cell fermentation. The maximum lactic acid yield was obtained as 0.938 ± 0.003 g/g glucose with a productivity of 2.213 ± 0.008 g/(L × h) under the optimized conditions of 2.0 mm bead diameter, 5.60 pH, 115.3 g/L initial glucose concentration, and 398.2 mg biomass (CDW) in 100 mL hydrogel. The analysis of variance indicated that the quadratic model was significant and could be used to scale up the fermentation process. Lactic acid (LA) is a hygroscopic non-toxic organic compound that has been widely utilized in the cosmetic, pharmaceutical, food, and chemical industries (Vijayakumar et al. 2008). In recent years, polylactic acid (PLA) polymerized from LA has received widespread attention for its excellent biocompatibility and bioabsorbability (Abdel-Rahman et al. 2011). PLA has a wide range of applications with the benefits of environmental protection and energy saving, which can be considered as a suitable alternative for traditional plastics produced by the petrochemical industry (Laopaiboon et al. 2010). Therefore, the market demand for LA has gradually increased (Okano et al. 2010). Chemical synthesis and fermentation are the main methods of industrial production of LA (Abdel-Rahman et al. 2011). The racemic mixture (DL-LA) will be formed during LA production process by chemical synthesis method, resulting in an increase in its cost of separation and purification. In addition, the chemical synthesis method still has the problem of high energy consumption. Nowadays, the production of compounds based on microbial conversion has become an important research direction. Optically pure LA (D-LA or L-LA) can be produced by fermentation method according to specific LA bacteria (LAB), leading to a decrease in time period and cost of recycle process (Bahry et al. 2019). Moreover, the fermentation conditions for LA production are milder, which is regarded to be an environment-friendly production method (Zhao et al. 2016). Immobilized cell fermentation has attracted great attention in the fields of scientific research and industry. Compared with free cell fermentation, immobilized cell fermentation has many advantages, such as higher cell density in the fermenter, higher yield and production rate during fermentation process, high biological activity maintained for a long fermentation period, and the convenience of product recovery (Kumar et al. 2014; Dishisha et al. 2012; Ghorbani et al. 2011; Goranov et al. 2013). In addition, immobilized cells with excellent reusability can be easily separated from the fermentation medium, so they can be used in repeated batch fermentations to reduce the time and cost of cell culture (Djukić-Vuković et al. 2013). 
Sodium alginate (SA) is a common material to prepare immobilized cell bead based on a cross-linking reaction with Ca2+, while polyvinyl alcohol (PVA) can also be used to encapsulate cells due to its cross-linking reaction with borate ions (Lee and Mooney 2012; Tang et al. 2017). It has been reported that SA–PVA hydrogel for cell encapsulation has significant advantages over cell immobilization using a single material. PVA effectively improves the stability of immobilized cell beads with a promoted mechanical strength, while SA improves the surface properties of immobilized cell beads to keep from the tendency to agglomeration. Therefore, SA–PVA hydrogel can be regarded as a proper material for cell immobilization (Zhan et al. 2013). Lactobacillus pentosus (L. pentosus) is a proper LAB to produce LA by consuming hexose via the Embden–Meyerhof–Parnas (EMP) pathway. When glucose is absent, its metabolic pathway changes from homologous fermentation to heterologous fermentation, which produces LA via phosphoketolase (PK) pathway (Martinez et al. 2013). Both pathways are shown as following equations: $${\text{C}}_{6} {\text{H}}_{12} {\text{O}}_{6} + 2{\text{ADP}} + 2{\text{Pi}} \to 2{\text{C}}_{3} {\text{H}}_{6} {\text{O}}_{3} + 2{\text{ATP}}$$ $${\text{C}}_{6} {\text{H}}_{12} {\text{O}}_{6} + 2{\text{ADP}} + 2{\text{Pi}} \to {\text{C}}_{3} {\text{H}}_{6} {\text{O}}_{3} + {\text{CO}}_{2} + {\text{C}}_{2} {\text{H}}_{5} {\text{OH}} + 2{\text{ATP}} .$$ In this study, the aerobic fermentation of immobilized L. pentosus cells utilizing glucose is still homologous fermentation with no ethanol production, which has been confirmed based on NMR spectroscopy in our previous research. Currently, few research efforts are found on immobilized L. pentosus cell fermentation. Our previous research focus on the effects of the concentration of carrier solutions and cross-linking agent solutions on the efficiency of immobilized L. pentosus cell fermentation, but further research is needed on other conditions of immobilization and fermentation. At the same time, proper design of experiment (DOE) is necessary for effective optimization of the conditions of immobilization and fermentation. Plackett–Burman design, Taguchi design, central composite design, and Box–Behnken design are applied in most studies for optimization, while Doehlert design and superlative box design have also been used in researches to obtain a credible result (Huang et al. 2019; Wahla et al. 2019; Al-Gheethi et al. 2019; Sahin 2019; Chollom et al. 2019). In this study, Box–Behnken design was used to optimize the bead diameter, pH, initial glucose concentration, and the weight of biomass for immobilized L. pentosus ATCC 8041 cell fermentation. The effects of these parameters on the efficiency of immobilized L. pentosus ATCC 8041 cell fermentation were discussed based on the response surface and regression model. Preliminary experiment Based on Plackett–Burman design, nine significant factors have been found in the fermentation of immobilized Lactobacillus pentosus cells by our preliminary experiment as shown in Table 1. The effects of the concentrations of two carrier solutions (SA and PVA) and two cross-linking agent solutions (CaCl2 and H3BO3) on the fermentation performance of immobilized Lactobacillus pentosus cells have been reported in our previous study, and the effect of temperature has also been investigated. The influences of the remaining four factors will be discussed in this study. 
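For reference, the two stoichiometries written above fix the maximum possible mass yields of lactic acid per gram of glucose; a quick sketch of the arithmetic (approximate molar masses):

```python
# Mass yields implied by the two stoichiometries above.
M_glucose, M_lactic = 180.16, 90.08          # g/mol, approximate

homolactic_yield   = 2 * M_lactic / M_glucose   # EMP pathway: 2 LA per glucose
heterolactic_yield = 1 * M_lactic / M_glucose   # PK pathway:  1 LA per glucose

print(f"homolactic  : {homolactic_yield:.2f} g LA per g glucose")   # ~1.00
print(f"heterolactic: {heterolactic_yield:.2f} g LA per g glucose") # ~0.50
# The reported yield of 0.938 g/g is therefore about 94% of the homolactic maximum.
```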
Table 1 Significant factors in the fermentation of immobilized Lactobacillus pentosus ATCC 8041 cells Seed culture preparation Lactobacillus pentosus ATCC 8041 was purchased from the American Type Culture Collection (ATCC), and stored in a refrigerator at − 8 °C. The lyophilized cells were activated in de Man, Rogosa and Sharpe (MRS) medium at 37 °C and 150 rpm for 8 h on a rotary shaker (GYROMAXTM 747R, Amerex Instruments, Lafayette, CA, USA) before immobilization. Design of experiment Box–Behnken design was used to optimize four parameters with three levels. As shown in Table 2, the four factors are bead diameter (DB), pH, initial glucose concentration (CG), and biomass (BW, CDW); respectively. Their levels from low to high were coded as − 1, 0, and + 1, respectively. Table 2 Range of factors for LA production by immobilized L. pentosus ATCC 8041 cells Cell immobilization SA and PVA were gradually added to deionized water and dissolved with continuous agitation at 30 °C and 80 °C, respectively, and subsequently mixed to prepare SA–PVA hydrogel. The concentration of SA and PVA in mixed solution was 2.0% and 6.0%, respectively (Wang et al. 2020). A specific amount of L. pentosus ATCC 8041 cells were inject into 100 mL sterilized SA–PVA hydrogel solution with continuous agitation. The fully mixed SA–PVA hydrogel solution containing L. pentosus ATCC 8041 cells were injected into the mixed cross-linking agent solution consisting of 0.10 M CaCl2 and 2.5% H3BO3 by an electrostatic droplet generator (Wang et al. 2020; Poncelet et al. 1999). Immobilized cell beads with the certain diameter were prepared with the diameter error of ± 0.3 mm. The prepared immobilized L. pentosus ATCC 8041 cell beads with a shape of approximate sphere were maintained in mixed cross-linking solution and stored in refrigerator at 4 °C for 4 h, and subsequently washed by sterilized deionized water to remove residual CaCl2 and H3BO3 (Bahry et al. 2019; Zhu et al. 2009). Batch fermentation Batch fermentation was carried out in a 1.0-L New Brunswick Bioreactor (BIOFLO 110; New Brunswick Scientific Co., Edison, NJ, USA) with the working volume of 800 mL for 48 h. Impellers of the bioreactor were disassembled to keep from the breakage of beads. Agitation speed was maintained at 150 rpm by a magnetic stirrer. Fermentation medium prepared under optimum conditions of 5 g/L peptone, 5 g/L yeast extract, 0.5 g/L MgSO4, and 0.5 g/L KH2PO4 and glucose with specific concentration were prepared and sterilized (Lee et al. 2011). The fermentation pH was controlled at designed value by adding 5 mol/L NaOH, and the fermentation temperature was maintained at 35 °C by a heat blanket. The concentrations of glucose and LA were measured by proton nuclear magnetic resonance spectroscopy (1H NMR). NMR samples consisting of 0.5 mL fermentation sample, 0.4 mL deuterium oxide (Acros organics), and 0.1 mL internal standard were prepared in 5-mm-o.d. nuclear magnetic resonance (NMR) tubes (Corning, NY, USA) for 1H NMR analysis (Holzgrabe 2010). The internal standard contained 95.5 wt% deuterium oxide, 4.2 wt% glucosamine, 0.2 wt% trimethylamine and 0.1 wt% trimethylsilyl propionate. The signal peak area of α-glucose (C1–H), β-glucose (C1–H), and LA (C3–H) was integrated by using MestReNova software on 1H NMR spectrum at 5.25 ppm, 4.65 ppm and 1.35 ppm (Mittal et al. 2009; Buyondo and Liu 2013). 
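The next paragraph describes how a linear calibration curve relates peak area to concentration. As a rough sketch of how such 1H NMR quantification is typically implemented, with hypothetical peak areas and concentrations standing in for real calibration data:

```python
# Hedged sketch of 1H NMR quantification via a linear calibration curve.
# All numbers below are hypothetical placeholders, not values from this study.
import numpy as np

# Calibration standards: known lactic acid concentrations (g/L) versus the
# integrated C3-H peak area at 1.35 ppm, normalized to the internal standard.
known_conc = np.array([5.0, 10.0, 20.0, 40.0])
known_area = np.array([0.21, 0.40, 0.83, 1.62])

slope, intercept = np.polyfit(known_area, known_conc, 1)   # linear fit

def conc_from_area(area):
    """Convert a normalized peak area into a concentration (g/L)."""
    return slope * area + intercept

# Glucose would be quantified analogously from the alpha (5.25 ppm) and beta
# (4.65 ppm) anomeric C1-H peak areas with its own calibration line.
print(conc_from_area(1.10))
```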
The calibration curve was developed by the linear relationship between the concentration and peak area for the calculation of glucose concentration and LA concentration. All the experimental values are shown in Table 3 as the format "mean value ± standard deviation". The analysis of variance (ANOVA) was carried out by Design Expert (Version 11). The response surface was applied to describe the effects of different factors on the efficiency of immobilized L. pentosus ATCC 8041 cell fermentation. Table 3 Experimental data of LA production by immobilized L. pentosus ATCC 8041 cells Quadratic models of LA yield and LA productivity were generated to show the functional relationship between factors and responses (Aslan and Cebeci 2007). For Box–Behnken design, a proper quadratic model will be established to show the effect of each factor and their quadratic interaction, which is shown as following format: $$Y = a_{0} + \mathop \sum \limits_{i = 1}^{n} a_{i} x_{i} + \mathop \sum \limits_{i = 1}^{n} a_{ii} x_{i}^{2} + \mathop \sum \limits_{i = 1}^{n} \mathop \sum \limits_{j = 1}^{n} a_{ij} x_{i} x_{j} + \varepsilon ,$$ where \(Y\), \(a_{0}\), \(a_{i}\), \(a_{ii}\), \(a_{ij}\), \(x_{i}\), and \(x_{j}\) represent predicted responses, offset constant, linear term coefficient, square term coefficient, interaction term coefficient, \(i\)th factor, and \(j\)th factor, respectively (Thakur et al. 2018); whereas, \(\varepsilon\) is the random error or uncertainties between predicted values and measured values (Lu et al. 2010). Regression model For LA yield (YLA), the quadratic model of normalized factors with the value from − 1 to 1 is shown as the following format: $$\begin{aligned} Y_{\text{LA}} & = 0.9322 - 0.0058 \times D_{\text{B}} + 0.0112 \times {\text{pH}} + 0.0080 \times C_{\text{G}} - 0.0009 \times B_{\text{W}} \\ & \quad + 0.0013 \times D_{\text{B}} \times {\text{pH}} + 0.0010 \times D_{\text{B}} \times C_{\text{G}} - 0.0015 \times D_{\text{B}} \times B_{\text{W}} \\ & \quad - 0.0030 \times {\text{pH}} \times C_{\text{G}} - 0.0128 \times {\text{pH}} \times B_{\text{W}} + 0.0125 \times C_{\text{G}} \\ & \quad \times B_{\text{W}} - 0.0019 \times {D_{\text{B}}}^{2} - 0.0328 \times {\text{pH}}^{2} - 0.0078 \times {C_{\text{G}}}^{2} - 0.0056 \times {B_{\text{W}}}^{2} . \\ \end{aligned}$$ The actual quadratic model of LA yield has been obtained as follows: $$\begin{aligned} Y_{\text{LA}} & = - 1.05297 - 0.008400 \times D_{\text{B}} + 0.449533 \times {\text{pH}} + 0.014037 \times C_{\text{G}} \\ & \quad - 0.000236 \times B_{\text{W}} + 0.001250 \times D_{\text{B}} \times {\text{pH}} + 0.000100 \times D_{\text{B}} \times C_{\text{G}} \\ & \quad - 0.000015 \times D_{\text{B}} \times B_{\text{W}} - 0.000300 \times {\text{pH}} \times C_{\text{G}} - 0.000128 \times {\text{pH}} \\ & \quad \times B_{\text{W}} + 0.000012 \times C_{\text{G}} \times B_{\text{W}} - 0.001892 \times {D_{\text{B}}}^{2} - 0.032767 \times {\text{pH}}^{2} \\ & \quad - 0.000078 \times {C_{\text{G}}}^{2} - 5.64167 \times 10^{ - 7} \times {B_{\text{W}}}^{2} . \\ \end{aligned}$$ LA yield model had 15 terms, including four linear terms, four quadratic terms, six terms of two-factorial interaction, and one constant term. The ANOVA of this model is shown in Table 4. The F-value of model was 118.09 with a p value less than 0.0001, implying that the model was significant (Aslan and Cebeci 2007). The model terms with a p-value less than 0.05 could be regarded as highly significant terms in the model (Xu and Xu 2014). 
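As an illustration of how such a 15-term quadratic model can be generated from the run table, the following sketch builds the coded Box–Behnken design matrix for four factors and fits the coefficients by ordinary least squares. It is only a sketch: the response vector y is a placeholder for the measured yields of Table 3, and the single centre run stands in for the replicated centre points of the real design.

```python
import numpy as np
from itertools import combinations

def bbd_coded(k=4):
    """Coded Box-Behnken runs for k factors: every factor pair at +/-1 with the
    remaining factors at 0, plus a single centre run (real designs replicate it)."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                r = [0.0] * k
                r[i], r[j] = float(a), float(b)
                runs.append(r)
    runs.append([0.0] * k)
    return np.array(runs)

def quad_terms(X):
    """15-column model matrix: intercept, 4 linear, 6 interaction and 4 square terms."""
    n, k = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
    cols += [X[:, i] ** 2 for i in range(k)]
    return np.column_stack(cols)

X = bbd_coded(4)                     # coded settings of DB, pH, CG and BW
y = np.full(len(X), 0.93)            # placeholder: use the LA yields from Table 3 here
coef, *_ = np.linalg.lstsq(quad_terms(X), y, rcond=None)
print("fitted coefficients (a0, a_i, a_ij, a_ii):", np.round(coef, 4))
```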
Therefore, \(D_{\text{B}}\), \({\text{pH}}\), \(C_{\text{G}}\), \({\text{pH}} \times C_{\text{G}}\), \({\text{pH}} \times B_{\text{W}}\), \({\text{pH}}^{2}\), \({C_{\text{G}}}^{2}\), and \({B_{\text{W}}}^{2}\) were model terms that played a significant role in LA yield by immobilized L. pentosus ATCC 8041 cell fermentation. Other model terms with larger p-values could be identified to have minor influence on LA yield by immobilized L. pentosus ATCC 8041 cell fermentation. The correlation coefficient (R2) between experimental results and predicted values of the response variable was used to check the goodness-of-fit formula (Miaou et al. 1996). The R2 of LA yield model was 0.9916, implying that the model was able to describe the trend of experimental result accurately, and only 0.84% of the sample variation could not be explained by this model (Lu et al. 2010). The adjusted R2 and predicted R2 were 0.9832 and 0.9572, respectively, and their difference was less than 0.2, which indicated that the model value was in a reasonable agreement with a high accuracy (Miaou et al. 1996). The adequate precision of 38.1890 was larger than 4, illustrating that the conditions of immobilized L. pentosus ATCC 8041 cell fermentation could be theoretically predicted by this model with a high adequacy. The "Lack of Fit F-value" of 2.15 was not significant relative to the pure error, confirming that the significant model was good fitting and reliable (Thakur et al. 2018; Lu et al. 2010; Xu and Xu 2014; Miaou et al. 1996). Table 4 ANOVA for LA yield by immobilized L. pentosus ATCC 8041 cells The quadratic LA productivity model of normalized factors with the value from − 1 to 1 is shown as the following format: $$\begin{aligned} P_{\text{LA}} & = 2.09 - 0.0384 \times D_{\text{B}} - 0.0468 \times {\text{pH}} + 0.1271 \times C_{\text{G}} - 0.0102 \times B_{\text{W}} \\ & \quad + 0.0035 \times D_{\text{B}} \times {\text{pH}} - 0.0005 \times D_{\text{B}} \times C_{\text{G}} - 0.0583 \times D_{\text{B}} \times B_{\text{W}} \\ & \quad - 0.0005 \times {\text{pH}} \times C_{\text{G}} - 0.0455 \times {\text{pH}} \times B_{\text{W}} + 0.1187 \times C_{\text{G}} \times B_{\text{W}} \\ & \quad - 0.0339 \times {D_{\text{B}}}^{2} - 0.1386 \times {\text{pH}}^{2} - 0.0442 \times {C_{\text{G}}}^{2} - 0.0640 \times {B_{\text{W}}}^{2} . \\ \end{aligned}$$ The actual quadratic model of LA productivity (PLA) is shown as follows: $$\begin{aligned} P_{\text{LA}} & = - 7.26932 + 0.324383 \times D_{\text{B}} + 1.74727 \times {\text{pH}} + 0.070318 \times C_{\text{G}} - 0.004844 \\ & \quad \times B_{\text{W}} + 0.003500 \times D_{\text{B}} \times {\text{pH}} - 0.000050 \times D_{\text{B}} \times C_{\text{G}} - 0.000582 \\ & \quad \times D_{\text{B}} \times B_{\text{W}} - 0.000050 \times {\text{pH}} \times C_{\text{G}} - 0.000455 \times {\text{pH}} \times B_{\text{W}} \\ & \quad + 0.000119 \times C_{\text{G}} \times B_{\text{W}} - 0.033925 \times {D_{\text{B}}}^{2} - 0.138550 \times {\text{pH}}^{2} \\ & \quad - 0.000422 \times {C_{\text{G}}}^{2} - 6.40500 \times 10^{ - 6} \times {B_{\text{W}}}^{2} . \\ \end{aligned}$$ The LA productivity model contained four linear terms, four quadratic terms, six terms of two-factorial interaction, and one constant term. The ANOVA of this model is shown in Table 5. 
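The goodness-of-fit statistics discussed above can be recomputed directly from the measured and predicted responses. The helper below is a minimal sketch using the standard definitions of R² and adjusted R²; n_params is the total number of model terms including the intercept (15 for these models).

```python
import numpy as np

def r2_stats(y_meas, y_pred, n_params):
    """Coefficient of determination and adjusted R^2 for a fitted response-surface model."""
    y_meas = np.asarray(y_meas, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    n = y_meas.size
    ss_res = np.sum((y_meas - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_meas - y_meas.mean()) ** 2)   # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - n_params)
    return r2, r2_adj
```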
In this significant model, \(D_{\text{B}}\), \({\text{pH}}\), \(C_{\text{G}}\), \(D_{\text{B}} \times B_{\text{W}}\), \({\text{pH}} \times B_{\text{W}}\), \(C_{\text{G}} \times B_{\text{W}}\), \({D_{\text{B}}}^{2}\), \({\text{pH}}^{2}\), \({C_{\text{G}}}^{2}\), and \({B_{\text{W}}}^{2}\) could be considered to have main influence on LA productivity as significant terms. The models of LA productivity were also confirmed to be adequate and reliable by ANOVA, which could be used to navigate the design space. Table 5 ANOVA for LA productivity by immobilized L. pentosus ATCC 8041 cells Effect of parameters on immobilized L. pentosus ATCC 8041 cell fermentation As shown in Figs. 1a–c and 2a–c, LA yield and productivity increase with a decreased bead diameter, indicating the promotion of mass transfer performance and reduction of product inhibition by smaller bead diameter (Wu et al. 2010). As the bead diameter decreased, the surface-to-volume ratio of the beads increased, leading to a decrease in diffusion resistance (Guoqiang et al. 1991). Therefore, the substrate could be more easily diffused into the smaller-sized beads and be utilized by cells, resulting in an increase in product yield and productivity (Park et al. 2002). Idris and Suzana (2006) reported that the maximum LA yield was obtained by immobilized Lactobacillus delbrueckii cells with the bead diameter of 1 mm, and the LA yield decreased significantly when the bead diameter was equal to or larger than 5 mm. Park et al. (2002) studied biodegradation of hydrogen sulfide by immobilized Thiobacillus sp. IW and concluded that the biodegradation efficiency increased with a decrease in bead diameter from 4 to 1 mm. It has been reported that substrates with a higher concentration are more easily to diffuse into immobilized cell beads with larger diameter (Idris and Suzana 2006; Dwevedi and Kayastha 2009; Won et al. 2005). Other reports point out that larger numbers of cells were contained in the particles with a larger diameter, which cause a higher rate of substrate consumption (Wu et al. 2010; Won et al. 2005; Mundra et al. 2007; Gummadi et al. 2009). Therefore, the LA productivity increases with a properly increased bead diameter from 2 to 3 mm due to an increased initial glucose concentration and amount of biomass, while it decreases with the further increased bead diameter due to the substrate consumption for cell growth as shown in Fig. 2b, c. 
Response surface of LA yield affected by a bead diameter and pH at 101.9 g/L initial glucose concentration and 204.6 mg biomass (CDW); b bead diameter and initial glucose concentration at pH 5.99 and 204.6 mg biomass (CDW); c bead diameter and biomass at pH 5.99 and 101.9 g/L initial glucose concentration; d pH and initial glucose concentration at 2.0 mm bead diameter and 204.6 mg biomass (CDW); e pH and biomass at 2.0 mm bead diameter and 101.9 g/L initial glucose concentration, and f initial glucose concentration and biomass at 2.0 mm bead diameter and pH 5.99 Response surface of LA productivity affected by a bead diameter and pH at 119.6 g/L initial glucose concentration and 377.4 mg biomass (CDW); b bead diameter and initial glucose concentration at pH 5.21 and 377.4 mg biomass (CDW); c bead diameter and biomass at pH 5.21 and 119.6 g/L initial glucose concentration; d pH and initial glucose concentration at 2.0 mm bead diameter and 377.4 mg biomass (CDW); e pH and biomass at 2.0 mm bead diameter and 119.6 g/L initial glucose concentration, and f initial glucose concentration and biomass at 2.0 mm bead diameter and pH 5.21 As shown in Fig. 1a, d, e, the highest LA yield has been observed at around pH 6.0 by immobilized L. pentosus ATCC 8041 cell fermentation, while the highest LA productivity has been obtained at around pH 5.5 as shown in Fig. 2a, d, e. The same result was also obtained by Bahry et al. (2019) and Krischke et al. (1991) in the fermentation process of different immobilized cells. A higher external pH inhibited the activity of the ATPase combined with a change in pH of cytoplasm, resulting in a decrease in fermentation efficiency (Valli et al. 2005; Kourkoutas et al. 2004; Carmelo et al. 1997). Liu and Shen (2008) studied the effect of pH on ethanol production by immobilized S. cerevisiae. They indicated that the H+ in the fermentation broth diffused freely to the inside of the immobilized cell particles through the large pores of the calcium alginate matrix and caused a change in charge quantities of the plasma membrane, which led to a change in the permeability of the plasma membrane to nutrients and inorganic salt ions, affecting the fermentation efficiency. Another research showed that extreme acidic pH caused changes in intracellular ionic environment and damages to protein structure, which was detrimental to cell growth and metabolism, leading to a decrease in product yield and productivity (Bhushan et al. 2015). It has also been reported that the inhibitory effect of LA with the undissociated form on LA production is more significant than that of the dissociated lactate form if the pH is closer to the pKa of LA (about 3.9) (Pal et al. 2009). When pH is lower than 5, LA production will be inhibited because of the increased amount of LA with the undissociated form, while LA production will also be inhibited due to the inhibited activity of related enzymes when pH is higher than 7 (Bahry et al. 2019). Therefore, the pH value from 5.5 to 6.0 is an ideal condition for LA production by fermentation process. When the concentrations of initial glucose and biomass were low, LA yield of immobilized L. pentosus ATCC 8041 cells at pH 7.0 was higher than that at pH 5.0, and its change from pH 5–6 was faster than that from pH 6.0–7.0. However, LA productivity of immobilized L. pentosus ATCC 8041 cells at pH 7.0 was lower than that at pH 5.0, and its change from pH 5.0–6.0 was slower than that from pH 6.0–7.0. It indicated that the cell growth of L. 
pentosus at pH 5 was more significant than that at pH 7. The enzyme activity for cell growth of L. pentosus was higher at pH 5.0 due to its acid-resistivity, and more glucose was consumed for cell growth rather than LA production at pH 5.0. When pH increased to be 7.0, the enzyme activity for cell growth was inhibited, and more glucose was utilized to produce LA (Buyondo and Liu 2013). When the concentrations of initial glucose and biomass were high, both LA yield and productivity were lower at pH 7.0 than that at pH 5.0. As shown in Figs. 1b, d, f and 2b, d, f, an increased initial glucose concentration leads to a decreased LA yield but an increased LA productivity. It indicates that the initial glucose concentration increasing in a certain range would promote cell growth and subsequently promoted the LA productivity, while the LA yield was inhibited due to the limited substrate and enzyme activity (Bahry et al. 2019; Thakur et al. 2018; Liu et al. 2018). Qin et al. (2009) reported that the initial glucose concentration has a significant effect on LA production in batch fermentation. They found that an initial glucose concentration of 9.7% to 13.3% had non-significant limitation to the LA production, but an initial glucose concentration of 18.6% or higher would lead to a significant restriction on LA production. The similar result was also found by Wendhausen et al. (2001) that the initial glucose concentration strongly affected ethanol production of immobilized S. cerevisiae in batch fermentation of sugar cane syrup. A study reported that a higher glucose concentration leads to a longer retention time required to obtain a higher product yield due to a lower feed dilution (Liu et al. 2009). Feed dilution causes osmotic effects on the mass transfer performance and cell activity, leading to changes in product yield and productivity (Lee et al. 2011). Wada et al. (1980) studied effects of different glucose concentrations on the growth and production activity of immobilized cells and concluded that the largest number of cells in the gel beads was obtained by using a medium with the glucose concentration of 10%. They found that increased glucose concentration out of proper range caused a decrease in cell growth rate and number of viable cells in the gel, resulting in a reduction of product yield and productivity. Other reports explained that the reduced water activity and plasmolysis led to decreased cell activity or cell dormancy when the substrate concentration exceeded the critical value, resulting in a decrease in fermentation efficiency (Roukas et al. 1991; Tapia et al. 2008). However, the result reported by Mariam et al. (2009) was different from the result shown in Fig. 1, which concluded that the sugar concentration of 15% led to the maximum product yield in both free cell fermentation and immobilized cell fermentation. As shown in Figs. 1f and 2f, when the amount of cell is lower, the product yield and productivity increases with the increase of the substrate concentration, and when the substrate concentration exceeds the critical value, the product yield and productivity show a decreasing trend with the increase of the substrate concentration (Liu et al. 2008). As shown in Figs. 1c, e, f and 2c, e, f, regardless of other parameters, the increased amount of cell biomass results in a decrease in LA yield but an increase in LA productivity. The same result was obtained by Dong et al. (2017). 
They reported that more substrate was consumed for the cell growth and the maintenance for physiological activities of cells when the cell concentration was higher, resulting in a significant decrease in product yield due to substrate deficiency. As shown in Fig. 1f, with the sufficient substrate concentration, product yield increases as the amount of entrapped cells rose. When the substrate concentration is low, nutrients are not enough to support cell growth and metabolism for a larger amount of cells, resulting in a decrease in product yield due to the limited metabolism rate (Thakur et al. 2018). As shown in Fig. 2f, when the glucose concentration is low, the LA productivity decreases with the increase in biomass from 200 to 300 mg caused by the glucose consumption for cell growth. However, LA productivity increases with the further increase in amount of biomass. When the glucose concentration is high, the increased number of cells in a certain period of time promotes LA production with richer substrates, which lead to an increase in LA productivity. In addition, an extremely low cell density would lead to the cell growth to become the main activity of cells in the initial stage of fermentation, which subsequently causes a decrease in product yield and productivity (Wendhausen et al. 2001; Liu et al. 2009; Wada et al. 1980; Roukas et al. 1991; Tapia et al. 2008; Dong et al. 2017). Furthermore, it was observed that cell growth in immobilized cell particles led to the formation of distinct and large colonies when the concentration of the entrapped cells was low, and the size of colonies increased as the concentration of the captured cells reduced (Walsh et al. 1996). However, this result was not in agreement with that of Bhatnagar et al. (2016). They studied the biodegradation of carbazole by immobilized Pseudomonas sp. GBS.5 cells and concluded that the rate of biodegradation did not enhance with an increase in entrapped cell concentration when the substrate concentration is appropriate. Optimization of LA production and validation test Fermentation conditions for the highest LA yield and the highest LA productivity were numerically optimized, respectively. It was estimated based on the mathematical model that the highest LA yield was 0.942 g/g glucose with a productivity of 1.947 g/(L × h) at 2.0 mm bead diameter, 5.99 pH, 101.9 g/L initial glucose concentration, and 204.611 mg biomass (CDW). However, the highest LA productivity was estimated as 2.242 g/(L × h) with a yield of 0.926 g/g glucose at 2.0 mm bead diameter, 5.21 pH, 119.6 g/L initial glucose concentration, and 377.4 mg biomass (CDW). In order to validate the effectiveness of estimated optimum conditions, a batch fermentation of immobilized L. pentosus ATCC 8041 cells under the estimated optimum conditions were carried out at 35 °C and 150 rpm in a complete fermentation period of 48 h. The experimental result of highest LA yield test was obtained as 0.941 ± 0.004 g/g glucose with a productivity of 1.950 ± 0.011 g/(L × h), while the data of highest LA productivity test was obtained as 2.245 ± 0.007 g/(L × h) with a yield of 0.928 ± 0.005 g/g glucose. Therefore, the close correspondence could be observed between the experimental data and the predicted result by mathematical model. Due to the different optimum conditions for LA yield and productivity, the conditions should be optimized to obtain a higher LA yield and productivity simultaneously for practical application. 
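To illustrate the numerical optimization step, the sketch below maximizes the coded quadratic model of LA yield reported earlier over the coded cube [−1, 1]⁴. The coefficients are those listed above; the result is in coded units only (mapping back to physical units needs the factor ranges of Table 2), and the predicted maximum should land close to the reported value of about 0.94 g/g.

```python
import numpy as np
from scipy.optimize import minimize

# Coefficients of the coded LA-yield model reported above (factor order: DB, pH, CG, BW).
A0   = 0.9322
ALIN = np.array([-0.0058, 0.0112, 0.0080, -0.0009])
ASQ  = np.array([-0.0019, -0.0328, -0.0078, -0.0056])
AINT = {(0, 1): 0.0013, (0, 2): 0.0010, (0, 3): -0.0015,
        (1, 2): -0.0030, (1, 3): -0.0128, (2, 3): 0.0125}

def y_la_coded(x):
    """Predicted LA yield (g/g) for coded factor settings x in [-1, 1]^4."""
    x = np.asarray(x, dtype=float)
    y = A0 + ALIN @ x + ASQ @ (x * x)
    y += sum(c * x[i] * x[j] for (i, j), c in AINT.items())
    return y

# Maximize the yield (minimize its negative) subject to the coded bounds.
res = minimize(lambda x: -y_la_coded(x), x0=np.zeros(4),
               bounds=[(-1.0, 1.0)] * 4, method="L-BFGS-B")
print("coded optimum (DB, pH, CG, BW):", np.round(res.x, 3))
print("predicted LA yield at optimum :", round(-res.fun, 4))
```

With these coefficients the bound-constrained maximizer settles at the low level of the bead diameter (coded −1), which is consistent with the trend discussed for Figs. 1 and 2.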
The conditions optimized by this method can effectively improve the product conversion rate and fermentation efficiency, leading to a reduction of costs and fermentation time (Thakur et al. 2018). Based on the mathematical models, the highest LA yield and productivity obtained simultaneously were 0.936 g/g glucose and 2.210 g/(L × h), respectively. The corresponding optimum conditions were a bead diameter of 2.0 mm, a pH of 5.60, an initial glucose concentration of 115.3 g/L, and a biomass (CDW) of 398.2 mg. The validation experiment gave an LA yield of 0.938 ± 0.003 g/g glucose and a productivity of 2.213 ± 0.008 g/(L × h), which again confirmed the high reliability of the model. The density of viable cells was 8.13 × 10⁹ CFU/g beads (9.91 log CFU/g), implying that the immobilized L. pentosus ATCC 8041 cells retained high viability in batch fermentation under the optimum conditions.

Batch fermentation of free L. pentosus ATCC 8041 cells was also conducted under the same conditions. In the free-cell fermentation it took approximately 72 h for the glucose to be consumed almost completely, with an LA yield of 0.826 ± 0.004 g/g glucose and a productivity of 1.323 ± 0.006 g/(L × h). The changes in LA and glucose concentrations during fermentation with both immobilized and free L. pentosus ATCC 8041 cells are shown in Fig. 3. The LA yield and LA productivity of the immobilized cells were therefore 13.6% and 67.3% higher, respectively, than those of the free cells. The high LA yield and productivity obtained in the validation experiment indicate the high fermentation efficiency and excellent stability of the immobilized L. pentosus ATCC 8041 cells under the optimum conditions. Repeated batch fermentation was also conducted to test the reusability of the immobilized cells. As shown in Table 6, the immobilized L. pentosus ATCC 8041 cells maintained a stable fermentation performance with high LA yield and productivity over eight batches, implying excellent reusability.

Fig. 3 Time profiles of LA and glucose concentrations during the fermentation process of immobilized L. pentosus ATCC 8041 cells and free L. pentosus ATCC 8041 cells

Table 6 The results of repeated batch fermentation of immobilized L. pentosus ATCC 8041 cells under optimum conditions

The maximum LA yield of 0.96 g/g substrate with a productivity of 1.69 g/(L × h) was obtained by Djukić-Vuković et al. in a study of non-mineral LA fermentation by immobilized Lactobacillus rhamnosus ATCC 7469 cells, and the same group obtained a maximum LA yield of 0.96 g/g substrate with a productivity of 1.41 g/(L × h) by immobilized L. rhamnosus ATCC 7469 cell fermentation without a nitrogen source (Djukić-Vuković et al. 2013, 2016). Radosavljević et al. (2019) studied the immobilization of L. rhamnosus in PVA–SA hydrogel for LA production and obtained a maximum LA productivity of 0.8 g/(L × h) in batch fermentation and an overall productivity of 0.78 g/(L × h) over seven batches of repeated batch fermentation. Bahry et al. (2019) obtained a maximum LA productivity of 1.22 g/(L × h) in immobilized L. rhamnosus cell fermentation of carob pod waste.
The maximum LA yield of 0.93 g/g glucose with a productivity of 2.7 g/(L × h) in immobilized Lactobacillus casei cell fermentation with a dry cell concentration of 7.5 g/L was obtained by Maslova et al. (2016). The immobilized L. pentosus cells perform with a high fermentation efficiency and excellent stability in homofermentation for LA production. The parameters have been optimized to be 2.0 mm bead diameter, 5.60 pH, 115.3 g/L initial glucose concentration, and 398.2 mg biomass (CDW) based on response surface methodology with Box–Behnken design for maximizing the LA yield. The highest LA yield and productivity have been obtained as 0.938 ± 0.003 g/g glucose and 2.213 ± 0.008 g/(L × h), respectively. The stable and high LA yield and productivity can be obtained by repeated batch fermentation by L. pentosus cells under optimum conditions. The LA yield and glucose consumption rate of immobilized L. pentosus cells can be optimized with a quadratic model with a high accuracy, which can be used to navigate a design space. All data obtained or analyzed during this study are included in this article and available from the corresponding author. PLA: Polylactic acid LAB: Lactic acid bacteria PVA: EMP: Embden–Meyerhof–Parnas Phosphoketolase DOE: ATCC: American Type Culture Collection de Man, Rogosa and Sharpe NMR: Nuclear magnetic resonance Abdel-Rahman MA, Tashiro Y, Sonomoto K (2011) Lactic acid production from lignocellulose-derived sugars using lactic acid bacteria: overview and limits. J Biotechnol 156(4):286–301 Al-Gheethi A, Noman E, Mohamed RMSR, Ismail N, Abdullah AHB, Kassim AHM (2019) Optimizing of pharmaceutical active compounds biodegradability in secondary effluents by β-lactamase from Bacillus subtilis using central composite design. J Hazard Mater 365:883–894 Aslan N, Cebeci Y (2007) Application of Box–Behnken design and response surface methodology for modeling of some Turkish coals. Fuel 86(1–2):90–97 Bahry H, Abdalla R, Pons A, Taha S, Vial C (2019) Optimization of lactic acid production using immobilized Lactobacillus rhamnosus and carob pod waste from the Lebanese food industry. J Biotechnol 306:81–88 Bhatnagar Y, Singh GB, Mathur A, Srivastava S, Gupta S, Gupta N (2016) Biodegradation of carbazole by Pseudomonas sp. GBS. 5 immobilized in polyvinyl alcohol beads. J Biochem Technol 6(3):1003–1007 Bhushan B, Pal A, Jain V (2015) Improved enzyme catalytic characteristics upon glutaraldehyde cross-linking of alginate entrapped xylanase isolated from Aspergillus flavus MTCC 9390. Enzyme Res. https://doi.org/10.1155/2015/210784 Buyondo JP, Liu S (2013) Unstructured kinetic modeling of batch production of lactic acid from hemicellulosic sugars. J Bioprocess Eng Biorefin 2(1):40–45 Carmelo V, Santos H, SaCorreia I (1997) Effect of extracellular acidification on the activity of plasma membrane ATPase and on the cytosolic and vacuolar pH of Saccharomyces cerevisiae. Biochim Biophys Acta 1325(1):63–70 Chollom MN, Rathilal S, Swalaha FM, Bakare BF, Tetteh EK, Chollom MN, Rathilal S, Swalaha FM, Bakare BF, Tetteh EK (2019) Comparison of response surface methods for the optimization of an upflow anaerobic sludge blanket for the treatment of slaughterhouse wastewater. Environ Eng Res 25(1):114–122 Dishisha T, Alvarez MT, Hatti-Kaul R (2012) Batch and continuous propionic acid production from glycerol using free and immobilized cells of Propionibacterium acidipropionici. 
Bioresour Technol 118:553–5622 Djukić-Vuković AP, Mojović LV, Jokić BM, Nikolić SB, Pejin JD (2013) Lactic acid production on liquid distillery stillage by Lactobacillus rhamnosus immobilized onto zeolite. Bioresour Technol 135:454–458 Djukić-Vuković AP, Jokić BM, Kocić-Tanackov SD, Pejin JD, Mojović LV (2016) Mg-modified zeolite as a carrier for Lactobacillus rhamnosus in L (+) lactic acid production on distillery wastewater. J Taiwan Inst Chem Eng 59:262–266 Dong Y, Zhang Y, Tu B (2017) Immobilization of ammonia-oxidizing bacteria by polyvinyl alcohol and sodium alginate. Braz J Microbiol 48(3):515–521 Dwevedi A, Kayastha AM (2009) Optimal immobilization of β-galactosidase from Pea (PsBGAL) onto sephadex and chitosan beads using response surface methodology and its applications. Bioresour Technol 100(10):2667–2675 CAS PubMed Article PubMed Central Google Scholar Ghorbani F, Younesi H, Sari AE, Najafpour G (2011) Cane molasses fermentation for continuous ethanol production in an immobilized cells reactor by Saccharomyces cerevisiae. Renew Energy 36(2):503–509 Goranov B, Blazheva D, Kostov G, Denkova Z, Germanova Y (2013) Lactic acid fermentation with encapsulated Lactobacillus casei ssp. rhamnosus ATCC 11979 (NBIMCC 1013) in alginate/chitosan matrices. Bulg J Agric Sci 19(2):101–104 Gummadi SN, Ganesh K, Santhosh D (2009) Enhanced degradation of caffeine by immobilized cells of Pseudomonas sp. in agar–agar matrix using statistical approach. Biochem Eng J 44(2–3):136–141 Guoqiang D, Kaul R, Mattiasson B (1991) Evaluation of alginate-immobilized Lactobacillus casei for lactate production. Appl Microbiol Biotechnol 36(3):309–314 Holzgrabe U (2010) Quantitative NMR spectroscopy in pharmaceutical applications. Prog Nucl Magn Reson Spectrosc 57(2):229–240 Huang J, Zhang G, Zheng L, Lin Z, Wu Q, Pan Y (2019) Plackett–Burman design and response surface optimization of conditions for culturing Saccharomyces cerevisiae in Agaricus bisporus industrial wastewater. Acta Sci Pol Technol Aliment. https://doi.org/10.17306/J.AFS.2019.0620 Idris A, Suzana W (2006) Effect of sodium alginate concentration, bead diameter, initial pH and temperature on lactic acid production from pineapple waste using immobilized Lactobacillus delbrueckii. Process Biochem 41(5):1117–1123 Kourkoutas Y, Bekatorou A, Banat IM, Marchant R, Koutinas A (2004) Immobilization technologies and support materials suitable in alcohol beverages production: a review. Food Microbiol 21(4):377–397 Krischke W, Schröder M, Trösch W (1991) Continuous production of l-lactic acid from whey permeate by immobilized Lactobacillus casei subsp. casei. Appl Microbiol Biotechnol 34(5):573–578 Kumar MN, Gialleli AI, Masson JB, Kandylis P, Bekatorou A, Koutinas AA, Kanellaki M (2014) Lactic acid fermentation by cells immobilised on various porous cellulosic materials and their alginate/poly-lactic acid composites. Bioresour Technol 165:332–335 Laopaiboon P, Thani A, Leelavatcharamas V, Laopaiboon L (2010) Acid hydrolysis of sugarcane bagasse for lactic acid production. Bioresour Technol 101(3):1036–1043 Lee KY, Mooney DJ (2012) Alginate: properties and biomedical applications. Prog Polym Sci 37(1):106–126 Lee KH, Choi IS, Kim YG, Yang DJ, Bae HJ (2011) Enhanced production of bioethanol and ultrastructural characteristics of reused Saccharomyces cerevisiae immobilized calcium alginate beads. 
Bioresour Technol 102(17):8191–8198 Liu R, Shen F (2008) Impacts of main factors on bioethanol fermentation from stalk juice of sweet sorghum by immobilized Saccharomyces cerevisiae (CICC 1308). Bioresour Technol 99(4):847–854 Liu YP, Zheng P, Sun ZH, Ni Y, Dong JJ, Zhu LL (2008) Economical succinic acid production from cane molasses by Actinobacillus succinogenes. Bioresour Technol 99(6):1736–1742 Liu CZ, Wang F, Ou-Yang F (2009) Ethanol fermentation in a magnetically fluidized bed reactor with immobilized Saccharomyces cerevisiae in magnetic particles. Bioresour Technol 100(2):878–882 Liu J, Pan D, Wu X, Chen H, Cao H, Li QX, Hua R (2018) Enhanced degradation of prometryn and other s-triazine herbicides in pure cultures and wastewater by polyvinyl alcohol-sodium alginate immobilized Leucobacter sp. JW-1. Sci Total Environ 615:78–86 Lu Z, He F, Shi Y, Lu M, Yu L (2010) Fermentative production of L (+)-lactic acid using hydrolyzed acorn starch, persimmon juice and wheat bran hydrolysate as nutrients. Bioresour Technol 101(10):3642–3648 Mariam I, Manzoor K, Ali S, Ul-Haq I (2009) Enhanced production of ethanol from free and immobilized Saccharomyces cerevisiae under stationary culture. Pak J Bot 41(2):821–833 Martinez FAC, Balciunas EM, Salgado JM, González JMD, Converti A, de Souza Oliveira RP (2013) Lactic acid properties, applications and production: a review. Trends Food Sci Technol 30(1):70–83 Maslova O, Sen'ko O, Stepanov N, Efremenko E (2016) Lactic acid production using free cells of bacteria and filamentous fungi and cells immobilized in polyvinyl alcohol cryogel: a comparative analysis of the characteristics of biocatalysts and processes. Catal Ind 8(3):280–285 Miaou SP, Lu A, Lum HS (1996) Pitfalls of using R2 to evaluate goodness of fit of accident prediction models. Transp Res Rec 1542(1):6–13 Mittal A, Scott GM, Amidon TE, Kiemle DJ, Stipanovic AJ (2009) Quantitative analysis of sugars in wood hydrolyzates with 1H NMR during the autohydrolysis of hardwoods. Bioresour Technol 100(24):6398–6406 Mundra P, Desai K, Lele S (2007) Application of response surface methodology to cell immobilization for the production of palatinose. Bioresour Technol 98(15):2892–2896 Okano K, Zhang Q, Yoshida S, Tanaka T, Ogino C, Fukuda H, Kondo A (2010) D-lactic acid production from cellooligosaccharides and β-glucan using L-LDH gene-deficient and endoglucanase-secreting Lactobacillus plantarum. Appl Microbiol Biotechnol 85(3):643–650 Pal P, Sikder J, Roy S, Giorno L (2009) Process intensification in lactic acid production: a review of membrane based processes. Chem Eng Process 48(11–12):1549–1559 Park DH, Cha JM, Ryu HW, Lee GW, Yu EY, Rhee JI, Park JJ, Kim SW, Lee IW, Joe YI (2002) Hydrogen sulfide removal utilizing immobilized Thiobacillus sp. IW with Ca-alginate bead. Biochem Eng J 11(2–3):167–173 Poncelet D, Neufeld R, Goosen M, Burgarski B, Babak V (1999) Formation of microgel beads by electric dispersion of polymer solutions. AIChE J 45(9):2018–2023 Qin J, Zhao B, Wang X, Wang L, Yu B, Ma Y, Ma C, Tang H, Sun J, Xu P (2009) Non-sterilized fermentative production of polymer-grade l-lactic acid by a newly isolated thermophilic strain Bacillus sp. 2–6. PLoS ONE 4(2):e4359 PubMed PubMed Central Article CAS Google Scholar Radosavljević M, Lević S, Belović M, Pejin J, Djukić-Vuković A, Mojović L, Nedović V (2019) Immobilization of Lactobacillus rhamnosus in polyvinyl alcohol/calcium alginate matrix for production of lactic acid. Bioprocess Biosyst Eng. 
https://doi.org/10.1007/s00449-019-02228-0 Roukas T, Lazarides H, Kotzekidou P (1991) Ethanol production from deproteinized whey by Saccharomyces cerevisiae cells entrapped in different immobilization matrices. Milchwissenschaft 46(7):438–441 Sahin S (2019) Optimization of the immobilization conditions of horseradish peroxidase on TiO2–COOH nanoparticles by Box–Behnken design. J Nat Appl Sci. https://doi.org/10.19113/sdufenbed.557021 Tang Y, Pang L, Wang D (2017) Preparation and characterization of borate bioactive glass cross-linked PVA hydrogel. J Non-Cryst Solids 476:25–29 Tapia MS, Alzamora SM, Chirife J (2008) 10 effects of water activity (aw) on microbial stability: as a hurdle in food preservation. Water activity foods. Wiley, New York Thakur A, Panesar PS, Saini MS (2018) Parametric optimization of lactic acid production by immobilized Lactobacillus casei using Box–Behnken design. Periodica Polytech Chem Eng 62(3):274–285 Valli M, Sauer M, Branduardi P, Borth N, Porro D, Mattanovich D (2005) Intracellular pH distribution in Saccharomyces cerevisiae cell populations, analyzed by flow cytometry. Appl Environ Microbiol 71(3):1515–1521 Vijayakumar J, Aravindan R, Viruthagiri T (2008) Recent trends in the production, purification and application of lactic acid. Chem Biochem Eng Q 22(2):245–264 Wada M, Kato J, Chibata I (1980) Continuous production of ethanol using immobilized growing yeast cells. Eur J Appl Microbiol Biotechnol 10(4):275–287 Wahla AQ, Iqbal S, Anwar S, Firdous S, Mueller JA (2019) Optimizing the metribuzin degrading potential of a novel bacterial consortium based on Taguchi design of experiment. J Hazard Mater 366:1–9 Walsh PK, Isdell FV, Noone SM, O'Donovan MG, Malone DM (1996) Growth patterns of Saccharomyces cerevisiae microcolonies in alginate and carrageenan gel particles: effect of physical and chemical properties of gels. Enzyme Microb Technol 18(5):366–372 Wang J, Huang J, Guo H, Jiang S, Zhang J, Ning Y, Fang M, Liu S (2020) Optimization of immobilization conditions for Lactobacillus penntosus cells. Bioprocess Biosyst Eng. https://doi.org/10.1007/s00449-020-02305-9 Wendhausen R, Fregonesi A, Moran PJ, Joekes I, Rodrigues JAR, Tonella E, Althoff K (2001) Continuous fermentation of sugar cane syrup using immobilized yeast cells. J Biosci Bioeng 91(1):48–52 Won K, Kim S, Kim K-J, Park HW, Moon S-J (2005) Optimization of lipase entrapment in Ca-alginate gel beads. Process Biochem 40(6):2149–2154 Wu J, Wang JL, Li MH, Lin JP, Wei DZ (2010) Optimization of immobilization for selective oxidation of benzyl alcohol by Gluconobacter oxydans using response surface methodology. Bioresour Technol 101(23):8936–8941 Xu K, Xu P (2014) Efficient production of L-lactic acid using co-feeding strategy based on cane molasses/glucose carbon sources. Bioresour Technol 153:23–29 Zhan J, Jiang S, Pan L (2013) Immobilization of phospholipase A1 using a polyvinyl alcohol-alginate matrix and evaluation of the effects of immobilization. Braz J Chem Eng 30(4):721–728 Zhao Z, Xie X, Wang Z, Tao Y, Niu X, Huang X, Liu L, Li Z (2016) Immobilization of Lactobacillus rhamnosus in mesoporous silica-based material: an efficiency continuous cell-recycle fermentation system for lactic acid production. J Biosci Bioeng 121(6):645–651 Zhu GL, Hu YY, Wang QR (2009) Nitrogen removal performance of anaerobic ammonia oxidation co-culture immobilized in different gel carriers. 
Water Sci Technol 59(12):2379–2386

The authors thank SUNY College of Environmental Science and Forestry for the help and support in this study.

Department of Paper and Bioprocess Engineering, SUNY College of Environmental Science and Forestry, Syracuse, NY, 13210, USA (Jianfei Wang, Jiaqi Huang, Hannah Laffend, Shaoming Jiang, Jing Zhang, Yuchen Ning, Mudannan Fang & Shijie Liu); The Center for Biotechnology & Interdisciplinary Studies (CBIS) at Rensselaer Polytechnic Institute, Troy, NY, 12180, USA (Jiaqi Huang).

JW and SL are the primary contributors to this work. JH provided important technical support for the experiments and manuscript preparation. HL, SJ, JZ, YN, and MF supported the manuscript preparation. All authors read and approved the final manuscript. Correspondence to Shijie Liu.

The publication of the paper has been agreed by the authors. The authors declare that they have no potential conflicts of interest.

Wang, J., Huang, J., Laffend, H. et al. Optimization of immobilized Lactobacillus pentosus cell fermentation for lactic acid production. Bioresour. Bioprocess. 7, 15 (2020). https://doi.org/10.1186/s40643-020-00305-x
Load sharing model for high contact ratio spur gears with long profile modifications

Originalarbeiten/Originals

José I. Pedrero (ORCID: orcid.org/0000-0001-8354-2144), Miguel Pleguezuelos (ORCID: orcid.org/0000-0003-0174-5760) & Miryam B. Sánchez (ORCID: orcid.org/0000-0001-5476-6018)

Forschung im Ingenieurwesen volume 83, pages 401–408 (2019)

The start of contact between loaded involute gear teeth occurs before the theoretical inner point of contact is reached, due to the load-induced deflections of the preceding tooth pairs in contact. This earlier contact occurs outside the pressure line and produces a shock between the driving tooth root and the driven tooth tip, which induces noise, vibration and dynamic load. To avoid these undesirable effects, profile modifications are often used: a suitable tip relief at the driven tooth delays the actual start of contact until it is located at the theoretical inner point of contact. However, the length and shape of the profile modification also influence the curves of load sharing and quasi-static transmission error. Specifically, long tip reliefs, extending beyond the interval of minimum tooth pair contact, which are unsuitable for standard contact ratio spur gears, may drastically reduce the load at the inner points of the path of contact of high contact ratio gears, although a peak of load arises at the outer interval of two pair tooth contact. Since the determinant contact stresses are usually located at the inner points of the contact interval and the determinant tooth-root stresses at the outer ones, long tip reliefs can be used to balance both determinant stresses and improve the load capacity.

The transmission error of a gear pair is defined as the difference between the actual and theoretical positions of the driven gear, for a given position of the driving gear [1]. One of the sources of transmission error is the flexibility of the teeth. The transmitted load induces deflections in the teeth, in such a way that the driving tooth tries to penetrate the driven tooth, resulting in a delay of the driven gear with respect to the driving one [2]. This is the so-called quasi-static transmission error (QSTE). Since the transmission error is not uniform along the path of contact, the output velocity oscillates along the meshing cycle, so dynamic loads and vibrations are unavoidably induced in the output shaft. In addition, the QSTE caused by the load, and the subsequent delay of the driven gear, results in an earlier start of contact of the next tooth pair, which occurs outside the pressure line and between non-conjugate contact points [2]. The root of the driving tooth hits the tip of the driven one, which provides a longer effective contact interval but increases the noise, vibration and dynamic load levels. Previous studies on the load transfer along this extra contact interval reveal a parabolic loading process between the actual and theoretical start of contact points [2, 3]. A similar parabolic unloading process occurs at the end of contact [2]. To avoid this shock and its undesirable effects, profile modifications are often used. Indeed, a suitable tip relief at the driven tooth will move the start of contact to the theoretical location on the pressure line [2], providing a smoother transmission.
A symmetric relief at the driving tooth tip will move the actual end of contact to the theoretical outer point of contact, though this profile modification is not as critical as the previous one because the parabolic unloading process induces a sudden disengage of meshing teeth but not a shock. Consequently, it is not unusual to modify the profile at the inner limit of the contact interval (i.e., at the driven tooth tip) but not at the outer limit (at the driving tooth tip). The amount of material eliminated by profile modifications is very small (few microns in general) and has not significant influence on the stiffness of the tooth. However, the different contact conditions will affect the mesh stiffness of the couple of teeth, which will have influence on important transmission parameters, as the load sharing among couples of teeth in simultaneous contact or the transmission error [2, 3]. The influence of profile modifications on the meshing conditions has been investigated for many years. Last century, some studies on the influence of the amount, length and shape of profile modification on the dynamic load and transmission error were developed for both standard and high contact ratio spur gears [4, 5]. More recently, the stiffness, the load sharing, and the transmission error have been studied by means of Finite Element techniques [6, 7]. Nowadays, the relations among meshing stiffness, transmission error, dynamic response and profile modification are still under the interest of researchers and manufacturers [8, 9], for which more accurate behavior models of gear teeth are essential to get improved and more efficient designs. A tip relief is described by three parameters: the amount of modification, the length of modification and the shape of modification. The amount of modification is given by the teeth deflections of previous couples at the theoretical start of contact of the new pair, (i.e., the QSTE at the inner point of contact [2, 3]), so it cannot be chosen by the designer. The shape of modification influences the load sharing between tooth pairs in contact, and consequently upsets the load sharing ratio (LSR) curve. The length of modification defines the interval of the meshing cycle in which the actual LSR curve differs from the theoretical one. For gear pairs with modified driven tooth tip but unmodified driving teeth, the shape of the tip relief governs the loading curve of the tooth pair at the start of contact, while the unloading curve at the end of contact is described by the above-mentioned parabola. The interval of modification is usually contained within the interval of two pair tooth contact [2], since longer tip reliefs will produce higher QSTE but not better load sharing, as the load along the interval of single pair tooth contact will be equally the total load. However, such a long tip reliefs may be suitable for high contact ratio (HCR) spur gears, in which contact between involute profile points in at least one pair of teeth is guaranteed at any moment. For long tip relief at the driven tooth tip of HCR spur gear, while a couple of teeth meshes at the extended contact interval at the outer limit, with the parabolic unloading process in progress, other couple is in mesh inside the modified contact interval. 
It has been proven that the load decreases considerably at the beginning of the contact interval (along the whole interval of profile modification), although it increases at the end of the contact interval (the outer interval of three pair tooth contact and the extended, parabolic-unloading contact interval). In addition, a peak of load may arise at the inner limit of the outer interval of two pair tooth contact, increasing the maximum transmitted load [2]. For HCR spur gears, the critical contact stress is always located at the beginning of the contact interval [10], while the critical tooth-root stress corresponds to contact somewhere inside the outer interval of two pair tooth contact [11]. Accordingly, with long tip relief at the driven tooth tip, the determinant contact stress will be smaller, though the determinant tooth-root stress may be greater, which means that long tip relief can be used for balancing the determinant stresses and improving the load capacity of the spur gear transmission. This paper presents an investigation on the application of long tip relief to balance the contact and tooth-root stresses of HCR spur gears. An example illustrating the improvement on the load capacity by suitable long tip reliefs, is also provided. Mesh stiffness and LSR of spur gear teeth The teeth deflections under load induce a delay of the driven gear respect to the driving one which results in an earlier start of contact, below the theoretical inner point of contact, and a delayed end of contact, beyond the theoretical outer point of contact. Fig. 1 shows the delay between the driving gear and the driven gear, which is described by the angle φ2. Distance ab in the line of action, which is equal to distance cd, represents the tooth pair deflection inducing the delay φ2. Due to this relative rotation between both gears, the driving tooth root hits the driven tooth tip at point I in the figure, before reaching the theoretical inner point of contact (point e). This earlier contact produces an additional contact interval, which is described by the interval be in Fig. 1. A similar additional contact interval occurs at the end of contact (not represented in Fig. 1). Actual start of contact of a loaded tooth The curve of meshing stiffness along this extended contact interval is shown in Fig. 2. Within the theoretical contact interval, the curve of meshing stiffness of the tooth pair is accurately described by [12]: $$K_{M}\left(\xi \right)=K_{M\max }\cos \left(b_{0}\left(\xi - \xi _{m}\right)\right)$$ $$b_{0}=\left[\frac{1}{2}\left(1.11+\frac{\varepsilon_{\alpha}}{2}\right)^{2} - 1.17 \right]^{-1/2}; \quad \xi_{m}=\xi_{\mathrm{inn}}+\frac{\varepsilon _{\alpha}}{2}$$ where εα is the contact ratio and ξinn and ξm the driving tooth profile parameter ξ corresponding to the inner point and the midpoint of the theoretical interval of contact, being ξ: $$\xi =\frac{z_{1}}{2\pi }\sqrt{\frac{r_{c1}^{2}}{r_{b1}^{2}}- 1}$$ in which z is the number of teeth, rc the radius of the contact point, rb the base radius and subscript 1 denotes the driving gear (subscript 2 will denote the driven gear). Along the extended contact intervals, the meshing stiffness can be approximated to its value at the corresponding limit of the theoretical contact interval. 
Consequently, the meshing stiffness along the whole interval of contact can be expressed as: $$\begin{aligned} & K_{M}\left(\xi \right)=K_{M\max }\cos \left(b_{0}\frac{\varepsilon _{\alpha }}{2}\right) \\ & \quad \xi _{\min }\leq \xi \leq \xi _{\mathrm{inn}}\\ & K_{M}\left(\xi \right)=K_{M\max }\cos \left(b_{0}\left(\xi - \xi _{m}\right)\right) \\ & \quad \xi _{\mathrm{inn}}\leq \xi \leq \xi _{\mathrm{inn}}+\varepsilon _{\alpha }\\ & K_{M}\left(\xi \right)=K_{M\max }\cos \left(b_{0}\frac{\varepsilon _{\alpha }}{2}\right) \\ & \quad \xi _{\mathrm{inn}}+\varepsilon _{\alpha }\leq \xi \leq \xi _{\max } \end{aligned}$$ Meshing stiffness along the extended interval of contact (εα = 2.20) ξmin and ξmax denotes the profile parameter of the limits of the actual, extended interval of contact. Their values can be computed as described in [2, 3]. From this equation for the meshing stiffness, the load at tooth pair i will be [2, 3]: $$F_{i}\left(\xi \right)=K_{Mi}\left(\xi \right)\left(\delta \left(\xi \right)- \delta _{Gi}\left(\xi \right)\right)$$ where δ(ξ) is the delay of the driven gear in the path of contact (i.e., the QSTE multiplied by the base radius rb2) and δG(ξ) the distance that the driving tooth should approach to the driven one to contact it within the extended intervals, which can be calculated as: $$\begin{aligned} &\delta _{G}\left(\xi \right)=C_{p}r_{b1}\left(\frac{2\pi }{z_{1}}\right)^{2}\left(\xi - \xi _{\min }\right)^{2} \\ & \quad \xi _{\min }\leq \xi \leq \xi _{\mathrm{inn}}\\ &\delta _{G}\left(\xi \right)=0 \\ & \quad \xi _{\mathrm{inn}}\leq \xi \leq \xi _{\mathrm{inn}}+\varepsilon _{\alpha }\\ &\delta _{G}\left(\xi \right)=C'_{p}r_{b1}\left(\frac{2\pi }{z_{1}}\right)^{2}\left(\xi _{\max }- \xi \right)^{2} \\ & \quad \xi _{\mathrm{inn}}+\varepsilon _{\alpha }\leq \xi \leq \xi _{\max } \end{aligned}$$ where coefficients Cp and C′p are calculated as described in [2]. The total load will be equal to the sum of the load at any tooth pair, so that, from Eq. 5: $$\begin{aligned} & F_{T}=\sum _{j}F_{j}\left(\xi \right)=\sum _{j}K_{Mj}\left(\xi \right)\left(\delta \left(\xi \right)- \delta _{Gj}\left(\xi \right)\right) \\ & \quad = \delta \left(\xi \right)\sum _{j}K_{Mj}\left(\xi \right)- \sum _{j}K_{Mj}\left(\xi \right)\delta _{Gj}\left(\xi \right) \end{aligned}$$ and consequently: $$\delta \left(\xi \right)=\frac{F_{T}+\sum _{j}K_{Mj}\left(\xi \right)\delta _{Gj}\left(\xi \right)}{\sum _{j}K_{Mj}\left(\xi \right)}$$ which represents the delay of the driven gear and therefore describes the QSTE. Replacing Eq. 8 in Eq. 5, after some calculations, the following expression for the LSR is obtained: $$\begin{aligned} & R_{i}\left(\xi \right) = \frac{F_{i}\left(\xi \right)}{F_{T}}=\frac{K_{Mi}\left(\xi \right)}{\sum _{j}K_{Mj}\left(\xi \right)} \\ & \left[1+\frac{\sum _{j}K_{Mj}\left(\xi \right)\left[\delta _{Gj}\left(\xi \right)- \delta _{Gi}\left(\xi \right)\right]}{F_{T}}\right] \end{aligned}$$ Fig. 3 represents the LSR curves for standard and HCR spur gears. Dashed lines represent the theoretical LSR, which are valid for weakly loaded teeth. Curves of LSR along the extended interval of contact. a Standard contact ratio (εα = 1.60). 
b High contact ratio (εα = 2.20) Tip relief on driven gear teeth A tip relief on the driven tooth can be expressed in terms of the ξ-parameter of the contact interval as: $$\begin{aligned} & \delta _{R}=\delta _{R}\left(\xi \right) \mathrm{ for } \xi \leq \xi _{\mathrm{inn}}+\Updelta \xi _{r}\\ & \delta _{R}\left(\xi \right)=0 \mathrm{ for } \xi \geq \xi _{\mathrm{inn}}+\Updelta \xi _{r} \end{aligned}$$ where ∆ξr is the length of relief and function δR(ξ) defines the shape of relief. Obviously, to shift the start of contact to the theoretical inner point of contact ξinn the amount of relief (i.e., the relief at the driven tooth tip ξo2) should be equal to the teeth deflection at ξinn, so that: $$\delta _{R}\left(\xi _{\mathrm{inn}}\right)=\delta \left(\xi _{\mathrm{inn}}\right)$$ which can be computed with Eq. 8. Fig. 4 shows the geometrical parameters of the tip relief. The load at the tooth pair i is given by: $$F_{i}\left(\xi \right)=K_{Mi}\left(\xi \right)\left(\delta \left(\xi \right)- \delta _{Gi}\left(\xi \right)- \delta _{Ri}\left(\xi \right)\right)$$ and accordingly: $$\begin{aligned} & F_{T}=\sum _{j}F_{j}\left(\xi \right)=\delta \left(\xi \right)\sum _{j}K_{Mj}\left(\xi \right) \\ & \quad - \sum _{j}K_{Mj}\left(\xi \right)\left[\delta _{Gj}\left(\xi \right)+\delta _{Rj}\left(\xi \right)\right]\\ & \delta \left(\xi \right)=\frac{F_{T}+\sum _{j}K_{Mj}\left(\xi \right)\left[\delta _{Gj}\left(\xi \right)+\delta _{Rj}\left(\xi \right)\right]}{\sum _{j}K_{Mj}\left(\xi \right)}\\ & R_{i}\left(\xi \right)=\frac{K_{Mi}\left(\xi \right)}{\sum _{j}K_{Mj}\left(\xi \right)} \\ & \left[1+ \frac{\sum _{j}K_{Mj}\left(\xi \right) \left[ \left(\delta _{Gj}\left(\xi \right)+\delta _{Rj}\left(\xi \right)\right) - \left(\delta _{Gi}\left(\xi \right)+\delta _{Ri}\left(\xi \right)\right) \right] }{F_{T}}\right] \end{aligned}$$ Tip relief Note that: $$\left[F_{i},K_{Mi},\delta _{Gi},\delta _{Ri}\right]\left(\xi \right)=\left[F,K_{M},\delta _{G},\delta _{R}\right]\left(\xi +i\right)$$ Fig. 5 represents the theoretical LSR curves, regardless parabolic unloading process, for HCR spur gear with linear tip relief at the driven tooth tip. Dashed lines represent the theoretical curves without relief. Diagram in Fig. 5a correspond to a tip relief with length of relief smaller than the fractional part of the contact ratio dα (hence forward, short tip relief); Fig. 5b represents a tip relief longer than dα (long tip relief). As expectable, along the interval of modification, the load on relieved teeth decreases, and consequently the load on pairs in simultaneous contact increases. Theoretical curves of LSR for short and long tip relief (εα = 2.40). a Short tip relief (∆ξr = 0.15). b Long tip relief (∆ξr = 0.50) It can be observed that, for long tip relief, a peak of load arises at the inner limit of the outer interval of two pair tooth contact. This is because the load decreases at the inner limit of the other two pair contact interval, due to the relief. This means an increase in the maximum load, which may be undesirable from the strength point of view. In fact, the critical tooth-root stress on HCR spur gears corresponds to contact somewhere inside this outer two pair contact interval [11]. However, the critical contact stress is often located at the inner points of the inner two pair contact interval. Consequently, long tip reliefs improve the surface strength but worsen the tooth-root strength. 
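The load sharing model above lends itself to a compact numerical sketch. The code below implements the approximate mesh stiffness (Eqs. 1, 2 and 4) and the LSR with a linear tip relief (Eqs. 10 and 13), restricted to the theoretical contact interval: the extended-contact corrections δG of Eq. 6, which require the coefficients Cp and C′p of [2], are set to zero, and no check for pair separation under excessive relief is made. Stiffness, load and relief values in the example are arbitrary placeholders, not data from the paper.

```python
import numpy as np

def b0(eps_alpha):
    """Eq. (2): parameter of the approximate mesh-stiffness curve."""
    return (0.5 * (1.11 + eps_alpha / 2.0) ** 2 - 1.17) ** -0.5

def mesh_stiffness(xi, xi_inn, eps_alpha, km_max=1.0):
    """Eqs. (1) and (4): single-pair mesh stiffness, clamped to its boundary
    values outside the theoretical interval [xi_inn, xi_inn + eps_alpha]."""
    xi_m = xi_inn + eps_alpha / 2.0
    x = np.clip(xi, xi_inn, xi_inn + eps_alpha)
    return km_max * np.cos(b0(eps_alpha) * (x - xi_m))

def relief_linear(xi, xi_inn, amount, dxi_r):
    """Eq. (10) with a linear shape: ramps from `amount` at xi_inn to zero at xi_inn + dxi_r."""
    ramp = amount * (1.0 - (xi - xi_inn) / dxi_r)
    return np.where(xi <= xi_inn + dxi_r, np.clip(ramp, 0.0, amount), 0.0)

def lsr(xi, xi_inn, eps_alpha, km_max, f_total, amount=0.0, dxi_r=1e-9):
    """Eq. (13) with delta_G = 0: load sharing ratio of the reference pair meshing at xi.
    With amount = 0 it reduces to the theoretical LSR without relief (Eq. 9)."""
    pairs = np.arange(-3, 4)                       # neighbouring pairs i, see Eq. (14)
    xs = np.asarray(xi, dtype=float) + pairs[:, None]
    inside = (xs >= xi_inn) & (xs <= xi_inn + eps_alpha)
    km = np.where(inside, mesh_stiffness(xs, xi_inn, eps_alpha, km_max), 0.0)
    dr = np.where(inside, relief_linear(xs, xi_inn, amount, dxi_r), 0.0)
    i0 = np.nonzero(pairs == 0)[0][0]
    corr = (km * (dr - dr[i0])).sum(axis=0) / f_total
    return km[i0] / km.sum(axis=0) * (1.0 + corr)

# Placeholder example: HCR gear with eps_alpha = 2.40, unit stiffness and load,
# long linear relief of length 0.50 and an arbitrary relief amount.
xi_grid = np.linspace(0.40, 0.40 + 2.40, 200)
r_plain = lsr(xi_grid, 0.40, 2.40, km_max=1.0, f_total=1.0)
r_relief = lsr(xi_grid, 0.40, 2.40, km_max=1.0, f_total=1.0, amount=0.02, dxi_r=0.50)
```

Plotting r_plain and r_relief against xi_grid should reproduce qualitatively the behaviour of Fig. 5: the share of the relieved pair drops along the modified interval, while the pairs in simultaneous contact pick up the difference.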
Since the surface strength is usually more restrictive than the tooth-root strength, the length of relieve could be used for balancing the power capacity from both points of view. On the other hand, Fig. 5 represents the theoretical LSR of tip relieved HCR spur gears, but the actual curves will be affected by the parabolic unloading at the outer limit of the contact interval. In fact, Fig. 3b suggests that the undesirable peak of load will be sensibly mitigated and its effects rather less dangerous. Fig. 6 shows the actual LSR for long tip relief. The peak of load is drastically reduced for not excessively long tip relieve (Fig. 6a), even eliminated if the length of modification is adjusted to the length of the three pair tooth contact interval (equal to dα) plus the length of the additional contact interval at the end of contact ∆ξmax (equal to ξmax − ξinn − εα). Actual LSR for HCR spur gears with long tip relief (εα = 2.40, ∆ξmax = 0.10). a Long tip relief (∆ξr = 0.55). b Adjusted length of relief (∆ξr = 0.50) Influence on stresses and load capacity According to [10], the critical contact stress is located at one of these points: the inner point of contact, ξinn (point A in Fig. 7a). the inner point of the inner interval of two pair tooth contact, ξinn + dα (point B). the outer point of the inner interval of two pair tooth contact, ξinn + 1 (point C). Evolution of the stresses along the path of contact for no tip relief. a Contact stress. b Tooth-root stress The equivalent curvature radius increases rapidly as the parameter ξ decreases, and therefore, the lower the values of the parameter ξ of the contact interval, the inner the point of critical contact stress. Consequently, the critical stress at point C will only occur for low gear ratio and high number of teeth on pinion. For these cases, the tip relief will not improve the surface strength. Critical stress is typically located at the inner point of contact (point A) for low number of teeth on pinion and high contact ratio, and will be always improved with tip relief, short reliefs even. Long tip reliefs may improve pitting load capacity for critical contact stress at point B, which occurs for intermediate values of the pinion tooth number and contact ratio [10]. Fig. 7a presents the evolution of the contact stress along the path of contact for critical contact stress at point B. The critical tooth-root stress does always correspond to contact at a point within the outer interval of two pair tooth contact [11] (interval D–E in Fig. 7b). Although this critical load point may be located at any point of that interval, the tooth-root stress is quite uniform along it, as represented in Fig. 7b. The critical stress will therefore correspond, for long tip relief, to the peak of load, which is always located at ξ = ξmax − 1. In addition, the relation between critical tooth-root stresses for long tip relief and no tip relief will be very close to the relation between the peak of load and the load at this point without relief. From Fig. 6a, b and 7a, for long tip relief the critical contact stress will occur well at the upper limit of the interval of modification ξ = ξinn + ∆ξr, well at the peak of load ξ = ξmax − 1. 
The relation between the critical stresses for long tip relief and no tip relief will be equal to the relation between the corresponding Φ parameters: $$\Phi \left(\xi \right)=\sqrt{\frac{R\left(\xi \right)}{\xi \left(\lambda _{\xi }- \xi \right)}} \text{ with } \lambda _{\xi }=\frac{z_{1}+z_{2}}{2\pi }\tan \alpha '_{t}$$ where α′t is the operating pressure angle. Summarizing, if RR(ξ) denotes the load sharing ratio for tip relief and ΦR(ξ) the Φ(ξ) parameter computed with RR(ξ), the ratio between the bending load capacities for long tip relief, PFR, and no tip relief, PF, will be given by: $$\frac{P_{FR}}{P_{F}}=\left[\frac{R\left(\xi _{\max }- 1\right)}{R_{R}\left(\xi _{\max }- 1\right)}\right]$$ while the ratio between the pitting load capacities with and without relief, PHR and PH, is given by: $$\frac{P_{HR}}{P_{H}}=\left[\frac{\Phi \left(\xi _{\mathrm{inn}}+d_{\alpha }\right)}{\max \left[\Phi _{R}\left(\xi _{\mathrm{inn}}+\Updelta \xi _{r}\right),\Phi _{R}\left(\xi _{\max }- 1\right)\right]}\right]^{2}$$ The following spur gear pair will be considered: number of teeth on pinion and wheel 39 and 78, module 5 mm, pressure angle 14º, rack shift coefficient on both gears 0, tooth addendum 5 mm, tooth dedendum 6.25 mm, tool tip radius 1.25 mm, operating center distance 292.5 mm. For this geometry, the value of λξ is 4.643, the contact ratio is εα = 2.198, and the inner point of contact is described by ξinn = 0.390. The material is steel C45. The output torque of 318.31 N · m induces an additional interval of contact at the end of meshing of ∆ξ = 0.03, and therefore ξmax = 2.618. An analysis according to ISO 6336 for 900 rpm input velocity reveals power capacities of 72.795 and 15.765 kW for bending and pitting, respectively. Fig. 8a presents the variation of the bending and pitting load capacities for different values of the length of relief ∆ξr between 0.0 and 1.0. Some conclusions can be drawn: For tip reliefs shorter than dα + ∆ξ (0.228 in this example), both the bending and pitting load capacities remain unaltered. For longer reliefs, the bending load capacity decreases with the length of relief, while the pitting load capacity increases up to a maximum (for ∆ξr = 0.47 in the example) and decreases beyond it. This maximum arises because the contact stress at the peak of load becomes the determinant one, as shown in Fig. 8b. Results of the considered example. a Evolution of the load capacities with the length of modification. b Evolution of LSR and contact stress for no relief (NR) and optimal relief (OR) In this example, the pitting load capacity can be increased by up to 19.4%, though the bending load capacity will decrease by 14.3%. Since the latter remains greater than the former, the final load capacity will increase by 19.4%. Long tip reliefs in HCR spur gears, beyond the inner interval of three pair tooth contact, produce lower loads at the inner points of the contact interval, resulting in many cases in a decrease in the determinant contact stress. However, a peak of load may arise at the outer interval of two pair tooth contact, which results in an increase in the determinant tooth-root stress. Thus, for the not unusual cases in which the calculated load carrying capacity for bending is greater than that for pitting, long tip relief can be used to balance the pitting and bending load capacities, which makes it possible to improve the final load capacity of the gear pair.
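A minimal sketch tying the example geometry to the Φ parameter and the capacity ratios above. The derived quantities λξ and ξmax can be checked against the values quoted in the text; the load sharing ratios R and RR are left as inputs because they come from the paper's meshing-stiffness model, which is not reproduced here, and the function names are ours.

```python
from math import pi, tan, radians

# Example gear pair from the text (operating pressure angle taken as 14 deg)
z1, z2, alpha_t = 39, 78, radians(14.0)
eps_alpha, xi_inn, d_xi = 2.198, 0.390, 0.03
d_alpha = eps_alpha - int(eps_alpha)                  # fractional part of contact ratio

lambda_xi = (z1 + z2) / (2 * pi) * tan(alpha_t)       # ~4.643, as quoted
xi_max = xi_inn + eps_alpha + d_xi                    # ~2.618, as quoted

def phi(xi, R):
    """Phi parameter governing the contact stress at xi for load sharing ratio R."""
    return (R / (xi * (lambda_xi - xi))) ** 0.5

def bending_capacity_ratio(R_at_peak, R_R_at_peak):
    """P_FR / P_F: ratio of unrelieved to relieved load at the peak point xi_max - 1."""
    return R_at_peak / R_R_at_peak

def pitting_capacity_ratio(R_at_B, R_R_at_mod_end, R_R_at_peak, d_xi_r):
    """P_HR / P_H from Phi at point B versus the worse of the two relieved candidates."""
    worst_relieved = max(phi(xi_inn + d_xi_r, R_R_at_mod_end),
                         phi(xi_max - 1, R_R_at_peak))
    return (phi(xi_inn + d_alpha, R_at_B) / worst_relieved) ** 2

print(f"lambda_xi = {lambda_xi:.3f}, xi_max = {xi_max:.3f}")
```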
b 0 : Parameter for the approximation of meshing stiffens d α : Fractional part of contact ratio F : Load, N F T : Total load, N K M : Meshing stiffness, N/mm P F : Bending load capacity, W P H : Pitting load capacity, W R : Load sharing ratio r b : Base radius, mm r c : Contact point radius, mm z : α′t : Operating pressure angle δ : Tooth pair deflection, mm δ G : Approach distance inside the extended contact interval, mm δ R : Amount of relief, mm ∆ξ r : Length of relief ε α : Contact ratio φ 2 : Delay angle ξ : Involute profile parameter σ F : Tooth root stress, MPa σ H : Contact stress, MPa ξ max : Outer limit of the extended contact interval ξ min : Inner limit of the extended contact interval c : inn : Inner point of interval of contact m : Midpoint of interval of contact Outer point of interval of contact Relieved teeth Driven gear Gregory RW, Harris SL, Munro RG (2002) A method of measuring transmission error in spur gears of 1:1 ratio. J Sci Instrum. https://doi.org/10.1088/0950-7671/40/1/303 Pedrero JI, Pleguezuelos M, Sánchez MB (2018) Control del error de transmisión cuasiestático mediante rebaje de punta en engranajes rectos de perfil de evolvente. Revista Iberoamericana De Ingeniería Mecánica 22:71–90 Pedrero JI, Pleguezuelos M, Sánchez MB (2017) Load sharing model for spur gears with tip relief. International Conference on Gears, Munich, Germany, 2017 (Proc.) Tavakoli MS, Houser DR (1986) Optimum profile modifications for the minimization of static transmission errors of spur gears. J Mech Trans Autom. https://doi.org/10.1115/1.3260791 Lin HH, Oswald FB, Townsend DP (1994) Dynamic loading of spur gears with linear or parabolic tooth profile modifications. Mech Mach Theory. https://doi.org/10.1016/0094-114X(94)90003-5 Beghini M, Presicce F, Santus C (2004) A method to define profile modification of spur gear and minimize the transmission error, AGMA Paper 04FTM3. American Gear Manufacturers Association, Alexandria VA Wen Q, Du Q, Zhai X (2019) An analytical method for calculating the tooth surface contact stress of spur gears with tip relief. Int J Mech Sci. https://doi.org/10.1016/j.ijmecsci.2018.11.007 Velex P, Bruyere J, Houser DR (2011) Some analytical results on transmission errors in narrow-faced spur and helical gears: influence of profile modifications. J Mech Des. https://doi.org/10.1115/1.4003578 Ma H, Zeng J, Feng R, Pang X, Wen B (2016) An improved analytical method for mesh stiffness calculation of spur gears with tip relief. Mech Mach Theory. https://doi.org/10.1016/j.mechmachtheory.2015.11.017 Sánchez MB, Pedrero JI, Pleguezuelos M (2013) Contact stress calculation of high transverse contact ratio spur and helical gear teeth. Mech Mach Theory. https://doi.org/10.1016/j.mechmachtheory.2013.01.013 Article MATH Google Scholar Sánchez MB, Pleguezuelos M, Pedrero JI (2014) Tooth-root stress calculation of high transverse contact ratio spur and helical gears. Meccanica. https://doi.org/10.1007/s11012-013-9799-3 Sánchez MB, Pleguezuelos M, Pedrero JI (2017) Approximate equations for the meshing stiffness and the load sharing ratio of spur gears including hertzian effects. Mech Mach Theory. 
https://doi.org/10.1016/j.mechmachtheory.2016.11.014 Our gratitude to the Spanish Council for Scientific and Technological Research for the support of the project DPI2015-69201-C2-1‑R, "Load Distribution and Strength Calculation of Gears with Modified Geometry", as well as to the School of Engineering of UNED for the support of the action 2019-MEC24, "Simulation of transmission error in spur gears". Departamento de Mecánica, UNED, Juan del Rosal 12, 28040, Madrid, Spain José I. Pedrero, Miguel Pleguezuelos & Miryam B. Sánchez José I. Pedrero Miguel Pleguezuelos Miryam B. Sánchez Correspondence to José I. Pedrero. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Pedrero, J.I., Pleguezuelos, M. & Sánchez, M.B. Load sharing model for high contact ratio spur gears with long profile modifications. Forsch Ingenieurwes 83, 401–408 (2019). https://doi.org/10.1007/s10010-019-00379-w Issue Date: 01 September 2019 DOI: https://doi.org/10.1007/s10010-019-00379-w
Genetic variation underlying renal uric acid excretion in Hispanic children: the Viva La Familia Study Geetha Chittoor1, Karin Haack2, Nitesh R. Mehta3, Sandra Laston4, Shelley A. Cole2, Anthony G. Comuzzie2, Nancy F. Butte3 & V. Saroja Voruganti1 BMC Medical Genetics volume 18, Article number: 6 (2017) Cite this article Reduced renal excretion of uric acid plays a significant role in the development of hyperuricemia and gout in adults. Hyperuricemia has been associated with chronic kidney disease and cardiovascular disease in children and adults. There are limited genome-wide association studies associating genetic polymorphisms with renal urate excretion measures. Therefore, we investigated the genetic factors that influence the excretion of uric acid and related indices in 768 Hispanic children of the Viva La Familia Study. We performed a genome-wide association analysis for 24-h urinary excretion measures such as urinary uric acid/urinary creatinine ratio, uric acid clearance, fractional excretion of uric acid, and glomerular load of uric acid in SOLAR, while accounting for non-independence among family members. All renal urate excretion measures were significantly heritable (p <2 × 10−6) and ranged from 0.41 to 0.74. Empirical threshold for genome-wide significance was set at p <1 × 10−7. We observed a strong association (p < 8 × 10−8) of uric acid clearance with a single nucleotide polymorphism (SNP) in zinc finger protein 446 (ZNF446) (rs2033711 (A/G), MAF: 0.30). The minor allele (G) was associated with increased uric acid clearance. Also, we found suggestive associations of uric acid clearance with SNPs in ZNF324, ZNF584, and ZNF132 (in a 72 kb region of 19q13; p <1 × 10−6, MAFs: 0.28–0.31). For the first time, we showed the importance of 19q13 region in the regulation of renal urate excretion in Hispanic children. Our findings indicate differences in inherent genetic architecture and shared environmental risk factors between our cohort and other pediatric and adult populations. Renal excretion of uric acid is commonly involved in the development of gout, hyperuricemia, and nephropathy [1, 2]. The kidney filters freely circulating uric acid, accounting for ~70% of total uric acid excretion from the body [3]. The prevalence of hyperuricemia (increased serum uric acid (SUA) concentrations) and gout are on the rise along with other metabolic disorders such as obesity, type 2 diabetes, and metabolic syndrome [3, 4]. Hyperuricemia and hyperuricosuria (increased urinary uric acid (UrUA) concentrations) can lead to uric acid nephrolithiasis [5–7]. Moreover, these two are common multifactorial disorders that have been shown to have a familial inheritance and further associated with progression to chronic kidney disease [4, 8, 9]. Hyperuricemia is shown to cluster within families, heritabilities ranged from 39 to 45% in our family studies [10, 11], and twin studies up to 80% for serum uric acid [12], 60% for renal clearance of urate and 87% for fractional excretion of urate [13]. Uric acid is the end-product of purine metabolism, which is secreted, filtered, and reabsorbed with the help of specific urate transporters such as uric acid transporter-1 protein (URAT1) and solute carrier family 2, member 9 (SLC2A9) [4, 14, 15]. Any defect in these urate transporters, affecting the secretion and reabsorption processes, might lead to increased concentration in blood or reduced excretion of uric acid in the body [3, 9]. 
We have shown that variation in SUA concentrations is affected by genetic factors and is also associated with obesity and its comorbidities in Hispanic children [11]. However, there are no extensive genetic studies on parameters of renal urate excretion in children. Therefore, assessing the genetic contribution to the excretion of uric acid and its related indices might help develop methods to better understand renal urate excretion in children. In the present study, our aim was to identify genetic loci contributing to renal urate excretion measures using the genome-wide association (GWA) approach in Hispanic children of the Viva La Familia Study (VFS). Viva La Familia Study (VFS) VFS was designed to identify genetic variants influencing pediatric obesity and its comorbidities in Hispanic children, the majority from Mexican American families. The VFS study design, recruitment, methodology, and demographic and phenotypic information have been described in detail elsewhere [16]. All participants gave written informed consent or assent. Institutional Review Boards at Baylor College of Medicine and Affiliated Hospitals, Texas Biomedical Research Institute and UNC Chapel Hill approved the protocol for Human Subject Research. Methods used to measure fasting blood and 24-h urinary biochemistries are described elsewhere [11]. Availability of the phenotypic data after calculation of urinary indices was limited to 768 children. Urinary and other indices calculation Urine samples were collected over a 24-h period and alkalized (pH around 8.0) using 0.5N NaOH during initial dilution. The urinary indices, using serum and urinary uric acid and creatinine, were calculated according to Perez-Ruiz et al. [2] with the following equations: $$\begin{aligned} \text{Body surface area, BSA (m}^{2}\text{)} &= 0.007184 \times \text{Height (cm)}^{0.725} \times \text{Weight (kg)}^{0.425}\\ \text{Creatinine clearance, CrCl (ml/min)} &= U_{v} \times \text{UrCr}/\text{SrCr}\\ \text{Uric acid clearance, UACl (ml/min)} &= U_{v} \times \text{UrUA}/\text{SrUA}\\ \text{Urinary uric acid to urinary creatinine ratio} &= \text{UrUA}/\text{UrCr}\\ \text{Fractional excretion of uric acid, FEUA (\%)} &= \frac{\text{UrUA} \times \text{SrCr}}{\text{SrUA} \times \text{UrCr}} \times 100\\ \text{Glomerular load of uric acid, GLUA (mg/min/m}^{2}\text{)} &= \text{CrCl} \times \text{SrUA}\\ \text{Excretion of uric acid per volume of glomerular filtration, EUAGF (mg/dL/m}^{2}\text{)} &= \frac{\text{UrUA} \times \text{SrCr}}{\text{UrCr}} \end{aligned}$$ where BSA is computed with the Dubois equation, Uv is the 24-h urine volume divided by the collection time, UrCr and SrCr are urinary and serum creatinine, and UrUA and SrUA are urinary and serum uric acid; the UrUA/UrCr ratio is expressed as an absolute value. SNP genotyping The Illumina HumanOmni1-Quad v1.0 BeadChip marker assays were used to genotype 1.1 million SNPs in 815 children enrolled in VFS [17]. Genotype calls were obtained after scanning on the Illumina BeadStation 500GX and analyzed using the GenomeStudio software. The genotyping error rate was 2 per 100,000 genotypes (based on duplicates). The average call rate for all SNPs per individual sample was 97%. SNP genotypes were checked for Mendelian consistency using the program SimWalk2 [18]. The estimates of the allele frequencies and their standard errors were obtained using SOLAR [19]. Heritability analysis A variance components decomposition method was used to estimate the heritability of uric acid and other renal urate excretion phenotypes. To estimate the genetic contribution to the variation in urinary indices, their heritability was estimated in SOLAR. Total phenotypic variance can be partitioned into its genetic and environmental components. The fraction of total phenotypic variance (VP) resulting from additive genetic effects (VG) is called heritability and is denoted as h2 = VG/VP [20]. All traits were adjusted for age, sex, their interaction effects, and body surface area. Genome-wide association (GWA) study of urinary uric acid excretion measures A total of 899,892 SNPs passed quality control and were included in the GWA analysis. GWAS was performed on 768 children from 260 Hispanic families including 1643 relative pairs. A measured genotype analysis (MGA) was performed on the inverse normal transformed residual traits (after regressing out the covariate effects mentioned above) to minimize non-normality of the data, using SOLAR. Each SNP genotype was converted in SOLAR to a covariate measure equal to 0, 1, or 2 copies of the minor allele (or the weighted covariate based on imputation for missing genotypes). These SNP covariates were also included in the variance components mixed models for MGA [21] vs. null models that incorporated the random effect of kinship and fixed effects such as age, sex, their interaction effects, and body surface area. For the initial GWA screen, we tested each SNP covariate independently as a 1-df likelihood ratio test. Linkage disequilibrium (LD) was computed in SOLAR using information for all genotyped SNPs in all individuals. The effective number of SNPs accounting for LD was calculated by the method of Moskvina and Schmidt [22]. The average ratio of the effective number of SNPs to the actual number, obtained from analysis of non-overlapping bins of SNPs, was used to calculate the genome-wide effective number of tests and thus the significance threshold for genome-wide association. Empirical thresholds for genome-wide significant and suggestive evidence of association were based on the distribution of p-values from 10,000 simulated null GWAS (i.e., simulations of a heritable trait with no modeled SNP covariate effects using the VFS pedigree and genotypes).
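For concreteness, the index definitions above translate directly into code. The sketch below is ours, not the study's SOLAR analysis pipeline; the field names and example values are illustrative, and unit handling follows the equations as written (mg/dL for uric acid and creatinine, mL/min for urine flow and clearances).

```python
from dataclasses import dataclass

@dataclass
class UrateExcretionIndices:
    height_cm: float
    weight_kg: float
    urine_volume_ml: float     # 24-h collection volume
    collection_min: float      # collection period in minutes (1440 for 24 h)
    ur_cr: float               # urinary creatinine, mg/dL
    sr_cr: float               # serum creatinine, mg/dL
    ur_ua: float               # urinary uric acid, mg/dL
    sr_ua: float               # serum uric acid, mg/dL

    @property
    def bsa(self):             # DuBois body surface area, m^2
        return 0.007184 * self.height_cm ** 0.725 * self.weight_kg ** 0.425

    @property
    def uv(self):              # urine flow, mL/min
        return self.urine_volume_ml / self.collection_min

    @property
    def crcl(self):            # creatinine clearance, mL/min
        return self.uv * self.ur_cr / self.sr_cr

    @property
    def uacl(self):            # uric acid clearance, mL/min
        return self.uv * self.ur_ua / self.sr_ua

    @property
    def ua_cr_ratio(self):     # urinary uric acid / urinary creatinine
        return self.ur_ua / self.ur_cr

    @property
    def feua(self):            # fractional excretion of uric acid, %
        return (self.ur_ua * self.sr_cr) / (self.sr_ua * self.ur_cr) * 100

    @property
    def glua(self):            # glomerular load of uric acid
        return self.crcl * self.sr_ua

    @property
    def euagf(self):           # uric acid excretion per volume of glomerular filtration
        return (self.ur_ua * self.sr_cr) / self.ur_cr

# purely illustrative values
child = UrateExcretionIndices(height_cm=150, weight_kg=45, urine_volume_ml=1200,
                              collection_min=1440, ur_cr=90, sr_cr=0.6,
                              ur_ua=40, sr_ua=4.5)
print(round(child.feua, 1), round(child.uacl, 2))
```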
The threshold for significance (p < 1 × 10−7) was defined as the cutoff for the lower 5% tails of the empirical distribution, and the threshold for suggestive evidence (p < 1 × 10−6) was the minimum p-value obtained not more than once per genome scan [11, 22]. General characteristics of renal urate excretion measures The study included data from 768 Hispanic children for the traits considered. General characteristics are given in Table 1 for both boys and girls and the total population, respectively. Mean age of boys and girls was approximately 12 years; body surface area (BSA) was higher (p <0.05) for boys (mean ± sd: 1.55 ± 0.40 m2 vs. 1.47 ± 0.33 m2) than girls, similar with BSA z-scores (0.33 ± 0.91 vs. 0.14 ± 0.76). Table 1 General characteristics and renal urate excretion in Hispanic children Renal 24-h urate excretion measures and their descriptions are given in Table 1. Creatinine clearance (CrCl), serum uric acid (SUA), urinary uric acid (UrUA), uric acid clearance (UACl), urinary uric acid to urinary creatinine ratio (UrUA/UrCr), and glomerular load of uric acid (GLUA) were higher in boys than girls (p <0.05). However, urinary creatinine (UrCr), and excretion of uric acid per volume of glomerular filtration (EUAGF) tended to be higher in boys compared to girls; and, fractional excretion of uric acid (FEUA) tended to be higher in girls compared to boys, but did not reach statistical significance. Heritability estimates Table 1 also lists the heritability estimates and corresponding p-values for the phenotypes considered in this study. All phenotypes were adjusted for covariate effects (age, sex, age*sex, age2, age2*sex, body surface area z-scores). Significant heritability (h2) was detected for all the traits (p <2 × 10−6) and ranged from 0.41 to 0.74. Genome-wide association analysis Measured Genotype Analysis (MGA) was conducted using a variance components approach in 768 VFS Hispanic children that accounted for family kinships (Table 2). Genome-wide significant evidence of association was found for single nucleotide polymorphism (SNP) rs2033711, an intronic variant, in zinc finger protein 446 (ZNF446) on chromosome 19 with UACl (p <8 × 10−8, minor allele frequency (MAF) = 0.30) (Fig. 1). The effect size or the proportion of the residual phenotypic variance accounted for by the minor allele of the SNP was 4.5%. Genotype-specific means of this SNP showed that minor allele (G) was associated with increased UACl. In addition to ZNF446, a 72 kb region of chromosome 19q13 containing several zinc finger protein (ZNF) genes (ZNF324, ZNF584 and ZNF132) showed suggestive association with UACl (Fig. 2 & Additional file 1: Table S1), but after accounting for linkage disequilibrium (LD; r2 < 0.80) only ZNF446 and ZNF584 and an insulin receptor-related receptor (INSRR) gene on chromosome 1q21-23 showed evidence of suggestive association with UACl (Table 2). Table 2 Results of genome-wide association analyses of renal urate excretion measures in Viva Hispanic children Genome-wide scan showing significant evidence of association for uric acid clearance with variants on chromosome 19 Locus Zoom plot showing the most significant SNPs on chromosome 19q13 A total of 7 SNPs were found to be suggestively (p <1 × 10−6) associated with one or the other renal urate excretion measures with MAFs ranging from 0.01 to 0.43, and the effect size ranged from 3.1 to 4.9%, respectively. 
After 19q13, another notable region where either GLUA or CrCl exhibited suggestive association was with a 13q12 SNP in ankyrin repeat domain 20 family, member A19, pseudogene (ANKRD20A19P). The genotype-specific mean values and the direction of associations (for minor alleles) of the SNPs in the genes associated with renal urate excretion measures from GWAS are shown in Table 3. For all UACl associated SNPs, the minor allele was associated with elevated UACl, except for rs10908521 on chromosome 1 (Additional file 2: Figure S1). Table 3 Genotypic class specific mean values for top SNPs from genome-wide association analyses of renal urate excretion measures This GWAS identified a set of genes encoding zinc finger proteins on chromosome 19q13 that were associated with uric acid clearance in Hispanic children. To the best of our knowledge, this is the first family-based GWAS of renal urate excretion measures with prominent effect sizes reported in children. The kidney is responsible for ~70% of the excretion of total body uric acid, and also reabsorbs about 90% of the filtered urate, regulating circulating uric acid levels. Epidemiological studies have shown that optimum uric acid levels may have antioxidant properties, however, elevated SUA and UrUA levels are associated with hypertension, inflammation, and also with kidney stones [3, 7, 23, 24]. A urate-transporting molecular complex (urate transportome) has been proposed as a model for urate transport in renal proximal tubules since several membrane proteins seem to be involved in urate transport [25]. According to this model, the mechanism of uric acid transport cannot be understood by evaluating one or two transporters. It has to be investigated as a functional unit comprising of all uric acid transporters and other molecules. This is because the renal handling of urate transport involves several genes (e.g., solute carrier family 2, member 9 (SLC2A9) and ATP-binding cassette ABC, subfamily G, member 2 (ABCG2), solute carrier family 16, member 9 (SLC16A9), solute carrier family 17, members 1, 3 and 4 (SLC17A1, SLC17A3 and SLC17A4), and, solute carrier family 22, members 11 and 12 (SLC22A11 and SLC22A12), most of which have been implicated in the regulation of urate levels [26–29]. Defects in the tubular secretion of uric acid are mainly involved with urate clearance, hyperuricemia and gout [2]. Tubular secretion and reabsorption of urate levels change dynamically with age [9]. Genetic disorders have been indicated in the prevalence of hyperuricemia with increased urinary uric acid levels, which eventually lead to the formation of kidney stones and are often associated with chronic kidney disease [7, 30, 31]. Studies have demonstrated in adults that renal excretion of uric acid is under considerable genetic influence [32]. Our results indicated that all renal urate excretion measures are heritable and are within similar range of heritabilities reported in twin adults, [13] and Chinese twin children and adolescents [12]. Our GWAS findings on chromosome 19 revealed genes that have not been linked to renal urate handling before. We observed a strong genome-wide significant association of UACl with rs2033711 in ZNF446. Additionally, we found suggestive association of UACl with ZNF584 genetic variant (rs10423138), not in LD with rs2033711 (ZNF446) suggesting that these genetic variants are independently influencing variation in UACl. 
The ZNF family is one of the major human gene families and comprises of several of the currently recognized transcription factors. ZNFs through the zinc finger motifs are shown to interact with nucleic acids and are involved in various molecular mechanisms, cell differentiation and development [33, 34]. Studies indicate that ZNF446 is a novel member of Kruppel-related family, and believed to be one of the conserved proteins during human evolution. Also, ZNF446 gene is highly expressed in adult tissue cells (muscle), and may function as a transcriptional repressor in cellular growth and development [34, 35]. It is known to inhibit transcriptional activities of serum response element (SRE) activator protein 1(AP-1) [34]. Interestingly, AP-1 is a cis regulatory element regulating ABCG2 expression, one of the main uric acid transporter [36]. Moreover, strong association of UACl with ZNF446 itself is interesting because studies have shown association of ZNF365, on chromosome 10, with urolithiasis in children, [7] and with uric acid nephrolithiasis in adults [5, 37]. However, our results indicate a different pattern of zinc finger protein involvement in uric acid metabolism compared with adult studies [5, 7, 37]. The functional relevance of these genes with uric acid involvement is largely unknown, except that the emergence of ZNF365 variants is correlated with disappearance of uricase in primate evolution and hence causing a predisposition to hyperuricemia in humans [5, 37]. We found suggestive association of rs4889855 on chromosome 17 (Gene not identified) with FEUA. However, in adults, FEUA was shown to be associated with genetic variants of glucokinase regulator (GCKR on chromosome 2), SLC2A9 and ABCG2 on chromosome 4, and insulin like growth factor 1 receptor (IGF1R on chromosome 15) [38]. We also found suggestive evidence of association between GLUA and CrCl and ANKRD20A19P gene variant (on chromosome 13q12), both these associations prove to be interesting as beta-spectrin and ankyrin are key components of the cytoskeleton membrane that regulates clustering of sodium channel [39]. Additionally, we also found suggestive association of UACl with SNPs in INSRR on chromosome 1. Some studies have suggested functional association between mRNA expression of insulin receptor-related and insulin-like growth factor receptors and tumor cells [40]. One GWAS study indicated a new locus linked to chromosome 2p22.1-p21 for familial juvenile hyperuricaemic nephropathy, [41] but none of our associations with urinary uric acid handling have been reported before. The relatively small sample size in this study is a limitation. However, family-based studies have increased power to detect associations due to the fact that 768 children generate 1643 relative pairs and the degree of resemblance between relative pairs is considered for the genetic analysis. Our study in VFS children is the first to report such genetic findings, and existing literature is limited in children involving GWAS on renal urate excretion measures. However, previous studies have reported association of 19q13 region with kidney phenotypes and diseases [42, 43]. This region has been linked to familial nephrotic syndrome [44], focal segmental glomerularosclerosis [45, 46] and cystinuria [47]. 
Our results are also different from studies that have reported association of uric acid transporters with renal uric acid excretion, for example, ABCG2 for renal urate excretion [27], and GCKR, SLC2A9, and IGF1R for FEUA [38], partly attributable to population substructure and sample size considered as majority of the GWAS are conducted in European or Asian descent populations. Nevertheless, our 19q13 region contains several ZNF proteins (including ZNF446) that may be involved in transcription of specific uric acid transporters. Thus, our GWAS associations could reflect differences in inherent genetic architecture and shared environmental risk factors between our VFS pediatric cohort and other study populations. Our GWAS identified novel loci, particularly in 19q13, influencing the regulation of renal excretion of uric acid in Hispanic children. Our findings from children are not identical with those from adults [5, 37, 38] suggesting metabolic alterations in uric acid metabolism tracking from childhood to adults. The majority of the GWAS studies are conducted in adults highlighting the need for pediatric studies investigating the genetic underpinnings of the variation in uric acid earlier in the life course. It is essential to acquire knowledge on renal urate handling in children as it may reveal clinical and biological insights regarding the pathophysiology of uric acid excretion by the kidneys, given the inherited nature of these disorders. ABCG2 : ATP-binding cassette ABC, subfamily G, member 2 CrCl: Creatinine clearance EUAGF: Excretion of uric acid per volume of glomerular filtration FEUA: Fractional excretion of uric acid GCKR : Glucokinase regulator GLUA: Glomerular load of uric acid GWAS: IGF1R : Insulin like growth factor 1 receptor SLC2A9 : Solute carrier family 2, member 9 SrCr: Serum creatinine SUA: Serum uric acid UACl: Uric acid clearance URAT-1 : Uric acid transporter-1 protein UrCr: Urinary creatinine UrUA: Urinary uric acid UrUA/UrCr: Urinary uric acid to urinary creatinine ratio VFS: Viva La Familia Study ZNF: Zinc Finger Proteins Puig JG, Miranda ME, Mateos FA, Picazo ML, Jimanez ML, Calvin TS, et al. Hereditary nephropathy associated with hyperuricemia and gout. Arch Intern Med. 1993;153:357–65. Perez-Ruiz F, Calabozo M, Erauskin GG, Ruibal A, Herrero-Beites AM. Renal underexcretion of uric acid is present in patients with apparent high urinary uric acid output. Arthritis Rheum. 2002;47:610–3. Bobulescu IA, Moe OW. Renal transport of uric acid: evolving concepts and uncertainties. Adv Chronic Kidney Dis. 2012;19:358–71. Yee J. Uric acid: a clearer focus. Adv Chronic Kidney Dis. 2012;19:353–5. Gianfrancesco F, Esposito T, Casu G, Maninchedda G, Roberto R, Pirastu M. Emergence of talanin protein associated with human uric acid nephrolithiasis in the hominidae lineage. Gene. 2004;339:131–8. Gianfrancesco F, Esposito T. Multifactorial disorder: molecular and evolutionary insights of uric acid nephrolithiasis. Minerva Med. 2005;96:409–16. Medina-Escobedo M, Gonzalez-Herrera L, Villanueva-Jorge S, Martin-Soberanis G. Metabolic abnormalities and polymorphisms of the vitamin D receptor (VDR) and ZNF365 genes in children with urolithiasis. Urolithiasis. 2014;42:395–400. Akl K, Ghawanmeh R. The clinical spectrum of idiopathic hyperuricosuria in children: isolated and associated with hypercalciuria/hyperoxaluria. Saudi J Kidney Dis Transpl. 2012;23:979–84. Stiburkova B, Bleyer AJ. Changes in serum urate and urate excretion with age. Adv Chronic Kidney Dis. 2012;19:372–6. 
Voruganti VS, Kent Jr JW, Debnath S, Cole SA, Haack K, Göring HH, et al. Genome-wide association analysis confirms and extends the association of SLC2A9 with serum uric acid levels to Mexican Americans. Front Genet. 2013;4:279. Voruganti VS, Laston S, Haack K, Mehta NR, Cole SA, Butte NF, et al. Serum uric acid concentrations and SLC2A9 genetic variation in Hispanic children: The Viva La Familia study. Am J Clin Nutr. 2015;101:1. Ji F, Ning F, Duan H, Kaprio J, Zhang D, Zhang D, et al. Genetic and environmental influences on cardiovascular disease risk factors: a study of Chinese twin children and adolescents. Twin Res Hum Genet. 2014;17:72–9. Emmerson BT, Nagel SL, Duffy DL, Martin NG. Genetic control of the renal clearance of urate: a study of twins. Ann Rheum Dis. 1992;51:375–7. Enomoto A, Kimura H, Chairoungdua A, Shigeta Y, Jutabha P, Cha SH, et al. Molecular identification of a renal urate anion exchanger that regulates blood urate levels. Nature. 2002;417:447–52. Augustin R, Carayannopoulos MO, Dowd LO, Phay JE, Moley JF, Moley KH. Identification and characterization of human glucose transporter-like protein-9 (GLUT9): alternative splicing alters trafficking. J Biol Chem. 2004;279:16229–36. Butte NF, Cai G, Cole SA, Comuzzie AG. Viva la familia study: genetic and environmental contributions to childhood obesity and its comorbidities in the Hispanic population. Am J Clin Nutr. 2006;84:646–54. Comuzzie AG, Cole SA, Laston SL, Voruganti VS, Haack K, Gibbs RA, et al. Novel genetic loci identified for the pathophysiology of childhood obesity in the Hispanic population. PLoS One. 2012;7:e51954. Sobel E, Lange K. Descent graphs in pedigree analysis: applications to haplotyping, location scores, and marker-sharing statistics. Am J Hum Genet. 1996;58:1323–37. Almasy L, Blangero J. Multipoint quantitative-trait linkage analysis in general pedigrees. Am J Hum Genet. 1998;62:1198–211. Rogers J, Mahaney MC, Almasy L, Comuzzie AG, Blangero J. Quantitative trait linkage mapping in anthropology. Am J Phy Anthropol. 1999;Suppl29:127–51. Boerwinkle E, Chakraborty R, Sing CF. The use of measured genotype information in the analysis of quantitative phenotypes in man. I. Models and analytical methods. Ann Hum Genet. 1986;50:181–94. Moskvina V, Schmidt KM. On multiple-testing correction in genome-wide association studies. Genet Epidemiol. 2008;32:567–73. Ames BN, Cathcart R, Schwiers E, Hochstein P. Uric acid provides an antioxidant defense in humans against oxidant- and radical-caused aging and cancer: a hypothesis. PNAS. 1981;78:6858–62. Feig DI, Soletsky B, Johnson RJ. Effect of allopurinol on blood pressure of adolescents with newly diagnosed essential hypertension: a randomized trial. JAMA. 2008;300:924–32. Anzai N, Jutabha P, Amonpatumrat-Takahashi S, Sakurai H. Recent advances in renal urate transport: characterization of candidate transporters indicated by genome-wide association studies. Clin Exp Nephrol. 2012;16:89–95. Dehghan A, Kottgen A, Yang Q, et al. Association of three genetic loci with uric acid concentration and risk of gout: a genome-wide association study. Lancet. 2008;372:1953–61. Matsuo H, Takada T, Ichida K, Nakamura T, Nakayama A, Ikebuchi Y, et al. Common defects of ABCG2, a high-capacity urate exporter, cause gout: a function-based genetic analysis in a Japanese population. Sci Transl Med. 2009;1:5ra11. Merriman TR, Dalbeth N. The genetic basis of hyperuricaemia and gout. Joint Bone Spine. 2011;78:35–40. 
Voruganti VS, Franceschini N, Haack K, Laston S, MacCluer JW, Umans JG, et al. Replication of the effect of SLC2A9 genetic variation on serum uric acid levels in American Indians. EJHG. 2014;22:938–43. Baggio B. Genetic and dietary factors in idiopathic calcium nephrolithiasis. What do we have, what do we need? J Nephrol. 1999;12:371–4. Sebesta I. Genetic disorders resulting in hyper- or hypouricemia. Adv Chronic Kidney Dis. 2012;19:398–403. Lipkowitz MS. Regulation of uric acid excretion by the kidney. Curr Rheumatol Rep. 2012;14:179–88. Kadonaga JT, Carner KR, Masiarz FR, Tjian R. Isolation of cDNA encoding transcription factor Sp1 and functional analysis of the DNA binding domain. Cell. 1987;51:1079–90. Liu F, Zhu C, Xiao J, Wang Y, Tang W, Yuan W, et al. A novel human KRAB-containing zinc-finger gene ZNF446 inhibits transcriptional activities of SRE and AP-1. Biochem Biophys Res Commun. 2005;333:5–13. Xiao P, Chen Y, Jiang H, Liu YZ, Pan F, Yang TL, et al. In vivo genome-wide expression study on human circulating B cells suggests a novel ESR1 and MAPK3 network for postmenopausal osteoporosis. J Bone Miner Res. 2008;23:644–54. Stacy AE, Jansson PJ, Richardson DR. Molecular pharmacology of ABCG2 and its role in chemoresistance. Mol Pharmacol. 2013;84:655–69. Gianfrancesco F, Esposito T, Ombra MN, Forabosco P, Maninchedda G, Fattorinin M, et al. Identification of a novel gene and a common variant associated with uric acid nephrolithiasis in a Sardinian genetic isolate. Am J Hum Genet. 2003;72:1479–91. Köttgen A, Albrecht E, Teumer A, Vitart V, Krumsiek J, Hundertmark C, et al. Genome-wide association analyses identify 18 new loci associated with serum urate concentrations. Nat Genet. 2013;45:145–54. Komada M, Soriano P. [Beta]IV-spectrin regulates sodium channel clustering through ankyrin-G at axon initial segments and nodes of Ranvier. J Cell Biol. 2002;156:337–48. Elmlinger MW, Rauschnabel U, Koscielniak E, Haenze J, Ranke MB, Berthold A, et al. Correlation of type I insulin-like growth factor receptor (IGF-I-R) and insulin receptor-related receptor (IRR) messenger RNA levels in tumor cell lines from pediatric tumors of neuronal origin. Regul Pept. 1999;84:37–42. Piret SE, Danoy P, Dahan K, Reed AA, Pryce K, Wong W, et al. Genome-wide study of familial juvenile hyperuricaemic (gouty) nephropathy (FJHN) indicates a new locus, FJHN3, linked to chromosome 2p22.1-p21. Hum Genet. 2011;129:51–8. Chambers JC, Zhang W, Lord GM, van der Harst P, Lawlor DA, Sehmi JS, et al. Genetic loci influencing kidney function and chronic kidney disease in man. Nat Genet. 2010;42:373–5. Satko SG, Freedman BI. The importance of family history on the development of renal disease. Curr Opin Nephrol Hypertens. 2004;13:337–41. Vats A, Nayak A, Ellis D, Randhawa PS, Finegold DN, Levinson KL, et al. Familial nephrotic syndrome: clinical spectrum and linkage to chromosome 19q13. Kidney Int. 2000;57:875–81. Mathis BJ, Kim SH, Calabrese K, Haas M, Seidman JG, Seidman CE, et al. A locus for inherited focal segmental glomerulosclerosis maps to chromosome 19q13. Kidney Int. 1998;53:282–6. Winn MP, Conlon PJ, Lynn KL, Howell DN, Gross DA, Rogala AR, et al. Clinical and genetic heterogeneity in familial focal segmental glomerulsclerosis. Kidney Int. 1999;55:1241–6. Langen H, von Kietzell D, Byrd D, Arslan-Kirchner M, Vester U, Stuhrmann M, et al. Renal polyamine excretion, tubular amino acid reabsorption and molecular genetics in cystinuria. Pediatr Nephrol. 2000;14:376–84. 
We thank all the families who participated in the Viva La Familia Study. The authors wish to acknowledge the technical assistance of Grace-Ellen Meixner and Maria del Pilar Villegas. The contents of this publication do not necessarily reflect the views or policies of the USDA, nor does mention of trade names, commercial products, or organizations imply endorsement by the U.S. The National Institutes of Health (NIH) [R01DK080457, R01MH59490, P30ES010126 and R01DK092238] and the USDA/ARS [Cooperative Agreement 6250-51000-053] supported this work. Data and materials are available in dbGaP. Publicly available repositories. CIP: Obesity-Diabetes Familial Risk, Viva La Familia Study. dbGaP Study Accession: phs000616.v2.p2. Wrote paper: GC, NFB, VSV. Analyzed data or performed statistical analysis: GC, VSV. Provided essential reagents: KH, NRM, NFB, SAC. Primary responsibility for final content: VSV, GC, SAC, NFB, AGC. Read and provided edits to the paper: SL, KH, NRM, SAC, AGC. Designed research: VSV, GC, SAC, NFB, AGC. Conducted research: GC, NFB, VSV. All authors read and approved the final manuscript. Ethical approval and consent to participate All participants gave written informed consent or assent. Institutional Review Boards at Baylor College of Medicine and Affiliated Hospitals, Texas Biomedical Research Institute and UNC Chapel Hill approved the protocol for Human Subject Research. Department of Nutrition and UNC Nutrition Research Institute, University of North Carolina at Chapel Hill, 500 Laureate Way, Kannapolis, NC, 28081, USA Geetha Chittoor & V. Saroja Voruganti Department of Genetics, Texas Biomedical Research Institute, San Antonio, TX, USA Karin Haack, Shelley A. Cole & Anthony G. Comuzzie USDA/ARS Children's Nutrition Research Center, Department of Pediatrics, Baylor College of Medicine, Houston, TX, USA Nitesh R. Mehta & Nancy F. Butte South Texas Diabetes and Obesity Institute, School of Medicine, University of Texas Rio Grande Valley, Brownsville, TX, USA Sandra Laston Geetha Chittoor Karin Haack Nitesh R. Mehta Shelley A. Cole Anthony G. Comuzzie Nancy F. Butte V. Saroja Voruganti Correspondence to V. Saroja Voruganti. Table S1. Results of genome-wide association analysis for uric acid clearance with variants on chromosome 19 in Viva Hispanic children. (DOCX 17 kb) Figure S1. Graphical presentation of genotypic class specific mean values for top SNPs from GWAS of uric acid clearance. (GIF 40 kb) Chittoor, G., Haack, K., Mehta, N.R. et al. Genetic variation underlying renal uric acid excretion in Hispanic children: the Viva La Familia Study. BMC Med Genet 18, 6 (2017). https://doi.org/10.1186/s12881-016-0366-3 Hispanic children
Matthew Andres Moreno Practical Steps Toward Indefinite Scalability: In Pursuit of Robust Computational Substrates for Open-Ended Evolution 🔗 Abstract Studying how artificial evolutionary systems can continually produce novel artifacts of increasing complexity has proven to be a rich vein for practical, scientific, philosophical, and artistic innovations. Unfortunately, existing computational artificial life systems appear constrained by practical limitations on simulation scale. The concept of indefinite scalability describes constraints on open-ended systems necessary to incorporate theoretically unbounded computational resources. Here, we argue that along the path to indefinite scalability, we must consider practical scalability: how can we design open-ended evolutionary systems that make effective use of existing, commercially-available distributed-computing hardware? We highlight log-time hardware interconnects as a potentially fruitful tool for practical scalability and describe how digital evolution systems might be constructed to exploit physical log-time interconnects. We extend the DISHTINY digital multicellularity framework to allow cells to establish long-distance cell-cell interconnects that, in implementation, could take advantage of log-time physical interconnects. We examine two case studies of evolved strains, demonstrating how evolved cells adaptively exploit these interconnects. 🔗 Introduction The challenge, and promise, of open-ended evolution has animated decades of inquiry and discussion within the artificial life community [Packard et al., 2019]. The difficulty of devising models that produce characteristic outcomes of open-ended evolution suggests profound philosophical or scientific blind spots in our understanding of the natural processes that gave rise to contemporary organisms and ecosystems. Already, pursuit of open-ended evolution has yielded paradigm-shifting insights. For example, novelty search demonstrated how processes promoting non-adaptive diversification can ultimately yield adaptive outcomes that were previously unattainable [Lehman and Stanley, 2011]. Such work lends insight to fundamental questions in evolutionary biology, such as the relevance (or irrelevance) of natural selection with respect to increases in complexity [Lehman, 2012; Lynch, 2007] and the origins of evolvability [Lehman and Stanley, 2013; Kirschner and Gerhart, 1998]. Evolutionary algorithms devised in support of open-ended evolution models also promise to deliver tangible broader impacts on society. Possibilities include the generative design of engineering solutions, consumer products, art, video games, and AI systems [Nguyen et al., 2015; Stanley et al., 2017]. Preceding decades have witnessed advances toward defining, quantitatively and philosophically, the concept of open-ended evolution [Lehman and Stanley, 2012; Dolson et al., 2019; Bedau et al., 1998] as well as investigating causal phenomena that promote open-ended dynamics such as ecological dynamics, selection, and evolvability [Dolson, 2019; Soros and Stanley, 2014; Huizinga et al., 2018].
Together, methodological and theoretical advances have begun to yield evidence that the generative potential of artificial life systems is — at least in part — meaningfully constrained by available compute resources [Channon, 2019]. 🔗 Advances in Modern Compute Resources Rely on Distribution and Parallelism Since the turn of the century, advances in the clock speed of traditional serial processors have trailed off [Sutter, 2005]. Existing technologies have begun to encounter fundamental constraints including power use and thermal dissipation [Markov, 2014]. Instead, hardware innovation began to revolve around multiprocessing [Hennessy and Patterson, 2011, p.55] and hardware acceleration (e.g., GPU, FPGA, etc.) [Che et al., 2008]. For scientific and engineering applications, individual multiprocessors and accelerators are joined together with fast interconnects to yield so-called high-performance computing clusters s [Hennessy and Patterson, 2011, p.436]. Until fundamental changes to computing technology transpire, scaling up artificial life compute power will require taking advantage of these existing parallel and distributed systems. 🔗 Distributed Hardware in Digital Evolution Digital evolution practitioners have a rich history of leveraging distributed hardware. It is common practice to distribute multiple self-isolated instantiations of evolutionary runs over multiple hardware units. In scientific contexts, this practice yields replicate datasets that provide statistical power to answer research questions [Dolson and Ofria, 2017]. In applied contexts, this practice yields many converged populations that can be scavenged for the best solutions overall [Hornby et al., 2006]. Another established practice is to use "island models" where individuals are transplanted between populations that are otherwise independently evolving across distributed hardware. Koza and collaborators' genetic programming work with a 1,000-cpu Beowulf cluster typifies this approach [Bennett III et al., 1999]. In recent years, Sentient Technologies spearheaded digital evolution projects on an unprecedented computational scale, comprising over a million CPUs and capable of a peak performance of 9 petaflops [Miikkulainen et al., 2019]. According to its proponents, the scale and scalability of this DarkCycle system was a key aspect of its conceptualization [Gilbert, 2015]. Much of the assembled infrastructure was pieced together from heterogeneous providers and employed on a time-available basis [Blondeau et al., 2012]. Unlike island model where selection events are performed independently on each CPU, this scheme transferred evaluation criteria between computational instances (in addition to individual genomes) [Hodjat and Shahrzad, 2013]. Sentient Technologies also accelerated the deep learning training process by using many massively-parallel hardware accelerators (e.g., 100 GPUs) to evaluate the performance of candidate neural network architectures on image classification, language modeling, and image captioning problems [Miikkulainen et al., 2019]. Analogous work parallelizing the evaluation of an evolutionary individual over multiple test cases in the context of genetic programming has used GPU hardware and vectorized CPU operations [Harding and Banzhaf, 2007b; Langdon and Banzhaf, 2019]. Existing applications of concurrent approaches to digital evolution distribute populations or individuals across hardware to process them with minimal interaction. 
Task independence facilitates this simple, efficient implementation strategy, but precludes application on elements that are not independent. Parallelizing evaluation of a single individual often emphasizes data-parallelism over independent test cases, which are subsequently consolidated into a single fitness profile. With respect to model parallelism, Harding has notably applied GPU acceleration to cellular automata models of artificial development systems, which involve intensive interaction between spatially-distributed instantiation of a genetic program [Harding and Banzhaf, 2007a]. However, in systems where evolutionary individuals themselves are parallelized they are typically completely isolated from each other. We argue that, in a manner explicitly accommodating capabilities and limitations of available hardware, open-ended evolution should prioritize dynamic interactions between simulation elements situated across physically distributed hardware components. 🔗 Leveraging Distributed Hardware for Open-Ended Evolution Unlike most existing applications of distributed computing in digital evolution, open-ended evolution researchers should prioritize dynamic interactions among distributed simulation elements. Parallel and distributed computing enables larger populations and metapopulations. However, ecologies, co-evolutionary dynamics, and social behavior all necessitate dynamic interactions among individuals. Distributed computing should also enable more computationally intensive or complex individuals. Developmental processes and emergent functionality necessitate dynamic interactions among components of an evolving individual. Even at a scale where individuals remain computationally tractable on a single hardware component, modeling them as a collection of discrete components configured through generative development (i.e., with indirect genetic representation) can promote scalable properties [Lipson, 2007] such as modularity, regularity, and hierarchy [Hornby, 2005; Clune et al., 2011]. Developmental processes may also promote canalization [Stanley and Miikkulainen, 2003], for example through exploratory processes and compensatory adjustments [Gerhart and Kirschner, 2007]. To reach this goal, David Ackley has envisioned an ambitious design for modular distributed hardware at a theoretically unlimited scale [Ackley and Cannon, 2011] and demonstrated an algorithmic substrate for emergent agents that can take advantage of it [Ackley, 2018]. 🔗 A Path of Expanding Computational Scale While by no means certain, the idea that orders-of-magnitude increases in compute power will open up qualitatively different possibilities with respect to open-ended evolution is well founded. Spectacular advances achieved with artificial neural networks over the last decade illuminate a possible path toward this outcome. As with digital evolution, artificial neural networks (ANNs) were traditionally understood as a versatile, but auxiliary methodology — both techniques were described as "the second best way to do almost anything" [Miaoulis and Plemenos, 2008; Eiben, 2015]. However, the utility and ubiquity of ANNs has since increased dramatically. The development of AlexNet is widely considered pivotal to this transformation. AlexNet united methodological innovations from the field (such as big datasets, dropout, and ReLU) with GPU computing that enabled training of orders-of-magnitude-larger networks. 
In fact, some aspects of their deep learning architecture were expressly modified to accommodate multi-GPU training [Krizhevsky et al., 2012]. By adapting existing methodology to exploit commercially available hardware, AlexNet spurred the greater availability of compute resources to the research domain and eventually the introduction of custom hardware to expressly support deep learning [Jouppi et al., 2017]. Similarly, progress toward realizing artificial life systems with indefinite scalability seems likely to unfold as incremental achievements that spur additional interest and resources in a positive feedback loop with the development of methodology, software, and eventually specialized hardware to take advantage of those resources. In addition to developing hardware-agnostic theory and methodology, we believe that pushing the envelope of open-ended evolution will analogously require designing systems that leverage existing commercially-available parallel and distributed compute resources at circumstantially-feasible scales. Modern high-performance scientific computing clusters appear perhaps the best target to start down this path. These systems combine memory-sharing parallel architectures comprising dozens of cores (commonly targeted using OpenMP [Dagum and Menon, 1998]) and low-latency, high-throughput message-passing between distributed nodes (commonly targeted using MPI [Clarke et al., 1994]). Contemporary scientific computing clusters lack key characteristics required for indefinite scalability: fault tolerance and arbitrary extensibility. However, they also may offer an opportunity not available in an indefinitely scalable framework: log-time interconnects [Mollah et al., 2018]. 🔗 Instantiating Small-World Networks on Parallel and Distributed Hardware Many natural systems, such as ecosystems, genetic regulatory networks, and neural networks, are known to exhibit small-world patterns of connectivity or interaction among components [Bassett and Bullmore, 2017; Fox and Bellwood, 2014; Gaiteri et al., 2014]. In small-world graphs, mean path length (the number of edges traversed on a shortest route) between arbitrary components scales logarithmically with system size [Watts and Strogatz, 1998]. We anticipate that open-ended phenomena emerging across distributed hardware might also involve small-world connectivity dynamics. What would the impact be of providing a system of hierarchical log-time hardware interconnects as opposed to relying solely on local hardware interconnects? In Sections WAGSW, WAISO, WAISW, and WAWSG, we analyze the scaling relationship between system size and expected node-to-node hops traversed between computational elements interacting as part of an emergent small-world network, with and without hierarchical log-time physical interconnects between computational nodes, and with computational nodes embedded on one-, two-, or three-dimensional computational meshes [Footnote WDWCM]. In Section WAGSW, we find that expected hops over edges weighted by edge betweenness centrality scale polynomially in all cases without hierarchical physical interconnects. With hierarchical physical interconnects, a logarithmic scaling relationship can be achieved. In Sections WAISO and WAISW, we find that hierarchical physical interconnects yield better best-case mean hops per edge in the case of a one-dimensional computational mesh. Interestingly, asymptotically better outcomes in two- and three-dimensional meshes cannot be guaranteed by hierarchical physical interconnects.
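To make the small-world contrast concrete, here is a brief sketch (our own illustration using networkx, not the analysis code behind Sections WAGSW through WAWSG): it compares mean hop counts on a purely local two-dimensional mesh against a Watts-Strogatz small-world graph of equal size, where the former grows roughly with the square root of node count and the latter roughly logarithmically.

```python
import networkx as nx

def mean_hops(graph):
    """Mean shortest-path length (hops) between all node pairs."""
    return nx.average_shortest_path_length(graph)

for n in (64, 256, 1024):
    side = int(n ** 0.5)
    lattice = nx.grid_2d_graph(side, side, periodic=True)          # local links only
    small_world = nx.connected_watts_strogatz_graph(n, k=4, p=0.1, seed=1)
    print(f"n={n:5d}  2-D mesh: {mean_hops(lattice):5.2f}   "
          f"Watts-Strogatz: {mean_hops(small_world):5.2f}")
```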
This suggests that — even at truly vast scales — emergent inter-component interaction networks could arise with bounded per-hardware-component messaging load. In Section WAWSG we show that, with a specific traditional construction of small-world graphs, best-case mean hops per edge scales polynomially with graph size. With hierarchical physical interconnects, a logarithmic scaling relationship can still be achieved. These theoretical analyses suggest that whether log-time physical interconnects deliver asymptotically better mean connection latency and hop-efficiency depend on the structure of the network overlaid on a spatially-distributed hardware system. Although we focus on asymptotic analyses, better scaling coefficients might be achieved with long-distance hardware interconnects. Equivalent asymptotic behavior does not preclude important considerations with respect to performance. 🔗 Exploiting Log-Time Physical Interconnects: a Case Study What could an artificial life system that exploits log-time hardware interconnects look like? We present an extension to the DISHTINY platform for studying evolutionary transitions in individuality [Moreno and Ofria, 2019]. In previous work with the system, cells situated on a two-dimensional grid interact exclusively with immediate neighbors. This extension introduces genetic programs that can explicitly-register direct interconnects between (potentially distant) cells for messaging and resource-sharing. Cells establish these interconnects through a genetically-mediated exploratory growth process. We report a case study of an evolved strain that adaptively employs interconnects to communicate and selectively distribute resources to the periphery of a multicellular organism. In Section CSIM, we report another case study of an evolve strain that adaptively employs over-interconnect messaging to selectively suppress somatic cell reproduction. In future implementations, explicitly-registered cell-cell interconnects may use log-time physical interconnects. Our prototype implementation exploits shared-memory thread-level parallelism on a single multiprocessor. 🔗 Abstraction, Engineering, and Computational Scale Although designed with an eye toward scalability, largely along the lines outlined by Ackley, DISHTINY exchanges a uniform, evolutionary-passive substrate for manually-engineered self-replicating cells. Evolutionary transitions in individuality provide a framework to unite self-replicators and induce meaningful functional synthesis of programmatic components tied to individual compute elements. This approach mirrors the philosophy of practicality and feasibility laid out by Channon [Channon, 2019]: It is not computationally feasible (even if we knew how) for an OEE simulation to start from a sparse fog of hydrogen and helium and transition to a biological-level era, so it is clearly necessary to skip over or engineer in at least some complex features that arose through major transitions in our universe. DISHTINY engineers-in some complex features (e.g., cellular structure, genetic transmission of variation, explicitly-registered cell-cell interconnects) in a manner that aims to reflect underlying hardware capabilities (e.g., procedural expression of programs, log-time physical interconnects) so they can be fully utilized. More granular, less prescriptional approaches seem likely to become preferred when orders of magnitude of more compute power — toward the extent envisioned by Ackley — become available. 
Such systems will address important questions in their own right about the computational foundations of physical and biological reality. Current work developing those systems sets the stage for that eventuality [Ackley, 2018]. 🔗 Methods Our evolutionary case studies employ an extension to the DISHTINY framework for studying fraternal transitions in individuality. Initial work with this system characterized selective pressures for cooperation with kin [Moreno and Ofria, 2019]. We have since extended the system to use the SignalGP event-driven genetic programming technique [Lalejini and Ofria, 2018] to control cell behaviors. Diverse multicellular life histories evolved in SignalGP-enabled DISHTINY evolutionary trials, involving reproductive division of labor, resource sharing (including, in some treatments, endowment of offspring groups), asymmetrical within-group and inter-group phenomena mediated by cell-cell messaging, morphological patterning, gene-regulation-mediated life cycles, and adaptive apoptosis [Moreno and Ofria, in prep.]. DISHTINY simulates individual cells, each of which occupies a tile on a toroidal grid. Cells can reproduce, placing daughter cells into adjoining tiles. We allow cells the opportunity to engage with kin in a cooperative resource-collection task (Supplementary Section A), which can increase their individual cellular reproduction rates [Moreno and Ofria, 2020]. Kin groups are explicitly registered: on birth, a cell is either added to its parent's group or expelled to establish a new group (Supplementary Section B) [Moreno and Ofria, 2020]. Cells can differentiate between neighbors that are members of their kin group and neighbors that are not and alter their behavior accordingly. Each cell contains four SignalGP instances (all executing the same genetic program), one of which controls cell behavior with respect to each neighbor. These instances may communicate with one another by means of intracellular messaging. In this work, we add a fifth SignalGP instance to the DISHTINY cell. This instance can execute special instructions to establish long-distance interconnects with other cells and engage in resource-sharing and/or message passing with those cells. Figure AOSHW summarizes how SignalGP hardware is arranged within DISHTINY cells. Figure AOSHW: Arrangement of SignalGP hardware within DISHTINY cells (gray squares). Neighbor-managing hardware (circles) receives stimuli and controls cell behavior with respect to a particular cell neighbor. Network-managing hardware (interior squares) receives stimuli and controls cell behavior with respect to the more distant neighbors a cell has established interconnects with. Long-distance interconnects are established through a developmental process, summarized in Figure IOTDP. The process begins with the placement of two independent search prongs at the originating cell. Each prong performs a random walk over the originating cell's kin group, accumulating positive or negative feedback based on tags expressed by underfoot cells. If a prong accumulates positive feedback too slowly, it is reset to the location of the better-scoring prong. Once a positive feedback threshold has been reached, the best-scoring prong develops into a full-fledged connection. At this point, the originating cell can begin exchanging messages and/or resource over the connection. Established interconnects may be subsequently removed by either participating cell.
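To make the growth process above concrete, here is a minimal Python sketch of one way such a prong-based exploratory search could work. It is a hedged illustration only, not the DISHTINY implementation (which is written in C++): the grid size, feedback values, thresholds, and function names below are all invented for the example.

```python
import random

# Toy sketch of the prong-based developmental search described above.
# All names, tag feedback values, and thresholds are hypothetical.

GRID = 20                      # width/height of a toy kin-group patch
FEEDBACK = {                   # hypothetical per-cell tag feedback
    (x, y): random.uniform(-1.0, 1.0) for x in range(GRID) for y in range(GRID)
}
MATURATION_THRESHOLD = 5.0     # feedback needed before a prong matures
RESET_MARGIN = 2.0             # how far a prong may lag before being reset


def step(pos):
    """Take one random-walk step, staying on the toy grid."""
    x, y = pos
    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    return (max(0, min(GRID - 1, x + dx)), max(0, min(GRID - 1, y + dy)))


def develop_interconnect(origin, max_steps=10_000):
    """Return the cell a matured prong lands on, or None if development stalls."""
    prongs = [{"pos": origin, "score": 0.0}, {"pos": origin, "score": 0.0}]
    for _ in range(max_steps):
        for prong in prongs:
            prong["pos"] = step(prong["pos"])
            prong["score"] += FEEDBACK[prong["pos"]]
        best, worst = sorted(prongs, key=lambda p: p["score"], reverse=True)
        if best["score"] >= MATURATION_THRESHOLD:
            return best["pos"]          # prong matures into a full connection
        if worst["score"] < best["score"] - RESET_MARGIN:
            worst["pos"] = best["pos"]  # reset laggard to the leader's location
            worst["score"] = best["score"]
    return None


print(develop_interconnect(origin=(GRID // 2, GRID // 2)))
```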
Full details on hardware-level instructions and event-driven environmental cues available to cells are provided in Supplementary Sections D, E, F, and G [Moreno and Ofria, 2020]. Figure IOTDP: Illustration of the developmental process used to establish long-distance interconnects. Cells start by budding developmental search prongs (a) that perform a random search (b), reverting to the most successful search (c), which matures to establish a connection (d). Messages and resources can be transmitted over a connection (e) until either cell decides to terminate the connection (f). You can see this developmental process in action in an evolved strain at https://mmore500.com/hopto/ap. 🔗 Evolutionary Screens Our evolutionary screens consisted of 64 independent evolutionary batches. We processed each batch in four-hour epochs to enable efficient job scheduling. Each batch consisted of four isolated 45-by-45 toroidal subpopulations. Subpopulations were completely intermixed in between four-hour steps. To facilitate evolutionary search, in addition to a base mutation rate applied to cell division, additional mutations were applied to cells seeding a toroidal grid at the outset of an epoch or budding to form new kin groups during an epoch. We screened across four-hour checkpoints of replicate batches to see if messages or resource were being sent over interconnects. We sampled from these populations, performing screens for knockouts of over-interconnect messaging or resource sharing. We then performed a secondary screen on strains with adaptive over-interconnect messaging or resource sharing to determine if re-routing either messages or shared resources decreased fitness. We measured relative fitness using competition experiments between strains. For some competition experiments reported in the case studies, we provide hyperlinks to load an in-browser DISHTINY simulation with the actual strains that were used. In this web viewer, wild-type strains carry phylogenetic root ID 1 and knockout strains carry ID 2. 🔗 Implementation We implemented our experimental system using the Empirical library for scientific software development in C++, available at https://github.com/devosoft/Empirical [Ofria et al., 2019]. We used OpenMP to parallelize our main evolutionary replicates, distributing work over two threads. The code used to perform and analyze our experiments, our figures, data from our experiments, and a live in-browser demo of our system is available via the Open Science Framework at https://osf.io/53vgh/ [Foster and Deardorff, 2017]. 🔗 Case Study: Interconnect Resource Sharing (a) Hypothesized resource-recruiting mechanism (b) Kin groups (c) Established interconnects (d) Spatial distribution of resource-sending cells (e) Spatial distribution of resource-receiving cells Figure B42CS: Batch 42 case study overview. Figures B42CS(b) through B42CS(e) are generated from a snapshot of a wild-type strain monoculture population. In these images, each grid tile represents an individual cell. Cells are organized into kin groups, color-coded by hue in Figure B42CS(b). Established interconnects are overlaid in blue on Figure B42CS(c). In Figures B42CS(d) and B42CS(e), kin groups are outlined in black. Figure B42CS(d) highlights cells that are sending resource over-interconnect. Figure B42CS(e) highlights cells that are receiving resource over-interconnect. You can view an animation of the wild-type monoculture at https://mmore500.com/hopto/ao.
This case study was drawn from epoch 24 of batch 42 of the initial set of evolutionary runs. We initially considered it for further study due to the presence of widespread over-interconnect resource sharing. After preliminary knockout experiments confirmed the adaptive significance of both over-interconnect resource-sharing and over-interconnect messaging, we set aside the strain for a case study. The evolutionary history preceding this case study consumed approximately 96 hours of wall-clock time and 736 compute-core hours. Approximately 30 million simulation updates and 40,000 cellular generations elapsed. You can view the strain this case study characterizes in a live in-browser simulation at https://mmore500.com/hopto/8. Our first step was to evaluate whether the intercellular nature of over-interconnect messaging and resource sharing contributed to this strain's fitness. (It is possible that messaging and/or resource sharing behaviors might generate stimuli on the recipient or side-effects on the sender that have adaptive consequences whether or not the sender and recipient are distinct cells; in such a scenario, cells would be just as well off sending messages and/or resource to themselves.) We performed several competition experiments between the wild-type strain and variants where interconnect messaging and resource sharing was altered to be intracellular instead of intercellular. At the end of competition experiments, we evaluated the relative abundances of wild-type and variant strains. In the first variant strain we tested, all outgoing over-interconnect messages were instead delivered to the sending cell. In 16 out of 16 one-hour competition runs that were seeded half-and-half with the wild-type and variant strains, the wild-type strain drove the variant strain to extinction (one-tailed binomial test; \(p < 0.0001\); 290 S.D. 17 cell gens elapsed). We observed a similar outcome with a second variant strain where all outgoing over-interconnect resource sharing was rerouted back to the sending cell (14/16 variant strain extinctions; 16/16 wild-type prevalence; 289 S.D. 25 cell gens elapsed). Finally, a third variant strain where both over-interconnect messaging and over-interconnect resource sharing were returned to the sending cell exhibited the same outcome (16/16 variant strain extinctions; 300 S.D. 23 cell gens elapsed). The intercellular natures of both over-interconnect messaging and resource sharing appear essential to fitness. Next, we took a closer look at the evolved cellular mechanisms controlling over-interconnect messaging and resource sharing. We monitored hardware execution of the wild-type strain in a monoculture population to detect which signals, messages, and fork/call instructions activated each SignalGP module. We manually cross-referenced this information with a human-readable printout of the strain's genetic program to construct the hypothesized mechanism shown in Figure B42CS(a). We hypothesize that cells at the periphery of a registered kin group send messages backwards over incoming interconnects that induce interconnect-originating cells to send them resource. Such a mechanism could preferentially increase resource availability at the group periphery, a region where cell-cell conflict is likely elevated. We performed a series of four-hour competition experiments between wild-type and knockout strains to confirm the adaptive significance of each component of this mechanism.
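The competition statistics reported throughout these case studies are interpreted with a one-tailed binomial test under the null hypothesis that neither strain has an advantage. The short Python sketch below reproduces the arithmetic; the function name and exact framing are ours for illustration, not part of the DISHTINY codebase.

```python
from math import comb

# One-tailed binomial test for replicate competition outcomes: under the null
# hypothesis of no fitness difference, each replicate is a fair coin flip.

def one_tailed_binomial_p(successes, trials, p_null=0.5):
    """P(at least `successes` wins out of `trials`) under the null."""
    return sum(
        comb(trials, k) * p_null**k * (1 - p_null) ** (trials - k)
        for k in range(successes, trials + 1)
    )

print(one_tailed_binomial_p(16, 16))  # ~1.5e-05, i.e. p < 0.0001
print(one_tailed_binomial_p(10, 16))  # ~0.227, consistent with a null result
```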
We began by re-routing stimulus 19, which alerts cells to neighbors that are members of a foreign kin group, to activate a known no-op module. This knockout strain experienced decreased fitness compared to the wild-type strain (16/16 knockout strain extinctions; one-tailed binomial test; \(p < 0.0001\); 1996 S.D. 280 cell gens elapsed; https://mmore500.com/hopto/ak). Next, we replaced the over-interconnect messaging instruction that triggers over-interconnect resource-sharing with a no-op instruction. This knockout strain also experienced decreased fitness (16/16 knockout strain extinctions; 1932 S.D. 223 cell gens elapsed; https://mmore500.com/hopto/al). We then replaced all eight copies of the over-interconnect resource-sharing instruction triggered by the over-interconnect messaging with no-op instructions, once more yielding a strain with diminished fitness (16/16 knockout strain extinctions; 1860 S.D. 370 cell gens elapsed; https://mmore500.com/hopto/am). Finally, we confirmed the soundness of our fitness competition methodology by running control wild-type versus wild-type competitions. As expected, we observed no effect of strain ID on competition dominance (8/16 knockout strain extinctions; 8/16 wild-type strain extinctions; one-tailed binomial test; \(p = 0.60\); 1738 S.D. 217 cell gens elapsed; https://mmore500.com/hopto/aj). 🔗 Case Study: Interconnect Messaging (a) Hypothesized selective reproduction pausing mechanism (b) Kin groups (c) Established interconnects (d) Spatial distribution of stimulus 5 (e) Spatial distribution of module 14 execution Figure B32CS: Batch 32 case study overview. Figures B32CS(b) through B32CS(e) are generated from a snapshot of a wild-type strain monoculture population. In these images, each grid tile represents an individual cell. Cells are organized into kin groups, color-coded by hue in Figure B32CS(b). Established interconnects are overlaid in blue on Figure B32CS(c). In Figures B32CS(d) and B32CS(e), kin groups are outlined in black. Figure B32CS(d) highlights cells that are experiencing stimulus 5. Figure B32CS(e) highlights cells that are executing module 14. You can view an animation of the wild-type monoculture at https://mmore500.com/hopto/an. This case study was drawn from epoch 18 of batch 32 from a secondary set of 64 evolutionary runs. These runs were identical to the first, except: increasing the default outgoing connection cap, making cells default-accept instead of default-reject intracellular messages from same-channel cells, and removing system-mediated parent-kin-group recognition to promote kin group turnover. We set this strain aside for case study after preliminary screening suggested that over-interconnect messaging played an adaptive role and that the intercellular nature of the messaging was necessary to that adaptation. The evolutionary history preceding this case study consumed approximately 72 hours of wall-clock time and 576 compute-core hours. Approximately 2,197,976 simulation updates and 8,884 cellular generations elapsed. You can view this case study strain in a live in-browser simulation at https://mmore500.com/hopto/7. As before, we began by testing whether over-interconnect interaction was adaptive because of its intercellularity. We performed a competition experiment between the wild-type strain and a variant where over-interconnect messages were re-routed back to the sender.
[Footnote TWAOI] The wild-type strain was present in greater abundance at the end of all 16 competitions (one-tailed binomial test; \(p < 0.0001\); 2/16 variant strain extinctions; 52 S.D. 3 cell gens elapsed). So the adaptiveness of over-interconnect messaging does depend on the intercellular nature of that messaging in this strain. We proceeded to tease apart the evolved cellular mechanisms this messaging interacts with. We monitored hardware execution of the wild-type strain in a monoculture population to detect which signals, messages, and fork/call instructions activated each SignalGP module. Referring to a human-readable printout of the strain's evolved genetic program, we pieced together the hypothesized mechanism shown in Figure B32CS. It appears that neighboring a direct cellular offspring stimulates dispatch of an over-interconnect message that induces the recipient to pause somatic reproduction. Four-hour competition experiments between wild-type and knockout strains allowed us to assess the adaptiveness of each component of this mechanism. We replaced the instruction responsible for over-interconnect messaging with a no-op instruction and observed a corresponding fitness penalty (16/16 knockout strain extinctions; one-tailed binomial test; \(p < 0.0001\); 416 S.D. 58 cell gens elapsed; https://mmore500.com/hopto/aa). We also replaced the reproduction-pausing instruction executed in response to over-interconnect messaging with a no-op. This caused a similar fitness penalty (16/16 knockout strain extinctions; 401 S.D. 29 cell gens elapsed). To double-check whether messaging specifically over interconnects was key to adaptivity, we also competed the wild-type strain against variants with the focal over-interconnect messaging instruction substituted for all other possible module-activating instructions: call (378 S.D. 42 cell gens elapsed; https://mmore500.com/hopto/ac), fork (377 S.D. 37 cell gens elapsed; https://mmore500.com/hopto/ad), internal message send (406 S.D. 39 cell gens elapsed; https://mmore500.com/hopto/ae), internal message send-to-all (422 S.D. 30 cell gens elapsed; https://mmore500.com/hopto/af), external message send (377 S.D. 37 cell gens elapsed; https://mmore500.com/hopto/ag), and external message send-to-all (440 S.D. 32 cell gens elapsed; https://mmore500.com/hopto/ah). In each case the substitution variant strain was driven to extinction across all 16 replicate experiments (one-tailed binomial test; \(p < 0.0001\)). The directionality of messaging over the interconnect, however, does not appear to affect fitness. We tried substituting the wild-type instruction, which dispatches a message from the terminus of an interconnect to its origin, with an instruction that instead dispatches a message from the origin of an interconnect to its terminus. In competition against this variant, the wild-type strain was more abundant in only ten of 16 replicate competitions (one-tailed binomial test; \(p = 0.2272\); 14/16 coalesced to a single strain; 410 S.D. 50 cell gens elapsed; https://mmore500.com/hopto/ai). Next we assessed the adaptiveness of the particular spatio-temporal pattern of stimulation induced by incoming over-interconnect messages. Does this pattern differ from spatially and temporally random stimulation? If it does, is the non-uniformity of stimulation adaptive? To assess these questions, we measured the fraction of cells expressing module 14 in a monoculture wild-type population.
Then, we created a variant strain where outgoing over-interconnect messages from module 5 were disabled and, instead, module 14 activated randomly with probability based on the empirical wild-type activation rate. [Footnote BOIBM] In effect, this manipulation decouples reproductive pause from the distribution of over-interconnect message delivery and instead couples it to a comparable uniform random distribution. Indeed, in competition experiments against the wild-type strain this variant fares poorly (15/16 wild-type strain prevalent; 0 variant strain extinctions; one-tailed binomial test; \(p < 0.001\); 36 S.D. 2 cell gens elapsed), suggesting that the pattern of stimulation induced by over-interconnect messaging is meaningfully non-uniform. We confirmed this result with a larger-scale set of competition trials (58/64 wild-type strain prevalent; 0 strain extinctions; one-tailed binomial test; \(p < 0.0001\); 33 S.D. 2 cell gens elapsed). Does the adaptively non-uniform pattern of stimulation induced by over-interconnect messages depend on non-uniform dispatch of messages from sending cells? To assess this question, we measured the per-cell frequency of module 5 activation in a monoculture wild-type population. We then created a variant strain where outgoing over-interconnect messages from module 5 were disabled. Instead, the over-interconnect message instruction was randomly executed with uniform per-cell probability based on the empirical wild-type execution rate. This variant strain held its own against the wild-type strain (5/16 wild-type strain prevalent; 0 strain extinctions; one-tailed binomial test; \(p = 0.9\); 30 S.D. 1 cell gens elapsed). So, this strain's non-uniform pattern of stimulation seems likely to result from the actual pattern of cell-cell interconnection rather than selective message dispatch. We did not find evidence that cells were using tag-based developmental attractors or repulsors to bias connectivity (5/16 wild-type strain prevalent; 0 strain extinctions; \(p=0.9\); 35 S.D. 8 cell gens elapsed). However, we did notice frequent interconnect turnover via execution of both remove-incoming and remove-outgoing interconnect instructions. Substituting these instructions for no-ops yielded a knockout strain with lower fitness than wild-type (13/16 wild-type strain prevalent; 0 strain extinctions; one-tailed binomial test; \(p < 0.05\); 30 S.D. 11 cell gens elapsed). We confirmed this result with a larger-scale set of competition trials (60/64 wild-type strain prevalent; 0 strain extinctions; one-tailed binomial test; \(p < 0.0001\); 39 S.D. 5 cell gens elapsed). Is this remodeling of connectivity adaptively non-uniform? We measured the interconnect removal rate in a monoculture wild-type population. Then, we created a variant strain where interconnect-removal instructions were disabled. Instead, interconnects in this strain were removed randomly with uniform probability. In head-to-head competitions, this variant strain did not exhibit diminished fitness (20/64 wild-type strain prevalent; 0 strain extinctions; one-tailed binomial test; \(p = 1.0\); 30 S.D. 2 cell gens elapsed). The adaptive mechanism of over-interconnect messaging at play in this strain remains somewhat unclear. Over-interconnect messaging induces an adaptively non-uniform pattern of module 14 activation. The transmission of messages between cells over the interconnects, in particular, contributes to fitness.
When messages that would be delivered over interconnects are instead re-routed to the sending cell, fitness decreases. Substituting over-interconnect messaging for local messaging also decreases fitness. However, message dispatch is effectively random. This strain employs an adaptive during-lifetime interconnect-remodeling scheme. However, this remodeling scheme is also effectively random. Although the process of interconnect development and retention might contribute some sort of spatial and/or temporal bias to module 14 activation, a full characterization of the nature of this bias and the mechanism inducing it remains elusive. 🔗 Wiring a Generic Small World Graph Consider a set of computational nodes arranged in an \(r\)-dimensional mesh. In each dimension, physical interconnects run between immediately adjacent pairs of nodes. Represent this physical hardware with a graph \(N\). Vertices of \(N\), \(V(N)\), represent computational nodes. Edges of \(N\), \(E(N)\), represent physical interconnects between nodes. Let \(d(a,b)\) represent the typical number of physical interconnects traversed on a shortest path between a pair of arbitrary nodes \(a, b \in V(N)\). This is conceptually equivalent to Manhattan distance. In the case of a one-dimensional sequence of nodes, for a pair of arbitrary nodes \(a,b \in V(N)\), \begin{equation} \bar{d}(a,b) \propto |V(N)|. \end{equation} Consider next the case of a higher-dimensional grid topology, like a two-dimensional grid or a three-dimensional mesh. Because \(d\) is a Manhattan metric, the number of physical interconnects requiring traversal in each dimension on a shortest-path between two nodes is completely independent. Arranging the set of nodes \(N\) in an \(r\)-dimensional cube, cube width in each dimension scales proportionally to the \(r\)-th root of \(|V(N)|\). So, for a pair of arbitrary nodes \(a,b \in V(N)\), \begin{equation} \label{eqn:mesh_prop} \bar{d}(a, b) \propto |V(N)|^{\frac{1}{r}} \times r. \end{equation} We proceed to construct a small world directed graph \(G\) using the set of nodes \(N\) as vertices. In formal terms, a bijective relationship \(f: V(N) \rightarrow V(G)\) unites these two sets. The inverse mapping, \(f^{-1}: V(G) \rightarrow V(N)\), is also bijective. Edges in the graph \(G\) do not represent a physical interconnect. Instead, edges \(\{\hat{a}, \hat{b}\} \in E(G)\) represent a close-coordination relationship where node \(\hat{a}\) frequently interacts with (i.e., dispatches messages to) the destination node \(\hat{b}\). Figure RBACM illustrates the relationship between \(N\) and \(G\). Let \(\hat{d}(\hat{a},\hat{b})\) denote distance between vertices \(\hat{a}\) and \(\hat{b}\) with respect to the graph \(G\), that is, the number of graph edges traversed on a shortest-path route between \(\hat{a}\) and \(\hat{b}\) over \(G\). In a small-world network, typical graph distance scales proportionally with the logarithm of network size [Watts and Strogatz, 1998]. In our case, for arbitrary \(\hat{a},\hat{b} \in V(G)\), \begin{equation} \label{eqn:smallworld_prop} \bar{\hat{d}}(\hat{a},\hat{b}) \propto \log(|V(G)|). \end{equation} Consider the sequence of edges in \(G\) traversed on a shortest-path route \(R_{\hat{a},\hat{b}}\) between \(\hat{a}, \hat{b} \in V(G)\), \(\{\{\hat{v}_1, \hat{v}_2\}, \{\hat{v}_2, \hat{v}_3\}, \ldots, \{\hat{v}_{n-1}, \hat{v}_n\} \}\). If we traverse these same nodes over the graph \(N\), this path would be at least as long as the direct path between \(f^{-1}(\hat{a})\) and \(f^{-1}(\hat{b})\) over \(N\).
(Otherwise, we would violate the triangle inequality of the Manhattan metric on \(N\).) Therefore, \begin{equation} \label{eqn:path_hops_inequality} \sum_{\{\hat{v}_i, \hat{v}_{i+1}\} \in R_{\hat{a},\hat{b}}} \Big[ d\Big(f^{-1}(\hat{v}_i), f^{-1}(\hat{v}_{i+1})\Big) \Big] \geq d\Big(f^{-1}(\hat{a}), f^{-1}(\hat{b})\Big). \end{equation} Recall that \(\hat{a},\hat{b}\) are sampled uniformly from \(V(G)\). So, \(f^{-1}(\hat{a}),f^{-1}(\hat{b})\) are sampled uniformly from \(V(N)\). Thus, Equation \ref{eqn:mesh_prop} allows us to establish the following lower bound, \begin{equation*} d\Big(f^{-1}(\hat{a}), f^{-1}(\hat{b})\Big) \in \Omega \Big( |V(N)|^{\frac{1}{r}} \times r \Big). \end{equation*} It follows from Inequality \ref{eqn:path_hops_inequality} that \begin{equation*} \sum_{\{\hat{x}, \hat{y}\} \in R_{\hat{a},\hat{b}}} \Big[ d\Big(f^{-1}(\hat{x}), f^{-1}(\hat{y})\Big) \Big] \in \Omega \Big( |V(N)|^{\frac{1}{r}} \times r \Big). \end{equation*} Equation \ref{eqn:smallworld_prop} tells us that the mean number of edges in \(R_{\hat{a}, \hat{b}}\) is proportional to \(\log(|V(G)|)\). So, letting \(\bar{d}\) represent the mean case, \begin{equation*} \log(|V(G)|) \times \bar{d}\Big(f^{-1}(\hat{x}), f^{-1}(\hat{y})\Big) \in \Omega \Big( |V(N)|^{\frac{1}{r}} \times r \Big). \end{equation*} Rearranging and simplifying, we arrive at a lower bound of mean distance over the Manhattan network \(N\) traversed for a connection in the interaction network \(G\), \begin{equation*} \bar{d}\Big(f^{-1}(\hat{x}), f^{-1}(\hat{y})\Big) \in \Omega \Big( \frac{ |V(N)|^{\frac{1}{r}} \times r }{ \log(|V(N)|) } \Big). \end{equation*} Note that edges \(\{\hat{x},\hat{y}\}\) are not sampled uniformly from \(E(G)\). Instead, their sampling is weighted by edge betweenness centrality. [Footnote CAPON] 🔗 Wiring an Ideal Space-Filling Hierarchical Tree without Log-Time Physical Interconnects Consider, again, a set of computational nodes arranged in an \(r\)-dimensional mesh. In each dimension, physical interconnects run between immediately adjacent pairs of nodes. Let this physical hardware correspond to a graph \(N\) where \(V(N)\) represents computational nodes and \(E(N)\) represents physical interconnects between nodes. Suppose we have a small-world graph \(G\) with maximum vertex degree bounded by a finite constant \(m\). The vertices of this graph \(G\) are embedded one-to-one on \(N\) such that \(|V(G)| = |V(N)|\). (Again, along the lines of Figure RBACM.) Pick an arbitrary vertex \(a \in V(G)\). By the definition of a small-world graph, \begin{equation*} \frac{1}{|V(G)|} \sum_{v \in V(G)} d(a, v) \propto \log |V(G)|. \end{equation*} Because the degree of the graph \(G\) is bounded by \(m\), there must be a subset \(T \subseteq G\) that, for some branching factor \(k \geq 2\), forms a complete \(k\)-nary tree rooted at \(a\) such that the tree height of \(T\) is \(h \propto \log |V(G)|\) and \(|V(T)| \propto |V(G)|\). Legenstein and Maass [Legenstein and Maass, 2001] establish a lower bound for the length of wiring required to construct a \(k\)-nary tree with \(n\) nodes on a one-dimensional \(L_1\) grid, \begin{equation*} \Omega(n \log n). \end{equation*} In our case, this corresponds to the total number of hops over \(N\) to traverse every edge in \(T\). Because \(T \subseteq G\), \(\Omega(n \log n)\) is also a best-case lower bound for the total number of hops over \(N\) to traverse every edge in \(G\). Because the degree of vertices in \(V(T)\) is bounded by \(k\), \begin{equation*} |E(T)| \in O \Big( |V(T)| \Big).
\end{equation*} In fact, because the degree of vertices in \(V(G)\) is also bounded by \(m\), \begin{equation*} |E(G)| \in O \Big( |V(G)| \Big). \end{equation*} Let the wiring cost of an edge \(\{x, y\}\) in \(E(G)\) refer to the number of hops over \(N\) required to travel from \(x\) to \(y\). The best-case average wiring cost per edge can be computed as the best-case total wiring cost divided by the worst-case number of edges. For arbitrary \(\{x, y\} \in E(G)\), \begin{eqnarray*} \bar{d}(x, y) &\in& \Omega \Big( \frac{ |E(G)| \times \log |E(G)| }{ |E(G)| } \Big)\\ &\in& \Omega \Big( \log |E(G)| \Big). \end{eqnarray*} This result applies to all possible small-world graphs \(G\) embedded on a one-dimensional computational mesh. To tractably extend our analysis to three-dimensional meshes, rather than all small-world graphs we will specifically analyze the wiring cost of ideal space-filling trees [Kuffner and LaValle, 2009]. This construction efficiently distributes elements of \(G\) over \(N\) with respect to wiring cost. Although this construction potentially represents a lower bound on wiring cost, its optimality has not been concretely established. For three dimensions, the total length of wiring required as a function of the number of nodes is \begin{equation*} w_3(n) = \sum_{i=1}^{\log_8 n} \Big[ \frac{n}{8^i} \times \frac{3}{2} \times 8 \times 2^{i} \Big], \end{equation*} where the first factor counts how many wiring segments are drawn at level \(i\) and the remaining factors give the length of each. Because \begin{equation*} \lim_{n \rightarrow \infty} \frac{w_3(n)}{n} = 4, \end{equation*} we have \(w_3(n) \in \Theta \Big( n \Big)\). For an \(n\)-node tree, edge count \(|E(G)| \in \Theta \Big( |V(G)| \Big)\). So, average edge wiring cost remains constant as \(|V(G)|\) scales. Similar analysis concludes an equivalent result in the two-dimensional case. 🔗 Wiring an Ideal Space-Filling Hierarchical Tree with Log-Time Physical Interconnects Once more, we will work with a mesh of \(n\) physical hardware nodes corresponding to a graph \(N\) where \(V(N)\) represents computational nodes and \(E(N)\) represents physical interconnects between nodes. In this case, in addition to physical interconnects between spatially adjacent nodes we will assume a system of hierarchical physical interconnects that allows log-hop traversal between nodes. Figure ECOAI: Example construction of an ideal space-filling tree over a computational mesh. Suppose we have a small-world graph \(G\) with maximum vertex degree bounded by a finite constant \(m\). The vertices of this graph \(G\) are embedded one-to-one on \(N\) such that \(|V(G)| = |V(N)|\). We will specifically construct this graph as an ideal space-filling tree. Figure RBACM illustrates the relationship between \(N\) and \(G\). As a property of this construction, \(|E(N)| \propto |V(N)|\). In the best case, where edges in \(E(G)\) happen to correspond exactly to hierarchical physical interconnects \(E(N)\), the average hops required per edge is 1. However, in the worst case the average number of hops over \(N\) required per edge in \(E(G)\) is bounded by \(\log_m n\). What if, instead of routing all traffic through log-time hierarchical interconnects, we routed traffic between nodes less than \(\log_m n\) apart through local grid-mesh interconnects? In this case, we can bound worst-case total wiring cost by \begin{equation*} \sum_{l = 1}^{\log_2 \log_m n} \Big[ m^{\log_m n - l} \times 2^l \Big] + \log_m n \times \sum_{l = \log_2 \log_m n }^{ \log_m n} m^{\log_m n - l}, \end{equation*} where the first sum covers short edges (the number of edges at each level times their hop length over the local mesh) and the second covers long edges, each routed in at most \(\log_m n\) hops over the hierarchical interconnects.
For the space-filling tree on a one-dimensional mesh, we have \(m = 2\). Our upper bound on total wiring cost simplifies to \begin{equation*} w_2(n) = n \times \log_2 \log_2 n + \log_2 n \times (n \times \log_n 4 - 1). \end{equation*} Because \begin{equation*} \lim_{n \rightarrow \infty} \frac{ w_2(n) }{ n \times \log_2 \log_2 n } = 1, \end{equation*} we have \(w_2(n) \in \Theta \Big( n \times \log_2 \log_2 n \Big)\). Because edge count \(|E(G)| \in \Theta \Big( n \Big)\), we can establish the following upper bound on the mean wiring cost per edge \(\bar{W}(n)\) in \(E(G)\) for the one-dimensional case, \begin{equation*} \bar{W}(n) \in O\Big( \log_2 \log_2 n \Big). \end{equation*} What about the three-dimensional case? For the space-filling tree on a three-dimensional mesh, we have \(m = 8\). Our upper bound on total wiring cost simplifies to \begin{eqnarray*} w_8(n) =& & \frac{n}{3} \times (1 - 4^{ \log_2 \log_n 8 }) \\ & &+ \frac{ ( n \times 8^{ \log_2 \log_n 64 } - 1 ) \times \log_8 n }{7}. \end{eqnarray*} Because \begin{equation*} \lim_{n \rightarrow \infty} \frac{ w_8(n) }{ n } = \frac{1}{3}, \end{equation*} we have \(w_8(n) \in \Theta \Big( n \Big)\). Once more, because edge count \(|E(G)| \in \Theta \Big( n \Big)\), we can establish the following upper bound on the mean wiring cost per edge \(\bar{W}(n)\) for the three-dimensional case, \begin{equation*} \bar{W}(n) \in O \Big( 1 \Big). \end{equation*} 🔗 Wiring a Watts-Strogatz Graph Figure RBACM: Relationship between a computational mesh \(N\) and a small-world interaction network \(G\) constructed over \(N\). Suppose we have a small-world graph \(G\) constructed over a mesh \(N\) (as in Figure RBACM) using the Watts–Strogatz algorithm. In this procedure, vertices in \(V(G)\) corresponding to neighboring computational nodes in \(V(N)\) are wired together to form a lattice with mean degree \(k\). Then, for every vertex \(v \in V(G)\), each edge \(\{x, y\} \in E(G)\) containing \(v\) is reconfigured with probability \(0 < \beta < 1\) to connect \(v\) to a randomly-chosen node \(w \in V(G)\). Before reconfiguration, the total wiring cost of \(G\) with respect to hops over \(N\) was proportional to \(|V(G)|\). Recall from Equation \ref{eqn:mesh_prop} that, with mesh dimensionality \(r\), for a pair of arbitrary nodes \(a,b \in V(N)\) we have \(\bar{d}(a, b) \propto |V(N)|^{\frac{1}{r}} \times r\). So, after rewiring, the total wiring cost \(w\) of \(G\) with respect to hops over \(N\) can be calculated as \begin{equation*} \beta |V(G)| \times |V(N)|^{\frac{1}{r}} \times r + (1 - \beta) |V(G)|. \end{equation*} So, \(w \in \Omega \Big( |V(N)|^{\frac{r+1}{r}} \times r \Big)\). With bounded mean degree, the number of edges is proportional to the graph size: \(|E(G)| \propto |V(G)|\). So we can establish the following lower bound on mean wiring cost per edge of \(G\) with respect to hops over \(N\), \begin{equation*} \Omega \Big( |V(G)|^{\frac{1}{r}} \times r \Big). \end{equation*} Note that, with the introduction of log-time hierarchical hardware interconnects into \(N\), the mean wiring cost per edge of \(G\) with respect to hops over \(N\) is bounded in the worst case by \(O \Big( \log |V(G)| \Big)\).
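To give an empirical feel for these asymptotics, the following Monte Carlo sketch (ours, purely illustrative; not drawn from the analyses above) estimates mean node-to-node Manhattan distance on one-, two-, and three-dimensional meshes and compares it against a logarithmic hop bound of the kind a hierarchical interconnect could provide. Mesh sizes, sample counts, and the binary-tree bound are all assumptions chosen for the example.

```python
import math
import random

# Monte Carlo sketch: mean Manhattan distance between uniformly sampled node
# pairs on an r-dimensional mesh grows like |V(N)|^(1/r) * r, while a
# hierarchical (tree-like) interconnect bounds hop count by O(log |V(N)|).

def mean_manhattan_distance(side, r, samples=10_000):
    """Estimate mean L1 distance between random nodes of a side^r mesh."""
    total = 0
    for _ in range(samples):
        a = [random.randrange(side) for _ in range(r)]
        b = [random.randrange(side) for _ in range(r)]
        total += sum(abs(x - y) for x, y in zip(a, b))
    return total / samples

for r in (1, 2, 3):
    for n in (2**12, 2**18):          # chosen so side = n^(1/r) is an integer
        side = round(n ** (1 / r))
        mesh_hops = mean_manhattan_distance(side, r)
        tree_hops = 2 * math.log2(n)  # up-and-down traversal of a binary-tree overlay
        print(f"r={r} n={n}: mesh ~{mesh_hops:.1f} hops, hierarchical <= ~{tree_hops:.1f} hops")
```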
🔗 Conclusion Ackley's concept of indefinite scalability lays out an ambitious vision for the computational substrate of future open-ended evolution models. This vision has inspired researchers to incorporate thinking about underlying computational substrates into open-ended evolution theory and to consider how (or whether) available computational resources meaningfully constrain existing open-ended evolution models. For the time being, computational substrates for open-ended evolution limited purely by physical (or economic) concerns remain on the horizon, but indefinite scalability has already had concrete, and fruitful, impact on thinking around open-ended evolution. Although prevalent contemporary computational hardware (and the developer-facing software infrastructure that supports its use) lacks essential features necessary to achieve true indefinite scalability, such as fault tolerance and purely relative addressing, it does offer many cores designed to support low-latency interconnects. These high-performance computing resources are increasingly accessible. Concern over indefinite scalability should not dissuade the design and implementation of open-ended evolution models that accommodate the limitations of existing hardware and software infrastructure in order to make effective use of it. We highlight how log-time hardware interconnects might be exploited in practically scalable systems, but other model design or implementation tradeoffs may be relevant too (e.g., model dynamics or performance gains that rely on absolute instead of purely relative addressing). Realizing open-ended evolution models with truly vast computational substrates will require intermediate steps. Efforts to pursue practical scalability that wrings out contemporary, commercially-available hardware and software infrastructure will accelerate progress toward realizing truly indefinitely scalable systems. It seems conceivable that, coupled with innovative model design informed by open-ended evolution theory and effective model implementation in code, contemporary hardware systems and software infrastructure harbor the potential to realize paradigm-shifting advances in open-ended evolution. As was the case with deep learning, the tipping point of scale for model systems to exhibit qualitatively different behavior may be closer than we assume, perhaps only two or three orders of magnitude away. We highlighted how dynamic interactions within and between evolutionary individuals are crucial to open-ended evolution. Open-ended evolution models designed to scale computationally should realize these dynamic interactions within a framework that can be efficiently and readily mapped onto parallel computational implementation. Software tools that enable artificial life researchers to rapidly (and reusably) develop artificial life models have yielded substantial benefit to the field [Bohm and Hintze, 2017; Ofria et al., 2019]. Software tools or frameworks for parallel and distributed artificial life models that are versatile enough to support diverse use cases might help make practical scalability more practical. In particular, tools to collect data on distributed evolving systems (especially systematics tracking) seem likely to benefit the community. Here, we presented an extension of the DISHTINY framework as an example of an artificial life system that might hypothetically take advantage of log-time hardware interconnects.
We employed a very modest prototype parallel implementation that used shared-memory parallelism to distribute evolving — and interacting — populations of cells over two threads. We provided a facility for cells to establish long-distance interconnects over the computational mesh, which in future implementations could rely on hardware-level log-time interconnects. We have characterized two strains that adaptively employed these interconnects to synthesize spatially-distributed functionality. In the first case study, messaging and resource sharing over interconnects appeared to facilitate resource recruitment to multicell peripheries. In a second case study, interconnect messaging played an adaptive role in selectively moderating somatic reproduction. Incorporating simulation-level objects or physics in open-ended evolution models that explicitly correspond to hardware interconnects represents just one possible approach to exploiting them. Automatic detection of emergent long-distance interactions across a computational mesh and dynamically re-routing signaling traffic to use hierarchical interconnects might also be possible. Open-ended evolution models could also be entirely designed around hierarchical interconnects instead of a space-filling computational mesh. At the core, from both the practical and the indefinite scalability standpoints, efforts to scale computational models of open-ended evolution seek to realize the evolutionary generation of continually novel and increasingly complex artifacts. As we scale DISHTINY, we are interested in assembling metrics to quantify different aspects of complexity in the system such as organization [Goldsby et al., 2012], structure, and function [Goldsby et al., 2014]. We believe that open-ended model systems built on contemporary distributed computational substrates will prove fruitful tools to investigate questions about how biological complexity relates to fitness, genetic drift over elapsed evolutionary time, mutational load, genetic recombination (sex and horizontal gene transfer), ecology, historical contingency, and key innovations. 🔗 Let's Chat I would love to hear your thoughts on scaling artificial life simulations and studying major transitions in evolution!! I started a Twitter thread (right below) so we can chat. Pop on there and drop me a line or make a comment. 🔗 Cite This Post Moreno, M. A., & Ofria, C. (2020, June 25). Practical Steps Toward Indefinite Scalability: In Pursuit of Robust Computational Substrates for Open-Ended Evolution. https://doi.org/10.17605/OSF.IO/53VGH Moreno, Matthew A, and Charles Ofria. "Practical Steps Toward Indefinite Scalability: In Pursuit of Robust Computational Substrates for Open-Ended Evolution." OSF, 25 June 2020. Web. Moreno, Matthew A, and Charles Ofria. 2020. "Practical Steps Toward Indefinite Scalability: In Pursuit of Robust Computational Substrates for Open-Ended Evolution." OSF. June 25. doi:10.17605/OSF.IO/53VGH. @misc{Moreno_Ofria_2020, title={Practical Steps Toward Indefinite Scalability: In Pursuit of Robust Computational Substrates for Open-Ended Evolution}, url={osf.io/53vgh}, DOI={10.17605/OSF.IO/53VGH}, publisher={OSF}, author={Moreno, Matthew A and Ofria, Charles}, month={Jun}, year={2020}} 🔗 Footnotes Footnote WDWCM Why do we consider mean node-to-node hops per connection?
Although relativistic concerns do ultimately limit latency between spatially-distributed computational elements, with respect to contemporary hardware co-located at a single physical site at foreseeable scales, we expect node-to-node hops to represent an important bottleneck on system performance. At larger scales, consider the case where emergent connections are embodied via simulation state along the entire path of node-to-node hops traversed by the connection (along the lines of axon wiring in biological neural networks). If mean emergent connections per simulation element remain constant as the system scales, then mean node-to-node hops per connection relates to the amount of state required per node to represent connections that pass through it. (Specifically, if mean node-to-node hops per connection remains constant, then the amount of state required per node remains constant.) Finally, the asymptotic analyses performed on mesh networks without long-distance hierarchical interconnects can be interpreted in terms of Euclidean distance. (Potentially of interest with respect to relativistic limitations.) Footnote BOIBM Because over-interconnect broadcast messages activate all hardware units of a cell, we selected entire cells randomly and activated module 14 on all hardware units. Footnote TWAOI This was an independent replication of the initial experiment (performed as part of a wider screen) that singled out the case study strain for further analysis. Footnote CAPON Consider all pairings of nodes in a graph. Now, construct a multiset of paths that, for each possible node pairing, contains the shortest path between those two nodes. Edge betweenness is the fraction of the paths in this multiset that pass through a particular edge [Lu and Zhang, 2013]. 🔗 References Ackley, D. H. (2018). Digital protocells with dynamic size, position, and topology. The 2018 Conference on Artificial Life: A Hybrid of the European Conference on Artificial Life (ECAL) and the International Conference on the Synthesis and Simulation of Living Systems (ALIFE), pages 83–90. Ackley, D. H. and Cannon, D. C. (2011). Pursue robust indefinite scalability. In HotOS. Bassett, D. S. and Bullmore, E. T. (2017). Small-world brain networks revisited. The Neuroscientist, 23(5):499–516. Bedau, M. A., Snyder, E., and Packard, N. H. (1998). A classification of long-term evolutionary dynamics. In Artificial life VI, pages 228–237. Bennett III, F. H., Koza, J. R., Shipman, J., and Stiffelman, O. (1999). Building a parallel computer system for $18,000 that performs a half peta-flop per day. In Proceedings of the 1st Annual Conference on Genetic and Evolutionary Computation-Volume 2, pages 1484–1490. Citeseer. Blondeau, A., Cheyer, A., Hodjat, B., and Harrigan, P. (2012). Distributed network for performing complex algorithms. US Patent App. 13/443,546. Bohm, C. and Hintze, A. (2017). Mabe (modular agent based evolver): A framework for digital evolution research. In Artificial Life Conference Proceedings 14, pages 76–83. MIT Press. Channon, A. (2019). Maximum individual complexity is indefinitely scalable in geb. Artificial life, 25(2):134–144. Che, S., Li, J., Sheaffer, J. W., Skadron, K., and Lach, J. (2008). Accelerating compute-intensive applications with gpus and fpgas. In 2008 Symposium on Application Specific Processors, pages 101–107. IEEE. Clarke, L., Glendinning, I., and Hempel, R. (1994). The mpi message passing interface standard.
In Programming environments for massively parallel distributed systems, pages 213–218. Springer. Clune, J., Stanley, K. O., Pennock, R. T., and Ofria, C. (2011). On the performance of indirect encoding across the continuum of regularity. IEEE Transactions on Evolutionary Computation, 15(3):346–367. Dagum, L. and Menon, R. (1998). Openmp: an industry standard api for shared-memory programming. IEEE computational science and engineering, 5(1):46–55. Dolson, E. and Ofria, C. (2017). Spatial resource heterogeneity creates local hotspots of evolutionary potential. In Artificial Life Conference Proceedings 14, pages 122–129. MIT Press. Dolson, E. L. (2019). On the Constructive Power of Ecology in Open-Ended Evolving Systems. Michigan State University. Dolson, E. L., Vostinar, A. E., Wiser, M. J., and Ofria, C. (2019). The modes toolbox: Measurements of open-ended dynamics in evolving systems. Artificial life, 25(1):50–73. Eiben, A. and Smith, J. E. (2015). Introduction to evolutionary computing. Springer, Berlin. Foster, E. D. and Deardorff, A. (2017). Open science framework (osf). Journal of the Medical Library Association: JMLA, 105(2):203. Fox, R. J. and Bellwood, D. R. (2014). Herbivores in a small world: network theory highlights vulnerability in the function of herbivory on coral reefs. Functional Ecology, 28(3):642–651. Gaiteri, C., Ding, Y., French, B., Tseng, G. C., and Sibille, E. (2014). Beyond modules and hubs: the potential of gene coexpression networks for investigating molecular mechanisms of complex brain disorders. Genes, brain and behavior, 13(1):13–24. Gerhart, J. and Kirschner, M. (2007). The theory of facilitated variation. Proceedings of the National Academy of Sciences, 104(suppl 1):8582–8589. Gilbert, D. (2015). Artificial intelligence is here to help you pick the right shoes. Goldsby, H. J., Dornhaus, A., Kerr, B., and Ofria, C. (2012). Taskswitching costs promote the evolution of division of labor and shifts in individuality. Proceedings of the National Academy of Sciences, 109(34):13686–13691. Goldsby, H. J., Knoester, D. B., Ofria, C., and Kerr, B. (2014). The evolutionary origin of somatic cells under the dirty work hypothesis. PLOS Biology, 12(5):e1001858. Harding, S. and Banzhaf, W. (2007a). Fast genetic programming and artificial developmental systems on gpus. In 21st International Symposium on High Performance Computing Systems and Applications (HPCS'07), pages 2–2. IEEE. Harding, S. and Banzhaf, W. (2007b). Fast genetic programming on gpus. In European conference on genetic programming, pages 90–101. Springer. Hennessy, J. L. and Patterson, D. A. (2011). Computer architecture: a quantitative approach. Elsevier. Hodjat, B. and Shahrzad, H. (2013). Distributed evolutionary algorithm for asset management and trading. US Patent 8,527,433. Hornby, G., Globus, A., Linden, D., and Lohn, J. (2006). Automated antenna design with evolutionary algorithms. Space 2006. Hornby, G. S. (2005). Measuring, enabling and comparing modularity, regularity and hierarchy in evolutionary design. In Proceedings of the 7th annual conference on Genetic and evolutionary computation, pages 1729–1736. Huizinga, J., Stanley, K. O., and Clune, J. (2018). The emergence of canalization and evolvability in an open-ended, interactive evolutionary system. Artificial life, 24(3):157–181. Jouppi, N. P., Young, C., Patil, N., Patterson, D., Agrawal, G., Bajwa, R., Bates, S., Bhatia, S., Boden, N., Borchers, A., et al. (2017). In-datacenter performance analysis of a tensor processing unit. 
In Proceedings of the 44th Annual International Symposium on Computer Architecture, pages 1–12. Kirschner, M. and Gerhart, J. (1998). Evolvability. Proceedings of the National Academy of Sciences, 95(15):8420–8427. Kuffner, J. J. and LaValle, S. M. (2009). Space-filling trees. RI, Pittsburgh, PA, Tech. Rep. CMU-RI-TR-09-47 Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105. Lalejini, A. and Ofria, C. (2018). Evolving event-driven programs with signalgp. In Proceedings of the Genetic and Evolutionary Computation Conference, pages 1135–1142. Langdon, W. B. and Banzhaf, W. (2019). Continuous long-term evolution of genetic programming. In The 2018 Conference on Artificial Life: A Hybrid of the European Conference on Artificial Life (ECAL) and the International Conference on the Synthesis and Simulation of Living Systems (ALIFE), pages 388–395. MIT Press. Lehman, J. (2012). Evolution through the search for novelty. Lehman, J. and Stanley, K. O. (2011). Abandoning objectives: Evolution through the search for novelty alone. Evolutionary computation, 19(2):189–223. Lehman, J. and Stanley, K. O. (2012). Beyond open-endedness: Quantifying impressiveness. In Artificial Life Conference Proceedings 12, pages 75–82. MIT Press. Lehman, J. and Stanley, K. O. (2013). Evolvability is inevitable: Increasing evolvability without the pressure to adapt. PloS one, 8(4). Lipson, H. (2007). Principles of modularity, regularity, and hierarchy for scalable systems. Journal of Biological Physics and Chemistry, 7(4):125–128. Legenstein, R. A. and Maass, W. (2001). Optimizing the layout of a balanced tree. In Electronic Colloquium on Computational Complexity (ECCC), volume 8. Lu, L. and Zhang, M. (2013). Edge Betweenness Centrality, pages 647–648. Springer New York, New York, NY. Lynch, M. (2007). The frailty of adaptive hypotheses for the origins of organismal complexity. Proceedings of the National Academy of Sciences, 104(suppl 1):8597–8604. Markov, I. L. (2014). Limits on fundamental limits to computation. Nature, 512(7513):147–154. Miaoulis, G. and Plemenos, D. (2008). Intelligent Scene Modelling Information Systems, volume 181. Springer. Miikkulainen, R., Liang, J., Meyerson, E., Rawal, A., Fink, D., Francon, O., Raju, B., Shahrzad, H., Navruzyan, A., Duffy, N., et al. (2019). Evolving deep neural networks. In Artificial Intelligence in the Age of Neural Networks and Brain Computing, pages 293–312. Elsevier. Mollah, M. A., Faizian, P., Rahman, M. S., Yuan, X., Pakin, S., and Lang, M. (2018). A comparative study of topology design approaches for hpc interconnects. In 2018 18th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID), pages 392–401. IEEE. Moreno, M. A. and Ofria, C. (2019). Toward open-ended fraternal transitions in individuality. Artificial life, 25(2):117–133. Moreno, M. A. and Ofria, C. (2020). Practical steps toward indefinite scalability: In pursuit of robust computational substrates for open-ended evolution. DOI: 10.17605/OSF.IO/53VGH; URL: https://osf.io/53vgh. Moreno, M. A. and Ofria, C. (in prep.). Spatial constraints and kin recognition can produce open-ended major evolutionary transitions in a digital evolution system. https://doi.org/10.17605/OSF.IO/G58XK. Nguyen, A. M., Yosinski, J., and Clune, J. (2015). Innovation engines: Automated creativity and improved stochastic optimization via deep learning. 
In Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation, pages 959–966. Ofria, C., Dolson, E., Lalejini, A., Fenton, J., Moreno, M. A., Jorgensen, S., Miller, R., Stredwick, J., Zaman, L., Schossau, J., Gillespie, L., G, N. C., and Vostinar, A. (2019). Empirical. Packard, N., Bedau, M. A., Channon, A., Ikegami, T., Rasmussen, S., Stanley, K. O., and Taylor, T. (2019). An overview of open-ended evolution: Editorial introduction to the open-ended evolution ii special issue. Artificial life, 25(2):93–103. Soros, L. and Stanley, K. (2014). Identifying necessary conditions for open-ended evolution through the artificial life world of chromaria. In Artificial Life Conference Proceedings 14, pages 793–800. MIT Press. Stanley, K. O., Lehman, J., and Soros, L. (2017). Open-endedness: The last grand challenge you've never heard of. O'Reilly Online. Stanley, K. O. and Miikkulainen, R. (2003). A taxonomy for artificial embryogeny. Artificial Life, 9(2):93–130. Sutter, H. (2005). The free lunch is over: A fundamental turn toward concurrency in software. Dr. Dobb's journal, 30(3):202–210. Watts, D. J. and Strogatz, S. H. (1998). Collective dynamics of 'small-world' networks. Nature, 393(6684):440. 🔗 Acknowledgements Thanks to members of the DEVOLAB, in particular Santiago Rodriguez-Papa for help developing the DISHTINY web interface. Thanks also to Ryan Moreno for feedback and suggestions on the asymptotic scaling proofs. This research was supported in part by NSF grants DEB-1655715 and DBI-0939454, and by Michigan State University through the computational resources provided by the Institute for Cyber-Enabled Research. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1424871. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
Predictive validity of a novel non-invasive estimation of effective shunt fraction in critically ill patients Emma M. Chang, Andrew Bretherick, Gordon B. Drummond & J Kenneth Baillie Accurate measurement of pulmonary oxygenation is important for classification of disease severity and quantification of outcomes in clinical studies. Currently, tension-based methods such as P/F ratio are in widespread use, but are known to be less accurate than content-based methods. However, content-based methods require invasive measurements or sophisticated equipment that are rarely used in clinical practice. We devised two new methods to infer shunt fraction from a single arterial blood gas sample: (1) a non-invasive effective shunt (ES) fraction calculated using a rearrangement of the indirect Fick equation, standard constants, and a procedural inversion of the relationship between content and tension and (2) inferred values from a database of outputs from an integrated mathematical model of gas exchange (DB). We compared the predictive validity—the accuracy of predictions of PaO2 following changes in FIO2—of each measure in a retrospective database of 78,159 arterial blood gas (ABG) results from critically ill patients. In a formal test set comprising 9,635 pairs of ABGs, the median absolute error (MAE) values for the four measures were as follows: alveolar-arterial difference, 7.30 kPa; PaO2/FIO2 ratio, 2.41 kPa; DB, 2.13 kPa; and ES, 1.88 kPa. ES performed significantly better than other measures (p < 10^-10 in all comparisons). Further exploration of the DB method demonstrated that obtaining two blood gas measurements at different FIO2 provides a more precise description of pulmonary oxygenation. Effective shunt can be calculated using a computationally efficient procedure using routinely collected arterial blood gas data and has better predictive validity than other analytic methods. For practical assessment of oxygenation in clinical research, ES should be used in preference to other indices. ES can be calculated at http://baillielab.net/es. Hypoxia is the defining feature of respiratory failure. Accurate quantification of pulmonary oxygenation defect is essential to determine inclusion in clinical trials, to measure outcomes in research studies, and to observe changes in lung function in a clinical setting. In severely hypoxic patients, direct measurement of intrapulmonary shunt provides the most accurate quantification of an oxygenation defect [1]. Tension-based indices, including PaO2/FIO2 (P/F) ratio and alveolar-arterial (A-a) difference, have poor agreement with intrapulmonary shunt fraction [1–3]. The primary limitation in tension-based indices is the marked and non-linear change in PaO2 when FIO2 is changed [4]. Brochard and colleagues demonstrated that this can be predicted from a simple mathematical model [5]. The concept of predictive validity is a mathematical reality check for a clinical measure. For a given clinical measure, predictive validity quantifies the extent to which that measure predicts an unseen event. The intent is not to predict the future, but rather to provide a rigorous, unbiased test of how well a clinical measure is describing a real entity: the assumption is that whichever measure is closest to the truth should also provide the best prediction.
This approach, using mortality as the predicted event, was used in the development of consensus definitions for both acute respiratory distress syndrome (ARDS) [6] and sepsis [7]. A measure that accurately reflects the true state of a patient's lungs should not change markedly following a change in FIO2. Therefore, the prediction of a PaO2 following a change in FIO2, assuming that the measure of the oxygenation defect remains unaltered, is a valid assessment for predictive validity. We hypothesised that an easily understood, content-based oxygenation index may be obtainable from routinely-acquired arterial blood gas (ABG) data, without any need for additional invasive measurements. In order to assess different approaches, we quantified the predictive validity of P/F, A-a, and two new methods of estimating shunt fraction (effective shunt fraction (ES) and a database method (DB)) in a simple test: prediction of PaO2 following a change in FIO2 in a large retrospective cohort. Data source and filtering We used a set of 78,159 arterial blood gas samples taken between 2011 and 2016 from 6511 patients on the general intensive care unit (ICU) at the Royal Infirmary of Edinburgh. The unit admits adult patients, with predominantly emergency medical, trauma, and general surgery conditions—not elective cardiac or thoracic surgery or neurosurgery. We did not study patients who had ECMO. The samples were routine analyses: the FIO2 value was input by the clinician performing the analysis. The analysis machine was maintained by the clinical chemistry department and regularly calibrated against known standards. To obtain sample sets in which underlying pulmonary pathology was unlikely to change substantially between samples, we limited the selection of samples to pairs of ABGs that met the following inclusion criteria: (1) taken within a 3-h window, (2) taken from a mechanically ventilated patient, (3) where the FIO2 was reduced between the first and the second sample, and (4) where alveolar ventilation was stable (change in PaCO2 < 0.3 kPa). Derivation of effective shunt fraction ES expresses the shunt fraction that would be required to produce a given impairment in oxygenation, that is the proportion of cardiac output that would have to shunt in order to have this effect (i.e. to produce this degree of hypoxia). In clinical practice, it will almost never be the case that a given patient has pure shunt; ES is intended to provide an intuitive and consistent quantification of oxygenation impairment. Full details of the methods used are given in Additional file 1. Briefly, ES was first calculated from the blood gas results as follows. The shunt equation is usually expressed in the following way [8]: $$ \frac{Q_{S}}{Q_{T}} = \frac{C_{c'}O_{2} - C_{a}O_{2}}{C_{c'}O_{2} - C_{v}O_{2}} $$ All of the necessary variables can be easily calculated from routine clinical measurements, with the exception of CvO2. We applied the Fick principle for oxygen uptake (see Additional file 2), substituting CvO2 = CaO2 - VO2/Q, in order to replace this term: $$ \frac{Q_{S}}{Q_{T}} = \frac{C_{c'}O_{2} - C_{a}O_{2}}{C_{c'}O_{2} - C_{a}O_{2} + \frac{VO_{2}}{Q}} $$ After estimating PAO2 using the alveolar gas equation, arterial (CaO2) and end-capillary (Cc'O2) oxygen contents were derived using model equations from Dash and Bassingthwaighte [9], using measured pH and PaCO2 to estimate the PO2 at which Hb is 50% saturated (P50). Values for oxygen consumption (VO2) and cardiac output (Q) were set at single values in the physiological range (see Additional file 1).
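A minimal Python sketch may make the calculation pipeline concrete. It is only an illustration: the oxygen content model below is a Severinghaus-style approximation of the haemoglobin dissociation curve rather than the full Dash and Bassingthwaighte equations used by the authors, the P50 correction for pH and PaCO2 is omitted, and the haemoglobin, VO2, cardiac output and respiratory exchange ratio values are assumed placeholders rather than the study's calibrated constants.

```python
def alveolar_po2(fio2, paco2_kpa, rer=0.8, pb_kpa=101.3, ph2o_kpa=6.3):
    """Alveolar gas equation, all pressures in kPa."""
    return fio2 * (pb_kpa - ph2o_kpa) - paco2_kpa / rer

def sao2(po2_kpa):
    """Severinghaus-style approximation of haemoglobin O2 saturation (fraction).
    A stand-in for the Dash-Bassingthwaighte model; ignores pH/PaCO2 shifts of P50."""
    p = po2_kpa * 7.50062  # kPa -> mmHg
    return 1.0 / (23400.0 / (p ** 3 + 150.0 * p) + 1.0)

def o2_content(po2_kpa, hb_g_l=120.0):
    """Blood O2 content in ml O2 per litre (haemoglobin-bound + dissolved)."""
    bound = 1.34 * hb_g_l * sao2(po2_kpa)   # ml O2 / l
    dissolved = 0.225 * po2_kpa             # ~0.03 ml/l/mmHg solubility, converted to kPa
    return bound + dissolved

def effective_shunt(fio2, pao2_kpa, paco2_kpa,
                    vo2_ml_min=250.0, q_l_min=5.0, hb_g_l=120.0):
    """Effective shunt fraction Qs/Qt from a single ABG, using illustrative constants."""
    cc = o2_content(alveolar_po2(fio2, paco2_kpa), hb_g_l)  # end-capillary content
    ca = o2_content(pao2_kpa, hb_g_l)                        # arterial content
    av_diff = vo2_ml_min / q_l_min   # Fick principle: CaO2 - CvO2 in ml O2 / l
    return (cc - ca) / (cc - ca + av_diff)

if __name__ == "__main__":
    # Example: FIO2 0.6, PaO2 10 kPa, PaCO2 5 kPa
    print(f"ES = {effective_shunt(0.6, 10.0, 5.0):.2%}")
```

To reproduce the predictive-validity test itself, the predicted CaO2 at the new FIO2 would be converted back to a PaO2 by numerically inverting the content-tension relationship (for example with a root-finder over o2_content), which is the step the authors perform with the Dash and Bassingthwaighte model.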
Each measure was quantified for the earlier ABG in each pair. For ES, P/F and A-a, the FIO2 and PAO2 for the second ABG were used in the rearranged equations (derived in Additional file 1) to estimate the new PaO2, using the same value for the oxygenation index under inspection. For measurement of predictive validity in the ES method, a predicted CaO2, after change of FIO2, was calculated using FIO2 and PAO2 from the second ABG: $$ C_{a}O_{2} = C_{c'}O_{2} - \frac{Q_{S}/Q_{T} \times VO_{2}}{Q_{T} - Q_{S}} $$ This value of CaO2 was then converted to a predicted PaO2 value using the method of Dash and Bassingthwaighte [9]. In order to minimise the noise generated (affecting all measures) by changes in ventilation or circulation, we focused our study on patients whose FIO2 was being weaned downwards. The median absolute differences between predicted and observed PaO2 across all ABG pairs were taken as the predictive validity for each measure. For the DB method, input settings for an integrated mathematical model of gas exchange were identified which matched the first ABG. These were then extrapolated to the FIO2 and PaCO2 of the second ABG, and the mean PaO2 of all matching model runs taken to be the prediction (Fig. 1b). Further details of the mathematical model used for the DB method can be found in supplementary content (see Additional file 2). Fig. 1 a Boxplot showing distribution of absolute error for each measure in all samples, together with baseline distribution of pairs of ABGs in which FIO2 was unchanged (box shows mean ± one quartile, whiskers show range). b Range of possible FIO2-PaO2 combinations for conditions matching a single ABG result. c Range of possible results for conditions matching two ABG results at different FIO2. Ethical approval was obtained from the Scotland A Research Ethics Committee [16/SS/0209]. Software and statistical analyses All analysis was performed using Python 3.5.2 and scipy.stats version 0.18.1. A Kruskal-Wallis H test was used to determine the difference in error rate between the different measures. Mann-Whitney U tests with Bonferroni correction were used as a stringent post hoc test for pairwise comparisons. Comparison of oxygenation measures From the total set of 78,159 ABGs from 6511 patients, an initial test set was selected at random, containing 54,115 ABGs from 4558 patients. From this random sample, we selected a formal test set comprising 9635 pairs of ABGs which met the criteria listed above: pairs of ABGs taken from mechanically ventilated patients within a 3-h window, where the FIO2 was reduced between the first and the second sample, and where alveolar ventilation was stable. When we compared the predicted with the measured PaO2 values for the second ABG of these pairs, the median absolute error (MAE) values for the four measures considered were as follows: A-a, 7.30 kPa; P/F, 2.41 kPa; DB, 2.13 kPa; and ES, 1.88 kPa. ES had significantly superior predictive validity compared with all other measures (Table 1). Table 1 Pairwise comparisons between errors in oxygenation measures in test set (Mann-Whitney U test, Bonferroni correction) Effective shunt values in this population ranged from 0 to 63% (mean 16.1%, SD 8.6%). P/F values (kPa) ranged from 6.65 to 84.7 (mean 32.2, SD 12.5). A-a values (kPa) ranged from 0 to 81.2 (mean 22.7, SD 13.6). Validation of assumed values Three key assumed values are required for the calculation of ES: respiratory exchange ratio (RER), cardiac output (Q), and metabolic oxygen consumption (VO2).
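The statistical comparison described in the software paragraph above can be illustrated with synthetic stand-in data; the per-pair absolute errors below are randomly generated placeholders whose scale merely echoes the reported MAEs, and the Bonferroni adjustment is the plain multiply-by-number-of-comparisons form.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic absolute prediction errors (kPa) for each oxygenation measure;
# purely illustrative, not the study data.
errors = {
    "A-a": rng.gamma(shape=2.0, scale=3.6, size=9635),
    "P/F": rng.gamma(shape=2.0, scale=1.2, size=9635),
    "DB":  rng.gamma(shape=2.0, scale=1.05, size=9635),
    "ES":  rng.gamma(shape=2.0, scale=0.95, size=9635),
}

# Omnibus test: do the error distributions differ at all?
h, p = stats.kruskal(*errors.values())
print(f"Kruskal-Wallis H = {h:.1f}, p = {p:.3g}")

# Pairwise Mann-Whitney U tests with Bonferroni correction (6 comparisons for 4 measures).
names = list(errors)
n_comparisons = len(names) * (len(names) - 1) // 2
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        u, p = stats.mannwhitneyu(errors[names[i]], errors[names[j]],
                                  alternative="two-sided")
        p_adj = min(1.0, p * n_comparisons)  # Bonferroni-adjusted p value
        print(f"{names[i]} vs {names[j]}: U = {u:.0f}, adjusted p = {p_adj:.3g}")
```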
In order to prevent bias, we optimised estimation of these assumed parameters in a training set comprising 30% of the available ABGs (n=24,044), selected at random. Varying the three values over wide ranges (RER, 0.8 to 1.1; Q, 3 to 15 l.min−1; VO2, 0.15 to 1 l.min−1) had minimal effect on the value of effective shunt. Multiple ABGs under different conditions Although the DB method does not perform as well as the simpler and less computationally demanding ES method, it provides an opportunity to test the effect of obtaining multiple ABGs at different FIO2. The model takes standard physiological inputs, including pure shunt fraction, V/Q heterogeneity index, cardiac output, and FIO2 and returns blood gas results at steady state. The database of model results enables us to infer the possible physiological conditions that could give rise to a given ABG result. With one ABG, a wide range of possibilities remain (Fig. 1b); a second ABG at a different FIO2 substantially constrains the range of possible physiological states that describe a given patient (Fig. 1c). This is the first study, to our knowledge, comparing the predictive validity of non-invasive effective shunt fraction with tension-based measures. Our observations are consistent in magnitude and direction with previous work studying changes in measures of oxygenation in human participants [1–3]. The poor performance of A-a difference is consistent with the report by Cane and colleagues [1], who demonstrated that A-a was the least reliable measure compared with invasive measurements of Qs/Qt. The very low predictive validity of A-a in our study, together with previous work, leads us to conclude that this measure has no role in any context. Our study has several limitations. The ABG data itself was obtained from electronic records whereby FIO2 was entered by the treating clinician (nurse/doctor/nurse practitioner), which is a possible source of error. Importantly, this potential error applies equally to all measures of oxygenation. To compare the integrity of each oxygenation measure alone, we assumed that baseline physiological function is not altered by the change of FIO2. However, increases in FIO2 cause absorption atelectasis [10]. We have mitigated this by restricting our analysis to pairs of ABGs in which the FIO2 was decreasing. Since the reversal of absorption atelectasis is slower than the onset [10], and there are fewer sudden changes in oxygenation in this group, we expect that restricting our analysis to weaning patients will mitigate this source of noise. Marked changes in PaO2 may occur within a 3-h interval due to real changes in pulmonary function, for example due to recruitment, suction, diuresis, or change in posture. We therefore cannot draw any inference from the absolute value of the error in prediction, only a comparison between different methods. Noise caused by these and other factors is expected to limit the maximum possible accuracy of any prediction of PaO2. This minimum achievable error is reflected by the baseline values (Fig. 1a) showing the change in PaO2 between pairs of ABGs meeting the other selection criteria, with no change in FIO2. There is also a significant limitation in the concept of reducing the full complexity of pulmonary oxygenation to a single numerical value. All clinical measurements are subject to this limitation—they provide summary measurements that require informed interpretation. The lung is no different from any other system in this regard. 
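The way a second ABG constrains the database (DB) method can be sketched as a simple filtering-and-intersection step over pre-computed model runs. Everything below is hypothetical: the ModelRun fields, the toy response surface and the tolerances are invented for illustration and do not correspond to the authors' integrated gas exchange model.

```python
from dataclasses import dataclass

@dataclass
class ModelRun:
    # Hypothetical pre-computed steady-state model output.
    shunt: float        # pure shunt fraction
    vq_spread: float    # V/Q heterogeneity index
    fio2: float
    paco2_kpa: float
    pao2_kpa: float

def matches(run, fio2, paco2, pao2, tol=0.5):
    """A model run is compatible with an ABG if it reproduces it within tolerance."""
    return (abs(run.fio2 - fio2) < 0.025
            and abs(run.paco2_kpa - paco2) < tol
            and abs(run.pao2_kpa - pao2) < tol)

def compatible_states(db, abgs):
    """Keep only (shunt, V/Q spread) pairs whose runs can reproduce every supplied ABG."""
    states = None
    for fio2, paco2, pao2 in abgs:
        here = {(r.shunt, r.vq_spread) for r in db if matches(r, fio2, paco2, pao2)}
        states = here if states is None else states & here
    return states

# Toy database on a coarse grid; pao2 is a made-up monotone response so the example runs.
def toy_pao2(shunt, vq, fio2):
    return 75 * fio2 * (1 - shunt) - 8 * vq

db = [ModelRun(s / 100, v, f, 5.0, toy_pao2(s / 100, v, f))
      for s in range(0, 55, 5) for v in (0.25, 0.5, 0.75, 1.0) for f in (0.3, 0.4, 0.5, 0.6)]

# Pretend the patient truly has 20% shunt and V/Q spread 0.5:
abg1 = (0.6, 5.0, toy_pao2(0.20, 0.5, 0.6))
abg2 = (0.3, 5.0, toy_pao2(0.20, 0.5, 0.3))
print(len(compatible_states(db, [abg1])), "candidate states with one ABG")
print(len(compatible_states(db, [abg1, abg2])), "candidate states with both ABGs")
```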
Studies using the multiple inert gas elimination technique (MIGET) have confirmed that lung injury leads to substantial heterogeneity in the matching of ventilation to perfusion, which causes hypoxia without pure shunt [11, 12]. ES, by design, combines these mechanisms into a single value from a three-compartment model: the amount of pure shunt that would be needed to have a given effect on oxygenation. Since V/Q heterogeneity and shunt are separate inputs into the physiological model used to generate the database (see Additional file 1), the database approach is expected to handle this distinction better than the other measures. However, as shown in Fig. 1b, there is insufficient information in a single ABG to distinguish between shunt and V/Q heterogeneity. In contrast, with two ABGs taken at different settings of FIO2, the patient's oxygen responsiveness is quantified, greatly restricting the range of possible values for both shunt and V/Q heterogeneity (Fig. 1c). This double FIO2 test may resolve the uncertainty in quantifying pulmonary shunt but is, at present, computationally demanding. The striking superiority of ES in the context of critical illness may lead to an increase in clinical use. We support this, in part because the value itself is intuitive to critical care clinicians and is comparable across different health care systems and measurement units. Although it performs substantially better than other measures, it should be noted that ES is an imperfect measure and is not expected to be completely independent of extra-pulmonary factors, including FIO2, alveolar ventilation, and intracardiac shunt. The effective shunt fraction, a new, non-invasive method of estimating shunt, can be calculated on any ABG result, provided the FIO2 is known. The computation is fast and simple. Hence, the method could be retrospectively applied to previous studies that hold ABG data in a machine-readable format. Whilst the simplicity of the P/F ratio will continue to make it a popular choice for clinical use, the superior predictive validity of ES makes it a better choice where accurate quantification of oxygenation defect is necessary. An online calculator to compute the effective shunt fraction is available at: http://baillielab.net/es. Python code to calculate the effective shunt fraction is available from github: http://github.com/baillielab. Abbreviations A-a: Alveolar-arterial difference; ABG: Arterial blood gas; ARDS: Acute respiratory distress syndrome; CaO2: Arterial blood oxygen content; Cc'O2: End-capillary blood oxygen content; DB: Database method; ES: Effective shunt; FIO2: Fraction of inspired oxygen; ICU: Intensive care unit; MWu: Mann-Whitney U test; P/F: The ratio of PaO2 in the arterial blood to the therapeutic FIO2; MAE: Median absolute error; Qs/Qt: Shunt fraction; RER: Respiratory exchange ratio; V/Q: Ventilation/perfusion ratio; VO2: Metabolic oxygen consumption. References 1. Cane RD, Shapiro BA, Templin R, et al. (1988) Unreliability of oxygen tension-based indices in reflecting intrapulmonary shunting in critically ill patients. Crit Care Med 16:1243–5. 2. Drummond GB, Zhong NS (1983) Inspired oxygen and oxygen transfer during artificial ventilation for respiratory failure. Br J Anaesth 55:3–13. 3. Allardet-Servent J, Forel J-M, Roch A, et al. (2009) FIO2 and acute respiratory distress syndrome definition during lung protective ventilation. Crit Care Med 37:e4–6. https://doi.org/10.1097/CCM.0b013e31819261db. 4. Gowda MS, Klocke RA (1997) Variability of indices of hypoxemia in adult respiratory distress syndrome. Crit Care Med 25:41–5. 5. Aboab J, Louis B, Jonson B, et al. (2006) Relation between PaO2/FIO2 ratio and FIO2: a mathematical description. Intensive Care Med 32:1494–7. https://doi.org/10.1007/s00134-006-0337-9. 6. ARDS Definition Task Force, Ranieri VM, Rubenfeld GD, et al. (2012) Acute respiratory distress syndrome: the Berlin Definition. JAMA 307:2526–33. https://doi.org/10.1001/jama.2012.5669. 7. Shankar-Hari M, Phillips GS, Levy ML, et al. (2016) Developing a new definition and assessing new clinical criteria for septic shock: for the Third International Consensus Definitions for Sepsis and Septic Shock (Sepsis-3). JAMA 315:775–87. https://doi.org/10.1001/jama.2016.0289. 8. Benatar SR, Hewlett AM, Nunn JF (1973) The use of iso-shunt lines for control of oxygen therapy. Br J Anaesth 45:711–8. 9. Dash RK, Bassingthwaighte JB (2010) Erratum to: Blood HbO2 and HbCO2 dissociation curves at varied O2, CO2, pH, 2,3-DPG and temperature levels. Ann Biomed Eng 38:1683–701. https://doi.org/10.1007/s10439-010-9948-y. 10. Santos C, Ferrer M, Roca J, et al. (2000) Pulmonary gas exchange response to oxygen breathing in acute lung injury. Am J Respir Crit Care Med 161:26–31. https://doi.org/10.1164/ajrccm.161.1.9902084. 11. Duenges B, Vogt A, Bodenstein M, et al. (2009) A comparison of micropore membrane inlet mass spectrometry-derived pulmonary shunt measurement with Riley shunt in a porcine model. Anesth Analg 109:1831–5. https://doi.org/10.1213/ANE.0b013e3181bbc401. 12. Wagner PD (2008) The multiple inert gas elimination technique (MIGET). Intensive Care Med 34:994–1001. https://doi.org/10.1007/s00134-008-1108-6. This research was supported by The University of Edinburgh and NHS Lothian. JKB is grateful to acknowledge funding support from a Wellcome Trust Intermediate Clinical Fellowship (103258/Z/13/Z) and a Wellcome-Beit Prize (103258/Z/13/A), a BBSRC Institute Strategic Programme Grant to the Roslin Institute, and the UK Intensive Care Society. A.B. is grateful to acknowledge funding from the Edinburgh Clinical Academic Track and funding from the Wellcome Trust (204979/Z/16/Z). Emma M. Chang and Andrew Bretherick contributed equally to this work. Affiliations: Anaesthesia, Critical Care and Pain Medicine, Royal Infirmary of Edinburgh, Edinburgh, EH16 4SA, UK (Emma M. Chang, Andrew Bretherick, Gordon B. Drummond, J Kenneth Baillie); MRC Institute of Genetics and Molecular Medicine, The University of Edinburgh, Edinburgh, EH4 2XU, UK (Andrew Bretherick); The Roslin Institute and Royal (Dick) School of Veterinary Studies, University of Edinburgh, Easter Bush, Edinburgh, EH25 9RG, UK (J Kenneth Baillie). JKB and EMC designed the study and conducted the computational analysis. JKB and AB wrote the computational model of gas exchange. GBD contributed to the conception of the hypothesis and interpretation of the results. JKB, EMC, and AB wrote the manuscript with assistance from GBD. All authors commented on or contributed to the final manuscript. All authors read and approved the final manuscript. Correspondence to J Kenneth Baillie. Additional file 1 Supplementary information. Includes the derivation of oxygenation measures and method of predicting PaO2, as well as optimisation of assumed variables using the test dataset. (PDF 227 kb) Additional file 2 Integrated model of gas exchange.
Structure of an integrated computational model of oxygen delivery which was used to generate a database of model results for inferring the possible physiological conditions that could give rise to any given ABG result ("DB" method in the main manuscript). (PDF 80 kb) Chang, E.M., Bretherick, A., Drummond, G.B. et al. Predictive validity of a novel non-invasive estimation of effective shunt fraction in critically ill patients. ICMx 7, 49 (2019). doi:10.1186/s40635-019-0262-1
CommonCrawl
Non-linear flow modes of identified particles in Pb-Pb collisions at $$ \sqrt{s_{\mathrm{NN}}} $$ = 5.02 TeV S. Acharya, The ALICE collaboration, D. Adamová, A. Adler, J. Adolfsson, M. M. Aggarwal, G. Aglieri Rinella, M. Agnello, N. Agrawal, Z. Ahammed, S. Ahmad, S. U. Ahn (+998 others) 2020 Journal of High Energy Physics https://web.archive.org/web/20201107091954/https://link.springer.com/content/pdf/10.1007/JHEP06(2020)147.pdf The $p_{\mathrm{T}}$-differential non-linear flow modes, $v_{4,22}$, $v_{5,32}$, $v_{6,33}$ and $v_{6,222}$ for $\pi^{\pm}$, $K^{\pm}$, $K^{0}_{S}$, $p+\bar{p}$, $\Lambda+\bar{\Lambda}$ and the $\phi$-meson have been measured for the first time at $\sqrt{s_{\mathrm{NN}}}$ = 5.02 TeV in Pb-Pb collisions with the ALICE detector at the Large Hadron Collider. The results were obtained with a multi-particle technique, correlating the identified hadrons with reference charged particles from a different pseudorapidity region. These non-linear observables probe the contribution from the second and third order initial spatial anisotropy coefficients to higher flow harmonics. All the characteristic features observed in previous $p_{\mathrm{T}}$-differential anisotropic flow measurements for various particle species are also present in the non-linear flow modes, i.e. increase of magnitude with increasing centrality percentile, mass ordering at low $p_{\mathrm{T}}$ and particle type grouping in the intermediate $p_{\mathrm{T}}$ range. Hydrodynamical calculations (iEBE-VISHNU) that use different initial conditions and values of shear and bulk viscosity to entropy density ratios are confronted with the data at low transverse momenta. These calculations exhibit a better agreement with the anisotropic flow coefficients than the non-linear flow modes. These observations indicate that non-linear flow modes can provide additional discriminatory power in the study of initial conditions as well as new stringent constraints to hydrodynamical calculations. Open Access, Copyright CERN, for the benefit of the ALICE Collaboration. Article funded by SCOAP3. This leads to the so-called number of constituent quarks (NCQ) scaling, observed to hold at an approximate level of ±20% for $p_{\mathrm{T}} > 3$ GeV/$c$ [18, 39, 40, 61]. The measurements of non-linear flow modes in different collision centralities could pose a challenge to hydrodynamic models and have the potential to further constrain both the initial conditions of the collision system and its transport properties, i.e. $\eta/s$ and $\zeta/s$ (the ratio between bulk viscosity and entropy density) [54, 62]. The $p_{\mathrm{T}}$-dependent non-linear flow modes of identified particles, in particular, allow the effect of late-stage interactions in the hadronic rescattering phase, as well as the effect of particle production via the coalescence mechanism, to be tested through the development of the mass ordering at low $p_{\mathrm{T}}$ and the particle type grouping in the intermediate $p_{\mathrm{T}}$ region, respectively [33, 42]. In this article, we report the first results of the $p_{\mathrm{T}}$-differential non-linear flow modes, i.e. $v_{4,22}$, $v_{5,32}$, $v_{6,33}$ and $v_{6,222}$ for $\pi^{\pm}$, $K^{\pm}$, $K^{0}_{S}$, $p+\bar{p}$, $\Lambda+\bar{\Lambda}$ and $\phi$ measured in Pb-Pb collisions at a centre of mass energy per nucleon pair $\sqrt{s_{\mathrm{NN}}}$ = 5.02 TeV, recorded by the ALICE experiment [63] at the LHC. The detectors and the selection criteria used in this analysis are described in sections 2 and 3, respectively. The analysis methodology and technique are presented in section 4. In this article, the identified hadron under study and the charged reference particles are obtained from different, non-overlapping pseudorapidity regions.
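As a generic illustration of the kind of azimuthal correlation analysis involved, the sketch below estimates a simple two-particle elliptic flow coefficient v2{2} from two pseudorapidity-separated sub-events using Q-vectors. This is a textbook construction on toy events, not the ALICE collaboration's analysis code, and it does not implement the non-linear modes v4,22, v5,32, v6,33 or v6,222 themselves.

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_event(n_particles=500, v2=0.1, psi=None):
    """Sample azimuthal angles with dN/dphi ~ 1 + 2 v2 cos(2(phi - psi)) by accept-reject."""
    psi = rng.uniform(0, np.pi) if psi is None else psi
    phi = []
    while len(phi) < n_particles:
        cand = rng.uniform(0, 2 * np.pi, size=4 * n_particles)
        keep = rng.uniform(0, 1 + 2 * abs(v2), size=cand.size) < 1 + 2 * v2 * np.cos(2 * (cand - psi))
        phi.extend(cand[keep][: n_particles - len(phi)])
    return np.array(phi)

def q_vector(phi, n=2):
    """Flow Q-vector of order n for one sub-event."""
    return np.sum(np.exp(1j * n * phi))

# Two-sub-event estimate: <Q2_A Q2_B*> / (M_A M_B) averaged over events approaches v2^2
# when non-flow correlations between the sub-events are negligible.
num, den = 0.0, 0.0
for _ in range(200):
    psi = rng.uniform(0, np.pi)
    a, b = toy_event(psi=psi), toy_event(psi=psi)  # shared symmetry plane, distinct eta ranges
    num += (q_vector(a) * np.conj(q_vector(b))).real
    den += len(a) * len(b)
v2_est = np.sqrt(num / den)
print(f"v2{{2}} estimate: {v2_est:.3f} (input 0.1)")
```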
The azimuthal correlations not related to the common symmetry plane (known as non-flow), including the effects arising from jets, resonance decays and quantum statistics correlations, are suppressed by using multi-particle correlations as explained in section 4 and the residual effect is taken into account in the systematic uncertainty as described in section 5. All coefficients for charged particles were measured separately for particles and anti-particles and were found to be compatible within statistical uncertainties. The measurements reported in section 6 are therefore an average of the results for both charges. The results are reported within the pseudorapidity range |η| < 0.8 for different collision centralities between 0-60% range of Pb-Pb collisions. 2 Experimental setup ALICE [63, 64] is one of the four large experiments at the LHC, particularly designed to cope with the large charged-particle densities present in central Pb-Pb collisions [65]. By convention, the z-axis is parallel to the beam direction, the x-axis is horizontal and points towards the centre of the LHC, and the y-axis is vertical and points upwards. The apparatus consists of a set of detectors located in the central barrel, positioned inside a solenoidal magnet which generates a maximum of 0.5 T field parallel to the beam direction, and a set of forward detectors. The Inner Tracking System (ITS) [63] and the Time Projection Chamber (TPC) [66] are the main tracking detectors of the central barrel. The ITS consists of six layers of silicon detectors employing three different technologies. The two innermost layers, positioned at r = 3.9 cm and 7.6 cm, are Silicon Pixel Detectors (SPD), followed by two layers of Silicon Drift Detectors (SDD) (r = 15 cm and 23.9 cm). Finally, the two outermost layers are double-sided Silicon Strip Detectors (SSD) at r = 38 cm and 43 cm. The TPC has a cylindrical shape with an inner radius of about 85 cm and an outer radius of about 250 cm.
CommonCrawl
A new secure transmission scheme between senders and receivers using HVCHC without any loss Saad Almutairi1, Manimurugan S ORCID: orcid.org/0000-0003-1837-67972 & Majed Aborokbah1 This paper presents a novel secure medical image transmission scheme using hybrid visual cryptography and Hill cipher (HVCHC) between sender and receiver. The gray scale medical images have been considered as a secret image and split into different shares by visual cryptography (VC) encryption process. The split shares are once again encoded by Hill cipher (HC) encode process for improving the efficiency of the proposed method. In this process, the encrypted medical image (shares) pixels are converted as characters based on the character determination (CD) and lookup tables. In result, a secret image is converted into characters. These characters are sent to the receiver/authenticated person for the reconstruction process. In receiver side, the ciphertext has been decoded by HC decode process for reconstructing the shares. The reconstructed shares are decrypted by the VC decryption process for retaining the original secret medical image. The proposed algorithm has provided better CC, less execution time, higher confidentiality, integrity, and authentication (CIA). Therefore, using this proposed method, cent percent of the original secret medical image can be obtained and the secret image can be prevented from the interception of intruders/third parties. Cryptography is a method which is used to convert an original message into cipher message. There are many cryptographic methods available for encrypting the pain information/text. However, in this paper, we have used two foremost cryptography techniques of visual cryptography (VC) and Hill cipher (HC). VC is one of the powerful cryptosystems which converts a secret image into different secret shares. It was invented/proposed by Naor and Shamir [1] in 1994. The advantage of this system is that, while seeing the shares, original information cannot be identified and during decryption process, all shares must be presented. HC is also one of the cryptography methods; the original information is converted to characters [2, 3]. In 1929, it was introduced by Hill. We can define it in another way that a system of cryptography in which the plaintext is divided into sets of 'n' numbers of letters, each of which is replaced by a set of 'n' number of cipher letters, is called a polygraphic system. Zhuhong Shao, YuanyuanShang, RuiZeng, HuazhongShu, GouenouCoatrieux, and Jiasong Wu had introduced a novel robust watermarking scheme color image copyright protection. This scheme is based on the VC and quaternion-type moment invariants. They used VC for constructing the ownership share. Later, the ownership share is registered and it is responsible for authentication. In result, their proposed scheme provides a better robustness against different attacks [4]. S. Cimato, R. De Prisco, and A. De Santis had introduced a (k, n) colored-black-and-white visual cryptography scheme (CBW-VCS), which adopts colored pixels in shadow images to share a black and white secret image [5,6,7,8,9,10]. In connection with the same, Ching-NungYang, Li-Zhe Sun, and Song-Ruei Cai proposed to extend conventional BW-EVCS to the CBW-EVCS. It has two main divisions, one constructed (k, n)-CWB-EVCSs, and another one is that all constructions prove to satisfy security, contrast, and cover image conditions [11]. S Manimurugan and Porkumaran introduced a novel encryption scheme based on the visual cryptography. 
In this proposed method, the given medical image had encrypted and compressed before the transmission. In result, they claimed that the proposed technique had provided double encryptions [12, 13]. In 2016, Tayebe Amiri and Mohsen Ebrahimi Moghaddam proposed VC-based watermarking scheme for multiple cover images. This scheme concealed watermarking without modifying the cover image. To develop the same, they used discrete wavelet transform (DWT), singular value decomposition (SVD), and scale invariant feature transform (SWIFT). In experimental results, they showed the method robustness versus various attacks, especially rotation and scaling [14]. Xuehu Yan, Shen Wang, and Xiamu Niu had proposed a general threshold progressive visual secret sharing (PVSS) construction method from a case (2, n) with unexpanded shares in 2016. This scheme had the feature of (k, n) threshold with no pixel expansion, which could be loss-tolerant and control access for a wider application. Based on the proposed construction method, a new threshold PVSS scheme was constructed. They claimed that the proposed method performance was superior to relative approaches [15]. In 2016, Guangyu Wang, Feng Liu, and Wei Qi Yan had conducted an experiment embedding Braille into grayscale and halftone images as well as VC shares. The result indicated that the embedding of Braille had a little impact on VC secret revealing and enhances the security of VC shares [16]. S Manimurugan and his teammates presented various visual cryptography techniques related to the secure image transmission without the pixel expansions [17,18,19,20,21]. They achieved the good signal ratios of the reconstructed image. A.V.N. Krishna and K. Madhuravani in 2012 introduced a modified Hill cipher using randomized approach. In this proposed technique, the plain text is divided into equal sized blocks. The output of hill cipher is randomized to generate multiple ciphertexts for one plain text [22]. In 2013, Suman Chandrasekhar, Akash H.P, Adarsh.K, and Smitha Sasi implemented a second level (advanced Hill cipher) of encryption using permutation approach, which made the cipher highly secure. This encryption scheme is highly reliable as it uses tamper detection of the ciphertext ensuring successful decryption of the cipher [23]. M. Nordin A. Rahman et al. all proposed a robust Hill algorithm (Hill++). The algorithm was an extension of the Affine Hill cipher (AHC) [24]. A random matrix key was introduced as an extra key for encryption. Furthermore, an involuntary matrix key formulation was also implemented in the proposed algorithm. D.C. Mishra, R.K. Sharma, Rakesh Ranjan, and M. Hanmandlu had introduced a cryptosystem using AHC for color images in the year of 2015. In this approach, they considered multiplicative keys of AHC from SLn(Fq) domain and additive keys of AHC from Mn(Fq) domain, which provides exorbitant key space for the proposed system [25]. Bibhudendra Acharya and his teammates introduced a modified Hill cipher for solving the drawbacks of the conventional scheme by iterations and interlacing. They claimed that this approach performed well than the conventional Hill cipher [26]. Adinarayana Reddy K and his co-research workers had proposed a prime circulant matrix which have been shared as a secret key and a non-singular matrix G. It uses a public key such that the determinant of coefficient matrix Gc is zero [27]. 
In 2014, Neha Sharma and Sachin Chirgaiya had proposed a new variant of Hill cipher, to find the decryption of the ciphertext even when the key matrix was non-invertible [28]. In above statements, many authors had proved different image encryption techniques. However, each method has its own merits and demerits. This paper presents a novel secure medical image transmission scheme using hybrid visual cryptography and Hill cipher (HVCHC). The entire work has been divided into seven sections in this paper. Section 1 discusses the literature review of conventional VC and HC. Sections 2 and 3 describe the proposed encryption and decryption techniques. Section 4 considers the experimental results and the conclusion is discussed in Section 5. Sections 6 and 7 deals with the acknowledgement and references. Proposed HVCHC encryption process The main aim of this proposed system is to provide a secure transmission to avoid hacker activities in telemedicine or public networks. In order to fulfill the same, this paper has introduced a HVCHC cryptographic system for medical image transmission. The proposed scheme of HVCHC encryption process is described in this section. It has been classified into four major divisions of sub-band creation, 8-bit conversion, permutation, and substitution processes as shown in Fig. 1. HVCHC encryption process The first three processes are based on the VC and substitution is based on the HC. The main advantage of this encryption process is that the medical image can be converted into ciphertext of characters; no pixel expansion was performed in VC. In order to ensure the integrity of the data, a header is created and pixels are swapped as much as possible within the image. The header contains ciphertext information. Sub-band creation process The grayscale medical image is considered as an input for this process. Initially, the given medical image \( {\sum}_{i,j=0}^{\mathrm{m},\mathrm{n}}{M}_{\left(i,j\right)} \) splits into 2 × 2 sub-bands. In result, four equal sub-bands of \( {\sum}_{i,j=0}^{m/2,n/2}{M}_{\left(i,j\right)},{\sum}_{i=1,\kern0.5em j=\frac{n}{2}+1}^{\frac{m}{2},n}{M}_{\left(i,j\right)},\kern0.5em {\sum}_{i=\frac{m}{2}+1,j=1}^{m,\frac{n}{2}}{M}_{\left(i,j\right)} \), and \( {\sum}_{i=\frac{m}{2}+1,j=\frac{n}{2}+1}^{m,n}{M}_{\left(i,j\right)} \) can be obtained, in Eq. 1. Sub-band creation, 8-bit conversion, and permutation processes have an important role in order to generate the secret shares by VC. Equation 2 states the segregated sub-bands of A1, A2, A3, and A4. $$ {\sum}_{i,j=0}^{m,n}{M}_{\left(i,j\right)}={\sum}_{i,j=0}^{m/2,n/2}{M}_{\left(i,j\right)}\oplus {\sum}_{i=1,j=\frac{n}{2}+1}^{\frac{m}{2},n}{M}_{\left(i,j\right)}\oplus {\sum}_{i=\frac{m}{2}+1,j=1}^{m,\frac{n}{2}}{M}_{\left(i,j\right)}\oplus {\sum}_{i=\frac{m}{2}+1,j=\frac{n}{2}+1}^{m,n}{M}_{\left(i,j\right)} $$ $$ {\sum}_{i,j=0}^{m,n}{M}_{\left(i,j\right)}={A}_1\oplus {A}_2\oplus {A}_3\oplus {A}_4 $$ There are certain reasons why this process has been incorporated. When the image splits into various sub-bands, it is easy to swap the pixels/interchange the pixel's position as much as possible within the image. On the other hand, the complexity of the algorithm is improved. 8-Bit conversion process The second process of HVCHC is 8-bit conversion process. In this process, every segregated sub-band pixels are converted into 8-bit binary value \( {\sum}_{n=1}^{\mathrm{Max}}{\mathrm{Con}}_{8\mathrm{bit}}\ \left({A}_{\mathrm{n}}\right) \), the Max represents the maximum number of sub-bands. 
It has been illustrated in Eq. 3. In result \( {A}_1^{\prime },{A}_2^{\prime },{A}_3^{\prime } \), and \( {A}_4^{\prime } \) sub-bands are generated from the conversion process. $$ {\sum}_{n=1}^{\mathrm{Max}}{\mathrm{Con}}_{8\mathrm{bit}}\ \left({A}_{\mathrm{n}}\right)={\sum}_{n=1}^1{\mathrm{Con}}_{8\mathrm{bit}}\ \left({A}_{\mathrm{n}}\right)+{\sum}_{n=2}^2{\mathrm{Con}}_{8\mathrm{bit}}\ \left({A}_{\mathrm{n}}\right)+{\sum}_{n=3}^3{\mathrm{Con}}_{8\mathrm{bit}}\ \left({A}_{\mathrm{n}}\right)+{\sum}_{n=4}^4{\mathrm{Con}}_{8\mathrm{bit}}\ \left({A}_{\mathrm{n}}\right) $$ $$ {\sum}_{n=1}^{\mathrm{Max}}{\mathrm{Con}}_{8\mathrm{bit}}\ \left({A}_{\mathrm{n}}\right)={A}_1^{\prime}\oplus {A}_2^{\prime}\oplus {A}_3^{\prime}\oplus {A}_4^{\prime } $$ Permutation process The third process is permutation process. There are five different levels in this process. In level-1, every binary sub-bands \( {A}_1^{\prime },\kern0.5em {A}_2^{\prime },{A}_3^{\prime } \), and \( {A}_4^{\prime } \) bits are separated based on an odd and even positions, illustrated in Fig. 3 and Eq. 5. The \( {\sum}_{i,j=1}^{m,n}{A}_1^{\prime } \) is segregated into odd positioned bits \( {\sum}_{i,j=1}^{m,n}{B}_1 \) and even positioned bits \( {\sum}_{i,j=1}^{m,n}{B}_2 \). Similarly, \( {\sum}_{i,j=1}^{m,n}{A}_2^{\prime } \) is divided into \( {\sum}_{i,j=1}^{m,n}{B}_3 \) and \( {\sum}_{i,j=1}^{m,n}{B}_4 \); \( {\sum}_{i,j=1}^{m,n}{A}_3^{\prime } \) is divided into \( {\sum}_{i,j=1}^{m,n}{B}_5 \) and \( {\sum}_{i,j=1}^{m,n}{B}_6 \); \( {\sum}_{i,j=1}^{m,n}{A}_4^{\prime } \) is divided into \( {\sum}_{i,j=1}^{m,n}{B}_7 \) and \( {\sum}_{i,j=1}^{m,n}{B}_8 \). $$ {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{A}}_1^{\prime}\oplus {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{A}}_2^{\prime}\oplus {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{A}}_3^{\prime}\oplus {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{A}}_4^{\prime }=\left[{\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{B}}_1+{\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{B}}_2\right]\oplus \left[{\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{B}}_3+{\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{B}}_4\right]\oplus \left[{\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{B}}_5+{\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{B}}_6\right]\oplus \left[{\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{B}}_7+{\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{B}}_8\right] $$ In level-2, the odd sub-bands of \( {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{B}}_1 \), \( {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{B}}_3 \), \( {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{B}}_5 \), and \( {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{B}}_7 \) are combined as \( {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{C}}_1 \) and an even sub-bands of \( {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{B}}_2 \), \( {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{B}}_4 \), \( {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{B}}_6 \), and \( {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{B}}_8 \) are combined as \( {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{C}}_2 \) in Eqs. 6 and 7. 
In level-3, \( {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{C}}_1 \) and \( {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{C}}_2 \) are once again separated based on an odd and even positions \( {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{D}}_1 \), \( {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{D}}_2 \), \( {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{D}}_3 \), and \( {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{D}}_4 \) in Eqs. 8 and 9. $$ {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{B}}_1\oplus {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{B}}_3\oplus {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{B}}_5\oplus {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{B}}_7={\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{C}}_1 $$ $$ {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{C}}_1=\left[{\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{D}}_1+{\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{D}}_2\right] $$ In level-4, the above sub-bands are merged based on odd and even. \( {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{D}}_1 \) and \( {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{D}}_3 \) are combined as \( {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{E}}_1 \). Likewise, \( {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{D}}_2 \) and \( {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{D}}_4 \) are combined as \( {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{E}}_2 \). Finally, \( {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{E}}_1 \) and \( {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{E}}_2 \) are combined as a single sub-band \( {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}\mathrm{P} \) in level-5. After these steps, in \( {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}\mathrm{P} \) every 8-bits are converted into corresponding decimal value \( {\sum}_{\mathrm{n}=1}^{\mathrm{Max}}{\mathrm{Con}}_{\mathrm{b}2\mathrm{D}}\ \left({\mathrm{P}}_{\mathrm{n}}\right) \). In result, all binary subbands are converted into single secret share Per(i, j). These secret share pixels vary from 0 to 255, given in Eqs. 10–13. $$ {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{D}}_1\oplus {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{D}}_3={\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{E}}_1 $$ $$ {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{E}}_1\oplus {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}{\mathrm{E}}_2={\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}\mathrm{P} $$ $$ {\sum}_{\mathrm{n}=1}^{\mathrm{Max}}{\mathrm{Con}}_{\mathrm{b}2\mathrm{D}}\ \left({\mathrm{P}}_{\mathrm{n}}\right)={\mathrm{P}\mathrm{er}}_{\left(\mathrm{i},\mathrm{j}\right)} $$ Due to the substitution process of HC, the different secret shares are combined as single share. The main advantage of this process is that every sub-bands pixel is converted as binary bits and the same bits are interchanged as much as possible within the image. Finally, after the swapping process, every 8-bits are converted into corresponding decimal value. This process clearly states that the every pixel is encrypted without loss and pixel expansion. 
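A compact NumPy sketch of the visual cryptography side of the encryption (quadrant split, 8-bit conversion and the level-1 odd/even bit separation) may help; it is a loose illustration of the idea rather than the authors' MATLAB implementation, and all helper names are invented for this example.

```python
import numpy as np

def split_subbands(img):
    """Split an m x n image into four equal quadrants A1..A4 (m and n assumed even)."""
    m, n = img.shape
    return (img[:m//2, :n//2], img[:m//2, n//2:],
            img[m//2:, :n//2], img[m//2:, n//2:])

def to_bits(subband):
    """Convert every 8-bit pixel of a sub-band into its binary representation."""
    return np.unpackbits(subband.astype(np.uint8).reshape(-1, 1), axis=1)  # shape (pixels, 8)

def odd_even_split(bits):
    """Level-1 permutation: separate odd-positioned and even-positioned bits."""
    return bits[:, 0::2], bits[:, 1::2]  # bit positions 1,3,5,7 and 2,4,6,8

# Example with a small random stand-in "image"
img = np.random.default_rng(0).integers(0, 256, size=(8, 8), dtype=np.uint8)
for k, sb in enumerate(split_subbands(img), start=1):
    odd, even = odd_even_split(to_bits(sb))
    print(f"A{k}: {sb.size} pixels -> {odd.size + even.size} permuted bits")
```

The remaining permutation levels regroup these bit planes before every 8 bits are packed back into a pixel of the single share Per(i, j); np.packbits would be the natural counterpart of np.unpackbits for that final step.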
To improve the proposed scheme strength and complexity of the single secret share, \( {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}\mathrm{P} \) is encoded by substitution process of HC. Substitution process \( {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}\mathrm{Per} \) pixels are replaced by alphabet characters based on the character determination (CD) table, given in Table 1 and Eq. 14. The HC substitution process is a symmetric encryption technique, where the secret letters are encrypted into ciphers. It is also called a polygraphic system. In this process, the character information \( {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}\mathrm{c}\left(\mathrm{Per}\right) \) is converted into encoded information (cipher character) with the support of Table 2. Table 1 Character determination table Table 2 Lookup table In next step, the generated characters are encoded by proposed encode process as given in Eq. 15. In this process, \( {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}\mathrm{e} \) text is considered as a secret text S. The secret text S is encrypted as a ciphertext C using an encryption key κe in Eq. 15. After the substitution process, the ciphertext \( {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}\mathrm{C} \) is sent along with the encryption key κe to the other end/authenticated person for the decryption process. The complete computation for creating the ciphertext is computed by Eq. 15. $$ \mathrm{N}2\mathrm{C}\left[{\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}\mathrm{Per}\right]={\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}\mathrm{e} $$ $$ {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}\mathrm{C}=\left[{\upkappa}_{\mathrm{e}}\times \mathrm{S}\right] \operatorname {mod}\ 26 $$ Header 'H' creation In this process, a header H is created. This H contains ciphertext information and substitution key; it can be used to ensure the integrity of the reconstructed secret image. After the encryption process, H along with ciphertext is sent to the receiver/authenticated person for reconstruction process in Eq. 16. $$ {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}\mathrm{C}+\mathrm{H}=\mathrm{Cipher} $$ HVCHC decryption process Receiver/authenticated person receives a ′Cipher′ from the sender. This ′Cipher′ is segregated into \( {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}\mathrm{C} \) and H. To decrypt the ciphertext of \( {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}\mathrm{C} \), inverse substitution, inverse permutation, inverse conversion, and combine-sub-bands processes have crucial roles, illustrated in Fig. 2. The HVCHC decryption process can be classified into two major divisions, one is based on the HC decode process and another one is based on VC decryption. The inverse substitution is designed based on the HC decode. The inverse permutation, inverse conversion, and combine sub-bands processes are based on VC decryption process. The merit of this process is that the pixel expansion is not performed, so the exact replica of the original image can be retrieved. However, the integrity is also measured after the reconstruction using a H (Fig. 3). HVCHC permutation process Inverse substitution process The received ciphertext of \( {\sum}_{\mathrm{i},\mathrm{j}=1}^{\mathrm{m},\mathrm{n}}\mathrm{C} \) and encryption key κe are used for the decryption process. 
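Before moving to the inverse steps, the forward substitution just described (pixels mapped to letters, then blocks encrypted as C = κe·S mod 26) can be sketched as follows. The 2×2 key, the A–Z mapping and the block size are arbitrary choices for illustration; they are not the character determination or lookup tables of the paper, which are not reproduced in this text.

```python
import numpy as np

KEY = np.array([[3, 3],
                [2, 5]])  # example Hill key; det = 9 and gcd(9, 26) = 1, so it is invertible mod 26

def letters_to_numbers(text):
    return [ord(c) - ord('A') for c in text.upper() if c.isalpha()]

def numbers_to_letters(nums):
    return ''.join(chr(int(n) % 26 + ord('A')) for n in nums)

def hill_encrypt(plain, key=KEY):
    nums = letters_to_numbers(plain)
    if len(nums) % 2:                           # pad to a whole number of 2-letter blocks
        nums.append(ord('X') - ord('A'))
    blocks = np.array(nums).reshape(-1, 2).T    # one column per block
    cipher = (key @ blocks) % 26                # C = K * S mod 26
    return numbers_to_letters(cipher.T.flatten())

# Pixel values from the permuted share would first be mapped to letters via the
# paper's character determination table; here we simply demonstrate the cipher:
print(hill_encrypt("HELP"))  # -> 'HIAT' with this key
```

The corresponding decryption key, the modular inverse of κe from Eqs. 17–22, is sketched in a companion snippet after the results section.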
HVCHC permutation process
Inverse substitution process
The received ciphertext \( \sum_{i,j=1}^{m,n}C \) and the encryption key κe are used for the decryption process. In this process, \( \kappa_e^{-1} \) and the determinant of κe are computed from the encryption key κe in Eqs. 17–19, where D denotes the determinant of κe. To find the decryption key κd, the computational value B is calculated from Eqs. 20 and 21. Using κd and \( \sum_{i,j=1}^{m,n}C \), \( \sum_{i,j=1}^{m,n}e \) is retrieved from Eq. 22.
$$ \kappa_e = \begin{bmatrix} a & b \\ c & d \end{bmatrix} $$
$$ \kappa_e^{-1} = \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} $$
$$ D = \left|\kappa_e\right| = (ad - bc) $$
$$ D \times B = 1 \bmod 26 $$
$$ \kappa_d = B\left[\kappa_e^{-1}\right] \bmod 26 $$
Finally, the inverse characters are converted back into numbers based on Table 1. As a result, \( \sum_{i,j=1}^{m,n}\mathrm{Per} \) is obtained from Eq. 23.
$$ \sum_{i,j=1}^{m,n}e = \left[\kappa_d \times C\right] \bmod 26 $$
$$ \mathrm{C2N}\left[\sum_{i,j=1}^{m,n}e\right] = \sum_{i,j=1}^{m,n}\mathrm{Per} $$
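The computation in Eqs. 17–22 is straightforward to script. The sketch below builds κd for a 2×2 key modulo 26 (D is the determinant, B its multiplicative inverse mod 26, and κd = B·adj(κe) mod 26) and applies it to a ciphertext. The sample key is the same assumed key as in the encryption sketch above, not one taken from the paper.

```python
import numpy as np

def mod_inverse(a, m=26):
    """Return B with a*B = 1 (mod m), i.e. Eq. 20, by simple search."""
    a %= m
    for b in range(1, m):
        if (a * b) % m == 1:
            return b
    raise ValueError("determinant not invertible mod %d" % m)

def decryption_key(kappa_e):
    """Eqs. 17-21: kappa_d = B * adj(kappa_e) mod 26 for a 2x2 key."""
    a, b = int(kappa_e[0, 0]), int(kappa_e[0, 1])
    c, d = int(kappa_e[1, 0]), int(kappa_e[1, 1])
    D = (a * d - b * c) % 26                     # Eq. 19
    B = mod_inverse(D)                           # Eq. 20
    adj = np.array([[d, -b], [-c, a]])           # Eq. 18
    return (B * adj) % 26                        # Eq. 21

def hill_decrypt(cipher_text, kappa_e):
    """Eq. 22: S = (kappa_d * C) mod 26, then map numbers back to letters."""
    kd = decryption_key(kappa_e)
    nums = [ord(ch) - ord("A") for ch in cipher_text.upper()]
    plain = []
    for i in range(0, len(nums), 2):
        plain.extend((kd @ np.array(nums[i:i + 2])) % 26)
    return "".join(chr(int(p) + ord("A")) for p in plain)

KAPPA_E = np.array([[3, 3], [2, 5]])
print(hill_decrypt("HIAT", KAPPA_E))             # -> 'HELP'
```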
Inverse permutation process
This process is the reverse of the permutation process. Every pixel of Per(i, j) obtained from the inverse substitution process is converted into 8-bit binary \( \mathrm{Con}_{D2b}\,\mathrm{Per}_{(i,j)} \) in Eq. 24. To retrieve \( A_1^{\prime}, A_2^{\prime}, A_3^{\prime} \), and \( A_4^{\prime} \), the inverse process is carried out over Eqs. 5 to 12 in reverse order (from Eq. 12 back to Eq. 5). The main aim of this process, and one of its main advantages, is the perfect replacement of the pixels back into their original positions. After the above computations, \( A_1^{\prime}, A_2^{\prime}, A_3^{\prime} \), and \( A_4^{\prime} \) are obtained. These sub-bands are taken as the input for the inverse conversion process in Fig. 4.
$$ \sum_{n=1}^{Max}\mathrm{Con}_{D2b}\,\mathrm{Per}_{(i,j)} = \sum_{i,j=1}^{m,n}P $$
HVCHC inverse permutation process
Inverse conversion process
In the inverse conversion process, every 8-bit binary value is converted into the corresponding decimal value. Equations 25 and 26 represent this binary-to-decimal conversion.
$$ \sum_{n=1}^{Max}\mathrm{Con}_{8bit2D}\left(A_n^{\prime}\right) = A_1^{\prime} \oplus A_2^{\prime} \oplus A_3^{\prime} \oplus A_4^{\prime} $$
$$ \sum_{n=1}^{Max}\mathrm{Con}_{Dec}\left(A_n\right) = \sum_{n=1}^{1}\mathrm{Con}_{Dec}\left(A_n^{\prime}\right) + \sum_{n=2}^{2}\mathrm{Con}_{Dec}\left(A_n^{\prime}\right) + \sum_{n=3}^{3}\mathrm{Con}_{Dec}\left(A_n^{\prime}\right) + \sum_{n=4}^{4}\mathrm{Con}_{Dec}\left(A_n^{\prime}\right) $$
In these computations, 'Max' denotes the maximum number of sub-bands. In the proposed scheme, the maximum number of sub-bands is four; this constraint is chosen to minimize the computation time on both the sender and the receiver sides. As a result, the segregated sub-bands can be retrieved via Eq. 27.
Combine sub-bands process
The combine sub-bands process is the inverse of the split sub-bands process (Section 2.1). The sub-bands A1, A2, A3, and A4 obtained from the inverse permutation process are merged together into an image in Eq. 27, and the reconstructed medical image is retrieved. Finally, the reconstructed image undergoes a pixel-by-pixel integrity check to verify whether an exact replica of the image has been reconstructed; the header 'H' plays a vital role in this check.
$$ \sum_{n=1}^{Max}\mathrm{Con}_{Dec}\left(A_n\right) = A_1 \oplus A_2 \oplus A_3 \oplus A_4 = \sum_{i,j=0}^{m,n}M_{(i,j)} $$
Experimental and result discussion
This section discusses the experimental results of the proposed and conventional systems. Saad Al-Mutairi and S. Manimurugan implemented a clandestine image transmission scheme to protect against intruders in 2016 and 2017. In those papers [18, 19], three encryption methods (VC, steganography, and HC) are combined for the secure transmission of images. To obtain a better encryption system, the proposed work combines the two foremost encryption techniques, VC and HC. The previous work [18, 19] is taken as the conventional method. The difference between the conventional method and the proposed method is that the permutation and substitution steps are entirely different; moreover, the proposed system overcomes the limitations of the existing method in an efficient manner. In this experimentation, nearly 1000 medical images were evaluated, although for this documentation 25 grayscale medical images are included, as shown in Fig. 5. They include computed tomography (CT), magnetic resonance imaging (MRI), X-ray, ultrasound (US), etc. The conventional and proposed methods are coded in MATLAB.
Input gray scale medical images
All input images are of 512 × 512 dimensions, .BMP format, and 256 kB size. Some of the medical images were originally in .DICOM (digital imaging and communications in medicine) format; these were converted from .DICOM to .BMP for this research work. In the proposed system, the given input medical images are encrypted by the VC scheme. The output of this VC scheme is illustrated in Fig. 6.
In order to make the system efficient, the output of the proposed VC is further encoded by the HC scheme. This encoding incorporates two different processes.
Output of the proposed VC scheme
In the first step, the output image of the proposed VC scheme is converted into a set of characters based on Table 1, as illustrated in Fig. 7. In the second step, the converted characters are encoded by the proposed HC scheme, as shown in Fig. 8.
Steps 1 and 2 of HVCHC
Steps 1, 2, and 3 of HVCHC
As a result, the given input medical image is converted into a set of characters. The main advantage here is that the character conversion is performed twice, which improves confidentiality and strengthens the proposed scheme. In addition, to examine the confidentiality of the proposed scheme, a pixel analysis has been made at every stage. Figure 9 shows enlarged images of different sizes used in an attempt to identify the original image; this enlargement has been done at four scales of 100, 150, 175, and 200 percent. The proposed scheme performs well here: it is very hard to identify the original secret image, since it has been converted through several steps of characters. A further significance of the proposed VC is that no pixel expansion is performed.
Enlarged images at different scales
Table 3 presents the performance of the proposed and conventional schemes based on different parameters: size, time, correlation coefficient (CC), mean squared error (MSE), and pixel expansion. The conventional scheme [18, 19] has been implemented and tested alongside the proposed system.
Table 3 Proposed and conventional encryption methods performances
The main difference between the conventional and proposed schemes is that the conventional scheme takes a high execution time in both the encryption and decryption phases. In addition, the conventional method reconstructs only a partially exact replica of the original image, whereas the proposed scheme reconstructs an exact replica. These achievements are due to the design of the proposed framework. For good medical image processing, an exact replica of the pixels must be reconstructed; if any loss occurs during the reconstruction process, the reconstructed medical image is not useful for further activities. The proposed scheme has been designed with this point in mind, and its error rate is much lower than that of the conventional system. Another important point is that neither method involves pixel expansion. When comparing encryption and decryption sizes, the proposed scheme gives a superior result to the conventional one. Figure 10 shows the reconstruction performance of both the conventional and the proposed systems. The reconstructed image quality is not measured by peak signal-to-noise ratio (PSNR); instead, a pixel-by-pixel analysis using CC has been adopted. The conventional method obtains good results, with CC between 0.98 and 0.99. When the CC value is exactly one, the reconstructed pixels are reproduced exactly; the proposed scheme achieves this for all images, meaning that an exact replica of the image has been reconstructed by the proposed scheme.
Comparison of conventional and proposed reconstruction image quality
This scheme is mainly designed for medical images.
In Fig. 10, the conventional method reconstructs a nearly exact replica of the image, with a CC value of 0.98; this means some pixel loss occurs during processing. In contrast, the proposed scheme obtains a CC value of exactly one, which shows that it retrieves an exact replica of the original image. For a more thorough analysis, the CC has also been measured after enlarging the image; these results are given in the second row of Fig. 10. Figure 11 compares the encryption sizes of the conventional and proposed schemes. In this comparison, the proposed method obtains an encryption size smaller than the original size, owing to the double character conversion, whereas the conventional method produces a size larger than the original image because of its VC and character conversion processes. Figure 12 shows the decryption sizes of both the proposed and conventional methods.
Comparison of conventional and proposed schemes encryption size
Comparison of conventional and proposed schemes decryption size
Both methods perform well here; however, the conventional method reconstructs only a partially exact replica of the original image, while the proposed scheme reconstructs an exact replica. Figures 13 and 14 report the encryption and decryption times. The proposed system gives a better execution time for the encryption process than the conventional scheme, with time variations of at most 4 to 5 s.
Comparison of conventional and proposed schemes encryption time
Comparison of conventional and proposed schemes decryption time
By comparison, the conventional scheme's time variations are between 9 and 12.5 s. In the decryption process, the proposed scheme's time variations range from 4 to 5 s, versus 9 to 10 s for the conventional scheme. To strengthen the algorithm's complexity, the pixels are swapped as much as possible within the image itself. Different attacks have been used to analyse the competency of the proposed scheme; under a human visual attack the proposed scheme gives good results, and a pixel-by-pixel analysis has also been carried out. Overall, the proposed scheme obtains better results than the other methods. Many encryption algorithms have been proposed for images; this paper, however, has proposed a different type of encryption for medical images without any loss of pixels. The proposed HVCHC scheme has been compared with the traditional (conventional) approach to demonstrate its efficiency, using the many parameters given in the experimental section. The traditional method of triple encryption performs well, but the proposed HVCHC offers several improvements over it. One improvement is that it reduces the execution time as much as possible compared with the conventional method. Another important point is that no pixel loss should occur while processing a medical image; this requirement is addressed in the proposed method, so a perfect, exact replica of the original medical image can be retrieved. The conventional method gives an error rate between 1 and 2%, whereas the proposed method gives an error rate of zero. To measure image quality, PSNR is not considered; instead, CC has been used for the pixel-by-pixel analysis.
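The quality metrics reported in Table 3 and Fig. 10 are straightforward to compute. The snippet below is a small sketch of the pixel-by-pixel correlation coefficient (CC) and mean squared error (MSE) between an original and a reconstructed image; it assumes both are grayscale arrays of equal size and is not taken from the authors' MATLAB code.

```python
import numpy as np

def correlation_coefficient(original, reconstructed):
    """Pixel-by-pixel Pearson CC; a value of exactly 1.0 means a perfect replica."""
    x = original.astype(np.float64).ravel()
    y = reconstructed.astype(np.float64).ravel()
    x -= x.mean()
    y -= y.mean()
    return float((x @ y) / np.sqrt((x @ x) * (y @ y)))

def mean_squared_error(original, reconstructed):
    """MSE used alongside CC in Table 3."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return float(np.mean(diff ** 2))

# Toy check: a perfect reconstruction gives CC = 1.0 and MSE = 0.0.
img = np.random.randint(0, 256, (512, 512), dtype=np.uint8)
print(correlation_coefficient(img, img), mean_squared_error(img, img))
```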
In addition, the proposed algorithm can reduce the size during the encryption process than original and conventional encryption size. The visual and pixel attacks are also done by the expert groups. In result, the proposed scheme defended visual and pixel by pixel attacks in an efficient manner than conventional method. Therefore, this proposed method of encryption provides double encryption, the minimum execution time of encryption and decryption, reduces the size from the original after the encryption process, 100% perfect reconstructions, provides better performance against hacker/third parties/attackers. AHC: Affine Hill cipher CBW: VCS-colored-black-and-white visual cryptography scheme Character determination CIA: Confidentiality, integrity, and authentication CT: DICOM: Digital imaging and communications in medicine DWT: Discrete wavelet transform HC: Hill cipher HVCHC: Hybrid visual cryptography and Hill cipher Kb: MRI: SVD: Singular value decomposition SWIFT: Scale invariant feature transform VC: Visual cryptography M. Naor, A. Shamir, Visual cryptography, in: advances in cryptology, EUROCRYPT'94, in: LNCS, vol 950 (1994), pp. 1–12. L.S. Hill, Cryptography in an algebraic alphabet. Am. Math. Mon. 36(6), 306–312 (1929). L.S. Hill, Concerning certain linear transformation apparatus of cryptography. Am. Math. Mon. 38, 135–154 (1931). Z. Shao, Y. Shang, R. Zeng, H. Shu, G. Coatrieux, J. Wu, Robust watermarking scheme for color image based on quaternion-type moment invariants and visual cryptography. Signal Process. Image Commun. 48, 12–21 (2016). S. Cimato, R. De Prisco, A. De Santis, Optimal colored threshold visual cryptography schemes. Des. Codes Cryptography 35, 311–335 (2005). S. Cimato, R. De Prisco, A. De Santis, Probabilistic visual cryptography schemes. Comput. J. 49, 97–107 (2006). S. Cimato, A. De Santis, A.L. Ferrara, B. Masucci, Ideal contrast visual cryptography schemes with reversing. Inf. Process. Letter 93, 199–206 (2005). S. Cimato, R. De Prisco, A. De Santis, Colored visual cryptography without color darkening. Theor. Comput. Sci. 374, 261–276 (2007). S. Cimato, R. De Prisco, A. De Santis, Visual cryptography for color images, in: visual cryptography and secret image sharing (CRC Press, London, 2012), pp. 31–56 ISBN 978–1–4398-3721-4. R. De Prisco, A. De Santis, Color visual cryptography schemes for black and white secret images. Theor. Comput. Sci. 510, 62–86 (2013). C.-N. Yang, L.-Z. Sun, S.-R. Cai, Extended color visual cryptography for black and white secret image. Theor. Comput. Sci. 609, 143–161 (2016). S. Manimurugan, K. Porkumaran, Secure medical image compression using block pixel Sort algorithm. Eur. J. Sci. Res. 56(2), 129–138 (2011). S. Manimurugan, K. Porkumaran, Fast and efficient secure medical image compression schemes. Eur. J. Sci Res 56(2), 139–150 (2011). T. Amiri, M.E. Moghaddam, A new visual cryptography based watermarking scheme using DWT and SIFT for multiple cover images. Multimed. Tools Appl. 75, 8527–8543 (2016). X. Yan, S. Wang, X. Niu, Threshold progressive visual cryptography construction with unexpanded shares. Multimed. Tools Appl. 75, 8657–8674 (2016). G. Wang, F. Liu, W.Q. Yan, Basic visual cryptography using braille. Int. J. Digit. Crime Forensics 8(3), 85–93 (2016). S. Manimurugan, C. Narmatha, Secure and efficient medical image transmission by new tailored visual cryptography scheme with LS compressions. Int. J. Digit. Crime Forensics 7(1), 26–50 (2015). S. Al-Mutairi, S. 
Manimurugan, An efficient secret image transmission scheme using Dho-encryption technique. Int. J. Comput. Sci. Inf. Secur 14(10), 446–460 (2016). S. Al-Mutairi, S. Manimurugan, The clandestine image transmission scheme to prevent from the intruders. Int. J. Adv. Appl. Sci. 4(2), 52–60 (2017). S. Manimurugan, K. Porkumaran, C. Narmatha, "The new block pixel Sort algorithm for TVC encrypted medical image", imaging science. Journal 62(8), 403–414 (2014). S. Manimurugan, C. Narmatha, K. Porkumaran, The new approach of visual cryptography scheme for protecting the grayscale medical images. J. Theor. Appl. Inf. Technol. 69(3), 552–561 (2014). A.V.N. Krishna, K. Madhuravani, A Modified Hill cipher using randomized approach. Int. J. Comput. Netw. Inf. Secur. 5, 56–62 (2012). A.H.P. Suman Chandrasekhar, K. Adarsh, S. Sasi, A secure encryption technique based on advanced Hill cipher for a public key cryptosystem. IOSR J. Comput. Eng. 11(2), 10–14 (2013). M.N.A. Rahman, A.F.A. Abidin, M.K. Yusof, N.S.M. Usop, Cryptography: a new approach of classical Hill cipher. Int. J. Secur. Its Appl. 7(2), 179–190 (2013). D.C. Mishra, R.K. Sharma, R. Ranjan, M. Hanmandlu, Security of RGB image data by affine hill cipher over SLn(Fq) and Mn(Fq) domains with Arnold transform. Optik 126, 3812–3822 (2015). B. Acharya, M.D. Sharma, S. Tiwari, V.K. Minz, Privacy protection of biometric traits using modified Hill cipher with Involutory key and robust cryptosystem. Procedia Comput. Sci. 2, 242–247 (2010). K.A. Reddy, B. Vishnuvardhan, Madhuviswanatham, A.V.N. Krishna, A modified Hill cipher based on circulant matrices. Procedia Technol 4, 114–118 (2012). N. Sharma, S. Chirgaiya, A novel approach to Hill cipher. Int. J. Comput. Appl. 108(11), 34–37 (2014). The authors would like to thank the University of Tabuk, Tabuk City, Saudi Arabia for giving immense support to carry out this research work. The special thanks to all the reference authors, the journal editor and his team members. SAAD AL-MUTAIRI received the BSc from Al-Ahliyya Amman University, Jordon, the MSc and PhD degrees from De Montfort University, U.K. He is currently working as a Dean in Deanship of Information Technology, University of Tabuk, Saudi Arabia. His research interests are Software Engineering, Context Aware System, Cloud Computing, Cyber security, Steganography, etc. He has published ample of papers in international refereed journals and conferences in his research areas. He is a professional member in IEEE. S.MANIMURUGAN has completed his Bachelor, Master and Ph.D in Computer Science and Engineering, Anna University, India. Currently, he is working in Computer Engineering, Faculty of Computers and Information Technology, University of Tabuk, Tabuk City, Saudi Arabia. His research areas are Image processing, Information Security, Visual Cryptography, IoT, and Steganography. He is a professional member in IEEE and a life Member of Indian Society for Technical Education (MISTE). He has published nearly 70+ research papers in several international and national forums which include various ISI, Clarivate analytics, Scopus, and IEEE indexed international conferences as well. He also has been a celebrated editor and reviewer for many international journals like Elsevier and springer and so on. MAJED ABOROKBAH is a Dean in Faculty of Computers and Information Technology, University of Tabuk, Saudi Arabia. He received the BSc from Taif University, Saudi Arabia, the Msc from Bradford University, UK and the PhD degree from De Montfort University, U.K. 
His research is in the areas of Software Engineering, Context Aware System, Cyber security, Steganography, etc. He has published many papers in international journals and conferences, has organized various workshops and conferences in his research areas. He has established the robotics center in University of Tabuk. The University of Tabuk, Saudi Arabia has provided all research and financial supports for this work. (Mandatory for Biology and Medical journals): We thank the Vinayaga mission Hospital, India for providing the sample image data for this research work. There is no issue if the data shares among the research community. Department of Computer Science, Faculty of Computers and Information Technology, University of Tabuk, Tabuk, Saudi Arabia Saad Almutairi & Majed Aborokbah Department of Computer Engineering, Faculty of Computers and Information Technology, University of Tabuk, Tabuk, Saudi Arabia Manimurugan S Search for Saad Almutairi in: Search for Manimurugan S in: Search for Majed Aborokbah in: There are three authors that have completed this work in which SA has prepared the encryption section, SM has done the decryption section, and MA has completed the experimental and result sections. Finally, the entire paper is affirmed by all authors. All authors read and approved the final manuscript. Correspondence to Manimurugan S. Almutairi, S., S, M. & Aborokbah, M. A new secure transmission scheme between senders and receivers using HVCHC without any loss. J Wireless Com Network 2019, 88 (2019). https://doi.org/10.1186/s13638-019-1399-z Received: 06 January 2019 DOI: https://doi.org/10.1186/s13638-019-1399-z Medical images Character conversion Recent Challenges & Avenues in Wireless Communication through Advance computational Intelligence or Deep learning methods
Who first had the idea to study surfaces via rings of functions, as in algebraic geometry? This idea provides the foundations of algebraic geometry now; and they have certainly gone down the rabbit hole with it. As a student studying this subject, I have always found it such a great leap to think that some ring of functions could have such a strong influence on geometry. Given the idea, it seems natural. But to be the one that first had the idea, that seems a great leap. Does anyone have any information about the history of this idea? Who first thought about? Perhaps why they thought about it? I believe the answer is Riemann, when studying what we now call Riemann surfaces. But, he doesn't seem to have "gone down the rabbit hole" with it, so far as I can uncover. Were there any ideas like this beforehand? differential-geometry geometry algebraic-geometry Conifold A canonical reference on this is Dieudonne's History of Algebraic Geometry. An abridged version Historical Development of Algebraic Geometry is freely available, see also Easton's slides. Let me make a general comment first. When we wonder "however did someone first connect these two [modern ideas]?" we tacitly presuppose that they were always separately available, waiting to be connected. But the truth often is that they were developed connected to each other. Riemann was indeed instrumental in creating the modern algebro-geometric framework, but he did not have the idea to study surfaces via rings of functions for the simple reason than in his time the (general) concept of Riemann surfaces, let alone of rings of functions, did not exist. He was studying Abelian integrals, this led him to consider surfaces on which holomorphic and meromorphic functions, such as Abelian integrals, are defined. And by the time Kronecker and Dedekind-Weber developed the suitable algebraic concepts they already had the connection on display in Riemann's work. So nobody had such an idea first. Here are some details as described by Dieudonne: "It is quite a paradox that in the work of this prodigious genius , out of which algebraic geometry emerges entirely regenerated, there is almost no mention of algebraic curve, it is from his theory of algebraic functions and their integrals that all of the birational geometry of the nineteenth and the beginning of the twentieth century issues. [...] Instead of starting (as would all his predecessors and most of his immediate successors) from an algebraic equation $F(s, z) = 0$ and the Riemann surface of the algebraic function $s$ of $z$ which it defines, his initial object is an $n$-sheeted Riemann surface without boundary and with a finite number of ramification points, given a priori without any reference to an algebraic equation... Thus, the abstract Riemann surface $S$ is, in fact, identical to that of algebraic function $s(z)$ defined by $F(s, z) = 0$, and Riemann attaches to it what will, after Dedekind's time, be called the field of meromorphic (or rational) functions on $S$. 
[emphasis Dieudonne's] Riemann's insights were absorbed in two foundational papers from 1882, by Kronecker (Grundzüge einer arithmetischen Theorie der algebraischen Grössen, Crelle's journal, 92, 1–122) and Dedekind-Weber (Journal für die reine und angewandte Mathematik, 92, 181-290): "The first task to which each school of algebraic geometry addressed itself was therefore the systematization of the birational theory of algebraic plane curves, incorporating most of Riemann's results with proofs in conformity with the principles of the school... just as Riemann had revealed the close relationship between algebraic varieties and the theory of complex manifolds, Kronecker and Dedekind-Weber brought to light for the first time the deep similarities between algebraic geometry and the burgeoning theory of algebraic numbers... this conception of algebraic geometry is for us the clearest and simplest one, due to our familiarity with abstract algebra." Kronecker started defining varieties in terms of rings of polynomials vanishing on them, and developed the notions of subvariety and dimension in terms of ideals (which he called Modulsystems). "The goal of Dedekind and Weber in their fundamental paper was quite different and much more limited; namely, they gave purely algebraic proofs for all the algebraic results of Riemann. They start from the fact that, for Riemann, a class of isomorphic Riemann surfaces corresponds to a field $K$ of rational functions, which is a finite extension of the field $C(X)$ of rational fractions in one indeterminate over the complex field; what they set out to do, conversely, if a finite extension $K$ of the field $C(X)$ is given abstractly, is to reconstruct a Riemann surface $S$ such that $K$ will be isomorphic to the field of rational functions on $S$. ConifoldConifold $\begingroup$ Peter Freyd likes to say that asking who first invented an idea is the wrong question. The right question is who last invented it. Who invented it so well that no one else ever had to invent it again. $\endgroup$ – Colin McLarty Jul 3 '17 at 1:34 The idea is usually attributed to Dedekind and Weber in Theorie der algebraischen Functionen einer Veränderlichen (1882): [1, 2, 3, 4, 5,...]. Francois ZieglerFrancois Ziegler $\begingroup$ thanks for the references; this looks like it is exactly what I need to read. $\endgroup$ – User0112358 May 24 '17 at 0:31 I don't think this was Riemann, or that Riemann knew that any ring of functions determines the surface. In fact, Riemann studied compact surfaces on which the ring of regular functions is trivial, and he studied the field of meromorphic functions instead. The idea that a ring of functions determines the space is of much later origin. It can be traced to Gelfand's theory of commutative Banach algebras, and was brought to algebraic geometry by Grothendieck. Alexandre EremenkoAlexandre Eremenko Not the answer you're looking for? Browse other questions tagged differential-geometry geometry algebraic-geometry or ask your own question. Motivation behind Euler Theorem in differential geometry Who first introduced the notation $\mathcal{O}$ in algebraic geometry or algebraic number theory Material models of Riemann surfaces Riemann surfaces and covering How was the focus/directrix property of conic sections discovered? How did the integer degrees angles counting being first adopted in geometry and mathematics? What was the old system of using right circular cones to solve problems about circles in the plane? 
Why is Rabin encryption equivalent to factoring? I don't understand the proof of equivalence I've read everywhere (e.g., in Rabin's paper or on Wikipedia). Here's my objection: let's say you have a Rabin decryption oracle that takes n and c and returns one of the square roots of c mod n. It always returns the same square root for a given n and c, but the choice of output root is otherwise random over combinations of n and c. In this case, the oracle would decrypt the ciphertext somewhere between 25 and 50% of the time (depending on the number of roots of the ciphertext), but it's unclear to me how you could factor n on this basis. I agree that an oracle that pops out both s and r allows one to easily factor n, but it's not necessary to do that to break Rabin at least 25% of the time. factoring rabin-cryptosystem CodesInChaos Kyle RoseKyle Rose migrated from security.stackexchange.com Jul 24 '15 at 7:51 This question came from our site for information security professionals. $\begingroup$ It is helpful if you link to the references you make. $\endgroup$ – schroeder Jul 23 '15 at 14:45 $\begingroup$ That's probably why I didn't find this question posted already. This site is what I got when I Google'd "crypto stackexchange". $\endgroup$ – Kyle Rose Jul 23 '15 at 14:53 Since n = pq, then when an integer modulo n is a square, then it has (in general) four square roots. This can be seen by reasoning modulo p and modulo q: a square has two roots modulo p, and two roots modulo q, which makes for four combinations. More precisely, modulo a prime p, if y has a square root x, it also has another square root which is -x. The same applies modulo q, and makes four combinations. What you look for is a pair of values (a,b) such that both are square roots of the same value (i.e. a2 = b2 mod n), a = b mod p, and a = -b mod q (or vice versa). If you have such a pair, then c = a-b mod n will be an integer that is equal to 0 modulo p but not modulo q; in other terms, c will be a multiple of p but not of q, so a simple GCD computation between c and n will reveal p. Suppose that you have a box that can compute square roots modulo n. Then, the attack works thus: generate a random a modulo n; compute a2 mod n and send it to the box. The box will return a square root b (one of the four possible square roots). If the box returns a itself, or n-a, then this round fails, and the attacker must start again with another random a. This happens 50% of the time. But if the box returns one of the two other square roots of a2, then the GCD explained above reveals a factor of n. There is no square root choosing strategy from the box that can prevent this attack from working, because the attacker chooses a completely at random. The above does NOT show that "Rabin encryption is equivalent to factoring". What it shows is that the general ability to extract square roots modulo n is equivalent to knowing the factors of n. The term "general" here means that the ability works for a substantial proportion of values which are squares modulo n. Anybody can compute square roots for some values (e.g. if we work modulo n and you challenge me with the value "9" then I can answer that a square root of that value modulo n is "3", and I can do that even without knowing the factors of n); but if you can do that for a non-negligible fraction of all integers modulo n then you can factor n. 
There is no actual standard that specifies Rabin encryption, but if there was, then that standard would probably entail some sort of padding, because, as explained above, a square has four square roots. The decryption engine must choose which one is the right one. A simple strategy for that is to add some redundant padding: when encrypting message m, convert it to an integer x by appending h(m) to m (for some hash function h) and then interpreting the whole as an integer. Then, upon decryption, recompute the hash to know whether you got the correct square root, and not one of the three others. (The padding would also have to include some randomness to avoid brute force on the plaintext.) With such a padding, a box that can decrypt things may return the decrypted value, OR it could say "this does not decrypt to anything that is properly padded". Then the attack explained above no longer works; the attacker will have to find a value a such that another square root b (which is neither a nor -a) ends with a proper padding, otherwise the decryption oracle won't return it. Depending on how the padding is exactly defined, the probability of hitting such a value could be too small for this to actually happen. Therefore, while extracting square roots modulo n is equivalent to factoring n, it cannot be said, in all generality, that practical Rabin encryption is equivalent to factoring.

After another 5 minutes of thought, I think I solved my own problem. Choose an arbitrary message m, compute c = m^2 mod n and submit c and n to the Rabin oracle. If you repeat this enough times (probably within 2 iterations), you will have chosen m in such a way that the oracle gives you a root other than ±m, which you can then use to factor n.
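The attack described above is short enough to script. The following Python sketch recovers a factor of n given any square-root oracle; the toy oracle in the demo knows p and q only so that the example is self-contained (a real adversary would obviously not build the oracle this way), and the small primes are arbitrary placeholders.

```python
import random
from math import gcd

def factor_with_sqrt_oracle(n, sqrt_oracle, max_tries=64):
    """Recover a nontrivial factor of n = p*q from a square-root box.

    Each round: pick a random a, ask for a square root b of a^2 mod n.
    With probability about 1/2 the box returns b != +/-a, and then
    gcd(a - b, n) is a nontrivial factor.
    """
    for _ in range(max_tries):
        a = random.randrange(2, n)
        g = gcd(a, n)
        if g != 1:                      # unlikely, but already a factor
            return g
        b = sqrt_oracle((a * a) % n)
        if b is not None and b not in (a, n - a):
            g = gcd(abs(a - b), n)
            if 1 < g < n:
                return g
    return None

def make_toy_oracle(p, q):
    """Deterministic oracle for a tiny modulus: always returns the smallest root."""
    n = p * q
    def oracle(c):
        for x in range(n):
            if (x * x) % n == c:
                return x
        return None
    return oracle

p, q = 11, 19
print(factor_with_sqrt_oracle(p * q, make_toy_oracle(p, q)))   # prints 11 or 19
```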
Teacher, teacher on the wall, Who's the dumbest of them all? A maths teacher writes a very large number on the blackboard and asks her pupils (of whom there are $n$ in the room) about its factors. The first pupil says, "The number is divisible by 2." The second says, "The number is divisible by 3." The third says, "The number is divisible by 4." The fourth pupil says, "The number is divisible by 5." $...$ The $n$th pupil says, "The number is divisible by ($n+1$)." The teacher says, "You were all right except for two of you, who spoke consecutively." Given this information, what can you say about: the value of $n$ which two pupils were wrong? If you want to list all possibilities, then we can limit $n$ to be less than $100$ to make the problem finite. However, there is a general answer for which values are possible, which works for arbitrarily large $n$. Don't worry about what the number on the blackboard is! You could find its smallest possible value in each case using the Chinese Remainder Theorem, but that would be boring and tedious. I'll upvote any answer which is correct and relies only on pencil, paper, and logic without resorting to computer power. The green tick will go to whichever answer gives the correct solution in the most simple and elegant way. NB: this is a maths puzzle and not a maths problem. There's a nice 'aha!' which narrows down the possibilities considerably, and the nature of the final solution is quite surprising. mathematics number-theory $\begingroup$ For the general case, are we assuming at least 5 students? $\endgroup$ – StephenTG Aug 10 '15 at 13:08 $\begingroup$ There is no unique answer to 'Which two pupils were wrong?' Are you expecting one? Reasoning: Let b be the number on the board, let n= 4. Suppose b = 6, then pupils 3 and 4 were wrong. Suppose b = 10, then pupils 2 and 3 are wrong. $\endgroup$ – chasly - supports Monica Aug 10 '15 at 14:24 $\begingroup$ @chaslyfromUK Consider a function f(n) that for every number n gives you which pupils are wrong. Does such a function exist, if no, why not, if yes, (how) can you compute it? $\endgroup$ – Alexander Aug 10 '15 at 14:27 $\begingroup$ @Alexander - Here's a definition of function. "A technical definition of a function is: a relation from a set of inputs to a set of possible outputs where each input is related to exactly one output." goo.gl/A8cEd4 --- There is no such function in this case. I have just disproved its existence by providing a counterexample. $\endgroup$ – chasly - supports Monica Aug 10 '15 at 14:57 $\begingroup$ @chaslyfromUK The question says "what can you say about which two pupils were wrong?" You can say something about a number (e.g. provide a small set it must belong to) without being able to determine it uniquely. $\endgroup$ – Rand al'Thor Aug 10 '15 at 15:29 If $x$ has at least two distinct prime factors, that is $x = p^n * q^m * r$, with $p, q$ primes, $n, m \ge 1$, and $r$ not divisible by $p$ or $q$, then $(p^n * r) | z$ and $(q^m * r)|z$ implies $(p^n * q^m * r = x)|z$. Therefore, if $x$ is a wrong answer, and all answers $< x-1$ were correct answers, $x$ cannot have two distinct prime factors; $x$ must be either a prime number or a power of a prime number. Further, if $x$ is a wrong answer, then $2x$ is also a wrong answer. Since exactly two answers $\le n+1$ were incorrect, and the two incorrect answers were consecutive, the two incorrect numbers are $x$ and $x+1$ with $x \ge 2$, and $n \le 2x-2$, and both $x$ and $x+1$ are either primes or powers of primes. 
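For readers who later want to double-check by computer: the short brute-force sketch below enumerates the candidate pairs of consecutive wrong answers that the arguments on this page converge on (both members must be prime powers, and the even member is then automatically a power of two). It is only a verification aid, not part of the intended pencil-and-paper solution, and the helper names are mine.

```python
def is_prime_power(k):
    """True if k = p**m for a single prime p and some m >= 1."""
    if k < 2:
        return False
    p = next(d for d in range(2, k + 1) if k % d == 0)   # smallest prime factor
    while k % p == 0:
        k //= p
    return k == 1

def candidate_wrong_pairs(limit):
    """Consecutive pairs (x, x+1) that could both be the wrong divisors.

    Both must be prime powers; since one of them is even, that one is
    necessarily a power of 2.
    """
    return [(x, x + 1) for x in range(2, limit)
            if is_prime_power(x) and is_prime_power(x + 1)]

print(candidate_wrong_pairs(300))
# [(2, 3), (3, 4), (4, 5), (7, 8), (8, 9), (16, 17), (31, 32), (127, 128), (256, 257)]
```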
The only two consecutive primes are $2$ and $3$; other than this at least one of $x$ and $x+1$ is a non-trivial power of a prime. So we have one number $p^k$, where $p$ is a prime and $k \ge 2$, and $p^k \pm 1$ which is a prime or a power of a prime. Assume $p \ge 3$, which implies $p$ is odd: $p^k \pm 1$ is even, therefore it is not a prime but must be power of $2$. Therefore, one of the incorrect numbers must be a power of two: The incorrect answers are $2^k$ and $2^k \pm 1$. If $2^k \pm 1$ is a prime, then it is either a Mersenne prime or a Fermat prime; the only known Fermat primes are $3, 5, 17, 257, 65537 = 2^1 + 1, 2^2 + 1, 2^4 + 1$ and $2^{16} + 1$; the smallest known Mersenne primes are $2^k - 1$ for $k = 2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127, 521, 607, 1279, 2203, 2281, 3217, 4253, 4423, 9689, 9941, 11213, 19937, 21701, 23209, 44497, 86243, 110503, 132049, 216091, 756839, 859433, 1257787, 1398269, 2976221, 3021377, 6972593, 13466917, 20996011, 24036583, 25964951, 30402457, 32582657$. If we assume that the number of students is less than the world population, the possibilities are $(4,5)$, $(16,17)$, $(256,257)$, $(65536,65537)$, $(3,4)$, $(7,8)$, $(31,32)$, $(127,128)$, $(8191,8192)$, $(131071,131072)$, $(524287, 524288)$, $(2147483647,2147483648)$, where the other number is a prime. If the other number is a prime power, then the only pair is $(8,9)$ (Mihăilescu's theorem, better known as Catalan's conjecture but proven in 2002). Since $3$ is also a Fermat prime, in total the possibilities are $(8,9)$, all numbers $(2^k, 2^k + 1)$ where $2^k + 1$ is a Fermat prime, and $(2^k, 2^k - 1)$ where $2^k - 1$ is a Mersenne prime. So the first possible pairs of wrong answers and the only that are possible on earth with actual humans are $(2,3)$, $(3,4)$, $(4,5)$, $(7,8)$, $(8,9)$, $(16,17)$, $(31,32)$, $(127,128)$, $(256,257)$, $(8191,8192)$, $(65536,65537)$, $(131071,131072)$, $(524287,524288)$, $(2147483647,2147483648)$. The possible values for $n+1$ are $[3 \ldots 15], [17 \ldots 61], [128 \ldots 253], [257 \ldots 511]$ etc. and the possible values for $n$ are $[2 \ldots 14], [16 \ldots 60], [127 \ldots 252], [256 \ldots 510]$. So we don't have a class of $15$ students, or $61$ to $126$ students, or $253$ to $255$ students, or $511$ to $8190$ students. We could probably use the fact that the number actually fit on the board to exclude some large numbers. $\begingroup$ +1; this is a very complete answer! It'd look nicer if you added some LaTeX maths formatting though :-) $\endgroup$ – Rand al'Thor Aug 10 '15 at 21:09 $\begingroup$ Didn't know you could do this on this site... $\endgroup$ – gnasher729 Aug 10 '15 at 21:15 $\begingroup$ For large values of 1, you could have just two students, as neither 2, nor 3 would divide into 1. Zero being the counterpoint small value ... $\endgroup$ – Philip Oakley Aug 11 '15 at 14:32 $\begingroup$ +1 Great answer! Good observation on Mihăilescu's theorem. $\endgroup$ – Marconius Aug 12 '15 at 16:59 The consecutive numbers need both be powers of primes. Why? First consider this: If ($a\mid x$) and ($b\mid x$) and ($a$ and $b$ are coprime) then ($ab\mid x$) I'm not sure if this a well known Lemma but I think it is so I don't need to prove it. Now assume one of the consecutive numbers (call it $x$) is not a power of a prime. you can then always write $x=pq$ where $p$ and $q$ are coprime meaning that according to my Lemma that $x$ also divides the number, CONTRADICTION. 
Knowing that one of the numbers is divisible by 2 we now know that that number is actually always of the form $2^x$. And because of Catalan's conjecture we know that the other number is $3^2$ or a prime number. This answers which two pupils were wrong. What's left to determine is what this means for $n$. Unfortunately I don't know this (yet) Also nice to know that if these consecutive numbers are $x$ and $x+1$ then the number on the board is divisible by $LCM(1,2,...,x-2,x-1)$ Ivo BeckersIvo Beckers $\begingroup$ +1; this answer is correct on which two pupils were wrong. As for the possibilities for $n$, there are a lot of them! $\endgroup$ – Rand al'Thor Aug 10 '15 at 15:34 $\begingroup$ A prime of the form (2^n)-1 is a mersenne prime. In which case we know that n is prime. If (2^n)+1 is prime we have a fermat prima and know that n=2^k for some k. $\endgroup$ – Taemyr Aug 11 '15 at 9:19 $\begingroup$ LCM(1,2,...,x−2,x−1) can be replaced by LCM(1,2,...,n)/2x, if x is prime or LCM(1,2,...,n)/(2x+2) if x+1 is prime. $\endgroup$ – Taemyr Aug 11 '15 at 9:23 One of the two pupils that are wrong could be a prime. The other one cannot be prime unless $n<3$, it has to be an even number. So it has to have more occurrences of at least one prime factor than every number smaller than it. This requires it to be the highest number of the form $2^x$ that you have in the sequence. As an example, consider the students saying: 2, 3, 4, 5, 6, 7. In this case, it has to be 4,5 that are wrong, every other combo makes other students wrong as well. This also means there are $n$s that are impossible - like 15 students (15,16 -> 15 is not prime). 16 students works again, 17 being prime. Likewise, between 63 and 127 there's a huge gap, because 64 has no adjacent prime. On the other hand, some instances of $n$ have two possibilities. 2,3,4,5,6,7,8,9 Here it could be 7,8 that are wrong, or 8,9. Why 8,9, there's no prime in it? Because it has two numbers with more occurrences at least one prime factor than every number smaller than it. ($3\times 3$ and $2\times 2\times 2$). Such edge cases should be really few. EDIT: To generalize, you need any two adjacent numbers that only have exactly one prime factor, and have more occurrences of it than any other number in the list. This means that you can limit your search to the upper half of the list, but including the middle element, if available. GentlePurpleRain♦ AlexanderAlexander $\begingroup$ hmm I did not think of any adjacent pure powers of numbers case... I wonder if there are any ridiculous adjacent pairs x^n = y^m +1... that would be cool... (where x and y are prime) $\endgroup$ – Going hamateur Aug 10 '15 at 14:57 $\begingroup$ +1; this is correct, but not as neatly expressed (IMO) as some of the other answers. $\endgroup$ – Rand al'Thor Aug 10 '15 at 15:46 First, for each incorrect pupil, the number he said is a power of a prime. Otherwise its factors were already mentioned and were correct. One of those two said an even number, so one of the incorrect numbers is a power of 2. It is also the largest power of 2 smaller than n, otherwise next power of 2 would be incorrect as well. So the incorrect number $2^t$ is greater than $\frac n 2$. The other pupil named a number of either $2^t + 1$ or $2^t - 1$ and that number has to be a power of a prime. Probably any power. For example a group of 14 pupils where pupils 7 and 8 are wrong (number is not divisible by 8 and 9). aragaeraragaer $\begingroup$ +1; this is correct, but you can say a little more. 
"Probably any power" isn't right; in fact that number has to be prime unless it's 9. $\endgroup$ – Rand al'Thor Aug 10 '15 at 15:39 If at least one of them is a prime, then that prime can't be found among the others' divisors as a factor more than once. If at least one is a compound (which is true), that compound's divisors can be found separately, but not at once. For that, this number must be a power of a prime. The compound prime power is a power of 2, and the other is an odd prime (or they're 8 and 9). Both can't be non-trivial prime powers unless they're 8 and 9, because: If $n, m > 1, p > 2$ and $p^n-1 = 2^m$, $n$ is either odd and so is $(p^n-1)/(p-1)$, making $n=1$ (contradiction), or $n$ is even and both $p$^$(n/2)$-1 and $p$^$(n/2)$+1 are powers of 2, making them 2 and 4 respectively. If $n, m > 1, p > 2$ and $p^n+1 = 2^m$, then $p^n = 2^m-1$, so $m$ is odd (contradiction otherwise), and $p^n - 1 = 2^m-2 = 2* [2$^$(m-1)] = 2* [2$^$((m-1)/2) + 1] * [2$^$((m-1)/2)-1]$. Since $p^n - 1$ shouldn't be divisible by 4, $n$ is also odd. $p+1$ is a power of 2. $p^n+1$/($p+1$) is odd, so it must be 1, which is a contradiction. In short, the wrong number duo can be 4-5, 7-8, 8-9, 16-17 or 31-32 if there're fewer students than 100 and more than 3 (3 < $n$ < 100), but both have to be greater than $n/2$ no matter what $n$ is. In addition, if the prime is the smaller out of the two, the other's power value must be a prime, and if it's bigger, the power must be a power of 2 (respectively Mersenne and Fermat primes), both of which can be proven when the relevant expression is expanded. So the greatest such possible duo can be the wrong divisors, because other candidates would be too small. The most obvious answers would be 2-3 (for 2 students), 3-4 (3 to 4 students) and 4-5 (4 to 6 students). In addition, 8-9 would be feasible for 8 to 14 students. The generalized rule above applies to all the other solutions. NautilusNautilus $\begingroup$ +1; this is correct except for a few small/trivial cases you've missed (like 2 and 3 with n=2). $\endgroup$ – Rand al'Thor Aug 10 '15 at 15:37 $\begingroup$ Edited to cover the trivial cases. $\endgroup$ – Nautilus Aug 11 '15 at 8:14 I am not quite sure if I am correct, but lets see. My guess is: It has sth. to do with the primes. I started with the number x, lets say x = 5 (it doesn't really matter, if its small enough). Then I iterate through the pupils and check if x can be divided by the number said. If not I mutliply it with the number. #1 pupil has number = 2 can't be divided so x = 10 (x*2) can't be divided so x = 30 can't be divided so x = 120 can be divided, so x stays at 120 With this scheme every prime would not be a divisor of x. So my idea was to find a pupil number which is not a prime an still no divisor AND is directly before a prime. The number 16 is one of them, followed by the prime number 17. With this method x =1081080 until pupil #15 and this value for x can't be divided by 16 or 17. So the last two pupils are wrong, and the very last said number must be a prime. And I have found a sequence: This happens every $y = 2^z$ for z = the previous y z=2; y=4 z=4; y=16 z=16; y=256 z=256; y=65536 All these y values are followed by a prime! Wa KaiWa Kai $\begingroup$ The thing is, with a sufficiently large number, y - 2 will be a factor of x. Hmm. $\endgroup$ – jimsug Aug 10 '15 at 14:02 $\begingroup$ It can also be preceeded by a prime: 3,4 7,8 31,32 etc. 
I endorse an answer involving powers of 2's adjacent to primes with the Going Hamateur seal of approval § $\endgroup$ – Going hamateur Aug 10 '15 at 14:07 $\begingroup$ @Goinghamateur You are totally right, I just forget about the predecessor $\endgroup$ – Wa Kai Aug 10 '15 at 14:23 $\begingroup$ Some of this is right, but it's not complete. Remember you don't have to worry about the number x! And "the very last said number must be a prime" (i.e. n+1 is prime?) is wrong. $\endgroup$ – Rand al'Thor Aug 10 '15 at 15:41 Because the two pupils are consecutive one of those is divisible by 2. Let's call that pupil $x$. This means that $x/2$ also can't be a divisor because if $x/2$ is a divisor and $2$ is a divisor then $x$ is also a divisor, contradiction. This either means that the consecutive number are $x$ and $x/2$ or that the number is not divisible by $2$. Since the next number can never be double the previous number it must automatically mean that the number is not divisible by $2$ and, because it needs to be consecutive with another, also not divisible by $3$. This also means that the only valid $n$ is when $n=2$ because a number not divisible by $2$ can't be divisible by $4$ also giving more incorrect statements $\begingroup$ 12 is not divisible by 8 but it is divisible by 4, so your first argument isn't necessarily true $\endgroup$ – StephenTG Aug 10 '15 at 12:28 $\begingroup$ I don't understand what you're saying. All i say is that if $a$ and $b$ are divisors of a number then $a \cdot b$ is also a divisor of that number $\endgroup$ – Ivo Beckers Aug 10 '15 at 12:30 $\begingroup$ If we use 12 as that number (just as an example), and have a = 2, b = 4, then a and b are divisors of 12, but a*b is not $\endgroup$ – StephenTG Aug 10 '15 at 12:32 $\begingroup$ But a*b does not equal 12. In my example I say the following. Let's say that x=12. This means that the number is not divisable by 12. This means that number also not divisable by 6. 6 and 12 are not consecutive so this can't be the case. $\endgroup$ – Ivo Beckers Aug 10 '15 at 12:35 $\begingroup$ Aah. now I understand. You're right. thanks for making me see that. I guess my answer is incorrect. I'l just leave it here for others who might make the same mistake $\endgroup$ – Ivo Beckers Aug 10 '15 at 12:39 The incorrect students are any two students adjacent to each other who each hold a number which is the highest power of a prime of all students. Of course, it is trivial that one of those students must hold a power of two. Two odd numbers can't be adjacent to eachother. After that, it is clear that any number adjacent to the highest power of two held which is a power of a prime must be the highest power of that prime. We would hit a higher power of two before we would hit a higher power of any other prime. Knowing this, n can be arbitrarily high. Sure there are ranges n can't be, such as from 65 to 127 or so ( I haven't done the exact math here but anywhere where the highest power of 2 is not next to an included power of a prime) but we will always eventually find a power of a prime next to a power of 2. Ethan FineEthan Fine $\begingroup$ +1; this is correct but not complete since there are some restrictions on n. Welcome to Puzzling.SE btw :-) $\endgroup$ – Rand al'Thor Aug 10 '15 at 21:06 $\begingroup$ Ivo Beckers answer identifies Catalans conjecture as relevant. In particular it tells us what "the highest power of that prime" is. 
$\endgroup$ – Taemyr Aug 11 '15 at 9:29 I'm kind of late (almost 3 years late!), but I solved this without looking at others' answers. I found two abstract conditions necessary to qualify a candidate n value, and two specific conditions sufficient to disqualify one. Two integers in a row will always contain one odd and one even - that is, a number that has 2 as a factor. In order for both to not be factors of the number written on the board, and all the rest to be factors, both must be powers of prime numbers. Because one is divisible by 2, it must be a power of 2; otherwise, there will be a higher power of 2 in the range of interest. Likewise, the odd number must be a power of another prime, so that there are no equal or higher powers of its factors in the range 2 to n+1. Therefore, if n is legitimate for this scenario to play out, then one of the "dumb students" will be the student who named the highest power of 2 the other "dumb student" will be one who named a power of an odd prime, either 1 less or 1 greater than the greatest power of 2 both of these numbers will be greater than n/2 if n+1 is a power of 2, then n must be a power of an odd prime, and if n/2+1 is a power of 2, then n/2+2 must be a power of an odd prime Post169Post169 $\begingroup$ Impressive slavish adherence to "pencil, paper, and logic" notwithstanding, it really would be helpful if you transcribed your solution as text. :) $\endgroup$ – Rubio♦ Mar 31 '18 at 15:25 Nice puzzle. The maximum number of pupils in the class is We can argue that certain pupils must be correct. The argument goes this way: Say 2,3,6 are called. If 6 was wrong, 2 or 3 must also be wrong. But that's impossible since the two wrongs must be consecutive. Hence, 6 must be correct, and also 2 and 3. A variation of this argument applies to square numbers: If 2 and 4 are called, we know that 2 must be correct. (but not necessarily 4). Another observation is that once a divisor is proven correct via the argument above, it can never be false again, regardless how many pupils call something later. Now we can play a reverse Sieve of Eratosthenes until no two consecutive wrongs are possible: n=2: Both wrong :-) n=3: 2 must be correct n=4: nothing new. Two solutions exist. n=5: 2,3 must be correct, hence 6 also. n=7: 8 is called, so 4 must be correct. 5 also because both neighbors are already correct. n=8: Two solutions exist, either (7,8 wrong) (8,9 wrong) n=13: 14 is called, which makes 7 correct. 14 is also correct since 2 and 7 are already correct. Only the (8,9 wrong) solution exists. n=15: 16 is called, which makes 8 correct. Now we have a contradiction. Let's write the number and highlight those already proven correct: 2,3,4,5,6,7,8,9,10,11,12,13,14,15,16 From now on, whenever an even number n is added, is must be correct since n/2 is already marked as correct. (maybe this could have been formulated a bit shorter, but still, the arguments work) GullyGully $\begingroup$ Think of 16 pupils, with the number not being divisible by 16 and 17. $\endgroup$ – Alexander Aug 10 '15 at 14:23 $\begingroup$ Eww. You're right. I mistakenly marked 16 as correct. This would happen for any 2^n where one of its neighbors is prime. Hm, this makes it more complicated. Good catch. $\endgroup$ – Gully Aug 10 '15 at 15:00 $\begingroup$ Some of your argument is good, but the final answer is incorrect :-( $\endgroup$ – Rand al'Thor Aug 10 '15 at 15:33 (answers are highlighted with bold. (corresponds to encircling on paper). 
"/∶" means "not divisible by" (didn't find the proper symbol).) In mathematical terms, the puzzle looks like this: The number on the blackboard is divisible by everything in the list 2,3,4...n+1 except 2 consecutive numbers (let's call them m,m+1). What can we say about them? The number is divisible by 2 but not divisible by m => m*2 isn't present in the list1 The number is divisible by anything less than m,m+1 but not divisible by them => m,m+1 can't be factored by the (different) numbers in the list2 ____specifically: one of the m,m+1 is even ____=> m/2|(m+1)/2 isn't present in the list - unless the other multiplier is 2 as well => There are only a few possibilities for m,m+1: 2,3 => n+1<4 (2 students, both wrong. ha-ha) N=1 ROFLMAO _____________________________________all possibilities though are: N∈N, N/∶2,3 3,4 => n+1<6 (n<5) | => n=3 N=2 big number indeed _____________________________________all possibilities: N=2k,k/∶2,3 ________________| => n=4 (2,3,4,5) => N=10 _____________________________________all possibilities: N=10k,k/∶2,3 4,5 => n+1<8 => n can be at most 6 (list entry n+1 =7) The last case requires study in more detail: Possible values for n: - 4 at least <= to include "4,5" => n=4 (2,3,4,5) => N=6k,k/∶2,5 => n=5 (2,3,4,5,6) => same as above => n=6 (2,3,4,5,6,7) => multiplier=LCM(2,3,6,7)=42 => _________________=> N=42k,k/∶2,5 My, my. Big numbers these days... 1(If the number isn't divisible by 2 (m=2), this stands, too: 4 cannot be present) 2(without loss of generality, by two different numbers in the list: if e.g. (m+1)=k*k*l, k*l is also present in the list and is not m) $\begingroup$ Now, let me double-check this... $\endgroup$ – user15507 Aug 10 '15 at 17:06 $\begingroup$ Fixed the last case analysis, added a few notes on special cases. Looks good. $\endgroup$ – user15507 Aug 10 '15 at 18:33 Edited to resolve a prior issue with an incorrect solution: So, basically, for incorrect students p and p+1, 1 <= p < n, n may lie within the range p+1 < n+1 < (p+1)*2. Both p+1 and p+2 are either prime or have a single prime factor. This would mean that the pair p, p+1, for n up to 100, is as follows (p,p+1) 1,2 [divisors 2,3; n=2] 2,3 [divisors 3,4; n=3 or 4] 3,4 [divisors 4,5; 3 < n < 7] 6,7 [divisors 7,8; 6 < n < 13] 15,16 [divisors 16,17; 15 < n < 31] 30,31 [divisors 31,32; 30 < n <61] Wolf LarsonWolf Larson $\begingroup$ Sorry, this is wrong. Try e.g. n=32 with the incorrect students being 16 and 17. There's no upper bound on n. $\endgroup$ – Rand al'Thor Aug 10 '15 at 20:37 $\begingroup$ If student 17 is incorrect, then the number is not divisible by 18, yet it is divisible by both 2 and 9 (students 1 and 8 are correct). How is that possible? $\endgroup$ – Wolf Larson Aug 10 '15 at 20:49 $\begingroup$ I meant the students who talk about divisibility by 16 and 17 (so students 15 and 16, if you like). $\endgroup$ – Rand al'Thor Aug 10 '15 at 21:07 $\begingroup$ I still find that confusing. The number is divisible by 32 (n=32 would mean it is divisible by 2..33, except for 16 and 17), but not by 16. Do you mean n=30? $\endgroup$ – Wolf Larson Aug 10 '15 at 21:15 $\begingroup$ Yes, n=30 (or anything down to 16); sorry. $\endgroup$ – Rand al'Thor Aug 10 '15 at 21:20 I'm Back on a computer! So we know the two students who claimed wrong claimed wrong consecutively. We can say that these two students claimed that the numbers $a$ and $b$ were factors of the number written on the board by the teacher, which in reality it wasn't. 
The first thing to note is that one of these numbers, either $a$ or $b$, must be even, and one of these numbers must be odd. The second thing to note is that both $a$ and $b$ must be factored into only one type of prime, e.g. they must be a prime number raised to some power. To prove this, we can assume the contrary and say that $a$ can be factored to $ppqqq$ where $p$ and $q$ are primes. $pp$ will be smaller than $a$, but we know that all numbers smaller than $a$ were accepted by the teacher as being a factor of the number on the board. Similarly, $qqq$ will be accepted by the teacher as being a factor of the number on the board. If both $pp$ and $qqq$ are accepted as being a factor, then $a$ must also be a factor, breaking our assumption. Therefore both $a$ and $b$ must be some prime raised to a power. Because one of these is even and one of these is odd, the even claim must be a power of $2$, and the odd claim must be a power of an odd prime number. We know immediately some restrictions upon $n$. Firstly, it must be less than $2a$, and less than $2b$, because we know the number on the board is not divisible by either $a$ or $b$. But I believe this is the only restriction upon $n$.

– Joshua Lin

$\begingroup$ I'm not sure what to make of this ... you tell me to ignore most of your post! I'll wait a while in the hope that you'll access SE not on your phone and improve the formatting to make your answer clearer :-) $\endgroup$ – Rand al'Thor Aug 10 '15 at 15:38
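The arguments above lend themselves to a quick computational check. The sketch below assumes the setup as the answers describe it (the number on the board is divisible by everything in 2, 3, ..., n+1 except two consecutive entries) and lists, for each n, which consecutive pairs of divisors can be the two false claims; all names in the code are my own, not part of the original puzzle.

```python
from math import gcd
from functools import reduce

def lcm(numbers):
    return reduce(lambda a, b: a * b // gcd(a, b), numbers, 1)

def consistent_pairs(n):
    """Consecutive pairs (m, m+1) within 2..n+1 that can both be false while
    every other divisor in 2..n+1 really divides some number N."""
    divisors = range(2, n + 2)
    pairs = []
    for m in range(2, n + 1):                      # m and m+1 both lie in 2..n+1
        required = [d for d in divisors if d not in (m, m + 1)]
        L = lcm(required)                          # smallest candidate for N
        # A valid N exists iff the lcm of the required divisors is itself not
        # divisible by m or m+1 (otherwise every multiple of it would be).
        if L % m != 0 and L % (m + 1) != 0:
            pairs.append((m, m + 1))
    return pairs

for n in range(2, 41):
    p = consistent_pairs(n)
    if p:
        print(n, p)   # e.g. n = 30 admits the pair (16, 17), as noted in the comments
```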
Mathematical Biosciences and Engineering
Copyright © AIMS Press 2020, Volume 17, Issue 6: 7692-7707. doi: 10.3934/mbe.2020391
Research article, Special Issues
Asymptotic flocking for the three-zone model
Fei Cao 1, Sebastien Motsch 1, Alexander Reamy 1, Ryan Theisen 2
1 School of Mathematical and Statistical Sciences, Arizona State University, Tempe, AZ 85287-1804, USA
2 Department of Statistics, University of California, Berkeley, 367 Evans Hall, Berkeley, CA 94720-3860, USA
Received: 06 July 2020; Accepted: 15 October 2020; Published: 05 November 2020
We prove the asymptotic flocking behavior of a general model of swarming dynamics. The model describing interacting particles encompasses three types of behavior: repulsion, alignment and attraction. We refer to this dynamics as the three-zone model. Our result expands the analysis of the so-called Cucker-Smale model where only the alignment rule is taken into account. Whereas in the Cucker-Smale model, the alignment should be strong enough at long distance to ensure flocking behavior, here we only require that the attraction is described by a confinement potential. The key for the proof is to use that the dynamics is dissipative thanks to the alignment term, which plays the role of a friction term. Several numerical examples illustrate the result and we also extend the proof for the kinetic equation associated with the three-zone dynamics.
Keywords: agent-based models, collective behavior, flocking, kinetic equations, energy estimates
Citation: Fei Cao, Sebastien Motsch, Alexander Reamy, Ryan Theisen. Asymptotic flocking for the three-zone model[J]. Mathematical Biosciences and Engineering, 2020, 17(6): 7692-7707. doi: 10.3934/mbe.2020391
© 2020 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0).
Figure 1. Left: Illustration of the three-zone model. The model includes three types of behavior: attraction/alignment/repulsion. Right: attraction and repulsion are represented through the function $ V $, alignment is described via $ \phi $.
Figure 2. Attraction-repulsion $ V $ and alignment $ \phi $ used for the simulations. In both cases, $ V $ diverges at infinity (i.e., satisfies Eq (2.6)).
Figure 3. Simulation of the three-zone models (2.1) and (2.2) with potential $ V $ and alignment function $ \phi $ given by Eq (2.8). Agents regroup on a disc of size $ R\approx1.8 $ for any group size. Parameters: $ \Delta t = 0.05 $, total time $ t = 200 $ unit time.
Figure 4. Simulation of the three-zone models (2.1) and (2.2) with potential $ V $ and alignment function $ \phi $ given by Eq (2.9). Agents regroup on a circle of size $ R\approx 0.5 $. Parameters: $ \Delta t = 0.05 $, total time $ t = 200 $ unit time.
Figure 5. Evolution of the energy $ \mathcal{E} $ for the solutions depicted in Figures 3 and 4 (left and right figure, respectively). The energy is always decaying but also oscillates between fast and slow decays. These oscillations can be explained by the successive contraction-expansion of the spatial configuration. The decay of the energy is faster when agents are closer to each other.
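As a rough illustration of the dynamics described in the abstract and figure captions, here is a minimal particle simulation of a three-zone type model: Cucker-Smale-style alignment plus an attraction-repulsion potential that diverges at infinity. The kernel phi and potential V below are illustrative choices of my own, not the paper's Eq (2.8) or (2.9), and the integrator is a plain explicit Euler scheme with the step size 0.05 mentioned in the captions.

```python
import numpy as np

def phi(r):                       # alignment strength, decaying with distance (illustrative)
    return 1.0 / (1.0 + r**2)

def V_prime(r):                   # V(r) = r^2/2 + 1/r: repulsive near 0, confining at infinity
    return r - 1.0 / r**2

def step(x, v, dt):
    """One explicit Euler step of dx/dt = v, dv/dt = alignment + attraction-repulsion."""
    N = len(x)
    dx = x[:, None, :] - x[None, :, :]            # pairwise x_i - x_j, shape (N, N, 2)
    r = np.linalg.norm(dx, axis=2)
    np.fill_diagonal(r, 1.0)                      # dummy value; self-interaction zeroed below
    w_align, w_attr = phi(r), V_prime(r) / r
    np.fill_diagonal(w_align, 0.0)
    np.fill_diagonal(w_attr, 0.0)
    align = (w_align[:, :, None] * (v[None, :, :] - v[:, None, :])).sum(axis=1) / N
    force = -(w_attr[:, :, None] * dx).sum(axis=1) / N     # -grad_i V(|x_i - x_j|), averaged
    return x + dt * v, v + dt * (align + force)

rng = np.random.default_rng(0)
x = 3.0 * rng.normal(size=(50, 2))                # 50 agents in the plane (group size is arbitrary here)
v = rng.normal(size=(50, 2))
dt, T = 0.05, 200.0
for _ in range(int(T / dt)):
    x, v = step(x, v, dt)

# Flocking: velocities should cluster around the mean; confinement: positions stay bounded.
print("max velocity deviation:", np.abs(v - v.mean(axis=0)).max())
print("max distance from centre of mass:", np.linalg.norm(x - x.mean(axis=0), axis=1).max())
```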
Swiss Journal of Economics and Statistics A daily fever curve for the Swiss economy Marc Burri1 na1 & Daniel Kaufmann1,2 Swiss Journal of Economics and Statistics volume 156, Article number: 6 (2020) Cite this article Because macroeconomic data is published with a substantial delay, assessing the health of the economy during the rapidly evolving COVID-19 crisis is challenging. We develop a fever curve for the Swiss economy using publicly available daily financial market and news data. The indicator can be computed with a delay of 1 day. Moreover, it is highly correlated with macroeconomic data and survey indicators of Swiss economic activity. Therefore, it provides timely and reliable warning signals if the health of the economy takes a turn for the worse. Because macroeconomic data is published with a substantial delay, assessing the health of the economy during the rapidly evolving coronavirus disease of 2019 (COVID-19) crisis is challenging. Usually, policy makers and researchers rely on early information from surveys and financial markets to construct leading indicators and estimate forecasting models (see, e.g., Abberger et al., 2014; Galli, 2018; Kaufmann and Scheufele, 2017; OECD, 2010; Stuart, 2020; Wegmüller and Glocker, 2019, for Swiss applications). These indicators and forecasts are published with a delay of 1 to 2 months.Footnote 1 During the COVID-19 crisis, however, we need high-frequency information to assess how stricter or looser health restrictions and economic stimulus programs affect the economy. We propose a novel daily fever curve (f-curve) for the health of the Swiss economy based on publicly available financial market and news data. We construct risk premia on corporate bonds, term spreads, and stock market volatility indices starting in 2000. In addition, we collect short economic news from online newspaper archives. We then estimate a composite indicator which has the interpretation of a fever curve: As for monitoring the condition of a patient, an increase of the fever curve provides a reliable and timely warning signal if health takes a turn for the worse. Panel a of Fig. 1 shows the f-curve (on an inverted scale) jointly with real gross domestic product (GDP) growth: the indicator closely tracks economic crises. It presages the downturn during the Global Financial Crisis, responds to the removal of the minimum exchange rate and to the euro area debt crisis. The f-curve also responds strongly to the COVID-19 crisis (see panel b). The indicator starts to rise in late February. By then, it became evident that the COVID-19 crisis will hit most European countries; in Switzerland, the first large events were canceled. It reaches a peak shortly after the lockdown. Afterward, the fever curve gradually declines with news about economic stimulus packages and gradual loosening of the lockdown. The peak during the COVID-19 crisis is comparable with the Global Financial Crisis. But the speed of the downturn is considerably higher. In addition, so far, the crisis is less persistent. Up to June 4, 2020, the f-curve improved to 1/4 of its peak value during the lockdown. A fever curve for the Swiss economy. Panel a compares the fever curve (inverted and rescaled) to quarterly GDP growth. Panel b panel gives daily values of the fever curve along with important policy decisions The indicator has several advantages we hope will make it useful for policy makers and the public at large. 
The methodology of the f-curve is simple; the data selection process is based on economic theory and intuition; the data sources are publicly available, and we provide the program codes and daily updates on https://github.com/dankaufmann/f-curve/.Footnote 2 Moreover, additional daily indicators that track economic activity are easily integrated in the modeling framework. There are various initiatives in Switzerland and abroad to satisfy the demand for reliable high-frequency information during the COVID-19 crisis. Becerra et al. (2020) develop sentiment indicators using Internet search engine data. Brown and Fengler (2020) provide information on Swiss consumption behavior based on debit and credit card payment data. Eckert and Mikosch (2020) develops a daily mobility index using data on traffic, payments, and cash withdrawals. For the USA, economists at the Federal Reserve Bank of New York estimate a weekly index of economic activity based on retail sales, unemployment insurance claims, and other rapidly available data on production, prices, and employment (Lewis et al. 2020). Moreover, Buckman et al. (2020) create a daily news sentiment indicator that leads the US traditional consumer sentiment based on surveys. Our paper is the first, to the best of our knowledge, to combine daily information from newspapers and financial market data in a daily measure of economic activity for Switzerland. In what follows, we describe the data and methodology. Then, we provide an analysis of the in- and out-of-sample performance. The last section concludes. We use publicly available bond yields underlying the SIX Swiss Bond Indices Ⓡ (SIX 2020a). These data are available on a daily basis and with a delay of 1 day. Because many bond yields start only around 2007, we extend the series with a close match of government and corporate bond yields from the Swiss National Bank (see Table A.2 and Figure A.2 in the Online Appendix).Footnote 3 Then, we compute various spreads that should be correlated with economic activity: a government bond term spread (8Y - 2Y), the interest rate differential vis-à-vis the euro area (1Y), and risk premia of short- and long-term corporate debt. Besides interest rate spreads for Switzerland, we compute risk premia of foreign companies that issue debt in Swiss franc for short- and long-term debt. We also include term spreads for the USA and for the euro area. For the latter, we use short-term interest rates in euro (European Central Bank 2020) and long-term yields of German government debt (Deutsche Bundesbank 2020). In addition, we include two implied volatility measures of the Swiss and US stock market. Swiss data stem from SIX (2020b) and are published with a delay of one day. The US data stem from the Chicago Board Options Exchange (2020). These financial market data should be related to the Swiss business cycle. Stuart (2020) shows that the term spread exhibits a lead on the Swiss business cycle.Footnote 4Kaufmann (2020) argues that a narrowing of the interest rate differential appreciates the Swiss franc and thereby dampens economic activity. Risk premia are correlated with the default risk of companies, which should increase during economic crises. Finally, recent research documents an increase in uncertainty during economic downturns (Baker et al. 2016; Scotti 2016). There are various ways to measure uncertainty (see e.g., Dibiasi and Iselin 2016). Because we aim to exploit quickly and freely available financial market data, we prefer a measure of stock market volatility. 
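Concretely, the spreads described above are simple differences between daily yield series. The sketch below illustrates their construction; the column names are placeholders, not the actual SIX, SNB, ECB, Bundesbank or FRED series identifiers, and the maturities for the US and euro area term spreads are assumptions.

```python
import pandas as pd

# Placeholder column names for a table of daily yields, one column per series.
y = pd.read_csv("daily_yields.csv", index_col=0, parse_dates=True)

spreads = pd.DataFrame(index=y.index)
spreads["term_ch"]       = y["ch_gov_8y"] - y["ch_gov_2y"]            # CH government term spread (8Y - 2Y)
spreads["diff_ea_1y"]    = y["ch_gov_1y"] - y["ea_gov_1y"]            # interest rate differential vs euro area
spreads["risk_short_ch"] = y["ch_corp_short"] - y["ch_gov_short"]     # corporate risk premium, short-term
spreads["risk_long_ch"]  = y["ch_corp_long"] - y["ch_gov_long"]       # corporate risk premium, long-term
spreads["risk_foreign"]  = y["chf_foreign_corp"] - y["ch_gov_long"]   # foreign issuers of CHF-denominated debt
spreads["term_us"]       = y["us_gov_10y"] - y["us_gov_2y"]           # US term spread (maturities assumed)
spreads["term_ea"]       = y["de_gov_10y"] - y["ea_gov_1y"]           # euro area term spread (maturities assumed)
# The VSMI and VIX implied volatility indices enter the data set in levels.
```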
We complement the financial market data with sentiment indicators based on Swiss newspapers. We extract headlines and lead texts from the online archives of the Tages-Anzeiger, the Neue Zürcher Zeitung, and the Finanz und Wirtschaft.Footnote 5 We focus on the headline and lead text as these are publicly available and often contain the key messages of the articles. To reduce the number of potentially relevant articles, and to decompose the sentiment indicator into a domestic and foreign part, we only use articles satisfying specific search queries (see Table A.3 in the Online Appendix for a detailed description). To calculate a news sentiment, we use the lexical methodology (see, e.g., Ardia et al. 2019; Shapiro et al. 2017; Thorsrud, 2020). First, we filter out irrelevant information.Footnote 6 Second, we identify positive and negative words using the lexicon developed by Remus et al. (2010). Finally, we calculate for each article n and each day t a sentiment score: $$S_{t,n} = \frac{\#P_{t,n} - \#N_{t,n}}{\#T_{t,n}} \ ,$$ where #Pt,n,#Nt,n,#Tt,n represent, for each article and each time period, the number of positive, negative, and total words, respectively. Finally, we compute a simple average over all articles to obtain daily indicators for articles about the domestic and foreign economy. News sentiment indicators receive more and more attention for forecasting economic activity. Buckman et al. (2020) show that during the COVID-19 pandemic, news sentiment indicators provide reliable and early information on the economy, even compared to quickly available survey data. Moreover, Ardia et al. (2019) show that news sentiment helps forecast the US industrial production growth. The financial market data and news indicators are quite volatile, but also they are correlated with each other. To parsimoniously summarize the information content of the data and remove idiosyncratic noise, we estimate a factor model in static form:Footnote 7 $$X = F\Lambda + e$$ The model comprises N variables and T daily observations. Therefore, the data matrix X is (T×N), the common factors F are (T×r), the factor loadings Λ are (r×N), and the unexplained error term e is (T×N). The advantage of a factor model is that we can parsimoniously summarize the information content in the large data matrix X with a relatively small number of common factors r. Assuming that the idiosyncratic components are only weakly serially and cross-sectionally correlated, we can estimate the factors and loadings by principal components (Bai and Ng 2013; Stock and Watson 2002).Footnote 8 Our main indicator is the first principal component of the static factor model. We normalize the indicator that it increases during crises.Footnote 9 Because this factor has no clear economic interpretation, we decompose it into a contribution from domestic and foreign fluctuations. Suppose that there are only two factors driving the variables. One factor captures foreign fluctuations. The other factor captures domestic fluctuations. We allow for spillovers from abroad to the domestic economy, but not vice versa. Under these assumptions, the factor model reads: $$\left[\begin{array}{cc} X & X^{*} \end{array}\right] = \left[\begin{array}{cc} f & f^{*}\end{array}\right] \left[\begin{array}{cc} \lambda_{11} & 0 \\ \lambda_{21} & \lambda_{22} \end{array}\right] + e $$ where X,X∗ denote the data matrices comprising domestic and foreign variables, respectively. In addition f,f∗ represent the domestic and foreign factors and λ11,λ21,λ22 are the loading matrices. 
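Before turning to the estimation of the two-factor structure just described, here is a minimal sketch of the two building blocks introduced above: the lexicon-based sentiment score per article and the extraction of the composite indicator as the first principal component of the standardized daily data. The word lists are toy stand-ins for the SentiWS lexicon (Remus et al., 2010), stop-word removal is omitted, and variable names are illustrative.

```python
import re
import numpy as np
import pandas as pd

# Toy word lists standing in for the SentiWS lexicon used in the paper.
POSITIVE = {"wachstum", "gewinn", "erholung", "aufschwung"}
NEGATIVE = {"krise", "verlust", "rezession", "einbruch"}

def sentiment_score(text):
    """(#positive - #negative) / #total words for one headline plus lead text."""
    tokens = re.findall(r"[a-zäöüß]+", text.lower())   # crude tokenisation; stop words not removed here
    if not tokens:
        return 0.0
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return (pos - neg) / len(tokens)

def daily_sentiment(articles):
    """Simple average of article scores per day; `articles` has columns 'date' and 'text'."""
    scores = articles["text"].map(sentiment_score)
    return scores.groupby(articles["date"]).mean()

def first_pc(X):
    """First principal component of the standardized T x N data matrix (the composite indicator)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    U, s, _ = np.linalg.svd(np.asarray(Z), full_matrices=False)
    f = U[:, 0] * s[0]
    return f / f.std()          # the sign is then fixed so that the indicator rises in crises
```

In practice the paper additionally interpolates missing daily observations (EM algorithm) and removes weekends before the principal component step; that preprocessing is omitted from this sketch.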
To estimate this factor model, we can use an iterative procedure inspired by Boivin et al. (2009). First, we estimate the foreign factor only on foreign data. This imposes that foreign variables only load on the foreign factor. Second, we estimate the domestic factor on \(\tilde X\), where $$\tilde X = X - \lambda_{21}f^{*} \ ,$$ removes variation explained by the foreign factor. We can estimate λ21 for every indicator comprised in X in a regression on the domestic and foreign factor. Because this regression depends on the value of the domestic factor, we repeat this step 50 times (see Boivin et al., 2009; Kaufmann and Lein 2013, for more details). Finally, we can estimate a decomposition by regressing the f-curve on the domestic and foreign factors. This procedure does not guarantee that the decomposition adds up exactly to the overall factor. However, the unexplained rest turns out to be relatively small. The decomposition involves additional estimation steps that may reduce the forecast accuracy; therefore, we only use this decomposition for the in-sample interpretation, but not for out-of-sample forecasting. The f-curve should primarily be used to quickly detect turning points of the business cycle. As such, it is correlated or leading many key macroeconomic variables (see Figure A.4 in the Online Appendix). In its current form, we have not optimized the indicator to track any particular measure of economic activity. We therefore first focus on the in-sample information content of the f-curve, highlighting that it is available earlier than most other leading indicators. For the sake of illustration, however, we additionally provide an evaluation of its pseudo out-of-sample performance for forecasting real GDP growth. In-sample analysis To compare the in-sample information content of the f-curve to other leading indicators, we perform a cross-correlation test (see Neusser, 2016, Ch. 12.1).Footnote 10 Figure 2 shows a substantial correlation between the f-curve and many prominent leading indicators.Footnote 11 There is a coincident or leading relationship with the KOF Economic Barometer, SECO's Swiss Economic Confidence, the Organisation for Economic Co-operation and Development composite leading indicator (OECD CLI), and consumer confidence.Footnote 12 There is a coincident relationship with trendEcon's perceived economic situation. This daily indicator starts only in 2006, however. There is a significant lagging relationship with the SNB's Business Cycle Index. But this index is published with a relevant delay. Overall, these results suggest the f-curve provides sensible information comparable with other existing indicators. The key advantage of the f-curve is its prompt availability and that it is available on a longer time period. Cross-correlation with other indicators. Cross-correlation between the f-curve and other prominent leading and sentiment indicators. We aggregate all data either to quarterly frequency (consumer sentiment) or monthly frequency (remaining indicators). The dashed lines give 95% confidence intervals. A bar outside of the interval suggests a statistically significant correlation between the indicators at a lead/lag of s. Before computing the cross-correlation, the series have been pre-whitened with an AR(p) model (see Neusser 2016, Ch. 12.1). The lag order has been determined using the Bayesian information criterion. 
The only exception is the OECD CLI for which we used an AR(4) model Another advantage is that we can decompose its fluctuations into domestic and foreign factors. Panel a of Fig. 3 shows that the foreign contribution rises after the collapse of Lehman Brothers, but also, during the euro area debt crisis. By contrast, the domestic contribution rises after the removal of the minimum exchange rate in 2015, but also, during the COVID-19 crisis. Focusing on the COVID-19 crisis, panel b shows the indicator rose already in the last week of February, before the actual COVID-19 lockdown. It reaches a peak during the first week of the lockdown and gradually declines thereafter. About half of the increase in the indicator can be traced back to foreign developments. Although the domestic lockdown is important, the f-curve suggests the Swiss economy would have suffered even in the absence of these restrictions. During the last 4 weeks, the contribution from foreign variables declines. The domestic contribution, however, remains elevated. Therefore, while the negative foreign demand shock seems to become less important, the model suggests economic activity will remain subdued also due to domestic headwinds. Decomposition domestic and foreign variables. Decomposition of the f-curve into foreign factors, domestic factors, and an unexplained rest Pseudo out-of-sample evaluation How reliable is the f-curve? To answer this question, we perform a pseudo-real-time forecast evaluation. Therefore, we use the real-time data set for quarterly GDP vintages by Indergand and Leist (2014).Footnote 13 In the evaluation, we use the following direct forecasting model: $$y_{\tau+h} = \alpha_{h} + \beta_{h,1}f_{\tau|t} + \beta_{h,2}f_{\tau-1}+\nu_{\tau+h}$$ where yτ denotes quarterly GDP growth, h is the forecast horizon, τ gives time in quarterly frequency, and t denotes time in daily frequency. fτ|t is our best guess of the f-curve for the entire quarter based on daily information at time t. We compute fτ|t and fτ as the simple average of available daily observations for a given quarter. Finally, ντ+h is an error term. At the time of our last update, τ= 2020 Q2 and t= 4 June 2020. We then conduct a forecast based on the state of information when a new quarterly GDP vintage is published by SECO.Footnote 14 This yields 70 nowcasts (69 one-quarter-ahead forecasts). These forecasts are compared to three benchmarks. First, we compare the forecasts to the first quarterly release of GDP growth for the corresponding quarter. Because quarterly GDP is substantially revised ex-post, we treat the initial quarterly GDP release as a forecast of the true GDP figure. Second, we use an autoregressive model of order 1, AR(1), estimated on the corresponding real-time vintage for GDP growth. Third, using the same forecasting equation as for the f-curve, we forecast GDP growth using the KOF Economic Barometer, a prominent monthly composite leading indicator (Abberger et al. 2014). To compute the forecast errors, we use the last available release of quarterly GDP from June 3, 2020. Table 1 panel a shows the root-mean-squared error (RMSE) of the f-curve is higher than the one of the first official GDP release. However, the difference is not statistically significant. The advantage of the f-curve is, of course, that its value for the entire quarter is available about 2 months earlier than the first GDP release. In addition, we compare the f-curve to an AR(1) model. Panel b shows we outperform the AR(1) benchmark. 
The RMSE is 18% lower for the current quarter. Moreover, the difference in forecast accuracy is statistically significant. For the next quarter, however, the f-curve does not provide a more accurate forecast than the AR(1) model. Panel c shows that the f-curve yields similar results as the KOF Economic Barometer. The difference in the RMSE is never statistically significant. This suggest the advantage of our indicator primarily lies in its prompt availability. Table 1 Pseudo-real-time evaluation. Root-mean-squared errors (RMSE) for forecasts on days with a new quarterly GDP release. A lower RMSE implies higher predictive accuracy. h=0 (h=1) denotes the forecast for the current (next) quarter. We use three benchmarks. First, we use the first quarterly release of the corresponding quarter (panel a). Second, we use an AR(1) model (panel b). Third, we use the KOF Economic Barometer (panel c). The Diebold-Mariano-West (DMW) test provides a p value for the null hypothesis of equal predictive accuracy against the alternative written in the column header (Diebold and Mariano 2002; West 1996). We assume a quadratic loss function We perform a subsample analysis in Table 2. The current vintage of GDP, which we use to compute the forecast errors, will likely be revised in the future. One of the reasons is that future vintages will include annual GDP estimates by the SFSO, which are based on comprehensive firm surveys. Therefore, we restrict the sample to years where the GDP figures already include these annual figures (panel a). The f-curve performs better on this sample. In fact, the RMSE is almost identical to the RMSE of the first GDP release for the current quarter. A similar picture emerges when excluding economic crises (panel b). This implies that the f-curve does not only signal deep economic crises, but tracks the economy well also during normal times. Table 2 Subsample evaluation for real GDP growth: First release vs. f-curve. Root-mean-squared errors (RMSE) for forecasts on days with a new quarterly GDP release. A lower RMSE implies higher predictive accuracy. h=0 (h=1) denotes the forecast for the current (next) quarter. Panel (a) shows the evaluation for GDP figures that include the annual SFSO estimates (until 2018). Panel (b) excludes economic crises. As benchmark, we use the first quarterly release of the corresponding quarter. The Diebold-Mariano-West (DMW) test provides a p value for the null hypothesis of equal predictive accuracy against the alternative written in the column header (Diebold and Mariano 2002; West 1996). We assume a quadratic loss function Are the financial market or news data more important for the forecasting performance of the f-curve? Figure 4 shows two indicators only calculated with financial market and news data, respectively. Although the indicators are positively correlated, there are two key differences. First, the financial market data respond more strongly during crises. Second, the news data are more volatile.Footnote 15 This suggests the financial market data provide a more accurate signal of the business cycle than the news data. Table 3 confirms this view. The RMSE for an indicator based only on financial market variables amounts to 0.57, the same as for the overall f-curve. Meanwhile, the RMSE of a forecast based only on news data amounts to 0.64. The news data does not worsen the f-curve because the factor model including financial market data removes the idiosyncratic fluctuations; taken in isolation, however, the news indicator performs worse. 
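The evaluation just described amounts to re-estimating a direct forecasting regression at each GDP release date and comparing squared errors across models. The sketch below is schematic: it ignores the publication lags and data revisions that the real-time vintages capture, and the series names are placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def direct_forecast(gdp_q, f_daily, h, origin):
    """Direct forecast y_{tau+h} = a_h + b_1 f_{tau|t} + b_2 f_{tau-1} + error.
    gdp_q: quarterly GDP growth indexed by quarter-end dates; f_daily: the daily indicator;
    origin: quarter-end date of the forecast origin tau."""
    f_q = f_daily.resample("Q").mean()                         # f_{tau|t}: average of available daily values
    df = pd.DataFrame({"f0": f_q, "f1": f_q.shift(1), "target": gdp_q.shift(-h)})
    train = df.loc[:origin].iloc[:-1].dropna()                 # exclude the origin quarter, whose GDP is not yet released
    fit = sm.OLS(train["target"], sm.add_constant(train[["f0", "f1"]])).fit()
    x0 = np.array([1.0, df.loc[origin, "f0"], df.loc[origin, "f1"]])
    return float(fit.params @ x0)

def rmse(errors):
    return float(np.sqrt(np.mean(np.square(errors))))

# For each GDP release date, compute the forecast from the indicator model and from the
# benchmarks (first release, AR(1), KOF Barometer), evaluate the errors against the latest
# vintage, and compare RMSEs; the paper assesses the differences with the
# Diebold-Mariano-West test.
```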
Comparison news and financial market data. Two indicators estimated only on financial market and news data, respectively Table 3 Comparison news vs. financial data. Root-mean-squared errors (RMSE) for forecasts on days with a new quarterly GDP release. A lower RMSE implies higher predictive accuracy. h=0 (h=1) denotes the forecast for the current (next) quarter. Panel (a) shows the evaluation for an indicator based only on financial market data. Panel (b) shows the evaluation for an indicator based only on news data. As benchmark, we use the first quarterly GDP release for the corresponding quarter. The Diebold-Mariano-West (DMW) test provides a p value for the null hypothesis of equal predictive accuracy against the alternative written in the column header (Diebold and Mariano 2002; West 1996). We assume a quadratic loss function Although it is too early to judge the actual real-time performance of the indicator, Fig. 5 provides some preliminary results on the stability of the f-curve over time. One reason why the indicator is revised is that not all data series are available in real-time (ragged edge problem). Panel (a) shows results over the first month we updated the indicator on a daily basis. On average, more than 8 out of 12 series are available with a delay of 1 day. After 3 days, almost all indicators are available. Real-time results since initial version of the f-curve. Panel a: average number of observations available for calculation the f-curve (left figure). The different shades of gray represent estimates over time from May 11, 2020, to May 29, 2020 (right figure). Panel b: Estimates of the f-curve using the methodology in the Working Paper (v1.0) and the current version (v2.0) The main reason why the average lies below 12 is that the archive of Tages-Anzeiger has not been updated since 12 May 12, 2020.Footnote 16 Therefore, we augmented the indicator with information from this newspapers' online edition. Adding this source resulted in a slightly larger revision of the indicator compared to the Working Paper version (see panel b). However, the correlation between the old and new version is 0.99 and the broad picture during the COVID-19 crisis is identical. We develop a daily indicator of Swiss economic activity. A major strength of the indicator is that it can be updated with a delay of only 1 day. An evaluation of the indicator shows that it is not only correlated with other business cycle indicators but also accurately tracks Swiss GDP growth. Therefore, the f-curve provides an accurate and flexible framework to track Swiss economic activity at high frequency. Having said that, there is still room for improvement. We see six promising avenues for future research. First, the news sentiment indicators could exploit other publicly available news sources, in particular, newspapers from the French- and Italian-speaking parts of Switzerland. Second, we could use a topic modeling algorithm, instead of our own search queries, to classify news according to countries, sectors, and economic concepts (see e.g., Thorsrud, 2020). Third, the lexicon could be tailored specifically to economic news (see e.g., Shapiro et al., 2017). Fourth, we could examine the predictive ability of multiple factors and for other macroeconomic data. Fifth, the information could be used to disaggregate quarterly GDP and industrial production into monthly or even weekly series. 
Finally, it would be desirable to collect and exploit the information from many different daily indicators that are currently developed into one single composite indicator or indicator data set. Exploiting all this new information will likely further improve our understanding of health of the Swiss economy at high frequency. Data are available on https://github.com/dankaufmann/f-curve/. See Table A.1 in the Online Appendix for publication lags of some important macroeconomic data and leading indicators. We plan to continuously extend the indicator. We therefore welcome suggestions for improvements and extensions. Data from the Swiss National Bank are published with a longer delay. Therefore, these bond yields cannot be used to track the economy on a daily basis. We therefore move forward all term spreads by half a year. During the first month of daily updates, we noticed that the Tages-Anzeiger updates its archive with a relevant delay or not at all. Therefore, in the revised version of the indicator, we additionally include articles from the Tages-Anzeiger website. We remove HyperText Markup Language (HTML) tags, punctuation, numbers, and so-called stop words (e.g., the German words der, wie, ob). The stop words are provided by Feinerer and Hornik (2019). Also, we transform all letters to lowercase. The news indicators are much more volatile than the financial market data (see Figure A.1 in the Online Appendix). We therefore compute a one-sided 2-day moving average before including them in the factor model. To account for missing values, we compute the indicator only if at least five underlying data series are observed. Moreover, we remove all weekends. Then, we interpolate few additional missing values using an EM-algorithm (Stock and Watson 2002), after normalizing the data to have zero mean and unit variance. For interpolation, we choose a relatively large number of factors for interpolating the data (r=4). Finally, we estimate the f–curve as the first principal component of the interpolated data set. An interesting extension would be to examine whether more than one factor comprises relevant information for Swiss economic activity. We leave this extension for future research. It is noteworthy that other indicators are estimated or smoothed such that they undergo substantial revisions over time; moreover, some of the indicators are published with significant delays (see Table A.1 in the Online Appendix); finally, some are based on lagged data (see, e.g., OECD 2010). Figure A.3 in the Online Appendix provides plots of these indicators. All data sources are given in the Online Appendix. The evaluation is not strictly a real-time forecast evaluation because we use three types of in-sample information. First, the f-curve is constructed based on knowledge of the business cycle in the past, in particular, the Global Financial Crisis. Second, the link of the underlying indicators with new data is based on inspecting whether different data sources are highly correlated. Third, the normalization of the indicators in the factor model may introduce revisions that we do not account for in the forecast evaluation. Arguably, using this in-sample information in the evaluation makes sense if the goal of the evaluation is to show whether the indicator is useful going forward rather than whether the indicator would have been useful in the past. These dates stem from Indergand and Leist (2014). This is also because we smooth the news indicator with a moving-average of only 2 days. 
Comparable studies smooth over a longer time period. For example, Thorsrud (2020) uses a moving average of 60 days. On the one hand, this reduces the volatility of the news sentiment. On the other hand, this obviously renders the indicator less useful for detecting rapid daily changes. On rare occasions, the websites of other sources were not available. AR(p): Autoregressive model of order p CH: Composite leading indicator Coronavirus disease of 2019 Europe / f-curve: Fever curve FuW: GDP: HyperText Markup Language ILO: International Labor Organization KOF: Konjunkturforschungsstelle NZZ: OECD: RMSE: Root-mean-squared error SECO: State Secretariat for Economic Affairs SFSO: Swiss Federal Statistical Office SNB: TA: Abberger, K., Graff, M., Siliverstovs, B., Sturm, J.-E. (2014). The KOF Economic Barometer, Version 2014. A composite leading indicator for the Swiss business cycle. KOF Working Papers 353, Swiss Economic Institute, KOF, ETH Zurich. https://doi.org/10.3929/ethz-a-010102658. Ardia, D., Bluteau, K., Boudt, K. (2019). Questioning the news about economic growth: sparse forecasting using thousands of news-based sentiment values. International Journal of Forecasting, 35(4), 1370–1386. https://doi.org/10.1016/j.ijforecast.2018.10.010. Bai, J., & Ng, S. (2013). Principal components estimation and identification of static factors. Journal of Econometrics, 176(1), 18–29. https://doi.org/10.1016/j.jeconom.2013.03.007. Baker, S.R., Bloom, N., Davis, S.J. (2016). Measuring economic policy uncertainty. The Quarterly Journal of Economics, 131(4), 1593–1636. https://doi.org/10.1093/qje/qjw024. Becerra, A., Eichenauer, V.Z., Indergand, R., Legge, S., Martinez, I., Mühlebach, N., Oguz, F., Sax, C., Schuepbach, K., Thöni, S. (2020). trendEcon. https://www.trendecon.org. Accessed 30 Apr 2020. Boivin, J., Giannoni, M.P., Mihov, I. (2009). Sticky prices and monetary policy: evidence from disaggregated us data. American Economic Review, 99(1), 350–84. https://doi.org/10.1257/aer.99.1.350. Brown, M., & Fengler, M. (2020). Monitoring Consumption Switzerland. https://public.tableau.com/profile/monitoringconsumptionswitzerland. Accessed 03 May 2020. Buckman, S.R., Shapiro, A.H., Sudhof, M., Wilson, D.J. (2020). News sentiment in the time of COVID-19. FRBSF Economic Letter, 2020(08), 1–05. Accessed 13 May 2020. Chicago Board Options Exchange (2020). CBOE Volatility Index: VIX [VIXCLS]. https://fred.stlouisfed.org/series/VIXCLS. Accessed 30 Apr 2020. Deutsche Bundesbank (2020). Zeitreihe BBK01.WT1010: Rendite der jeweils jüngsten Bundesanleihe mit einer vereinbarten Laufzeit von 10 Jahren. https://www.bundesbank.de/dynamic/action/de/statistiken/zeitreihen-datenbanken/zeitreihen-datenbank/723452/723452?tsId=BBK01.WT1010. Accessed 30 Apr 2020. Dibiasi, A., & Iselin, D. (2016). Measuring uncertainty. KOF Bulletin 101, KOF Swiss Economic Institute, ETH Zurich. https://ethz.ch/content/dam/ethz/special-interest/dual/kof-dam/documents/KOF_Bulletin/kof_bulletin_2016_11_en.pdf . Accessed 30 Apr 2020. Diebold, F.X., & Mariano, R.S. (2002). Comparing predictive accuracy. Journal of Business & economic statistics, 20(1), 134–144. https://doi.org/10.1198/073500102753410444. Eckert, F., & Mikosch, H. (2020). A mobility indicator for Switzerland. KOF Bulletin 140, KOF Swiss Economic Institute. kof.ethz.ch/en/news-and-events/news/kof-bulletin/kof-bulletin/2020/05/ein-mobilitaetsindikator-fuer-die-schweiz.html . Accessed 14 May 2020. European Central Bank (2020). 
Yield curve spot rate, 1-year maturity - government bond, nominal, all issuers whose rating is triple. https://sdw.ecb.europa.eu/browseExplanation.do?node=qview&SERIES_KEY=165.YC.B.U2.EUR.4F.G_N_A.SV_C_YM.SR_1Y. Accessed 30 Apr 2020. Feinerer, I., & Hornik, K. (2019). Tm: Text Mining Package. https://CRAN.R-project.org/package=tm, R package version 0.7-7. Accessed 13 May 2020. Galli, A. (2018). Which indicators matter? Analyzing the Swiss business cycle using a large-scale mixed-frequency dynamic factor model. Journal of Business Cycle Research, 14(2), 179–218. https://doi.org/10.1007/s41549-018-0030-4. Indergand, R., & Leist, S. (2014). A real-time data set for Switzerland. Swiss Journal of Economics and Statistics, 150(IV), 331–352. https://doi.org/10.1007/BF03399410. Kaufmann, D. (2020). Wie weiter mit der Tiefzinspolitik? Szenarien und Alternativen. IRENE Policy Reports 20-01, IRENE Institute of Economic Research. https://ideas.repec.org/p/irn/polrep/20-01.html. Accessed 13 May 2020. Kaufmann, D., & Lein, S.M. (2013). Sticky prices or rational inattention – what can we learn from sectoral price data?. European Economic Review, 64, 384–394. https://doi.org/10.1016/j.euroecorev.2013.10.001. Kaufmann, D., & Scheufele, R. (2017). Business tendency surveys and macroeconomic fluctuations. International Journal of Forecasting, 33(4), 878–893. https://doi.org/10.1016/j.ijforecast.2017. Lewis, D.J., Mertens, K., Stock, J.H. (2020). Monitoring real activity in real time: the weekly economic index. Liberty Street Economics 30/03/2020, Federal Reserve Bank of New York. https://libertystreeteconomics.newyorkfed.org/2020/03/monitoring-real-activity-in-real-time-the-weekly-economic-index.html. Accessed 13 May 2020. Neusser, K. (2016). Time Series Econometrics, (pp. 207–214). Cham: Springer. https://doi.org/10.1007/978-3-319-32862-1_11. OECD (2010). Review of the CLI for 8 countries. OECD Composite Indicators. https://www.oecd.org/fr/sdd/indicateurs-avances/44556466.pdf. Accessed 13 May 2020. Remus, R., Quasthoff, U., Heyer, G. (2010). SentiWS - a publicly available German-language resource for sentiment analysis. In Proceedings of the 7th International Language Resources and Evaluation (LREC'10). European Language Resources Association (ELRA), (pp. 1168–71). Scotti, C. (2016). Surprise and uncertainty indexes: real-time aggregation of real-activity macro-surprises. Journal of Monetary Economics, 82(C), 1–19. https://doi.org/10.1016/j.jmoneco.2016.06.002. Shapiro, A.H., Sudhof, M., Wilson, D.J. (2017). Measuring news sentiment. Working Paper Series 2017-1, Federal Reserve Bank of San Francisco. https://doi.org/10.24148/wp2017-01. Accessed 13 May 2020. SIX (2020a). SBI®–Swiss Bond Indices. https://www.six-group.com/exchanges/indices/data_centre/bonds/sbi_en.html. Accessed 30 Apr 2020. SIX (2020b). VSMI®–Volatility Index on the SMI®. https://www.six-group.com/exchanges/indices/data_centre/strategy_indices/vsmi_en.html. Accessed 30 Apr 2020. Stock, J.H., & Watson, M.W. (2002). Macroeconomic forecasting using diffusion indexes. Journal of Business & Economic Statistics, 20(2), 147–162. https://doi.org/10.1198/073500102317351921. Stuart, R. (2020). The term structure, leading indicators, and recessions: evidence from Switzerland, 1974–2017. Swiss Journal of Economics and Statistics, 156(1), 1–17. https://doi.org/10.1186/s41937-019-0044-4. Thorsrud, L.A. (2020). Words are the new numbers: a newsy coincident index of the business cycle. Journal of Business & Economic Statistics, 38(2), 393–409. 
https://doi.org/10.1080/07350015.2018.1506344. Wegmüller, P., & Glocker, C. (2019). 30 Indikatoren auf einen Schlag. Die Volkswirtschaft, 11, 19–22. West, K. (1996). Asymptotic inference about predictive ability. Econometrica, 64(5), 1067–84. https://doi.org/10.2307/2171956. We thank an anonymous referee, Ronald Indergand, Alexander Rathke, and Jan-Egbert Sturm for helpful discussions. Marc Burri and Daniel Kaufmann contributed equally to this work. Institute of Economic Research, University of Neuchâtel, Rue A.-L. Breguet 2, Neuchâtel, 2000, Switzerland Marc Burri & Daniel Kaufmann KOF Swiss Economic Institute, ETH Zurich, Zurich, Switzerland Marc Burri Correspondence to Marc Burri. Additional file 1 The Online Appendix to this paper is available on https://www.dankaufmann.com/publications/. Replication files. Codes for replication of the main indicator are available on https://github.com/dankaufmann/f-curve/. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Burri, M., Kaufmann, D. A daily fever curve for the Swiss economy. Swiss J Economics Statistics 156, 6 (2020). https://doi.org/10.1186/s41937-020-00051-z SJES Special Focus on Covid-19
International Journal for Equity in Health A cross sectional study of unmet need for health services amongst urban refugees and asylum seekers in Thailand in comparison with Thai population, 2019 Rapeepong Suphanchaimat ORCID: orcid.org/0000-0002-3664-90501,2, Pigunkaew Sinam2, Mathudara Phaiyarom2, Nareerut Pudpong2, Sataporn Julchoo2, Watinee Kunpeuk2 & Panithee Thammawijaya3 International Journal for Equity in Health volume 19, Article number: 205 (2020) Cite this article Although the Thai government has introduced policies to promote the health of migrants, it is still the case that urban refugees and asylum seekers (URAS) seem to be neglected. This study aimed to explore the degree of healthcare access through the perspective of unmet need in URAS, relative to the Thai population. A cross-sectional survey, using a self-reporting questionnaire adapted from the Thai Health and Welfare Survey (HWS), was performed in late 2019, with 181 URAS completing the survey. The respondents were were randomly selected from the roster of the Bangkok Refugee Center. The data of the URAS survey were combined with data of the Thai population (n = 2941) from the HWS. Unmet need for health services was defined as the status of needing healthcare in the past 12 months but failing to receive it. Bivariate analysis was conducted to explore the demographic and unmet need difference between URAS and Thais. Multivariable logistic regression and mixed-effects (ME) model were performed to determine factors associated with unmet need. Overall, URAS were young, less educated and living in more economically deprived households, compared with Thais. About 98% of URAS were uninsured by any of the existing health insurance schemes. The prevalence of unmet need among URAS was significantly higher than among Thais in both outpatient (OP) and inpatient (IP) services (54.1% versus 2.1 and 28.0% versus 2.1%, respectively). Being uninsured showed the strongest association with unmet need, especially for OP care. The association between insurance status and unmet need was more pronounced in the ME model, relative to multivariable logistic regression. URAS migrating from Arab nations suffered from unmet need to a greater extent, compared with those originating from non-Arab nations. The prevalence of unmet need in URAS was drastically high, relative to the prevalence in Thais. Factors correlated with unmet need included advanced age, lower educational achievement, and, most evidently, being uninsured. Policy makers should consider a policy option to enrol URAS in the nationwide public insurance scheme to create health security for Thai society. At present, cross-border mobility is a soaring global trend for many reasons, including people searching for better economic prospects, and escaping from war and political conflicts. In 2017, international cross-border populations amounted to 258 million (3.4% of global population) [1]. Of these 258 million, 68 million were forcibly displaced people. Of that 68 million, 25 million were refugees and three million were asylum seekers [2]. The situation of refugees has gained increasing attention in the global health field in recent years, particularly since the 2011 Syrian crisis which resulted in more than 6 million refugees fleeing from Syria to Europe [3]. Asia is another region that has encountered a refugee crisis. An obvious case is the exodus of more than 700,000 Rohingya refugees from Rakhine State in Myanmar to Bangladesh, during 2015–2017 [4]. 
The United Nations (UN) and the World Health Organization (WHO), as well as many other international development partners, have called for more concrete actions to protect refugees' rights to health and well-being. Some tangible outputs of these actions include the launch of the World Health Assembly (WHA) Resolution 70.15, entitled 'Promoting the health of refugees and migrants' [5], the New York Declaration for Refugees and Migrants [6, 7] and, recently, the Global Compact on Refugees in 2018 [8]. Thailand is one of the most popular destinations for international migrants and refugees in Southeast Asia. The majority of migrants are workers from Cambodia, Lao PDR, Myanmar and Vietnam (CLMV collectively). Some of them have entered the country unlawfully and are known as undocumented migrants. It is estimated that today, there are more than three million migrant workers living in Thailand [9]. The Thai government has implemented policies to protect the well-being of undocumented migrants for several years. One remarkable policy is the One Stop Service (OSS) registration measure for undocumented CLM migrants and their dependants [10]. Migrants who register with the OSS have their profile recorded in the civil registry and acquire a work permit, alongside undertaking nationality verification (NV). The Ministry of Public Health (MOPH) also instigated a nationwide public insurance policy, called the 'Health Insurance Card Scheme' (HICS), for these registered migrants and their dependants. The HICS benefit is comprehensive, covering inpatient (IP) care, outpatient (OP) care, high-cost care, disease prevention and health promotion [11]. According to the National Security Act, all Thai nationals are covered by one of the three main public insurance arrangements: (i) Civil Servant Medical Benefit Scheme (CSMBS) for civil servants; (ii) Social Security Scheme (SSS) for employees in the formal sector; and (iii) the Universal Coverage Scheme (UCS) for those who are not covered by the CSMBS and the SSS. With the function of the HICS (for registered CLM migrants) and the insurance schemes for Thais (USMBS, SSS and UCS), Thailand (in principle) has achieved Universal Health Coverage (UHC) for almost everybody within its territory [12, 13]. While undocumented migrants seem to be in the spotlight of health policies in Thailand, refugees and asylum seekers are often neglected [14]. None of the aforementioned policies include refugees and asylum seekers. The situation is more complicated among refugees and asylum seekers in urban areas compared with those in temporary shelters. This is because implementing health measures in a well-defined geographical space is relatively straightforward, and local healthcare providers are well aware of the existence of refugees in the camps. Besides, the United Nations High Commissioner for Refugees (UNHCR) and a number of international non-governmental organizations (NGOs), such as Médecins Sans Frontières and the International Rescue Committee, in coordination with public facilities along the border, have provided humanitarian assistance in the refugee camps for years [15, 16]. Unlike refugees in temporary shelters, urban refugees and asylum seekers (URAS) received little attention within the public health sphere in Thailand. Almost all URAS are residing in Bangkok, under the patronage of the United Nations High Commissioner for Refugees (UNHCR). So far, there are about 5000 URAS and 97,000 refugees in temporary shelters [17, 18]. 
URAS are neither covered by the HICS, nor by the public insurance schemes originally designed for Thais. Nonetheless, some private facilities or insurance companies have initiated a health insurance package for URAS, which are conditional upon affordability. Some media or local NGOs suggest that URAS in Thailand face many hindrances in accessing health services, for instance, poverty, language difficulty, and precarious citizenship status [19, 20]. Moreover, some government officials are unaware of the existence of URAS [19]. Finally, there is no systematic evaluation of the degree of healthcare access for URAS in Thailand. Therefore, the objective of this study is to explore the degree of healthcare access among URAS, in comparison with the Thai population. In this regard, we use 'unmet need' for health services as an indicator to gauge the ability to access health care. The concept of unmet need originates from the reproductive health field, but during the past two decades, its application has become widespread to other fields, including population health and critical care [21,22,23]. Study design, populations and samples Both primary and secondary data collection was applied. We performed a cross-sectional survey on URAS from October to December 2019, and examined prior survey data on the Thai population through the 2019 Health Welfare Survey (HWS). HWS is a nationwide biennial survey jointly conducted by the National Statistical Office (NSO) and the International Health Policy Programme (IHPP) of the MOPH. We first contacted the Bangkok Refugee Centre (BRC), a charitable agency in collaboration with UNHCR, whose work is to support the well-being of URAS. For this study, we focused on URAS of the top-ten most common nationalities in Thailand: namely, Pakistani, Vietnamese, Cambodian, Somali, Afghan, Palestinian, Chinese, Sri Lankan, Iraqi, and Syrian, comprising 3021 URAS in total. We then sampled 206 URAS from the pool of 3021 URAS in the BRC roster (more details in 'Sample size calculation, sampling methods and survey design'). Among these 206 samples, 181 completed the survey questionnaire. Once the primary survey on URAS was completed, we combined the data of these 181 URAS with Thai data from HWS, focusing on those living in Bangkok (n = 2941). The final dataset comprised 3122 observations in total, Fig. 1. Population frames, samples and data sources Sample size calculation, sampling methods, and survey design We used the prevalence of unmet need for healthcare as the main indicator for sample size estimation. The following formula, \( n=\frac{{\left({Z}_{1-\frac{\alpha }{2}}\sqrt{2 PQ}+{Z}_{1-\beta}\sqrt{P_1{Q}_1+{P}_2{Q}_2}\right)}^2}{{\left({P}_1-{P}_2\right)}^2} \) was used; where α = 0.05; β = 0.2 \( {Z}_{1-\frac{\alpha }{2}} \) = 1.96; Z1 − β = 0.84; P1 = 0.11, Q1 = 1- P1; P2 = 0.012, Q2 = 1- P2; P = (P1 + P2)/2 and Q = 1-P. P1 refers to the unmet need prevalence in URAS whereas P2 refers to similar prevalence in the Thai population. The most recent data on unmet need in Thai citizens suggested a prevalence of 1.2%, according to Thammatacharee et al. [24]. Thus P2 was replaced by 0.012. As there has been no study on unmet need among URAS in Thailand, we searched for the indicator in studies outside Thailand. We found a piece of work by Busetta et al., which examined the prevalence of unmet need of refugees in Italy while applying the same unmet need questions as the Thai HWS [25]. Busetta et al. reported that the degree of unmet need in refugees was about 11%. 
Hence we substituted 0.11 for P1. It should be noted that both HWS and the Italian survey followed the original questions proposed by the European Union Statistics on Income and Living Conditions (EU-SILC). Taking into account a 20% non-response rate and incomplete information, at least 140 samples were needed in each sample group (URAS and Thais). The existing records of Thai respondents in HWS already outnumbered the required number of samples; therefore no further sampling was required. For URAS, we used stratified random sampling with probability proportional to size (PPS), according to age group, sex and nationality. Fortunately, in the fieldwork, the BRC officers informed us that they were capable of recruiting 206 participants. We therefore expanded the sample size to the suggested number. However, during the survey process, 23 URAS refused to take part. Of the remaining 183, two did not complete the unmet need questions. As a result, only 181 URAS were enrolled in the study. Figure 2 displays the overview of total population in each nationality from the BRC list and the actual samples acquired. Details of the sample volume tallied by age groups, nationalities, and sex can be found in Supplementary file 1. Number of samples participating in the survey sorted by nationalities All selected participants were asked to travel to BRC to complete the paper questionnaire. The investigators provided financial support to cover the travelling cost of the participants (about US$ 9). For those who had difficulty travelling, a phone interview was performed instead. For a child below 15 years of age, parents or legal guardians would respond on his or her behalf. The questionnaire was translated into the respondents' own language. For those who had difficulty reading, a verbal interview was performed in place of a written questionnaire. On average, each respondent took approximately 30 min to complete the questionnaire. A focal coordinator was prepared for each nationality group. These coordinators were volunteers working with BRC. Preparatory meeting between the research team and focal coordinators was arranged prior to the survey in order to fine-tune understanding and to assess the survey feasibility. Operational definitions We set operational definitions as follows. Firstly, 'refugee' is a person who has been forced to flee his or her country because of persecution, war or violence, and his or her request for sanctuary is ratified by the UNHCR according to the 1951 Refugee Convention [26]. Secondly, asylum seeker means someone who has been forced to flee his or her country because of persecution, war or violence and his or her request for sanctuary has yet to be processed by the UNHCR according to the 1951 Refugee Convention [26]. Lastly, unmet need refers to a status where a person reported that he or she needed health examination or treatment for any type of health issue within the past 12 months, but he or she did not receive or did not seek it. This definition is adapted from the original unmet need survey by EU-SILC [27]. Questionnaire and determinants of interest The questionnaire for the URAS survey was adapted from the HWS questionnaire. Two rounds of consultative meetings between the research team, health system academics and BRC staff were arranged to ensure content validity and to make sure that the participants clearly understood the questions. The questionnaire contained two domains: (i) an individual's demography and (ii) unmet need for health services. 
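As a quick arithmetic check of the sample-size formula quoted above, the short sketch below plugs in the stated parameters. It is only a minimal sketch: it reproduces the core of the formula (roughly 93 per group), while the figure of at least 140 per group quoted in the text additionally allows for non-response and incomplete information.

# Core of the two-proportion sample-size formula quoted in the text
# (alpha = 0.05, power = 0.8; P1 = 0.11 from Busetta et al., P2 = 0.012 from
# Thammatacharee et al.). The paper's final figure (>= 140 per group) adds a
# further allowance for non-response and incomplete records.
from math import sqrt, ceil

z_alpha_2, z_beta = 1.96, 0.84
p1, p2 = 0.11, 0.012
q1, q2 = 1 - p1, 1 - p2
p = (p1 + p2) / 2
q = 1 - p

n = (z_alpha_2 * sqrt(2 * p * q) + z_beta * sqrt(p1 * q1 + p2 * q2)) ** 2 / (p1 - p2) ** 2
print(ceil(n))  # ~93 per group before the adjustments described in the text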
Questions about an individual's demography (1st domain) consisted of sex, age, insurance status (insured with either public or private insurance versus uninsured); education background (primary level, secondary level, and degree or above), and household monthly income. For convenience, we classified age into age groups (≤ 15 years, > 15 but ≤ 60 years, and > 60 years) and created a new binary variable, called 'household economy', using a cut-off at 45,707 Baht (US$ 1428) - the average monthly income of a household in Bangkok according to the NSO [28]. Questions about unmet need for health services (2nd domain) asked a respondent to self-assess if, during the last 12 months, he or she had felt unwell and needed healthcare but did not receive it. These questions were sub-divided into OP care and IP care. Then, any respondent who experienced unmet need, was asked to recount the most important reason for not acquiring healthcare. Some examples of the reasons included 'cannot afford treatment cost', 'long waiting times', 'no time to seek treatment', 'too far to travel', and 'do not trust health staff'. All statistical analyses were performed by Stata v14.0 (StataCorp LP, College Station, Texas, US—serial number: 401406358220). We divided the analysis into two parts: (i) descriptive statistics and (ii) inferential analysis. In the first part, all categorical variables were expressed as frequency and percentage. Age and household income were presented by median and interquartile range (IQR). In the second part, we commenced with bivariate analysis, using Chi-square or Fisher's exact test (for categorical variables) and Mann-Whitney U test (for continuous variables), to identify: (a) the demographic difference between URAS and Thais; and (b) the relationship between unmet need and each demographic variable. Further, we performed multivariable logistic regression by regressing odds of unmet need in natural logarithm scale on the selected independent variables all at once. The independent variables enrolled in this step were those exhibiting P-value of less than 0.2 in the former bivariate analysis. For a dummy variable with three or more scales (such as age group and education achievement), if there was at least a sub-scale variable showing P-value of less than 0.2 in the bivariate analysis, the variables at all scales would be included in the multivariable logistic regression. We also conducted mixed-effects (ME) logistic regression, having done multivariable logistic regression at a prior stage. This time, the ME model took the nationalities of the participants into account. We categorised nationalities into three main clusters: Thai, non-Arab Asian, and Arab Asian. The results were presented in terms of crude and adjusted odds ratios (OR) with 95% confidence interval (CI). Inverse probability weighting was applied when assessing statistical significance in order to take the survey design into account. Subgroup analysis was exercised by limiting the analysis on URAS. We then broke down the degree of unmet need by nationalities and types of URAS (urban refugee versus asylum seeker). The analysis was performed in the same fashion as the full-sample analysis. Demographic profiles In total, we enrolled 3122 records in the analysis. Of these 3122 observations, 181 (5.8%) were URAS. Amongst 181 URAS, 160 (88.4%) were refugees and 21 (11.6%) were asylum seekers. Pakistanis constituted the largest single group of all URAS (39.8%), followed by Vietnamese (28.2%) and Cambodians (6.1%). 
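As a rough illustration of the regression step in the analysis plan above: the published analyses were run in Stata, so the Python sketch below is only an illustrative equivalent, and the file name and column names (unmet_op, uninsured, age_group, edu, female, low_income) are hypothetical placeholders rather than the survey's actual variable names.

# Illustrative sketch (not the authors' code) of a multivariable logistic
# regression of unmet need on the selected covariates, with odds ratios and
# 95% confidence intervals obtained by exponentiating the coefficients.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("combined_uras_hws.csv")  # hypothetical combined URAS + HWS dataset

model = smf.logit(
    "unmet_op ~ uninsured + C(age_group) + C(edu) + female + low_income",
    data=df,
).fit()

odds_ratios = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_low": np.exp(model.conf_int()[0]),
    "CI_high": np.exp(model.conf_int()[1]),
})
print(odds_ratios)
# The mixed-effects variant described in the text additionally includes a
# random intercept for the nationality cluster (Thai, non-Arab Asian, Arab
# Asian) and applies inverse probability weights; neither is reproduced here.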
The male to female ratio appeared to be similar in both Thais and URAS. About a third of Thai respondents had received primary education (34.6%), compared with 63.5% in URAS. The median age of Thais was 42 years and almost one fifth of them fell in the elderly category. In contrast, the median age of URAS was roughly 23 years, with a much smaller proportion of elderly people (6.1%). The monthly household income of Thais was, on average, five times as large as that of URAS. Almost all URAS (98.7%) had a monthly household income less than the average income of most people in Bangkok. The insurance status of Thais was also in stark contrast to that of URAS. While over 99% of Thai respondents were covered by either public or private insurance, approximately 98% of URAS were completely uninsured. Only four URAS were insured, and answered in the questionnaire form that they held voluntary insurance from a private hospital in Bangkok. All of these demographic variables, except sex, yielded a statistically significant difference. Note that the number of missing data in each variable was negligible (less than 1% of the observations), except household income, which was missing in over half of the samples, Table 1. Table 1 Demographic characteristics of the participants Unmet need profiles We estimated prevalence of unmet need by dividing the number of respondents who reported that they had faced unmet need in the past 12 months by the total number of respondents. The unmet need prevalence for Thais was about 2.1% in both OP and IP health services. The unmet need prevalence for URAS in IP care was approximately 28.0%, while the corresponding prevalence in OP care was 54.1%. The difference in unmet need between URAS and Thais demonstrated strong statistical significance (P-value < 0.001 in both types of care), Fig. 3. Prevalence of unmet need in Thais versus urban refugees and asylum seekers. Note: URAS = urban refugee and asylum seeker Determinants of unmet need The results in bivariate analysis and multivariable logistic regression were relatively similar. In OP care, being uninsured demonstrated a strong and significant association with unmet need (adjusted OR = 4.0 [95% CI = 1.5–10.6]). The odds of experiencing unmet need became lower among those with higher education backgrounds. For instance, participants completing secondary education faced only half the odds of unmet need relative to those who completed only primary education (adjusted OR = 0.5 [95% CI = 0.2–0.9]). The likelihood of unmet need in the middle age group was about 2–3 times higher than for children and juveniles (crude OR = 2.7 [95% CI = 1.0–7.1]; adjusted OR = 1.9 [95% CI = 1.0–3.6]). Sex and household economy did not show a significant association with unmet need. In IP care, the findings appeared to follow the same direction as OP care. The only difference was that the relationship between insurance status and unmet need turned out to be non-significant despite maintaining a positive association (adjusted OR = 1.9 [95% CI = 0.7–5.1]). The findings from the ME model also demonstrated a similar pattern to results from the multivariable logistic regression, though with a marginal difference in the degree of effect size.
For OP care, the adjusted OR amongst the middle age group slightly declined from 2.7 (95% CI = 1.0–7.1) in multivariable logistic regression to 1.9 (95% CI = 1.0–3.6) in the ME model, while still keeping statistical significance (P-value = 0.049 in multivariable logistic regression and 0.041 in the ME model). The models showed almost similar results. The most noticeable difference was the adjusted OR of the insurance variable in the ME model, which expanded about three to four times, relative to the ratio in multivariable logistic regression (adjusted OR = 14.5 [95% CI = 2.6–84.1] for OP care; and adjusted OR = 10.4 [95% CI = 1.9–55.6] for IP care). Statistically significant relationship between insurance status and unmet need was observed for both types of care (P-value = 0.003 for OP care, and 0.006 for IP care), Tables 2, 3. Table 2 Factors associated with unmet need for outpatient care Table 3 Factors associated with unmet need for inpatient care Among the 98 URAS who reported unmet need for OP care, 94 (95.9%) ascribed the inaccessibility of health services to unaffordable treatment cost. The remaining four URAS raised other reasons, such as language barriers, and a fear of being arrested by the police. Of the 61 Thais who reported unmet need for IP care, 38 (62.3%) pointed towards long waiting times as the most important cause for inaccessibility. The second most important reason was dissatisfaction with the facility's performance (11.5%). The most important reason raised for inaccessible IP care was very close to that for OP care: 'lack of money' in 93.9% of URAS and 'long waiting times' in 62.3% of Thais. Subgroup analysis found that, for OP care, the proportion of urban refugees facing unmet need (55.0%) was slightly larger than the corresponding proportion amongst asylum seekers (47.6%). For IP care, about one third of the participants experienced unmet need (27.1% for urban refugees and 35.0% for asylum seekers). No statistical significance difference was observed in either type of care when comparing urban refugees with asylum seekers (P-value = 0.523 for OP care and 0.459 for IP care), Fig. 4. Prevalence of unmet need in urban refugees versus asylum seekers Afghans, Iraqis, and Palestinians were the populations with the greatest degree of unmet need (85.7–100.0% in OP care and 71.4–83.3% in IP care). In contrast, URAS from Cambodia and Vietnam showed the smallest unmet need estimate (31.4–33.3% in OP care and 9.1–13.7% in IP care), in relation to other nationals, Fig. 5. Prevalence of unmet need by nationalities To our knowledge, this piece of work is among the first few studies in Asia that quantitatively investigate the degree of healthcare access among URAS through the perspective of unmet need. From a macro-perspective, demographic data showed that most URAS were relatively younger, were less educated, and were living in more economically deprived households. The evidence from this study suggests that about one fifth to one quarter of URAS faced unmet need for health services while the prevalence of unmet need in the Thai population was very small. This is not surprising; but a more interesting point is whether the degree of unmet need in URAS was larger than for other types of refugees or non-Thai populations. Unfortunately, we could not find peer-review studies on unmet need amongst any kinds of refugees in Thailand, published in the last decade. 
The only evidence we could identify is a study by Thein and Theptien, which reported that the prevalence for access to family planning amongst Myanmar migrant women in Bangkok was 15.8% [29]. This figure was still far lower than the prevalence found in our study (28.0% for IP care and 54.1% for OP care). Hence it is not an exaggeration to state that URAS are one of the most vulnerable groups in Thailand, even among non-Thais populations, let alone when compared with Thai citizens. Determinants that potentially contributed to unmet need included increasing age, less education, and, most prominently, the lack of health insurance. This finding is in line with those from some other studies. Wang et al. suggested that more education was negatively associated with unmet need for supportive care among Chinese women [30]. Hailemariam and Haddis also flagged that low levels of education resulted in increasing degrees of unmet need for family planning in the Ethiopian population [31]. Bhattathiry and Ethirajan reported that unmet need for family planning decreased as age advanced [32]. This finding contradicts our discovery, which found that people with advanced age were more likely to have unmet need than those in lower age groups. Some of the explanations for this phenomenon is, first, the difference in the care of interest between our survey (focusing on IP and OP care in general) and Bhattathiry and Ethirajan's survey (focusing only on family planning); and second, the in-house intervention of BRC. Based on our discussion with BRC staff, we found that BRC had created its own supportive measures for URAS by allowing children up to 5 years of age to enjoy free healthcare at public facilities. Parents of these children could be reimbursed for the full healthcare cost from BRC if their children visited a health facility. This might be a reason why our findings suggest a negative association between age and unmet need. Furthermore, BRC also offered partial financial support for URAS who were admitted to a public hospital. The authority pledged to subsidise the cost of IP care for URAS up to 20,000 Baht (US$ 625) per visit. This initiative might explain why being uninsured showed significant association with unmet need for OP care, but not for IP care, in multivariable logistic regression. It is worth noting that these in-house policies have not been systematically managed as an insurance scheme and still function as charitable activities, depending on financial resources of the organisation and ad hoc negotiation with the healthcare providers. Another interesting point from our findings was that insurance status appeared to be the most influential determinant of unmet need. The multivariable logistic regression indicated that the risk of facing unmet need for OP health services in the uninsured was about four-times as large as the risk in the insured. The degree of association became much stronger (approximately 15 times for OP care and 10 times for IP care) when applying the ME model. As, so far, there is no public insurance policy for URAS, it is not surprising that the prevalence of unmet need in URAS was staggering. This finding also corresponds with the fact that the majority of URAS pointed towards financial difficulties to afford the treatment cost as the most important concern. In other words, URAS are at huge risk of impoverishment at any time when they seek treatment, and it means that Thailand has not yet achieved UHC for everybody in its territories, as intended [33]. 
Since the concept of UHC covers not only the provision of essential quality health services, but also the prevention of impoverishment from healthcare spending, the issue of URAS accessing health care has considerable policy implications. Thailand is committed to the Sustainable Development Goals (SDG), including SDG target 3.8, which focuses on UHC [34]; therefore policies to enrol URAS in a public health insurance scheme should be seriously considered. In addition, leaving URAS uninsured potentially results in low access to essential healthcare, and this may undermine the health security of Thai society as a whole. Experiences from other countries that offer health insurance for URAS, such as Iran and Malaysia, are of great value and warrant further exploration [35, 36]. As Thailand is not a party to the 1951 Refugee Convention [37], the Thai government is neither obliged to guarantee any health measures for urban refugees, nor for asylum seekers whose application for refugee status is still in process. The subgroup analysis reflected this fact, showing no significant difference in the unmet need for healthcare in urban refugees, relative to asylum seekers. Despite not being a primary objective of the study, the varying degree of unmet need among diverse national groups was thought-provoking. This was evidenced by the fact that the adjusted OR in the ME model, which had already considered the clustering effect of nationalities on unmet need, greatly expanded, compared with the ratio in the multivariable logistic regression, which assumed no correlation between observations. The descriptive subgroup analysis also showed that Cambodian and Vietnamese URAS suffered least from unmet need, compared with other nationals. A possible explanation is that URAS from Southeast Asia nations may have lifestyle and beliefs close to Thais (including the Buddhist belief); and that Thai society is already acquainted with migrants travelling from neighbouring countries (especially from CLMV nations). In contrast, URAS from Arab nations (for instance, Iraqis, Palestinians and Syrians) presented a relatively large degree of unmet need. As Arab people are the minority in Bangkok, they possibly need a huge adaptation to incorporate the Arab way of life to South East Asian culture. This picture alludes to the concept of acculturation proposed by a great deal of prior research [38,39,40]. That is, refugees who can assimilate or integrate themselves into a new culture tend to have better health outcomes, compared with the poorly adjusted ones [38,39,40]. However, a thorough qualitative study or ethnographic research is needed to prove this presumption. The methodology of this study bears some strengths and limitations. Regarding strengths, the study employed a systematic approach for data sampling, and we recruited participants from a household level, even though there were no physical visits to the participants' households. Another strength of the study is the use of Thai respondent data as a comparator. We would not have a clear view on the extent of unmet need for health services in URAS had the comparator (HWS data) been missing. However, there remain some limitations. Firstly, as the nationalities of URAS are vastly diverse, we could not guarantee a perfect translation of the questionnaire. This problem would rarely occur in the HWS questionnaire as Thai is the only formal language for Thai citizens. 
Nonetheless, we tried to minimize language barriers by arranging a training workshop for the survey volunteers to achieve mutual understanding between the volunteers and the research team. These volunteers mostly worked with BRC and some of them were also URAS. Secondly, the unmet need question inquired about a history of healthcare access in the past 12 months, and therefore a recall bias was inevitable. This problem might not severely undermine the validity of the analysis as the bias could be present in both the URAS survey and the HWS. However, the bias might be more pronounced in the URAS survey compared with the HWS, because of the difference in survey practice. In our URAS survey, when we recruited people with difficulties travelling, we asked a surrogate respondent to answer the questionnaire on their behalf. In contrast, the HWS surveyors always visited the participants at their households, resulting in a lower reliance on surrogate respondents in comparison with the survey on URAS. Thirdly, as mentioned earlier, we did not perform a physical visit to the participants' households. This is because many URAS had precarious immigration status. Some of them were over-stayers. A physical visit meant that they needed to disclose their residential address to the surveyors. This issue was thoroughly discussed with the ethics committee and the BRC staff before the start of the fieldwork. With this limitation, some key household information that necessitates direct observation, such as household infrastructure and owner's equity, was missing. Such information serves as the main ingredient for estimating household prosperity through the indicator called 'asset index' [41]. The lack of this indicator, in combination with a fair amount of missing data on household economy, might explain why the economic wealth of URAS did not exhibit a statistically significant relationship with unmet need, although the direction of effect implied that the less affluent participants tended to face greater odds of unmet need, compared with the well-off group. The original HWS questionnaire contains questions about household properties, and the surveyors were able to use the answers from these questions to estimate asset index. However, we dropped such questions in the questionnaire for URAS after we decided not to perform a physical visit to URAS households. Fourthly, though the URAS survey and HWS followed the same set of questions, the timeline for conducting both surveys and human resources used were different. Therefore a direct comparison between URAS and Thais should take into account this limitation. Fifthly, as per the intrinsic nature of cross-sectional design, it is difficult to identify causal relationship between unmet need and the selected independent variables. A cohort-based survey on URAS is recommended; but this requires the establishment of a system to regularly monitor health status of URAS over the long term. The system cannot be set up without collaboration amongst all concerned parties, especially the Thai government, NGOs, and the UNHCR. This raises a key issue mentioned earlier; whether the Thai government views URAS as a population it needs to take care of. Sixthly, this study is not free from data bias, as some determinants between URAS and Thais were in stark contrast. The insurance variable was a clear example in this case. We found that less than 1% of the Thai participants were uninsured. In contrast, only 2.2% of URAS held some kind of health insurance. 
The lack of adequate case numbers for some combinations of exposure and outcome levels may cause an upward bias away from the null for the effect estimates [42]. This problem might also occur in other variables aside from insurance status, such as age groups and household prosperity. A more refined analysis method, such as penalised regression, should be considered in further studies if the problem of sparse data appears again. Lastly, the people of interest in this study were those presenting on the BRC roster only, not all URAS in Bangkok. We did not include URAS in non-household settings, such as shelters or detention centres. This limits the generalisability of our study. To expand the academic richness in this field, further studies on other types of refugees are strongly recommended. Overall, URAS had lower educational attainment and faced more severe financial hardship than Thais. The prevalence of unmet need in URAS was extremely high, relative to the corresponding prevalence in Thais. Factors that suggested a positive relationship with unmet need included advanced age, lower educational achievement, and, most evidently, being uninsured. All relevant parties, such as policy makers, academics and high-level bureaucrats in the public health area, should consider measures to include URAS in some kind of nationwide public insurance. The benefit of this is not only to alleviate unmet need for health services in URAS, but also to strengthen health security for Thai society as a whole. Additional studies on the health status and access to healthcare of other types of refugees are also recommended. The raw data used by this study jointly belonged to BRC and IHPP. The analysed data are, however, available from the authors upon reasonable request. BRC: Bangkok Refugee Centre CLM: Cambodia, Lao PDR and Myanmar CSMBS: Civil Servant Medical Benefit Scheme EU-SILC: European Union Statistics on Income and Living Conditions HICS: Health Insurance Card Scheme IHPP: International Health Policy Programme ME: Mixed-Effects MOPH: Ministry of Public Health NGO: Non-government Organisation NV: Nationality Verification OSS: One Stop Service PPS: Probability Proportional to Size SSS: Social Security Scheme UHC: Universal Health Coverage UNHCR: United Nations High Commissioner for Refugees UN: United Nations URAS: Urban Refugee and Asylum Seeker WHA: World Health Assembly
United Nations. International Migration Report 2017 Highlights (ST/ESA/SER.A/404) [Internet]. New York: United Nations; 2017 [cited 20 December 2018]; Available from: http://www.un.org/en/development/desa/population/migration/publications/migrationreport/docs/MigrationReport2017_Highlights.pdf. UNHCR. Figures at a glance: statistical yearbooks. Geneva: UNHCR; 2018. [Cited 13 June 2019]; Available from: https://www.unhcr.org/figures-at-a-glance.html. World Vision. Syrian refugee crisis: facts, FAQs, and how to help [internet]. Washington: World Vision; 2019. [Cited 13 June 2019]; Available from: https://www.worldvision.org/refugees-news-stories/syrian-refugee-crisis-facts. European Commission. The Rohingya crisis: ECHO factsheet [internet]. Brussels: European Commission; 2018. [Cited 17 June 2019]; Available from: http://ec.europa.eu/echo/files/aid/countries/factsheets/rohingya_en.pdf. World Health Organization. Promoting the health of refugees and migrants. Seventieth World Health Assembly resolution WHA70.15, 31 May 2017. Geneva: World Health Organization; 2017. [Cited 20 December 2018]; Available from: www.who.int/migrants/about/A70_R15-en.pdf. UNHCR. New York declaration for refugees and migrants. New York: UNHCR; 2019.
[Cited Available from: https://www.unhcr.org/new-york-declaration-for-refugees-and-migrants.html. United Nations. Global compact for migration [internet]. New York: United Nations; 2018. [Cited 20 December 2018]; Available from: https://refugeesmigrants.un.org/migration-compact. United Nations. Global compact on refugees [internet]. New York: United Nations; 2018. [Cited 10 May 2020]; Available from: https://www.unhcr.org/5c658aed4.pdf. International Organization for Migration. Thailand migration report 2019. Bangkok: IOM; 2019. [Cited 27 January 2019]; Available from: https://thailand.iom.int/thailand-migration-report-2019-0. Suphanchaimat R, Putthasri W, Prakongsai P, Tangcharoensathien V. Evolution and complexity of government policies to protect the health of undocumented/illegal migrants in Thailand - the unsolved challenges. Risk Manag Healthc Policy. 2017;10:49–62. Health Insurance Group. Health card for uninsured foreigners and health card for mother and child. In: Seminar on measures and protocols of medical examination, insuring migrants and protecting maternal and child health. Bangkok: Office of the Permanent Secretary, Ministry of Public Health; 2013. Towse A, Mills A, Tangcharoensathien V. Learning from Thailand's health reforms. BMJ. 2004;328(7431):103–5. Kantamaturapoj K, Kulthanmanusorn A, Witthayapipopsakul W, Viriyathorn S, Patcharanarumol W, Kanchanachitra C, et al. Legislating for public accountability in universal health coverage, Thailand. Bull World Health Organ. 2020;98(2):117–25. Posttoday Online. Refugees from 40 countries flee to the urban area: a hope for survival. Bangkok: Posttoday Online. [Cited 17 June 2019]; Available from: https://www.posttoday.com/politic/report/499494. Alexakis LC, Athanasiou M, Konstantinou A. Refugee camp health services utilisation by non-camp residents as an indicator of unaddressed health needs of surrounding populations: a perspective from Mae La refugee camp in Thailand during 2006 and 2007. Pan Afr Med J. 2019;32:188. Plewes K, Lee T, Kajeechewa L, Thwin MM, Lee SJ, Carrara VI, et al. Low seroprevalence of HIV and syphilis in pregnant women in refugee camps on the Thai-Burma border. Int J STD AIDS. 2008;19(12):833–7. UNHCR. Thailand factsheet. Bangkok: UNHCR; 2016. [Cited 17 June 2019]; Available from: https://www.unhcr.org/50001e019.pdf. Office of Foreign Workers Administration. Statistics of remaining cross-border migrants holding work permit in Thailand as of October 2018: Department of Employment, Ministry of Labour; 2018. [Cited 18 December 2018]; Available from: https://www.doe.go.th/prd/assets/upload/files/alien_th/98802fed607243cb1c1afe248b3d29eb.pdf. Kangkun P. Life in limbo for Thailand's urban refugees. Bangkok: Nation Thailand; 2018. [Cited 10 May 2020]; Available from: https://www.nationthailand.com/opinion/30355070. Quinley C. Life in the shadows: Thailand's urban refugees. Geneva: The New Humanitarian; 2019. [Cited 10 May 2020]; Available from: https://www.thenewhumanitarian.org/news/2019/09/11/Thailand-refugee-policies-asylum-seekers-immigration-detention. Bradley SEK, Casterline JB. Understanding unmet need: history, theory, and measurement. Stud Fam Plan. 2014;45(2):123–50. Dixon-mueller R, Germain A. Unmet need from a woman's health perspective. Plan Parent Chall. 1994;1:9–12. Newell CP, Wallis S, Botting N, Sajdler C, Foo A, Bourdeaux C. Unmet need for critical care on the wards - how many critically ill patients are really out there? Intensive Care Med Exp. 2015;3(Suppl 1):A470. 
Thammatacharee N, Tisayaticom K, Suphanchaimat R, Limwattananon S, Putthasri W, Netsaengtip R, et al. Prevalence and profiles of unmet healthcare need in Thailand. BMC Public Health. 2012;12(1):923. Busetta A, Cetorelli V, Wilson B. A universal health care system? Unmet need for medical care among regular and irregular immigrants in Italy. J Immigr Minor Health. 2018;20(2):416–21. UNHCR. What is a refugee? Washington, DC: UNHCR; 2018. [Cited 18 June 2019]; Available from: https://www.unrefugees.org/refugee-facts/what-is-a-refugee/. Hernández-Quevedo C, Masseria C, Mossialos E. Methodological issues in the analysis of the socioeconomic determinants of health using EU-SILC data. In. Luxembourg: Publications Office of the European Union; 2010. National Statistical Office. Revenue and household expenditure. Bangkok: NSO; 2020. [Cited 18 May 2020]; Available from: http://statbbi.nso.go.th/staticreport/page/sector/th/08.aspx. Thein SS. Thepthien B-o: unmet need for family planning among Myanmar migrant women in Bangkok, Thailand. Br J Midwifery. 2020;28(3):182–93. Wang S, Li Y, Li C, Qiao Y, He S. Distribution and determinants of unmet need for supportive care among women with breast cancer in China. Med Sci Monit. 2018;24:1680–7. Hailemariam A, Haddis F. Factors affecting unmet need for family planning in southern nations, nationalities and peoples region, Ethiopia. Ethiop J Health Sci. 2011;21(2):77–89. Bhattathiry MM, Ethirajan N. Unmet need for family planning among married women of reproductive age group in urban Tamil Nadu. J Fam Community Med. 2014;21(1):53–7. National Health Security Office. NHSO vision/Mission [internet]. Bangkok: NHSO; 2020. [Cited 22 May 2020]; Available from: http://eng.nhso.go.th/view/1/Vision_Mission/EN-US. Witthayapipopsakul W, Kulthanmanusorn A, Vongmongkol V, Viriyathorn S, Wanwong Y, Tangcharoensathien V. Achieving the targets for universal health coverage: how is Thailand monitoring progress? WHO South-East Asia J Public Health. 2019;8:10–7. Matlin SA, Depoux A, Schütte S, Flahault A, Saso L. Migrants' and refugees' health: towards an agenda of solutions. Public Health Rev. 2018;39:27. Chuah FLH, Tan ST, Yeo J, Legido-Quigley H. Health system responses to the health needs of refugees and asylum-seekers in Malaysia: a qualitative study. Int J Environ Res Public Health. 2019;16(9):1584. UNHCR. States parties to the 1951 convention relating to the status of refugees and the 1967 protocol. Geneva: UNHCR; 1967. [Cited 23 May 2020]; Available from: https://www.unhcr.org/protection/basic/3b73b0d63/states-parties-1951-convention-its-1967-protocol.html. Schwartz SJ, Unger JB, Zamboanga BL, Szapocznik J. Rethinking the concept of acculturation: implications for theory and research. Am Psychol. 2010;65(4):237–51. Lincoln AK, Lazarevic V, White MT, Ellis BH. The impact of acculturation style and acculturative hassles on the mental health of Somali adolescent refugees. J Immigr Minor Health. 2016;18(4):771–8. Young M. Acculturation, identity and well-being: the adjustment of Somalian refugees. Sante Ment Que. 1996;21(1):271–90. Sartipi M, Nedjat S, Mansournia MA, Baigi V, Fotouhi A. Assets as a socioeconomic status index: categorical principal components analysis vs latent class analysis. Arch Iran Med. 2016;19(11):791–6. Greenland S, Mansournia MA, Altman DG. Sparse data bias: a problem hiding in plain sight. BMJ. 2016;352:i1981. We are immensely grateful for the support from BRC, IHPP and UNHCR during the survey process. Advice from Ms. Bongkot Napaumporn and Dr. 
Herve Isambert is hugely appreciated. This study received funding support from the Health Systems Research Institute, Thailand. Division of Epidemiology, Department of Disease Control, Ministry of Public Health, Nonthaburi, Thailand Rapeepong Suphanchaimat International Health Policy Program (IHPP), Ministry of Public Health, Nonthaburi, Thailand Rapeepong Suphanchaimat, Pigunkaew Sinam, Mathudara Phaiyarom, Nareerut Pudpong, Sataporn Julchoo & Watinee Kunpeuk Division of Innovation and Research, Department of Disease Control, Ministry of Public Health, Nonthaburi, Thailand Panithee Thammawijaya Pigunkaew Sinam Mathudara Phaiyarom Nareerut Pudpong Sataporn Julchoo Watinee Kunpeuk Conceptualization, RS, NP and PT; Methodology, RS and PT; Validation, RS and WK; Formal analysis, RS, WK and MP; Investigation, RS, WK, and MP; Resources, RS, PS and SJ; Data collection, RS, PS, MP, NP, SJ, and WK; Data management, PS, MP, SJ and WK; Project administration, PS and SJ; Writing—Original draft, RS; Writing—review and editing, RS, PS, MP, NP, SJ, WK and PT. All authors have read and approved the final manuscript. Correspondence to Rapeepong Suphanchaimat. This study obtained ethics approval from the Institute for the Development of Human Research Protections (IHRP)—letter head: IHRP 592/2562. Written consent was obtained from the participants. For those uncomfortable with providing written consent, verbal consent was used instead. All respondents were assured that their participation was voluntary and they had the right to withdraw from the survey at any time. All individual information was kept strictly confidential and would not be reported to the wider public. Table S1. Number of required samples and actual samples participating in the survey. Suphanchaimat, R., Sinam, P., Phaiyarom, M. et al. A cross sectional study of unmet need for health services amongst urban refugees and asylum seekers in Thailand in comparison with Thai population, 2019. Int J Equity Health 19, 205 (2020). https://doi.org/10.1186/s12939-020-01316-y DOI: https://doi.org/10.1186/s12939-020-01316-y Urban refugee Asylum seeker Unmet need
Morphing M/M/m: A New View of an Old Queue The following abstract has been accepted for presentation at the 21st Conference of the International Federation of Operational Research Societies — IFORS 2017, Quebec City, Canada. Update July 31, 2017: Here are my IFORS slides Update June 08, 2018: In response to an audience question at my IFORS 2017 session, I have now demonstrated that there is an upper bound for the error in the morphing approximation. See footnotes below. This year is the centenary of A. K. Erlang's paper [1] on the determination of waiting times in an M/D/m queue with $m$ telephone lines.* Today, M/M/m queues are used to model such systems as call centers [3], multicore computers [4,5] and the Internet [6,7]. Unfortunately, those who should be using M/M/m models often do not have sufficient background in applied probability theory. Our remedy defines a morphing approximation† to the exact M/M/m queue [3] that is accurate to within 10% for typical applications‡. The morphing formula for the residence-time, $R(m,\rho)$, is both simpler and more intuitive than the exact solution involving the Erlang-C function. We have also developed an animation of this morphing process. An outstanding challenge, however, has been to elucidate the nature of the corrections that transform the approximate morphing solutions into the exact Erlang solutions. In this presentation, we show: The morphing solutions correspond to the $m$-roots of unity in the complex $z$-plane. The exact solutions can be expressed as a rational function, $R(m,z)$. The poles of $R(m,z)$ lie inside the unit disk, $|z| < 1$, and converge around the Szegő curve [8] as $m$ is increased. The correction factor for the morphing model is defined by the deflated polynomial belonging to $R(m,z)$. The pattern of poles in the $z$-plane provides a convenient visualization of how the morphing solutions differ from the exact solutions. * Originally, Erlang assumed the call holding time, or mean service time $S$, was deterministic with unit period, $S=1$ [1,2]. The generalization to exponentially distributed service periods came later. Ironically, the exponential case is easier to solve than the apparently simpler deterministic case. That's why the M/D/1 queue is never the first example discussed in queueing theory textbooks. † The derivation of the morphing model is presented in Section 2.6.6 of the 2005 edition of [4], although the word "morphing" is not used there. The reason is, I didn't know how to produce the exact result from it, and emphasizing it would likely have drawn unwarranted attention from Springer-Verlag editors. By the time I was writing the 2011 edition of [4], I was certain the approximate formula did reflect the morphing concept in its own right, even though I still didn't know how to connect it to the exact result. Hence, the verb "morphs" timidly makes its first and only appearance in the boxed text following equation 4.61. ‡ The relative error peaks at 10% for $m \sim 20$ and $\rho \sim 90\%$, then peaks again at 20% for $m \sim 2000$ and the servers running 99.9% busy. However, the rate of increase in peak error attenuates such that the maximum error is less than 25%, even as $m \rightarrow \infty$ and $\rho \rightarrow 100\%$. A plot of the corresponding curves gives a clearer picture. This behavior is not at all obvious. Prior to this result, it could have been that the relative error climbed to 100% with increasing $m$ and $\rho$. A. K.
Erlang, "Solution of Some Problems in the Theory of Probabilities of Significance in Automatic Telephone Exchanges," Electroteknikeren, v. 13, p. 5, 1917. A. K. Erlang, "The Theory of Probabilities and Telephone Conversations," Nyt Tidsskrift for Matematik B, vol 20, 1909. E. Chromy, T. Misuth, M. Kavacky, "Erlang C Formula and Its Use In The Call Centers," Advances in Electrical and Electronic Engineering, Vol. 9, No. 1, March 2011. N. J. Gunther, Analyzing Computer System Performance with Perl::PDQ, Springer-Verlag, 2005 and 2011. N. J. Gunther, S. Subramanyam, and S. Parvu, "A Methodology for Optimizing Multithreaded System Performance on Multicore Platforms," in Programming Multicore and Many-core Computing Systems, eds. S. Pllana and F. Xhafa, Wiley Series on Parallel and Distributed Computing, February 2017. N. J. Gunther, "Numerical Investigations of Physical Power-law Models of Internet Traffic Using the Renormalization Group," IFORS 2005, Honolulu, Hawaii, July 11—15. T. Bonald, J. W. Roberts, "Internet and the Erlang formula," ACM SIGCOMM Computer Communication Review, Volume 42, Number 1, January 2012. C. Diaz Mendoza and R. Orive, "The Szegő curve and Laguerre polynomials with large negative parameters," Journal of Mathematical Analysis and Applications, Volume 379, Issue 1, Pages 305—315, 1 July 2011. Labels: Erlang, Mathematica, performance models, queueing theory
Monoenergetic proton beam accelerated by single reflection mechanism only during hole-boring stage
Published online by Cambridge University Press: 22 August 2019
Wenpeng Wang, Cheng Jiang, Shasha Li, Hao Dong, Baifei Shen, Yuxin Leng, Ruxin Li and Zhizhan Xu
State Key Laboratory of High Field Laser Physics, Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China; Department of Physics, Shanghai Normal University, Shanghai 200234, China
Correspondence to: W. Wang, Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China. Email: [email protected]
Multidimensional instabilities always develop with time during the process of radiation pressure acceleration, and are detrimental to the generation of monoenergetic proton beams. In this paper, a sharp-front laser is proposed to irradiate a triple-layer target (the proton layer is set between two carbon ion layers) and studied in theory and simulations. It is found that the thin proton layer can be accelerated once to hundreds of MeV with monoenergetic spectra only during the hole-boring (HB) stage. The carbon ions move behind the proton layer in the light-sail (LS) stage, which can shield any further interaction between the rear part of the laser and the proton layer. In this way, proton beam instabilities can be reduced to a certain extent during the entire acceleration process. It is hoped such a mechanism can provide a feasible way to improve the beam quality for proton therapy and other applications.
Keywords: proton acceleration; radiation acceleration; sharp-front laser; hole-boring stage; light-sail stage
High Power Laser Science and Engineering, Volume 7, 2019, e55. DOI: https://doi.org/10.1017/hpl.2019.40
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
© The Author(s) 2019 With the development of laser technology[Reference Mourou, Tajima and Bulanov1–Reference Schreiber, Bolton and Parodi4], laser-driven ion beams have attracted much attention owing to potential applications such as fast ignition of inertial confinement fusion[Reference Tabak, Hammer, Glinsky, Kruer, Wilks, Woodworth, Campbell, Perry and Mason5, Reference Naumova, Schlegel, Tikhonchuk, Labaune, Sokolov and Mourou6], medical therapy[Reference Bulanov, Esirkepov, Khoroshkov, Kuznetsov and Pegoraro7–Reference Bulanov, Wilkens, Esirkepov, Korn, Kraft, Kraft, Molls and Khoroshkov9], proton imaging[Reference Borghesi, Campbell, Schiavi, Haines, Willi, MacKinnon, Patel, Gizzi, Galimberti, Clarke, Pegoraro, Ruhl and Bulanov10], neutron production[Reference Kar, Green, Ahmed, Alejo, Robinson, Cerchez, Clarke, Doria, Dorkings, Fernandez, Mirfayzi, McKenna, Naughton, Neely, Norreys, Peth, Powell, Ruiz, Swain, Willi and Borghesi11, Reference Roth, Jung, Falk, Guler, Deppert, Devlin, Favalli, Fernandez, Gautier, Geissel, Haight, Hamilton, Hegelich, Johnson, Merrill, Schaumann, Schoenberg, Schollmeier, Shimada, Taddeucci, Tybo, Wagner, Wender, Wilde and Wurden12], nuclear physics[Reference Ledingham, McKenna and Singhal13, Reference Zamfir14] and pre-accelerators for conventional acceleration devices[Reference Clark, Allott, Beg, Danson, Machacek, Malka, Najmudin, Neely, Norreys, Salvati, Santala, Tatarakis, Watts, Zepf and Dangor15]. Radiation pressure acceleration (RPA)[Reference Macchi, Cattani, Liseykina and Cornolti16–Reference Higginson, Gray, King, Dance, Williamson, Butler, Wilson, Capdessus, Armstrong, Green, Hawkes, Martin, Wei, Mirfayzi, Yuan, Kar, Borghesi, Clarke, Neely and McKenna30] is usually considered as an efficient mechanism to accelerate the whole target to gigaelectronvolts through the 'hole-boring' (HB)[Reference Macchi, Cattani, Liseykina and Cornolti16, Reference Robinson, Gibbon, Zepf, Kar, Evans and Bellei31] and 'light-sail' (LS)[Reference Yan, Lin, Sheng, Guo, Liu, Lu, Fang and Chen18, Reference Klimo, Psikal, Limpouch and Tikhonchuk20, Reference Qiao, Zepf, Borghesi and Geissler21, Reference Wang, Shen and Xu32] stages. However, the beam spectra in RPA experiments[Reference Henig, Steinke, Schnürer, Sokollik, Hörlein, Kiefer, Jung, Schreiber, Hegelich, Yan, Meyer-ter-Vehn, Tajima, Nickles, Sandner and Habs22, Reference Bin, Ma, Wang, Streeter, Kreuzer, Kiefer, Yeung, Cousens, Foster, Dromey, Yan, Ramis, Meyer-ter-Vehn, Zepf and Schreiber26, Reference Scullion, Doria, Romagnani, Sgattoni, Naughton, Symes, McKenna, Macchi, Zepf, Kar and Borghesi29] are much worse than what the theoretical results indicate[Reference Macchi, Cattani, Liseykina and Cornolti16–Reference Qiao, Zepf, Borghesi and Geissler21, Reference Bulanov, Echkina, Esirkepov, Inovenkov, Kando, Pegoraro and Korn23–Reference Yu, Pukhov, Shvets and Chen25, Reference Wan, Pai, Zhang, Li, Wu, Hua, Lu, Gu, Silva, Joshi and Mori27, Reference Shen, Qiao, Zhang, Kar, Zhou, Chang, Borghesi and He28]. One possible reason is the presence of multidimensional instabilities, such as Rayleigh–Taylor-like (RT-like) instability[Reference Robinson, Zepf, Kar, Evans and Bellei19, Reference Klimo, Psikal, Limpouch and Tikhonchuk20, Reference Pegoraro and Bulanov33–Reference Sgattoni, Sinigardi, Fedeli, Pegoraro and Macchi36] and Weibel-like instability[Reference Yan, Wu, Sheng, Chen and Meyer-Ter-Vehn37, Reference Zhang, Shen, Ji, Wang, Xu, Yu and Wang38]. 
Recent studies have given accurate predictions of the mode structures of the instabilities and the growth rates for a wide range of laser and plasma parameters[Reference Wan, Pai, Zhang, Li, Wu, Hua, Lu, Gu, Silva, Joshi and Mori27]. These show that the surface ripples are more likely induced by coupling between the transverse oscillating electrons and the quasistatic ions near the laser–plasma interface, although the target surface is initially flat. It indicates that instabilities are intrinsically generated as the laser irradiates the interface of the plasma. Previously, a model driven by the front or flat-top parts of the laser has been investigated[Reference Wan, Pai, Zhang, Li, Wu, Hua, Lu, Gu, Silva, Joshi and Mori27]. However, in real cases, the rear part of the laser pulse may further disturb the particle beam in RPA because it continues to interact with the laser–plasma interface. In addition, the accelerating gradient becomes lower in the LS stage, where the charge separation field is reduced due to the Doppler effects of the flying target on the driven laser, which is detrimental to controlling development of multidimensional instabilities in the relativistic region. In this paper, the proton beam is prevented from moving together with the laser–plasma interface during the entire acceleration process, which may intrinsically reduce the development of detrimental instabilities. Here, a single reflection mechanism is used to stably accelerate the proton beam by optimally designing the multilayer target (the proton layer is set between two carbon ion layers). Such a multilayer target is totally different from previous cases[Reference Ji, Shen, Zhang, Wang, Jin, Li, Wen and Cary39, Reference Zhang, Shen, Ji, Wang, Jin, Li, Wen and Cary40], where the heavy ion layer is set between the proton layers. There, the heavy ions in the middle are accelerated together with protons at a lower velocity in a collisionless shock acceleration manner. However, in our case, the middle proton layer is separated from the heavy ions by means of a sharp-front laser, which is reflected once to hundreds of MeV with a monoenergetic spectrum only during the HB stage. Hence, the proton beam has a greatly reduced chance of moving together with the laser–plasma interface during the entire acceleration process. In this way, some multidimensional instabilities can be reduced to a certain extent. It could potentially be used to improve the beam quality for proton therapy and other applications. 2 Model and simulation First, we review the traditional RPA process in a one-dimensional simulation to help us to design the target in the single reflection mechanism (SRM). A circularly polarized laser arrives at the target at $t=20T$ (see Figure 1(a)), where $T=\unicode[STIX]{x1D706}/c$ and $\unicode[STIX]{x1D706}=1~\unicode[STIX]{x03BC}\text{m}$ is the laser wavelength. $c$ is the speed of light in vacuum. In the simulation, the laser amplitude has a triangular profile in time (linear up-ramp $t_{\text{up}}=2.2T$ and linear down-ramp $t_{\text{down}}=2.2T$ ). The pressure of the circularly polarized laser stably pushes electrons forward, such that the electrons are piled up at the front of the laser, forming a compressed electron layer. Previous simulations have demonstrated that the velocity of such a compressed electron layer is uniform because the laser has a linear front[Reference Wang, Shen, Zhang, Ji, Wen, Xu, Yu, Li and Xu24, Reference Wang, Zhang, Wang, Zhao, Xu, Yu, Yi, Shi, Zhang, Xu, Liu, Pei and Shen41]. 
In this case, the velocity of such an electron layer $(v_{\text{CEL}})$ can be calculated from the balance between the laser pressure force $(2a^{2}(1-v_{\text{CEL}})/(1+v_{\text{CEL}}))$ and electrostatic force $(2(\unicode[STIX]{x1D70B}n_{0}v_{\text{CEL}}t)^{2})$ . In the following calculations, the length, time, velocity, density, charge and field are normalized by $\unicode[STIX]{x1D706},\unicode[STIX]{x1D706}/c,c,\unicode[STIX]{x1D714}_{\text{L}}^{2}m_{\text{e}}/4\unicode[STIX]{x1D70B}\text{e}^{2},-e$ and $e/m_{\text{e}}\unicode[STIX]{x1D714}_{\text{L}}c$ , respectively, for the theoretical compact calculation: (1) $$\begin{eqnarray}2a^{2}(1-v_{\text{CEL}})/(1+v_{\text{CEL}})=2(\unicode[STIX]{x1D70B}n_{0}v_{\text{CEL}}t)^{2},\end{eqnarray}$$ (2) $$\begin{eqnarray}\displaystyle v_{\text{CEL}} & = & \displaystyle \frac{1-3\unicode[STIX]{x1D705}^{2}}{3+3\unicode[STIX]{x1D705}^{2}}+\sqrt[3]{M+\sqrt{M^{2}+N^{3}}}\nonumber\\ \displaystyle & & \displaystyle +\,\sqrt[3]{M-\sqrt{M^{2}+N^{3}}},\end{eqnarray}$$ where $M=-(27\unicode[STIX]{x1D705}^{4}-36\unicode[STIX]{x1D705}^{2}+1)/[27(1+\unicode[STIX]{x1D705}^{2})^{3}],N=(15\unicode[STIX]{x1D705}^{2}-1)/[9(1+\unicode[STIX]{x1D705}^{2})^{2}]$ and $\unicode[STIX]{x1D705}=a_{0}/\unicode[STIX]{x1D70B}t_{\text{up}}n_{0}$ . Here $a_{0}=eE_{\text{L}}/m_{\text{e}}\unicode[STIX]{x1D714}_{\text{L}}c\approx 94$ is the dimensionless peak amplitude of laser, $E_{\text{L}}$ is the amplitude of the laser electric field, $\unicode[STIX]{x1D714}_{\text{L}}$ is the laser frequency, and $m_{\text{e}}$ and $e$ are the electron rest mass and charge, respectively. $a=(a_{0}/t_{\text{up}})(t-v_{\text{CEL}}t)$ is the normalized laser amplitude at the electron layer. The foil density is $n_{0}=50n_{\text{c}}$ and the foil thickness is $d=0.5~\unicode[STIX]{x03BC}\text{m}$ . Then $v_{\text{CEL}}\sim 0.183c$ can be obtained according to Equation (2). Here an idealized triangular laser profile is used in the present scheme, mainly for simplicity of the calculation and simulation. For the case of a Gaussian temporal profile, the velocity of the compressed electron layer is no longer uniform, and the calculation becomes complex, but can be solved by the time-dependent model used in our previous work[Reference Wang, Shen, Zhang, Ji, Yu, Yi, Wang and Xu42]. Figure 1. Electric field $E_{x}$ (blue solid line) and $E_{y}$ (red dash line), electron density (black dash–dot line), proton density (cyan dot line) at (a) $t=20T$ , (b) $t=22.5T$ and (c) $t=25T$ . (d) Trajectories of electrons (red solid line) and protons (gray solid line) in the simulations. (e) Phase space distributions of protons at $t=22.7T$ (red circles). The black solid line represents the velocity distribution at the end of the HB stage for protons initially at different positions of the foil ( $v_{\text{end}}$ versus $x_{\text{initial}}$ ). Initially, the protons lag behind the compressed electron layer because the proton mass $m_{\text{i}}=1836m_{\text{e}}$ is much greater than the electron mass. At the end of HB stage, the fastest protons reach the compressed electron layer (see Figures 1(b) and 1(d))[Reference Wang, Shen, Zhang, Ji, Wen, Xu, Yu, Li and Xu24] and the LS stage starts. It should be noted that the protons initially in the target center are accelerated faster than the others at the end of the HB stage in this case[Reference Wang, Shen, Zhang, Ji, Wen, Xu, Yu, Li and Xu24]. 
The velocities of these protons are mainly distributed around $0.4c$ from $x_{\text{initial}}=20.2~\unicode[STIX]{x03BC}\text{m}$ to $20.25~\unicode[STIX]{x03BC}\text{m}$ (see Figure 1(e)). It is also noted that a narrow energetic spectrum may be obtained if we selectively accelerate the protons in the middle layer. From Figure 1(e), it can also be seen that the spectrum of the middle proton layer becomes broadened with an increased thickness of the middle proton layer, so a thinner middle proton layer will be better. Then, the multilayer target is designed to ensure that the middle proton layer can be selectively accelerated. According to Figure 1(e), a proton layer from $x=20.2~\unicode[STIX]{x03BC}\text{m}$ to $x=20.25~\unicode[STIX]{x03BC}\text{m}$ is set between the carbon ( $\text{C}^{6+}$ ) layers (corresponding regions are $20~\unicode[STIX]{x03BC}\text{m}<x<20.2~\unicode[STIX]{x03BC}\text{m}$ and $20.25~\unicode[STIX]{x03BC}\text{m}<x<20.5~\unicode[STIX]{x03BC}\text{m}$ ). The electron density is $n_{0}=50n_{\text{c}}$ and the densities of the proton and $\text{C}^{6+}$ layers are $n_{\text{p}}=50n_{\text{c}}$ and $n_{\text{C}6+}=8.3n_{\text{c}}$ , respectively. Here, the charge-to-mass ratio of $\text{C}^{6+}$ is $1/2$ , which is half of the value for the proton (that is, 1), meaning that protons can be more easily accelerated compared with $\text{C}^{6+}$ ions. So, the $\text{C}^{6+}$ ions in the rear part of the target can be assumed to be at rest when the proton layer arrives. Different from the cases depicted in Figure 1, the trajectories of the protons initially at the middle of the target will cross trajectories of the $\text{C}^{6+}$ ions initially at the rear of the target. Hence, the protons can be accelerated by the charge separation field $E_{x}=E_{0}x(t)/d$ , where $E_{0}$ is the maximum charge separation field, given by $E_{0}=4\unicode[STIX]{x1D70B}en_{0}d$ . The proton velocity can be calculated using the following equation: (3) $$\begin{eqnarray}\displaystyle v_{\text{p}}\left(t+\text{d}t\right)=\frac{4\unicode[STIX]{x1D70B}^{2}n_{0}x\left(t\right)}{m_{\text{p}}\unicode[STIX]{x1D6FE}_{\text{p}}\left(t\right)}\,\text{d}t+v_{\text{p}}\left(t\right), & & \displaystyle\end{eqnarray}$$ where $m_{\text{p}}=1836$ is the proton mass and $\unicode[STIX]{x1D6FE}_{\text{p}}(t)=1\big/\sqrt{1-v_{\text{p}}^{2}(t)}$ is the relativistic factor for the protons. In addition, the position of the protons can be calculated using the following equation: (4) $$\begin{eqnarray}\displaystyle x_{\text{p}}\left(t+\text{d}t\right)=\frac{2\unicode[STIX]{x1D70B}^{2}n_{0}x\left(t\right)}{m_{\text{p}}\unicode[STIX]{x1D6FE}_{\text{p}}\left(t\right)}\left(\text{d}t\right)^{2}+v_{\text{p}}\left(t\right)\text{d}t+x_{\text{p}}\left(t\right).\quad & & \displaystyle\end{eqnarray}$$ The dynamics of the middle proton layer can be obtained from Equations (3) and (4) for $v_{\text{p}}(t=20T)=0$ and $x_{\text{initial}}=20~\unicode[STIX]{x03BC}\text{m}+0.5d$ . It should be noted that all these equations are related to the electron density $n_{0}$ , and the dynamics of the different ions are determined by their charge-to-mass ratios. The accelerating scheme will be totally different if their densities change. Based on Equations (3) and (4), the middle proton layer arrives at the back surface of the target $(x_{\text{initial}}=20.5~\unicode[STIX]{x03BC}\text{m})$ at $t\sim 22.6T$ . 
To start the LS stage as soon as possible, the compressed electron layer should arrive at the back of the target simultaneously with the middle proton layer, forming a double layer. The velocity of the compressed electron layer should therefore be $v_{\text{CEL}}=d/2.6T\sim 0.2c$. For the linearly rising laser front, the velocity $v_{\text{CEL}}$ during the HB stage can be calculated according to Equations (1) and (2). Considering $a=(a_{0}/t_{\text{up}})(1-v_{\text{CEL}})t$, the steepness of the laser front can be obtained as (5) $$a_{0}/t_{\text{up}}=\pi n_{0}v_{\text{CEL}}(1+v_{\text{CEL}})^{1/2}(1-v_{\text{CEL}})^{-3/2}.$$ Then $a_{0}/t_{\text{up}}\sim 48$ is obtained for $n_{0}=50n_{\text{c}}$ and $v_{\text{CEL}}\sim 0.2c$, as shown in Figure 2(a). The peak amplitude of the laser is then $a_{0}\sim 96$, according to $a_{0}=(a_{0}/t_{\text{up}})(1-v_{\text{CEL}})d/v_{\text{CEL}}$ and $t_{\text{up}}\sim 2T$. Here, the velocity of the proton layer can increase up to $v_{\text{p}}\sim 0.4c$, almost twice the velocity $v_{\text{CEL}}\;(\sim 0.2c)$; that is, the proton layer moves faster than the compressed electron layer. At the same time, the remaining electrons move together with the other $\text{C}^{6+}$ ions, because the laser intensity begins to decrease after $t\sim 22.6T$. The velocity of the $\text{C}^{6+}$–electron double layer is lower than that of the proton–electron double layer, so the rear part of the laser pulse can be reflected by the $\text{C}^{6+}$–electron double layer. Only the carbon ions are heated and spread extensively, which reduces the multidimensional instabilities of the proton layer to a certain extent. Ultimately, a high-quality proton bunch can be selectively accelerated to $E_{\text{p}}\sim 100~\text{MeV}$ by the end of the HB stage (see Figure 2(b)). It is believed that the single reflection mechanism can be realized only during the HB stage, according to Equations (3)–(5). It should be noted that Equations (3)–(5) cannot be applied to the special case $t_{\text{up}}=0T$, which has to be treated with the time-dependent models[Reference Wang, Shen, Zhang, Ji, Yu, Yi, Wang and Xu42]. Figure 2. (a) Relation between the velocity of the compressed electron layer $v_{\text{CEL}}$ and the steepness of the laser front $a_{0}/t_{\text{up}}$ according to Equation (5) for $n_{0}=50n_{\text{c}}$. (b) Evolution of the velocity, $v_{\text{p}}$ (black solid line), and energy, $E_{\text{p}}$, of the proton layer during the HB stage. Two-dimensional particle-in-cell simulations were carried out to verify the theoretical expectations of the single reflection mechanism. A multilayer target is designed as shown in Figure 3. The hydrogen layer lies in the middle $(20.2~\mu\text{m}\leqslant x\leqslant 20.25~\mu\text{m})$ of the foil. The carbon layers occupy the regions $20~\mu\text{m}<x<20.2~\mu\text{m}$ and $20.25~\mu\text{m}<x<20.5~\mu\text{m}$. The hydrogen and carbon atoms are assumed to be ionized to $\text{H}^{+}$ and $\text{C}^{6+}$ before the main pulse is incident on the target. The electron density is $n_{0}=50n_{\text{c}}$. The densities of the $\text{H}^{+}$ and $\text{C}^{6+}$ layers are $n_{\text{p}}=50n_{\text{c}}$ and $n_{\text{C}^{6+}}=8.3n_{\text{c}}$, respectively.
The laser amplitude has a triangular temporal profile (linear up-ramp $t_{\text{up}}=2T$ and linear down-ramp $t_{\text{down}}=2T$) with a peak value $a_{0}=96$. The pulse waist is $10~\mu\text{m}$ (FWHM). The simulation box is $50~\mu\text{m}\,(x)\times 60~\mu\text{m}\,(y)$, and the number of spatial grid cells is $8000\times 9600$. Each cell is filled with 20 macroelectrons and 20 macroprotons (or $\text{C}^{6+}$ macro-ions). Figure 3. Distributions of (a)–(c) electric field $E_{y}$, (d)–(f) electron density $n_{\text{e}}$, (g)–(i) $\text{C}^{6+}$ density and (j)–(l) proton density at $t=21T$ (first row), $t=23T$ (second row) and $t=25T$ (third row). Figure 3 depicts the detailed progression from the HB to the LS stage as the CP laser irradiates the multilayer target. Initially, the electrons are pushed forward stably because there are no oscillating terms in the expression for the ponderomotive force of a CP laser[Reference Robinson, Zepf, Kar, Evans and Bellei19], as shown in Figure 3(d). The $\text{C}^{6+}$ ions initially lag behind the compressed electron layer. The charge separation field, $E_{x}$, then becomes stronger as the distance between the electrons and the ions increases, and both $\text{C}^{6+}$ ions and protons begin to be accelerated by $E_{x}$. At $t=23T$, the compressed electron layer has reached the back of the target together with the proton layer, which is almost consistent with the theoretical estimate $(t\sim 22.6T)$ from Equation (4) (refer to Figures 3(e) and 3(k)). Based on Equation (1), the velocity of the proton layer ($v_{\text{p}}\sim 0.4c$) is much higher than the velocity of the compressed electron layer $(v_{\text{CEL}}\sim 0.2c)$. This means the proton layer is accelerated and separates from the compressed electron layer in the LS stage. In addition, the laser amplitude at the interface decreases after $t=22.6T$, since the pulse is past its peak. To restore the balance between the laser pressure and the charge separation force, the charge separation field, $E_{x}$, weakens. This is realized by the $\text{C}^{6+}$ ions reaching the compressed electron layer after $t=22.6T$. Finally, two double layers are formed (see Figures 3(f)–3(l)). The $\text{C}^{6+}$–electron layer is heated and spreads extensively in space until the laser is totally reflected at $t=25T$ (see Figure 3(c)). In contrast, the proton–electron layer always moves ahead of the $\text{C}^{6+}$–electron layer, maintaining a compact high-quality bunch, which verifies the theoretical model (Equations (3)–(5)). Figure 4. (a) Trajectories of the $\text{C}^{6+}$ layer (black squares), the proton layer (blue triangles) and the interface between the laser and the compressed electron layer (red circles). Enlarged plots of the trajectories are shown in (b). (c) Phase-space distributions of $\text{C}^{6+}$ ions and protons at $t=25T$. (d) Energy spectra of the proton layer for different initial regions at $t=25T$. Here, protons in the region $-3~\mu\text{m}<y<3~\mu\text{m}$ are considered. It should be noted that the proton layer has a greatly reduced probability of moving together with the laser–plasma interface in this case, as can clearly be seen from Figures 4(a) and 4(b). Initially, the laser interface moves forward and overtakes the proton layer at $t\sim 21.5T$. At the end of the HB stage $(t\sim 22.5T)$, the proton layer overtakes the interface.
After $t\sim 22.5T$, the proton layer continues to move ahead of both the interface and the $\text{C}^{6+}$ ions with a constant velocity, because the laser pulse is reflected by the $\text{C}^{6+}$–electron double layer. As depicted in Figure 4(a), the laser pulse is completely reflected away from the $\text{C}^{6+}$ layer and does not affect the proton layer. Here, the proton layer is reflected only once, in a slingshot-like manner, by the charge separation field during the HB stage (refer to Figure 4(c)). Thus, instabilities of the laser–plasma interface are intrinsically reduced. A pure proton beam with a quasimonoenergetic spectrum $(\sim 5\,\%)$ centered at $\sim 100~\text{MeV}$ is finally generated at $t=25T$ (see Figure 4(d)). From Figures 4(a) and 4(b), it can be seen that the proton layer is separated from the carbon layers, and the carbon layer blocks the rear part of the laser after $t\sim 23T$, so the spectrum of the proton beam does not change much after $t\sim 23T$. The acceleration of proton layers from different initial regions is compared to verify the optimum conditions for the single reflection mechanism, shown in Figure 4(c). The center energy of the proton layer initially at $20.4~\mu\text{m}<x_{\text{initial}}<20.45~\mu\text{m}$ is only $E_{\text{p}}=39~\text{MeV}$. The main reason is that this layer, initially at the rear part of the foil, is accelerated for a shorter time during the HB stage. In contrast, the proton layer initially at the front can be accelerated for a longer time to a higher maximum energy (137 MeV); however, its spectral spread $(\sim 35\,\%)$ is much larger than in the optimum case $(\sim 5\,\%)$. On the one hand, this is caused by the disturbed charge separation field when the falling part of the laser pulse is reflected. On the other hand, the spectral spread is intrinsically smallest for the proton layer in the middle of the target, as shown in Figure 1(e). In the future, the development of 10 PW, and even 100 PW, laser systems[Reference Rus, Batysta, Čáp, Divoký, Fibrich, Griffiths, Haley, Havlíček, Hlavác, Hřebíček, Homer, Hříbek, Jand'ourek, Juha, Korn, Korouš, Košelja, Kozlová, Kramer, Krůs, Lagron, Limpouch, MacFarlane, Malý, Margarone, Matlas, Mindl, Moravec, Mocek, Nejdl, Novák, Olšovcová, Palatka, Perin, Pešlo, Polan, Prokůpek, Řídký, Rohlena, Růžička, Sawicka, Scholzová, Snopek, Strkula and Švéda43–Reference Bashinov, Gonoskov, Kim, Mourou and Sergeev46], and of target fabrication[Reference Prencipe, Fuchs, Pascarelli, Schumacher, Stephens, Alexander, Briggs, Büscher, Cernaianu, Choukourov, De Marco, Erbe, Fassbender, Fiquet, Fitzsimmons, Gheorghiu, Hund, Huang, Harmand, Hartley, Irman, Kluge, Konopkova, Kraft, Kraus, Leca, Margarone, Metzkes, Nagai, Nazarov, Lutoslawski, Papp, Passoni, Pelka, Perin, Schulz, Smid, Spindloe, Steinke, Torchio, Vass, Wiste, Zaffino, Zeil, Tschentscher, Schramm and Cowan47, Reference Shavit, Ferber, Papeer, Schleifer, Botton, Zigler and Henis48], could lead to laser intensities of the order of $10^{22}{-}10^{23}~\text{W}/\text{cm}^{2}$, which can easily accelerate the proton layer to energies of hundreds of MeV (refer to Figure 5).
For example, $\sim 400~\text{MeV}$ protons can be generated by a $\sim 10~\text{fs}$, 250 J laser ($\sim 50\,\%$ of the energy in a $2\text{-}\mu\text{m}$ focal spot (FWHM), corresponding to a laser intensity of $\sim 1\times 10^{23}~\text{W}/\text{cm}^{2}$) irradiating the multilayer foil (areal density $\sim 44n_{\text{c}}\lambda$). In proton cancer therapy, monoenergetic proton beams with a tunable energy of 50–250 MeV are required to target tumor locations, which can be realized by the single reflection mechanism proposed in this paper. It should be noted that optical components based on plasmas can provide a solution for manipulating the polarization state of the sharp-front laser[Reference Weng, Zhao, Sheng, Yu, Luan, Chen, Yu, Murakami, Mori and Zhang49], as well as its temporal profile[Reference Bin, Ma, Wang, Streeter, Kreuzer, Kiefer, Yeung, Cousens, Foster, Dromey, Yan, Ramis, Meyer-ter-Vehn, Zepf and Schreiber26, Reference Wang, Lin, Sheng, Liu, Zhao, Guo, Lu, He, Chen and Yan50, Reference Ji, Shen, Zhang, Wang, Jin, Xia, Wen, Wang, Xu and Yu51]. In fact, Equations (3)–(5) are general and help us to find other optimum parameters of the laser and target under present laboratory conditions. Figure 5. Rising-up duration of the laser front $t_{\text{up}}$ (red circles), energy (black squares) and areal density (blue triangles) of the proton layer for different laser intensities, calculated from Equations (3)–(5). Here the foil density is $n_{0}=50n_{\text{c}}$. It should be noted that the efficiency of proton acceleration in the HB stage alone is lower than that of the previous PRA process, which includes the LS stage[Reference Yan, Lin, Sheng, Guo, Liu, Lu, Fang and Chen18, Reference Klimo, Psikal, Limpouch and Tikhonchuk20, Reference Qiao, Zepf, Borghesi and Geissler21, Reference Wang, Shen and Xu32]. In previous works, the laser interacts with the proton beam for a longer time, so a higher beam energy (GeV, even 10 GeV) and a higher acceleration efficiency can be obtained; in the LS stage in particular, the efficiency can approach 100%. However, such good acceleration does not occur in realistic experiments. The main reason is that serious multidimensional instabilities develop during the laser–plasma interaction. These instabilities may arise from intrinsic turbulence as the laser irradiates the plasma interface, or from the unpredictable irregular shapes of the target surface and laser front. As a result, the maximum energy of proton beams in present experiments is only $\sim 100~\text{MeV}$, and the spectra are not as good as the simulations predict. In this paper, we simply use a sharp-front laser irradiating a triple-layer target to reduce the interaction time between the laser and the proton beam. In this way, the instabilities can be reduced to a certain extent, although the acceleration efficiency is lower. It is believed that proton beams with monoenergetic spectra of hundreds of MeV can potentially be applied in proton therapy. In conclusion, a proton layer with a pure spectrum can be successfully accelerated once, only during the HB stage, to hundreds of MeV. The proton beam has a reduced probability of moving together with the laser–plasma interface during the entire acceleration process. In this manner, some multidimensional instabilities, such as Rayleigh–Taylor-like and Weibel-like instabilities, can be reduced to a certain extent.
This provides a feasible method to realize proton therapy and other applications using multi-PW laser system in the future. This study was supported by the National Natural Science Foundation of China (No. 11575274), Strategic Priority Research Program of the Chinese Academy of Sciences (No. XDB16010600), and Ministry of Science and Technology of the People's Republic of China (Nos. 2016YFA0401102 and 2018YFA0404803). This article has been amended since original publication to correct the first author's name. Mourou, G. A. Tajima, T. and Bulanov, S. V. Rev. Mod. Phys. 78, 309 (2006).CrossRefGoogle Scholar Macchi, A. Borghesi, M. and Passoni, M. Rev. Mod. Phys. 85, 751 (2013).CrossRefGoogle Scholar Daido, H. Nishiuchi, M. and Pirozhkov, A. S. Phys. Soc. 75, 056401 (2012).Google Scholar Schreiber, J. Bolton, P. R. and Parodi, K. Rev. Sci. Instrum. 87, 071101 (2016).CrossRefGoogle Scholar Tabak, M. Hammer, J. Glinsky, M. E. Kruer, W. L. Wilks, S. C. Woodworth, J. Campbell, E. M. Perry, M. D. and Mason, R. J. Phys. Plasmas 1, 1626 (1994).CrossRefGoogle Scholar Naumova, N. Schlegel, T. Tikhonchuk, V. T. Labaune, C. Sokolov, I. V. and Mourou, G. Phys. Rev. Lett. 102, 025002 (2009).CrossRefGoogle Scholar Bulanov, S. V. Esirkepov, T. Z. Khoroshkov, V. S. Kuznetsov, A. V. and Pegoraro, F. Phys. Lett. A 299, 240 (2002).CrossRefGoogle Scholar Bulanov, S. V. and Khoroshkov, V. S. Plasma Phys. Rep. 28, 453 (2002).CrossRefGoogle Scholar Bulanov, S. V. Wilkens, J. J. Esirkepov, T. Z. Korn, G. Kraft, G. Kraft, S. D. Molls, M. and Khoroshkov, V. S. Phys.-Usp. 57, 1149 (2014).CrossRefGoogle Scholar Borghesi, M. Campbell, D. H. Schiavi, A. Haines, M. G. Willi, O. MacKinnon, A. J. Patel, P. Gizzi, L. A. Galimberti, M. Clarke, R. J. Pegoraro, F. Ruhl, H. and Bulanov, S. Phys. Plasmas 9, 2214 (2002).CrossRefGoogle Scholar Kar, S. Green, A. Ahmed, H. Alejo, A. Robinson, A. P. L. Cerchez, M. Clarke, R. Doria, D. Dorkings, S. Fernandez, J. Mirfayzi, S. R. McKenna, P. Naughton, K. Neely, D. Norreys, P. Peth, C. Powell, H. Ruiz, J. A. Swain, J. Willi, O. and Borghesi, M. New J. Phys. 18, 053002 (2016).CrossRefGoogle Scholar Roth, M. Jung, D. Falk, K. Guler, N. Deppert, O. Devlin, M. Favalli, A. Fernandez, J. Gautier, D. Geissel, M. Haight, R. Hamilton, C. E. Hegelich, B. M. Johnson, R. P. Merrill, F. Schaumann, G. Schoenberg, K. Schollmeier, M. Shimada, T. Taddeucci, T. Tybo, J. L. Wagner, F. Wender, S. A. Wilde, C. H. and Wurden, G. A. Phys. Rev. Lett. 110, 044802 (2013).CrossRefGoogle Scholar Ledingham, K. W. D. McKenna, P. and Singhal, R. P. Science 300, 1107 (2003).CrossRefGoogle Scholar Zamfir, N. V. Eur. Phys. J. 223, 1221 (2014).Google Scholar Clark, E. L. Allott, R. Beg, F. N. Danson, C. N. Machacek, A. Malka, V. Najmudin, Z. Neely, D. Norreys, P. A. Salvati, M. R. Santala, M. I. K. Tatarakis, M. Watts, I. Zepf, M. and Dangor, A. E. IEEE Trans. Plasma Sci. 28, 1184 (2000).Google Scholar Macchi, A. Cattani, F. Liseykina, T. V. and Cornolti, F. Phys. Rev. Lett. 94, 165003 (2005).CrossRefGoogle Scholar Zhang, X. Shen, B. Li, X. Jin, Z. Wang, F. and Wen, M. Phys. Plasmas 14, 123108 (2007).Google Scholar Yan, X. Q. Lin, C. Sheng, Z. M. Guo, Z. Y. Liu, B. C. Lu, Y. R. Fang, J. X. and Chen, J. E. Phys. Rev. Lett. 100, 135003 (2008).Google Scholar Robinson, A. P. L. Zepf, M. Kar, S. Evans, R. G. and Bellei, C. New J. Phys. 10, 013021 (2008).CrossRefGoogle Scholar Klimo, O. Psikal, J. Limpouch, J. and Tikhonchuk, V. T. Phys. Rev. ST Accel. Beams 11, 031301 (2008).CrossRefGoogle Scholar Qiao, B. Zepf, M. Borghesi, M. 
and Geissler, M. Phys. Rev. Lett. 102, 145002 (2009).CrossRefGoogle Scholar Henig, A. Steinke, S. Schnürer, M. Sokollik, T. Hörlein, R. Kiefer, D. Jung, D. Schreiber, J. Hegelich, B. M. Yan, X. Q. Meyer-ter-Vehn, J. Tajima, T. Nickles, P. V. Sandner, W. and Habs, D. Phys. Rev. Lett. 103, 245003 (2009).Google Scholar Bulanov, S. V. Echkina, E. Y. Esirkepov, T. Z. Inovenkov, I. N. Kando, M. Pegoraro, F. and Korn, G. Phys. Rev. Lett. 104, 135003 (2010).Google Scholar Wang, W. P. Shen, B. F. Zhang, X. M. Ji, L. L. Wen, M. Xu, J. C. Yu, Y. H. Li, Y. L. and Xu, Z. Z. Phys. Plasmas 18, 013103 (2011).Google Scholar Yu, T.-P. Pukhov, A. Shvets, G. and Chen, M. Phys. Rev. Lett. 105, 065002 (2010).Google Scholar Bin, J. H. Ma, W. J. Wang, H. Y. Streeter, M. J. V. Kreuzer, C. Kiefer, D. Yeung, M. Cousens, S. Foster, P. S. Dromey, B. Yan, X. Q. Ramis, R. Meyer-ter-Vehn, J. Zepf, M. and Schreiber, J. Phys. Rev. Lett. 115, 064801 (2015).CrossRefGoogle Scholar Wan, Y. Pai, C. H. Zhang, C. J. Li, F. Wu, Y. P. Hua, J. F. Lu, W. Gu, Y. Q. Silva, L. O. Joshi, C. and Mori, W. B. Phys. Rev. Lett. 117, 234801 (2016).Google Scholar Shen, X. F. Qiao, B. Zhang, H. Kar, S. Zhou, C. T. Chang, H. X. Borghesi, M. and He, X. T. Phys. Rev. Lett. 118, 204802 (2017).CrossRefGoogle Scholar Scullion, C. Doria, D. Romagnani, L. Sgattoni, A. Naughton, K. Symes, D. R. McKenna, P. Macchi, A. Zepf, M. Kar, S. and Borghesi, M. Phys. Rev. Lett. 119, 054801 (2017).CrossRefGoogle Scholar Higginson, A. Gray, R. J. King, M. Dance, R. J. Williamson, S. D. R. Butler, N. M. H. Wilson, R. Capdessus, R. Armstrong, C. Green, J. S. Hawkes, S. J. Martin, P. Wei, W. Q. Mirfayzi, S. R. Yuan, X. H. Kar, S. Borghesi, M. Clarke, R. J. Neely, D. and McKenna, P. Nat. Commun. 9, 724 (2018).CrossRefGoogle Scholar Robinson, A. P. L. Gibbon, P. Zepf, M. Kar, S. Evans, R. G. and Bellei, C. Plasma Phys. Control. Fusion 51, 024004 (2009).Google Scholar Wang, W. P. Shen, B. F. and Xu, Z. Z. Phys. Plasmas 24, 013104 (2017).Google Scholar Pegoraro, F. and Bulanov, S. V. Phys. Rev. Lett. 99, 065002 (2007).CrossRefGoogle Scholar Palmer, C. A. J. Schreiber, J. Nagel, S. R. Dover, N. P. Bellei, C. Beg, F. N. Bott, S. Clarke, R. J. Dangor, A. E. Hassan, S. M. Hilz, P. Jung, D. Kneip, S. Mangles, S. P. D. Lancaster, K. L. Rehman, A. Robinson, A. P. L. Spindloe, C. Szerypo, J. Tatarakis, M. Yeung, M. Zepf, M. and Najmudin, Z. Phys. Rev. Lett. 108, 225002 (2012).CrossRefGoogle Scholar Eliasson, B. New J. Phys. 17, 033026 (2015).CrossRefGoogle Scholar Sgattoni, A. Sinigardi, S. Fedeli, L. Pegoraro, F. and Macchi, A. Phys. Rev. E 91, 013106 (2015).Google Scholar Yan, X. Q. Wu, H. C. Sheng, Z. M. Chen, J. E. and Meyer-Ter-Vehn, J. Phys. Rev. Lett. 103, 135001 (2009).Google Scholar Zhang, X. Shen, B. Ji, L. Wang, W. Xu, J. Yu, Y. and Wang, X. Phys. Plasmas 18, 073101 (2011).Google Scholar Ji, L. Shen, B. Zhang, X. Wang, F. Jin, Z. Li, X. Wen, M. and Cary, J. R. Phys. Rev. Lett. 101, 164802 (2008).Google Scholar Zhang, X. Shen, B. Ji, L. Wang, F. Jin, Z. Li, X. Wen, M. and Cary, J. R. Phys. Rev. ST Accel. Beams 12, 021301 (2009).CrossRefGoogle Scholar Wang, W. P. Zhang, X. M. Wang, X. F. Zhao, X. Y. Xu, J. C. Yu, Y. H. Yi, L. Q. Shi, Y. Zhang, L. G. Xu, T. J. Liu, C. Pei, Z. K. and Shen, B. F. High Power Laser Sci. Eng. 2, e9 (2014).Google Scholar Wang, W. P. Shen, B. F. Zhang, X. M. Ji, L. L. Yu, Y. H. Yi, L. Q. Wang, X. F. and Xu, Z. Z. Phys. Rev. ST Accel. Beams 15, 081302 (2012).Google Scholar Rus, B. Batysta, F. Čáp, J. Divoký, M. Fibrich, M. Griffiths, M. Haley, R. 
Havlíček, T. Hlavác, M. Hřebíček, J. Homer, P. Hříbek, P. Jand'ourek, J. Juha, L. Korn, G. Korouš, P. Košelja, M. Kozlová, M. Kramer, D. Krůs, M. Lagron, J. C. Limpouch, J. MacFarlane, L. Malý, M. Margarone, D. Matlas, P. Mindl, L. Moravec, J. Mocek, T. Nejdl, J. Novák, J. Olšovcová, V. Palatka, M. Perin, J. P. Pešlo, M. Polan, J. Prokůpek, J. Řídký, J. Rohlena, K. Růžička, V. Sawicka, M. Scholzová, L. Snopek, D. Strkula, P. and Švéda, L. Proc. SPIE 8080, 808010 (2011).Google Scholar Zamfir, N. V. J. Phys. Conf. Ser. 366, 012052 (2012).CrossRefGoogle Scholar Zou, J. P. Blanc, C. L. Papadopoulos, D. N. Ch́eriaux, G. Georges, P. Mennerat, G. Druon, F. Lecherbourg, L. Pellegrina, A. Ramirez, P. Giambruno, F. Fŕeneaux, A. Leconte, F. Badarau, D. Boudenne, J. M. Fournet, D. Valloton, T. Paillard, J. L. Veray, J. L. Pina, M. Monot, P. Chambaret, J. P. Martin, P. Mathieu, F. Audebert, P. and Amiranoff, F. High. Power Laser Sci. Eng. 3, e2 (2015).CrossRefGoogle Scholar Bashinov, A. V. Gonoskov, A. A. Kim, A. V. Mourou, G. and Sergeev, A. M. Eur. Phys. J. Spec. Top. 223, 1105 (2014).CrossRefGoogle Scholar Prencipe, I. Fuchs, J. Pascarelli, S. Schumacher, D. W. Stephens, R. B. Alexander, N. B. Briggs, R. Büscher, M. Cernaianu, M. O. Choukourov, A. De Marco, M. Erbe, A. Fassbender, J. Fiquet, G. Fitzsimmons, P. Gheorghiu, C. Hund, J. Huang, L. G. Harmand, M. Hartley, N. J. Irman, A. Kluge, T. Konopkova, Z. Kraft, S. Kraus, D. Leca, V. Margarone, D. Metzkes, J. Nagai, K. Nazarov, W. Lutoslawski, P. Papp, D. Passoni, M. Pelka, A. Perin, J. P. Schulz, J. Smid, M. Spindloe, C. Steinke, S. Torchio, R. Vass, C. Wiste, T. Zaffino, R. Zeil, K. Tschentscher, T. Schramm, U. and Cowan, T. E. High Power Laser Sci. Eng. 5, e17 (2017).CrossRefGoogle Scholar Shavit, O. Ferber, Y. Papeer, J. Schleifer, E. Botton, M. Zigler, A. and Henis, Z. High Power Laser Sci. Eng. 6, e7 (2018).CrossRefGoogle Scholar Weng, S. Zhao, Q. Sheng, Z. Yu, W. Luan, S. Chen, M. Yu, L. Murakami, M. Mori, W. B. and Zhang, J. Optica 4, 1086 (2017).CrossRefGoogle Scholar Wang, H. Y. Lin, C. Sheng, Z. M. Liu, B. Zhao, S. Guo, Z. Y. Lu, Y. R. He, X. T. Chen, J. E. and Yan, X. Q. Phys. Rev. Lett. 107, 265002 (2011).Google Scholar Ji, L. L. Shen, B. F. Zhang, X. M. Wang, F. C. Jin, Z. Y. Xia, C. Q. Wen, M. Wang, W. P. Xu, J. C. and Yu, M. Y. Phys. Rev. Lett. 103, 215005 (2009).Google Scholar View in content Figure 1. Electric field $E_{x}$ (blue solid line) and $E_{y}$ (red dash line), electron density (black dash–dot line), proton density (cyan dot line) at (a) $t=20T$, (b) $t=22.5T$ and (c) $t=25T$. (d) Trajectories of electrons (red solid line) and protons (gray solid line) in the simulations. (e) Phase space distributions of protons at $t=22.7T$ (red circles). The black solid line represents the velocity distribution at the end of the HB stage for protons initially at different positions of the foil ( $v_{\text{end}}$ versus $x_{\text{initial}}$). Figure 2. (a) Relation between the velocity of the compressed electron layer $v_{\text{CEL}}$ and the steepness of the laser front $a_{0}/t_{\text{up}}$ according to Equation (5) for $n_{0}=50n_{\text{c}}$. (b) Evolutions of the velocity, $v_{\text{p}}$ (black solid line), and energy, $E_{\text{p}}$, of the proton layer during the HB stage. Figure 3. Distributions of (a)–(c) electric field $E_{y}$, (d)–(f) electron density $n_{\text{e}}$, (g)–(i) $\text{C}^{6+}$ density and (j)–(l) proton density at $t=21T$ (first row), $t=23T$ (second row) and $t=25T$ (third row). Figure 4. 
Mathematics > Dynamical Systems. arXiv:2107.05149 [math.DS]. Submitted on 11 Jul 2021 (v1), last revised 2 Jun 2022 (v5). Title: Infinite Lifting of an Action of Symplectomorphism Group on the set of Bi-Lagrangian Structures. Authors: Bertuel Tangue Ndawa. Abstract: We consider a smooth $2n$-manifold $M$ endowed with a bi-Lagrangian structure $(\omega,\mathcal{F}_{1},\mathcal{F}_{2})$. That is, $\omega$ is a symplectic form and $(\mathcal{F}_{1},\mathcal{F}_{2})$ is a pair of transversal Lagrangian foliations on $(M, \omega)$. Such structures carry an important geometric object called the Hess connection. Among their many uses, Hess connections allow one to classify affine bi-Lagrangian structures. In this work, we show that a bi-Lagrangian structure on $M$ can be lifted to a bi-Lagrangian structure on its trivial bundle $M\times\mathbb{R}^n$. Moreover, the lift of an affine bi-Lagrangian structure is again an affine bi-Lagrangian structure. We define a dynamic on the symplectomorphism group and the set of bi-Lagrangian structures (that is, an action of the symplectomorphism group on the set of bi-Lagrangian structures). This dynamic is compatible with Hess connections, preserves affine bi-Lagrangian structures, and can be lifted to $M\times\mathbb{R}^n$. This lift can be lifted again to $\left(M\times\mathbb{R}^{2n}\right)\times\mathbb{R}^{4n}$, and coincides with the initial dynamic (in our sense) on $M\times\mathbb{R}^n$ for some bi-Lagrangian structures. The results still hold when $M\times\mathbb{R}^{2n}$ is replaced by the tangent bundle $TM$ of $M$ or its cotangent bundle $T^{*}M$ for some manifolds $M$. Comments: 25 pages, 2 figures. Subjects: Dynamical Systems (math.DS).
Permafrost is warming at a global scale
Boris K. Biskaborn, Sharon L. Smith, Jeannette Noetzli, Heidrun Matthes, Gonçalo Vieira, Dmitry A. Streletskiy, Philippe Schoeneich, Vladimir E. Romanovsky, Antoni G. Lewkowicz, Andrey Abramov, Michel Allard, Julia Boike, William L. Cable, Hanne H. Christiansen, Reynald Delaloye, Bernhard Diekmann, Dmitry Drozdov, Bernd Etzelmüller, Guido Grosse, Mauro Guglielmin, Thomas Ingeman-Nielsen, Ketil Isaksen, Mamoru Ishikawa, Margareta Johansson, Halldor Johannsson, Anseok Joo, Dmitry Kaverin, Alexander Kholodov, Pavel Konstantinov, Tim Kröger, Christophe Lambiel, Jean-Pierre Lanckman, Dongliang Luo, Galina Malkova, Ian Meiklejohn, Natalia Moskalenko, Marc Oliva, Marcia Phillips, Miguel Ramos, A. Britta K. Sannel, Dmitrii Sergeev, Cathy Seybold, Pavel Skryabin, Alexander Vasiliev, Qingbai Wu, Kenji Yoshikawa, Mikhail Zheleznyak & Hugues Lantuit
Nature Communications 10, 264 (2019)
Permafrost warming has the potential to amplify global climate change, because thawing of frozen sediments unlocks soil organic carbon. Yet to date, no globally consistent assessment of permafrost temperature change has been compiled. Here we use a global data set of permafrost temperature time series from the Global Terrestrial Network for Permafrost to evaluate temperature change across permafrost regions for the period since the International Polar Year (2007–2009). During the reference decade between 2007 and 2016, ground temperature near the depth of zero annual amplitude in the continuous permafrost zone increased by 0.39 ± 0.15 °C. Over the same period, discontinuous permafrost warmed by 0.20 ± 0.10 °C. Permafrost in mountains warmed by 0.19 ± 0.05 °C and in Antarctica by 0.37 ± 0.10 °C. Globally, permafrost temperature increased by 0.29 ± 0.12 °C. The observed trend follows the Arctic amplification of air temperature increase in the Northern Hemisphere. In the discontinuous zone, however, ground warming occurred due to increased snow thickness while air temperature remained statistically unchanged. One quarter of the Northern Hemisphere and 17% of the Earth's exposed land surface is underlain by permafrost1, that is, ground with a temperature remaining at or below 0 °C for at least two consecutive years. The thermal state of permafrost is sensitive to changing climatic conditions and in particular to rising air temperatures and changing snow regimes2,3,4,5,6,7.
This is important because, over the past few decades, the atmosphere in polar and high-elevation regions has warmed faster than elsewhere8. Even if global air temperature increased by no more than 2 °C by 2100, permafrost may still degrade over a significant area9. Such a change would have serious consequences for ecosystems, hydrological systems, and infrastructure integrity10,11,12. Carbon release resulting from permafrost degradation will potentially impact the Earth's climate system because large amounts of carbon previously locked in frozen organic matter will decompose into carbon dioxide and methane13,14,15. This process is expected to augment global warming by 0.13–0.27 °C by 2100 and by up to 0.42 °C by 230015. Despite this, permafrost change is not yet adequately represented in most of the Earth System Models14 that are used for the IPCC projections for decision makers. One major reason for this was the absence of a standardized global data set of permafrost temperature observations for model validation. Prior to the International Polar Year (IPY, 2007–2009), ground temperatures were measured in boreholes scattered across permafrost regions. However, a globally organized permafrost data network and a standard reference period against which temperature change could be measured did not exist. One key outcome of the IPY was strengthening the Global Terrestrial Network for Permafrost (GTN-P)16,4. This initiative established a temperature reference baseline for permafrost and led to an increase in the number of accessible boreholes used for temperature monitoring. To analyze the thermal change of permafrost we assembled a global permafrost-temperature data set that includes time series of data attributed to the IPY reference boreholes. We compiled a time series for the decade from 2007 to 2016 that comprises mean annual ground temperatures \(\bar T\), determined from temperatures measured in boreholes within the continuous and discontinuous permafrost zones in the Arctic (including the Subarctic), Antarctica and at high elevations outside the polar regions. The measurements were made at, or as close as possible to, the depth of zero annual amplitude Z*, where seasonal changes in ground temperature are negligible (≤0.1 °C). Rates of permafrost temperature change calculated for the 2007–2016 decade were indexed in each borehole to suppress near-surface and deep geothermal changes. Regional and global change rates were calculated as area-weighted means. To compare single borehole sites, given the higher availability of full-year records after 2007, we ranked the temperature difference between the biennial means of 2008–2009 and 2015–2016. We used linear regression on \(\bar T\) between 2007 and 2016 to estimate decadal change rates. To calculate annual departures, we compared consecutive years to the reference mean of 2008–2009. We concluded that ground temperature near the depth of zero annual amplitude increased in all permafrost zones on Earth, that is, in continuous and discontinuous permafrost in the Northern Hemisphere, as well as in mountain permafrost and in Antarctica. The observed trend followed increased air temperature and snow thickness, each to varying degrees depending on the region.
Permafrost temperature changes
Measurements from borehole sites established prior to the IPY generally indicated warming driven by higher air temperatures (Fig. 1)4,17,18.
Our new data set contains 154 boreholes of which 123 allow calculation of decadal temperature change rates based on adequate time series. The remaining 31 boreholes provide additional information on annual departures. Our results show that in the decade after the IPY permafrost warmed within 71 boreholes, cooled in 12, and remained unchanged (within measurement accuracy) in the remaining 40 (Fig. 2). The ground temperature rose above 0 °C in five boreholes, indicating thawing at the measurement depth of 10 m at Z*. The largest increase of \(\bar T\) over the observed reference decade between 2007 and 2016 was 0.39 ± 0.15 °C \(dec_{{\mathrm{Ref}}}^{ - 1}\)in the Arctic continuous permafrost zone. The greatest permafrost temperature changes observed in individual boreholes (\({\mathrm{\Delta }}\bar T_b\)) since 2008–2009 were 0.93 and 0.90 °C in northwestern Siberia (Marre Sale, 10 m) and northeastern Siberia (Samoylov Island, 20.75 m), respectively. The discontinuous permafrost zone experienced warming of 0.20 ± 0.10 °C \(dec_{{\mathrm{Ref}}}^{ - 1}\). The largest \({\mathrm{\Delta }}\bar T_b\) since 2008–2009 of 0.95 °C was observed in southeastern Siberia, Magadan (Olsky pass, 10 m). Permafrost at this site started thawing after the IPY period at the measurement depth. Long permafrost temperature records for selected sites. a Location of boreholes with long time-series data. Because some regions lack long temperature records, shorter temperature records from Greenland and Chinese mountains are included for comparison. Depth of measurements is according to the Global Terrestrial Network for Permafrost ID16: 24.4 m (ID 356), 20 m (ID 55, 79, 102, 117, 501, 710, 831, 1113, and 1710), 18 m (ID 386), 16.75 m (ID 871), 15 m (ID 854), 12 m (ID 287), 10 m (ID 265, 431), and 5 m (ID 528). The light blue area represents the continuous permafrost zone (>90% coverage) and the light purple area represents the discontinuous permafrost zones (<90% coverage). b Mean annual ground temperature over time. Colors indicate the location of the boreholes in a. Permafrost zones are derived from the International Permafrost Association (IPA) map46. World Borders data are derived from http://thematicmapping.org/downloads/world_borders.php and licensed under CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0/) Permafrost temperature and rate of change near the depth of zero annual amplitude. a, b Mean annual ground temperatures for 2014–2016 in the Northern Hemisphere and Antarctica, n = 129 boreholes. c, d Decadal change rate of permafrost temperature from 2007 to 2016, n = 123 boreholes (Eq. 3). Changes within the average measurement accuracy of ~±0.1 °C are coded in green. Continuous permafrost zone (>90% coverage); discontinuous permafrost zones (<90% coverage). Permafrost zones are derived from the International Permafrost Association (IPA) map46. World Borders data are derived from http://thematicmapping.org/downloads/world_borders.php and licensed under CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0/) Mountain permafrost in the data set is mainly represented by boreholes in the European Alps, the Nordic countries, and central Asia. Although absolute \(\bar T\) values in mountain permafrost are highly heterogeneous, depending on elevation, local topography, snow regime, and subsurface characteristics, changes in mountain permafrost temperatures were analyzed for all regions and settings19 as one group. 
They can vary considerably, however, between sites with low and high ground-ice content at temperatures just below 0 °C. Mountain permafrost \(\bar T\) increased20,21 by 0.19 ± 0.05 °C \(dec_{{\mathrm{Ref}}}^{ - 1}\). The greatest \({\mathrm{\Delta }}\bar T_b\) since 2008–2009 was 1.15 °C, observed in the Aldan mountain tundra of southern Yakutia, Siberia (Taezhnoe, 25 m). On average, permafrost across zones warmed by 0.33 ± 0.16 °C over the reference decade in northern Asia and by 0.23 ± 0.11 °C \(dec_{{\mathrm{Ref}}}^{ - 1}\) in North America. This difference is most likely due to stronger warming of the atmosphere over northern Asia compared to North America, as indicated by reconstructed decadal air temperature changes (1998–2012) that showed cooling in Alaska22. Similar to the warming of the Arctic continuous permafrost zone, Antarctic permafrost warmed by 0.37 ± 0.10 °C \(dec_{{\mathrm{Ref}}}^{ - 1}\). However, the remoteness of the continent and its limited accessibility mean that far fewer boreholes have been drilled to Z* than in the Northern Hemisphere. Consequently, permafrost temperature departures and trends were not statistically significant and had large uncertainty bands (Fig. 3d). Annual permafrost temperature change. a–d Permafrost temperature departure \({\mathrm{\Delta }}\bar T_{y,b}\) calculated from mean annual ground temperatures in boreholes near the depth of zero annual amplitude Z* relative to the 2008–2009 reference period. Mean values calculated as de-clustered, indexed area-weighted averages (Eq. 1). Temperature uncertainties are expressed at 95% confidence. Sample size shown is the number of borehole sites per year and region.
Air temperature changes
The relation between air and soil temperature development in permafrost regions is not straightforward, owing to highly variable buffer layers such as vegetation, active-layer soils, or snow cover. To compare permafrost temperature changes to those in the atmosphere, we applied the same calculation method for each borehole site using mean annual air temperatures (\(\hat T\)) at 2-m height above ground level (Fig. 4a, d), spatially interpolated from the ERA Interim gridded reanalysis data set23. We calculated general snow thickness changes for Arctic sites in Fig. 4a, b. However, there is not, as yet, a reliable and consistent data set on snow thickness applicable to high-elevation regions or Antarctica. Annual air temperature and snow depth changes. a–d Air temperature anomaly \({\mathrm{\Delta }}\hat T_{y,b}\) relative to the 1981–2010 reference period calculated from mean annual air temperatures at 2-m height above ground level interpolated from the ERA Interim reanalysis data set. Mean values calculated as de-clustered, indexed area-weighted averages (Eq. 4). Dark colored dashed lines indicate 4-year end-point running means. Snow depth changes \({\mathrm{\Delta }}\hat S_{y,b}\) in a and b, indicated in gray, calculated as the difference relative to the 1999–2010 reference period from the CMC reanalysis data set (Eq. 5). e, f Onset of snow SO, snow insulation maximum SIM (dashed line), and the end of snow melt SE. Uncertainties are expressed as shading at 95% confidence. Sample size n indicates the number of boreholes. The propagation of temperature change in the atmosphere downward to the depth of Z* can take up to several years, but the time varies depending on the surface characteristics, the subsurface ice content, and the soil thermal diffusivity24,25.
We took this lag into account by averaging over the previous 4 years for each year considered, but there was no significant correlation at an annual resolution between permafrost temperature departures at Z* depth and 2-m air temperature anomalies derived from ERA Interim data alone (Fig. 4). This lack of correlation can be attributed to the discrepancy between the scale at which borehole observations are conducted and the spatial resolution of 80 km for the gridded air-temperature reanalysis data26 and because in permafrost regions, the reanalysis output is more dependent on the model structure and data assimilation methods than in data-rich regions27; local micro- and secondary climate effects28; and buffering layers at the air-ground interface5 that influence the thermal response of permafrost to short-term changes in air temperature. Previous studies have shown that these surface effects, along with the thermal diffusivity of the underlying materials, act as a buffer that reduces the effect of short-term climate variation2,3,5,6,7,29. Thus, short-term meteorological phenomena are increasingly attenuated and delayed with depth, and the mean permafrost temperature changes near the depth of Z* generally follow the atmosphere's long-term trend. Mean surface air temperature changes calculated from ERA Interim data at the borehole locations (Fig. 5b) are similar to those for permafrost temperature with respect to direction and order of magnitude. The decadal change rates of air temperature were estimated to 0.86 ± 0.84 °C per reference decade in the Arctic continuous permafrost zone, 0.63 ± 0.91 °C \(dec_{{\mathrm{Ref}}}^{ - 1}\) in the Arctic discontinuous permafrost zone, and 0.1 ± 0.50 °C \(dec_{{\mathrm{Ref}}}^{ - 1}\) in mountain permafrost. Air temperature trends in Antarctica (annual mean 0.10 ± 0.55°C \(dec_{{\mathrm{Ref}}}^{ - 1}\), June–August mean –0.48 ± 0.91 °C \(dec_{{\mathrm{Ref}}}^{ - 1}\), unweighted median –0.12, Fig. 5b), however, do not match the observed strong permafrost warming. This discrepancy is due to large climatic differences between the Antarctic Peninsula and eastern Antarctica30,31, the small number of boreholes that fulfill the quality criteria, and the principal climate model bias in Antarctica32. Decadal temperature change rates at permafrost borehole sites. a Boxplots showing the regional (unweighted) distribution of permafrost temperature change rates near the depth of zero annual amplitude Z* calculated for 2007–2016 in °C per decade (Eq. 3). * indicate significant difference to 0, p < 0.05 defined by the Wilcoxon Signed-Rank test. p values ACP: 0.000, ADP: 0.002, MP: 0.016, and ANP: 0.156 (rounded to three digits). The Kruskal–Wallis test indicated p values > 0.05 for couples that are tied with brackets in the graph. b Air temperature change rates at 2 m height above ground at borehole sites in °C per decade, calculated from the ERA Interim reanalysis data for 2004–2016, separated by regions. Symbols: n number of boreholes, ACP Arctic continuous permafrost, ADP Arctic discontinuous permafrost, MP mountain permafrost, ANP Antarctic permafrost. Boxes represent 25–75% quartiles and whiskers are 1.5 interquartile ranges from the median. Medians are shown as black lines Air temperature trends in the Arctic continuous permafrost zone correspond well with permafrost temperature change rates (Figs. 3a and 4a), suggesting that enhanced warming of permafrost in the High Arctic reflects the polar amplification of recent atmospheric warming22. 
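The borehole-level air temperatures discussed here come from the gridded reanalysis via interpolation of the four surrounding grid points (see the Methods below). A minimal R sketch of such a bilinear interpolation is shown for completeness; all argument names are hypothetical, and reading the reanalysis fields themselves is not shown.

```r
# Bilinear interpolation of a gridded field (e.g. 2-m air temperature) to a
# borehole location from the four surrounding grid points.
# lon1 < lon2 and lat1 < lat2 are the bracketing grid coordinates; t11, t21,
# t12, t22 are the field values at (lon1,lat1), (lon2,lat1), (lon1,lat2) and
# (lon2,lat2).
interp_to_borehole <- function(lon, lat, lon1, lon2, lat1, lat2,
                               t11, t21, t12, t22) {
  wx <- (lon - lon1) / (lon2 - lon1)   # weight along longitude
  wy <- (lat - lat1) / (lat2 - lat1)   # weight along latitude
  (1 - wx) * (1 - wy) * t11 + wx * (1 - wy) * t21 +
    (1 - wx) * wy * t12 + wx * wy * t22
}

# Example call with made-up coordinates and temperatures (deg C):
interp_to_borehole(126.5, 72.4, 126, 127, 72, 73, -13.2, -13.0, -13.6, -13.4)
```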
However, in the Arctic discontinuous permafrost zone, air temperatures were statistically unchanged between 2006 and 2014 while permafrost temperatures increased. We found that snow dynamics, the time lag between air and ground temperature, and the latent heat effect serve as concurrent explanations for this phenomenon. Snow thickness changes The snow cover reduces the upward transfer of energy from the ground to the air during winter33,34. Distinct peaks in the mean snow depth in 2009, 2011 and from 2013 onward (Fig. 4a, b) suggest that the observed continued warming of discontinuous permafrost is facilitated by increasing snow thickness. Compared to the Arctic continuous permafrost zone, the mean snow cover in the discontinuous zone arrived about 1 week later, reached its maximum insulation 1 month earlier, and also disappeared half a month earlier. Compared to 2007–2009 the snow cover in 2014–2016 in the discontinuous zone started to form 13.7 days earlier, reached its maximum insulation effect 37.7 days earlier, and disappeared 9.3 days earlier (Fig. 4f). It was shown previously that a difference of only 10 days caused significant warming in Alaska35. Increases of shrub height and density that trap wind drifting snow is likely also a contributing factor36. All of these changes provide evidence of increased protection of the ground from low temperatures during winter37,38. Snow timing differences within the continuous zone are less distinct but show a generally similar trend (Fig. 4e, f). An important factor that explains the general discrepancy between mean annual temperature changes at Z* in permafrost and the atmosphere is that permafrost progressively with depth "remembers" the surface temperature history of the past several years25,39. The temporal dimension of episodes with lower air temperatures between 2009 and 2013 in the Arctic (Fig. 4a, b), and around 2012 in the mountains (Fig. 4c), relative to preceding period of higher air temperatures, however, was not large enough to sustainably impact the general warming trend of permafrost. We partly attribute the difference in ground temperature change between the continuous permafrost and the discontinuous permafrost zones to the latent heat effect. In this process, the ice-water phase change associated with warmer permafrost in the discontinuous zone (Fig. 2a, b) reduces the response of ground temperature to changes in air temperature4. Cold permafrost therefore exhibits a greater response to changing air temperature compared to permafrost with a temperature close to 0 °C4,40. The warming of permafrost observed since IPY continues the trends documented prior to IPY41. Our global analysis suggests that the future increases in air temperature projected under current climate scenarios42 will result in continued permafrost warming. The duration of our time-series, however, does not yet permit predictive analysis of non-linear climate-permafrost relations as the latent heat effect is stronger near 0 °C and surface characteristics are not constant. However, observations of thaw at some of the observation sites demonstrate that the latent heat requirement cannot indefinitely delay permafrost warming down to depths of about 15 m observed in this study (Fig. 6), nor prevent the eventual thawing of permafrost. This could have wide implications in terms of permafrost degradation and release of greenhouse gases from decomposition of organic matter. Depth distribution of borehole temperatures. 
a Boxplots showing the depth distribution of temperature measurements (sensor depths) and of the zero annual amplitude Z* in the boreholes. b Temperature distribution in borehole sensors that are shallower (≤12 m) and deeper (>12 m) than the median of Z*. * indicate significant difference to 0, p < 0.05 defined by the Wilcoxon Signed-Rank test; p values ≤12 m: 0.000, >12 m: 0.000 (rounded to three digits). The Kruskal–Wallis test between these zones (b) resulted in p = 0.908, indicating that the zones are not significantly different to each other. Boxplots represent 25–75% quartiles and whiskers are 1.5 interquartile ranges from the median. Medians are shown as black lines and labeled with values. The number of boreholes (sensors), and the number of available Z*values is indicated by n The SWIPA 2017 report41 gave an estimate of 0.5 °C warming of permafrost in very cold areas such as the High Arctic since IPY (2007–2009). This is similar to our network observations of strong warming within the Arctic continuous permafrost zone and of continued warming elsewhere. The assessment of permafrost temperature trends presented in this paper can facilitate validation of models to project thawing of permafrost down to the depth of Z* and associated impacts with respect to feedbacks to the climate system. The current global coverage of permafrost temperature monitoring is not yet ideal, due to the limited sampling in regions such as Siberia, central Canada, Antarctica, and the Himalayan and Andes mountains. Furthermore, even though the data used were quality checked and are as complete as possible, logistical challenges during fieldwork caused gaps in the time series. Better assessments of the evolution of the thermal state of permafrost, including consideration of non-linear system behavior, will benefit from ongoing efforts to enhance the global network spatially and extend the length of the record. Enhancing existing monitoring sites through co-location with meteorological stations could further improve understanding of microclimate and buffer-layer influences, and would also provide the data necessary for a comprehensive assessment of permafrost responses to ongoing climate change. The newly compiled GTN-P data set has facilitated assessment of trends in permafrost temperatures and can also contribute to improved representation of permafrost dynamics in climate models and the reduction of uncertainty in the prediction of future conditions. Field observations of permafrost temperatures Boreholes were established and temperatures were recorded during annually repeated fieldwork campaigns in polar and high-elevation areas. Temperature was measured either by lowering a calibrated thermistor into a borehole, or recorded using permanently installed multi-sensor cables43. Measurements were recorded either manually with a portable temperature system or by automated continuous data logging. At some borehole sites, permafrost thawed at the measurement depth during the period of observations. The criterion to include non-permafrost sites in the global change calculation was that ground temperatures near the depth of the Z* were below 0 °C until the end of the IPY reference period in 2009. Compiling permafrost temperature data Permafrost temperature data are assembled in the Global Terrestrial Network for Permafrost (GTN-P) Database16. They are then transferred to a global data set after a 1-year embargo to allow authors to publish their local findings first. 
Within the GTN-P Data Management System the data presented were harmonized, quality checked and filtered to generate a standardized global permafrost borehole data set. Data standardization was performed during data entry into the database following international geospatial metadata standards ISO 19115/2 and TC/221. The data management system is based on an object-oriented data model, accessible online at http://gtnpdatabase.org. The GTN-P mean annual ground temperature \(\bar T\) compilation is accessible online at https://doi.org/10.1594/PANGAEA.884711. A total of 154 boreholes with 1264 \(\bar T\) values were used in this study. Data analyses of decadal permafrost temperature change were based on 123 boreholes and 1033 \(\bar T\) values calculated from > 105 sensor observations. Calculating permafrost temperature change We used the R environment44 to calculate the mean permafrost temperature change for every borehole from quality-filtered \(\bar T\) data. The same measurement depth was used each year for a borehole. The depth was chosen to be the nearest available sensor to the depth of Z*, the depth at which seasonal changes in temperature are ≤0.1 °C (Fig. 7). The nearest depth to Z* was detected by either an algorithm calculating the difference between annual maximum (summer) and minimum (winter) temperature in the original data starting from the shallowest depth downwards and using cubic spline interpolation between thermistors and a threshold set to sensor accuracy, or by visual inspection of annual maximum and minimum temperature measurements plotted versus depth (Fig. 7). Because the depth of Z* varies over time as temperature changes, we used an average estimated for the observation period. The data revealed that 19.5% of measurements were from above Z*. 59.8% of measurements represented Z* and 20.7% were from below Z*. Measurements from boreholes that had no reliable indication of Z* had a mean depth of 17.1 m, which is well below the average of all indicated Z* values (mean 14.1 m, median 12 m). Thus, the data distribution represents an approximation to Z* which minimizes the potential bias caused by seasonal fluctuations. Thermal regime of permafrost. Schematic showing the maximum (red line) and minimum ground temperature (blue line) during the year, and their convergence to give the mean annual ground temperature \(\bar T\) at the depth of zero annual amplitude Z*. Black dots show the schematic mean temperature for permafrost soils. Compiled guided by French53 We created a data set that reflects long-term climate change and avoids large temperature fluctuations caused by seasonal phenomena, e.g., in Antarctica, by excluding data from shallow boreholes that did not reach Z*. Because Z* could not be determined in all boreholes the minimum depth was set to 10 m. However, five boreholes with depths between 6.7 m and 10 m were included (GTN-P ID's16: 137, 860, 861, 877, and 1192), because their depths were equal to Z*, and seasonal fluctuations were less than the instrument precision and accuracy. Boreholes that fulfilled the quality criteria but were not included in this analysis due to depth constraints, represented 22.6% of the original data set. 8.6% were excluded from the Arctic continuous data set; 23.4% from the Arctic discontinuous data set; 30.0% from the mountain data set; and 57.1% from the Antarctic data set. Statistically indifferent temperature trends of the remaining shallow (≤12 m) and deeper (>12 m, max. 
40 m) boreholes in the utilized data set confirm that the observed depths near Z* (Fig. 6b) provide a representative sample tracking climate variability coherently. We applied different methods to extract information on permafrost temperature changes in single years, in single boreholes and for decadal changes in the permafrost regions, described as follows: We define a set i = {2007,...,2016} to identify the years. To identify the boreholes b we use the GTN-P Database ID. Continuous (full-year) records started at a large number of borehole sites in 2008, the second year of the 4th International Polar Year (IPY). To base the reference period for the annual departure calculation on the largest possible number of boreholes we exclude 2007 and estimate the annual differences in \(\bar T\) in year \(y \in i\) and borehole b as $${\mathrm{\Delta }}\bar T_{y,b} = \bar T_{y,b}-1/2\left( {\bar T_{2008,b} + \bar T_{2009,b}} \right)$$ The last term on the right-hand side of Eq. (1) serves as our mean value for the reference period. We compare this reference period to the latest available mean value period and calculate\({\mathrm{\Delta }}\bar T_b\) to rank total temperature differences among boreholes. $${\mathrm{\Delta }}\bar T_b = 1/2\left( {\bar T_{2015,b} + \bar T_{2016,b}} \right)-1/2\left( {\bar T_{2008,b} + \bar T_{2009,b}} \right)$$ Equations (1) and (2) require data to be available in each of the observation years. To calculate the rate of temperature change per decade we follow a third approach using the primary mean annual ground temperature data set \(\bar T_b\) for all available years in i and perform linear regression, according to the following attribution of our data in the regression equation: $$\bar T_b^{{\mathrm{reg}}} = a_b + c_bx$$ where \(\bar T_b^{{\mathrm{reg}}}\) is the regression estimate of \(\bar T_b\), ab is the vertical intercept (the starting temperature in a borehole), cb is the slope of the regression line, and x is the range of years involved. The requirement to perform linear regression on b was that i included at least one value y in the IPY period (2007, 2008, or 2009), one value in the modern reference period (2015 or 2016) and a minimum of five values in total. We calculated the rate of temperature change in each borehole as the slope of the linear regression cb using the linear model function (lm) in the R environment. To generate decadal change values, we extrapolated 37.7% of the borehole data in the Arctic continuous zone, 47.3% in the Arctic discontinuous zone, 29.3% in the mountain zone and 100% in Antarctica for 1–3 years. The consistency of temperature time series in boreholes depends on sustained data collection at remote sites. At some boreholes, instrumentation was destroyed, damaged or malfunctioned leading to interruptions in data collection45. To avoid broken data runs affecting the annual means, measurements at frequencies greater than monthly (e.g. daily or hourly), were aggregated to monthly means before calculating annual means. Mean annual values were based on at least monthly primary data. Data points based on fewer than one measurement every month were allowed only if the sensor depth was equal to or below the depth of zero annual amplitude. Annual means were calculated from original measurements as calendar-year means in the GTN-P Database. Meteorological years in permafrost areas depend on the onset and termination of the freezing and thaw periods, and in previous studies varied spatially. 
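As a concrete illustration of Equations (1)–(3), the following R sketch computes the annual departures, the 2015/16 minus 2008/09 difference and the decadal trend for a single borehole record. The column names year and magt are hypothetical placeholders for one borehole's observation years and the corresponding mean annual ground temperatures near Z*.

```r
# Sketch of Eqs. (1)-(3) for one borehole record.
# 'year': years with data; 'magt': mean annual ground temperature (deg C) near Z*.
borehole_change <- function(year, magt) {
  ref        <- mean(magt[year %in% 2008:2009], na.rm = TRUE)        # 2008-2009 reference
  departures <- magt - ref                                           # Eq. (1)
  delta_b    <- mean(magt[year %in% 2015:2016], na.rm = TRUE) - ref  # Eq. (2)
  has_ipy    <- any(year %in% 2007:2009 & !is.na(magt))
  has_modern <- any(year %in% 2015:2016 & !is.na(magt))
  enough     <- sum(!is.na(magt)) >= 5
  rate <- if (has_ipy && has_modern && enough) {
    10 * coef(lm(magt ~ year))[["year"]]                             # Eq. (3), deg C per decade
  } else NA_real_
  list(departure = departures, biennial_difference = delta_b, rate_per_decade = rate)
}

# Example with made-up values for illustration only:
yrs <- 2007:2016
tz  <- c(-6.0, -5.9, -5.9, -5.8, -5.8, -5.7, -5.6, -5.6, -5.5, -5.5)
borehole_change(yrs, tz)$rate_per_decade   # ~0.6 deg C per decade for these numbers
```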
Meteorological years in permafrost areas depend on the onset and termination of the freezing and thaw periods, and in previous studies varied spatially. We therefore indicated the starting month of the period in the data set. Mean values contain only the available valid \(\bar T\) data in each year, and thus the number of borehole temperatures included in change-rate calculations varies between years. To evaluate temperature changes in the Arctic continuous and discontinuous permafrost zones, in the mountain permafrost and in permafrost in Antarctica, we applied a spatial de-clustering prior to calculating mean values of temperature changes from the boreholes. The spatial de-clustering reduces the bias in the calculation of means caused by an inhomogeneous (clustered) spatial distribution of the boreholes. We grouped the boreholes into ten world zones (Fig. 8) and defined the areas underlain by permafrost by correlating the boreholes with the International Permafrost Association (IPA) permafrost zones46. Arctic continuous permafrost represents the mean of four different zones: Arctic continuous permafrost West (2.41 × 10⁶ km²), Arctic continuous permafrost West islands (1.57 × 10⁶ km²), Arctic continuous permafrost Europe (0.22 × 10⁶ km²), and Arctic continuous permafrost East (Asia) (6.62 × 10⁶ km²). Arctic discontinuous permafrost is averaged over three zones: Arctic discontinuous permafrost West (3.91 × 10⁶ km²), Arctic discontinuous permafrost East (Asia) (3.86 × 10⁶ km²), and Arctic discontinuous permafrost Europe (0.28 × 10⁶ km²). Mountain permafrost is averaged over two zones: Chinese mountains (2.07 × 10⁶ km²), and Other mountains (2.33 × 10⁶ km²) including the Alps and other sites with high elevations >1000 m a.s.l. such as in Scandinavia and the North American Cordillera. Antarctica is treated as one zone (0.05 × 10⁶ km²)6,47. For comparing temperature trends between North American and north Asian permafrost we define two separate data sets by excluding southern, European, and central Asian boreholes. Within the zones, clusters of boreholes close together were grouped when the sum of longitude and latitude differences was <0.1 decimal degree, and the \(\bar T\) values of adjacent boreholes were averaged before calculating the mean temperature change.

Fig. 8: Weighting and grouping of boreholes. Map showing the indices and zoning of boreholes prior to area-weighting and calculation of mean temperature changes. a Northern Hemisphere. b Antarctica. Permafrost zones are derived from the International Permafrost Association (IPA) map46. World Borders data are derived from http://thematicmapping.org/downloads/world_borders.php and licensed under CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0/).

To estimate the mean annual temperature change in each zone we applied area-weighted arithmetic averaging of \(\bar T\) values in boreholes. To preserve the signal of local outlier trends showing atypical temperature change directions and magnitudes (e.g., in parts of Antarctica and in Québec, Canada), we did not use medians. To suppress near-surface and geothermal changes, indices of boreholes were distributed as three possible integers to multiply the sites before averaging, according to the following criteria: (i) \(\bar T\) is available in each year of the reference periods indicated in Eqs. (1) and (2), and (ii) \(\bar T\) depth is equal to the depth of Z* and >10 m (few exceptions were made according to the depth of Z* as described above).
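A rough R sketch of the de-clustering and area weighting described above; the data frame, its column names and values, and the coordinate-rounding shortcut standing in for the <0.1 decimal-degree grouping rule are all assumptions made for illustration, not the authors' procedure:

    bh <- data.frame(                      # hypothetical per-borehole table
      zone = c("ArcticContW", "ArcticContW", "ArcticContE"),
      area = c(2.41e6, 2.41e6, 6.62e6),    # zone area in km^2
      lon  = c(-133.72, -133.69, 129.41),
      lat  = c(68.31, 68.33, 62.10),
      dT   = c(0.35, 0.41, 0.28)           # temperature change per borehole
    )
    # crude stand-in for the <0.1 decimal-degree grouping rule
    bh$cluster <- paste(bh$zone, round(bh$lon, 1), round(bh$lat, 1))
    clust <- aggregate(dT ~ zone + area + cluster, data = bh, FUN = mean)  # average adjacent boreholes
    zonal <- aggregate(dT ~ zone + area, data = clust, FUN = mean)         # mean change per zone
    weighted.mean(zonal$dT, zonal$area)                                    # area-weighted composite mean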
Calculating air temperature change
The set of air temperature data monitored at borehole sites is incomplete. To develop data comparable to the permafrost temperature data, we calculated mean annual air temperatures (\(\hat T\)) from the ERA-Interim 2 m air temperature data set with 80 km spatial resolution. We derived the reanalysis time series for each borehole from linear interpolation of the four nearest grid points surrounding the borehole coordinates. Mean annual values were calculated from December until November. Given that the propagation of atmospheric temperature change downward to the depth of Z* takes up to several years25,37, depending on the local thermal diffusivity24, we extended the time series shown in Fig. 4 backwards to 2000 and used the standard reference period 1981–2010 to estimate anomalies. We define a set j = {1981,...,2016} to identify the years being considered. We use the coordinates of boreholes b defined in Eq. (1) and calculate the annual difference for specific years \(y \in j\) in \(\hat T\) as

$${\mathrm{\Delta }}\hat T_{y,b} = \hat T_{y,b}-\frac{1}{30}\sum_{j = 1981}^{2010} \hat T_{j,b}$$

Based on the average propagation of surface temperature towards Z* of 4 years25, we calculated 4-year end-point running means to compare air temperature with permafrost temperature changes. To calculate the rate of temperature change over a decade, we applied linear regression on \(\hat T_{y,b}\) for all \(y \in j\) using the linear model function in the R environment, took the slope of the linear regression over the years 2004–2016, and multiplied the annual change rate by 10. Data analyses of air temperature change were based on 137 borehole sites and 4932 \(\hat T\) values.

Calculating snow thickness change
We calculated the mean annual snow thickness (\(\hat S\)) for the Arctic continuous and the discontinuous permafrost zone from the Canadian Meteorological Centre (CMC) daily snow depth analysis data with 24 km spatial resolution48. We derived the reanalysis time series for each borehole from linear interpolation of the four nearest grid points surrounding the borehole coordinates. Mean values were calculated from December until February for each year in the data set. To identify winters we use subsequent years, e.g., in the time series we assign the 1999–2000 winter to 2000. Given that 1999 is the earliest available year in the data set, we define a set k = {1999,...,2016} to identify the winter years, where \(y \in k\). We use the coordinates of boreholes b defined in Eq. (1) and calculate the annual difference in \(\hat S\) as

$${\mathrm{\Delta }}\hat S_{y,b} = \hat S_{y,b}-\frac{1}{12}\sum_{k = 1999}^{2010} \hat S_{k,b}$$

The onset of snow has an impact on the ground thermal regime. To assess the onset of snow cover, we assemble the snow depths at daily resolution between 1 September and 30 April in a set of days l = {1,2,3,...,242} for every year in k. In leap years l = {1,2,3,...,243}. To calculate the onset date of snow SO we use the first day \(d_{k,b}^{{\mathrm{SO}}}\) reaching 6 cm in l for which the following 5 days, adding up to a synoptic time scale of 6 days, retain a daily snow cover of at least 6 cm49. The insulation maximum of snow SIM is reached when the snow cover has accumulated to a thickness between 40 and 50 cm33,37. Accordingly, we set SIM based on the first day \(d_{k,b}^{{\mathrm{SIM}}}\) in l reaching 50 cm, or, if it is not reached, take the day representing the maximum snow cover in l (below 50 cm).
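The snow-onset and insulation-maximum rules read as short algorithms. The R functions below are an illustrative interpretation, not the authors' implementation; s is assumed to be one season's daily snow-depth series in cm, with day 1 = 1 September:

    snow_onset <- function(s, thresh = 6, run = 6) {
      deep <- s >= thresh
      for (d in seq_len(max(0, length(s) - run + 1))) {
        if (all(deep[d:(d + run - 1)])) return(d)  # first day of a 6-day run with >= 6 cm of snow
      }
      NA_integer_                                  # no onset detected
    }

    snow_insulation_max <- function(s, cap = 50) {
      d <- which(s >= cap)
      if (length(d) > 0) d[1] else which.max(s)    # first day reaching 50 cm, else day of the seasonal maximum
    }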
To assess the end of snow cover SE, we assemble the snow depths at daily resolution between 1 September and 31 August in a set m = {1,2,3,...,365} for every year in k. In leap years m = {1,2,3,...,366}. To calculate SE we use the first day \(d_{k,b}^{{\mathrm{SE}}}\) in m reaching down to less than 1 cm after a decreasing gradient of at least 8 cm over 6 days, or, if this gradient is not reached, the first day of at least 6 subsequent snow-free (<1 cm) days.

Measurement accuracy
The reported measurement accuracy of our temperature observations, including manual and automated logging systems, varied from ±0.01 to ±0.25 °C with a mean of ±0.08 °C. Previous tests have shown the comparability of different measurement techniques to have an overall accuracy of ±0.1 °C3. Thermistors are the most commonly used sensors for borehole measurements. Their accuracy depends on (1) the materials and process used to construct the thermistor, (2) the circuitry used to measure the thermistor resistance, (3) the calibration and equation used to convert measured resistance to temperature, and (4) the aging and resulting drift of the sensor over time. Thermistors are typically calibrated to correct for variations due to (1) and (2). About 20% of the boreholes are visited once per year and measured at or below Z* using single thermistors and a data logger. In this case the system is routinely validated in an ice bath, allowing correction for any calibration drift. The accuracy of an ice bath is ~±0.01 °C50. Using the offset determined during this validation to correct the data greatly increases the measurement accuracy near 0 °C, an important reference point for permafrost. The remaining systems are permanently installed and typically ice-bath calibrated at 0 °C before deployment. The calibration drift is difficult to quantify as thermistor chains are not frequently removed for re-calibration or validation. In many cases removal of thermistor chains becomes impossible some time after deployment, e.g., because of borehole shearing. The drift rate among bead thermistors from different manufacturers was <0.01 °C per year during a 2-year experiment at 0, 30, and 60 °C51. The calibration drift of glass bead thermistors was found to be 0.01 mK per year at an ambient temperature of 20 °C52. A single drifting thermistor in a chain is detectable through its anomalous temporal trend. Such data were excluded from our data set. The absolute accuracy of borehole temperature measurements, in terms of their representativeness of the temperature distribution in undisturbed soil, also depends on the depth accuracy of the sensors' positions in the borehole. This study is concerned with temperatures at Z*, where temperature gradients are typically small (<0.1 °C m⁻¹). Consequently, mm-level positioning accuracy does not significantly impact measurement accuracy. Finally, as this study is concerned with annual averages, adequate chronometry is ensured. The above discussion of accuracy relates to the absolute temperature values measured, but the detection of temperature change is more accurate because errors in calibration offset have no impact and sensor nonlinearities are generally small and not of concern. We therefore consider <0.1 °C a conservative average estimate of the accuracy of temperature change on an individual sensor basis.
Confidence intervals and statistical significance
Permafrost and air temperature departures from 2008 until 2016 (\({\mathrm{\Delta }}\bar T_{y,b}\) and \({\mathrm{\Delta }}\hat T_{y,b}\)) and the regression from 2007 until 2016 of each borehole were used to calculate the 95% confidence intervals within each world zone using a Student t-test in the R environment (52% p < 0.05, 48% p > 0.05, mean |t| = 3.4). The upper and lower confidence boundaries were calculated from de-clustered and indexed boreholes. Mean confidence intervals for composite permafrost zones (global, Arctic continuous, Arctic discontinuous, mountain, Asian and American) were area-weighted. Antarctica consists of one zone and thus area-weighting is not applicable. Given a non-normal, unimodal, and only slightly skewed distribution of data in similarly shaped subsets (regions) obtained with Eq. (3) (Figs. 5, 6), we performed a Wilcoxon signed-rank test and a Kruskal–Wallis test to assess the significance of the difference from zero and the differences between medians, respectively. To control for false positives, we performed a False Discovery Rate adjustment of the p-values, resulting in 43.3% p < 0.05, 56.6% p > 0.05, median 0.08 in a data matrix of 9 years (Eq. (1)) versus the 10 permafrost world zones indicated in Fig. 8. Boxplots represent 25–75% quartiles and whiskers are 1.5 interquartile ranges from the median. The GTN-P global mean annual ground temperature data for permafrost near the depth of zero annual amplitude (2007–2016) are accessible online at https://doi.org/10.1594/PANGAEA.884711.

References
Gruber, S. Derivation and analysis of a high-resolution estimate of global permafrost zonation. Cryosphere 6, 221–233 (2012). Christiansen, H. H. et al. The thermal state of permafrost in the Nordic Area during the International Polar Year 2007-2009. Permafrost Periglac. Process. 21, 156–181 (2010). Romanovsky, V. E. et al. Thermal state of permafrost in Russia. Permafrost Periglac. Process. 21, 136–155 (2010). Romanovsky, V. E., Smith, S. L. & Christiansen, H. H. Permafrost thermal state in the Polar Northern Hemisphere during the international polar year 2007-2009: a synthesis. Permafrost Periglac. Process. 21, 106–116 (2010). Smith, S. L. et al. Thermal state of permafrost in North America: a contribution to the international polar year. Permafrost Periglac. Process. 21, 117–135 (2010). Vieira, G. et al. Thermal state of permafrost and active-layer monitoring in the antarctic: advances during the international polar year 2007–2009. Permafrost Periglac. Process. 21, 182–197 (2010). Zhao, L., Wu, Q., Marchenko, S. S. & Sharkhuu, N. Thermal state of permafrost and active layer in central asia during the international polar year. Permafrost Periglac. Process. 21, 198–207 (2010). Pepin, N. et al. Elevation-dependent warming in mountain regions of the world. Nat. Clim. Change 5, 424–430 (2015). Chadburn, S. E. et al. An observation-based constraint on permafrost loss as a function of global warming. Nat. Clim. Change 7, 340–344 (2017). Larsen, J. N. & Fondahl, G. Arctic human development report: regional processes and global linkages (Nordic Council of Ministers, 2015). Hjort, J. et al. Degrading permafrost puts Arctic infrastructure at risk by mid-century. Nat. Commun. 9, 5147 (2018). Haeberli, W., Schaub, Y. & Huggel, C. Increasing risks related to landslides from degrading permafrost into new lakes in de-glaciating mountain ranges. Geomorphology 293, 405–417 (2017). Bowden, W. B.
Climate change in the arctic – permafrost, thermokarst, and why they matter to the Non-Arctic world. Geogr. Compass 4, 1553–1566 (2010). Schaefer, K., Lantuit, H., Romanovsky, V. E., Schuur, E. A. & Witt, R. The impact of the permafrost carbon feedback on global climate. Environ. Res. Lett. 9, 9 (2014). Schuur, E. A. G. et al. Climate change and the permafrost carbon feedback. Nature 520, 171–179 (2015). ADS CAS Article Google Scholar Biskaborn, B. K. et al. The new database of the Global Terrestrial Network for Permafrost (GTN-P). Earth Syst. Sci. Data 7, 245–259 (2015). Serreze, M. C. et al. Observational evidence of recent change in the northern high-latitude environment. Clim. Change 46, 159–207 (2000). Harris, C. et al. Warming permafrost in European mountains. Glob. Planet. Change 39, 215–225 (2003). Harris, C. et al. Permafrost and climate in Europe: monitoring and modelling thermal, geomorphological and geotechnical responses. Earth-Sci. Rev. 92, 117–171 (2009). Streletskiy, D. et al. Permafrost thermal state. Bull. Amer. Meteor. Soc. 98, 19–21 (2017). Liu, G. et al. Permafrost warming in the context of step-wise climate change in the tien shan mountains, China. Permafrost Periglac. Process. 28, 130–139 (2017). Huang, J. et al. Recently amplified arctic warming has contributed to a continual global warming trend. Nat. Clim. Change 7, 875–879 (2017). Dee, D. P. et al. The ERA-Interim reanalysis: configuration and performance of the data assimilation system. Q. J. R. Meteorol. Soc. 137, 553–597 (2011). Kusuda, T. & Achenbach, P. R. Earth temperature and thermal diffusivity at selected stations in the United States (National Bureau of Standards. Gaithersburg MD, 1965). Lachenbruch, A. H. & Marshall, B. V. Changing climate: geothermal evidence from permafrost in the Alaskan Arctic. Science 234, 689–696 (1986). Fiddes, J. & Gruber, S. TopoSCALE v. 1.0: downscaling gridded climate data in complex terrain. Geosci. Model Dev. 7, 387–405 (2014). Lindsay, R., Wensnahan, M., Schweiger, A. & Zhang, J. Evaluation of seven different atmospheric reanalysis products in the Arctic. J. Clim. 27, 2588–2606 (2014). Biskaborn, B. K. et al. Late Quaternary vegetation and lake system dynamics in north-eastern Siberia: implications for seasonal climate variability. Quat. Sci. Rev. 147, 406–421 (2016). Gubler, S., Fiddes, J., Keller, M. & Gruber, S. Scale-dependent measurement and analysis of ground surface temperature variability in alpine terrain. Cryosphere 5, 431–443 (2011). Turner, J. et al. Absence of 21st century warming on Antarctic Peninsula consistent with natural variability. Nature 535, 411–415 (2016). Oliva, M. et al. Recent regional climate cooling on the Antarctic Peninsula and associated impacts on the cryosphere. Sci. Total Environ. 580, 210–223 (2017). Jones, P. D. & Lister, D. H. Antarctic near‐surface air temperatures compared with ERA‐Interim values since 1979. Int. J. Climatol. 35, 1354–1366 (2015). Park, H., Fedorov, A. N., Zheleznyak, M. N., Konstantinov, P. Y. & Walsh, J. E. Effect of snow cover on pan-Arctic permafrost thermal regimes. Clim. Dynam. 44, 2873–2895 (2015). Park, H., Sherstiukov, A. B., Fedorov, A. N., Polyakov, I. V. & Walsh, J. E. An observation-based assessment of the influences of air temperature and snow depth on soil temperature in Russia. Environ. Res. Lett. 9, https://doi.org/10.1088/1748-9326/9/6/064026 (2014). Ling, F. & Zhang, T. J. Impact of the timing and duration of seasonal snow cover on the active layer and permafrost in the Alaskan Arctic. 
Permafrost Periglac. Process. 14, 141–150 (2003). Paradis, M., Levesque, E. & Boudreau, S. Greater effect of increasing shrub height on winter versus summer soil temperature. Environ. Res. Lett. 11, https://doi.org/10.1088/1748-9326/11/8/085005 (2016). Zhang, T. J. Influence of the seasonal snow cover on the ground thermal regime: an overview. Rev. Geophys. 43, https://doi.org/10.1029/2004rg000157 (2005). Burn, C. R. & Zhang, Y. Permafrost andclimate change at Herschel Island (Qikiqtaruq), Yukon Territory, Canada. J. Geophys. Res. Atmos. 114, https://doi.org/10.1029/2008jf001087 (2009). Zhang, Y., Chen, W. J. & Riseborough, D. W. Temporal and spatial changes of permafrost in Canada since the end of the Little Ice Age. J. Geophys. Res. Atmos. 111, https://doi.org/10.1029/2006jd007284 (2006). James, M., Lewkowicz, A. G., Smith, S. L. & Miceli, C. M. Multi-decadal degradation and persistence of permafrost in the Alaska Highway corridor, northwest Canada. Environ. Res. Lett. 8, https://doi.org/10.1088/1748-9326/8/4/045013 (2013). Romanovsky, V. et al. in Snow, Water, Ice and Permafrost in the Arctic (SWIPA) 65-102 (Arctic Monitoring and Assessment Programme, 2017). Raftery, A. E., Zimmer, A., Frierson, D. M. W., Startz, R. & Liu, P. R. Less than 2° C warming by 2100 unlikely. Nat. Clim. Change 7, 637–641 (2017). Streletskiy, D et al. Strategy and Implementation Plan 2016-2020 for the Global Terrestrial Network for Permafrost (GTN-P). (The George Washington University, Washington D.C., 2017). R Development Core Team. R: A language and environment for statistical computing. (R Foundation for Statistical Computing, 2012). Luethi, R. & Phillips, M. Challenges and solutions for long-term permafrost borehole temperature monitoring and data interpretation. Geogr. Helv. 71, 121–131 (2016). Brown, J., Ferrians, O. J. J., Heginbottom, J. A. & Melnikov, E. S. Circum-Arctic Map of Permafrost and Ground-Ice Conditions(National Snow and Ice Data Center, 1998). Bockheim, J., Campbell, I., Guglielmin, M. & López-Martınez, J. in Proc. 9th International Conference on Permafrost 125–130 (University of Alaska, 2008). Brown, R. D. & Brasnett, B. Canadian Meteorological Centre (CMC) Daily Snow Depth Analysis Data (NASA National Snow and Ice Data Center Distributed Active Archive Center, 2010). Roesch, A., Wild, M., Gilgen, H. & Ohmura, A. A new snow cover fraction parametrization for the ECHAM4 GCM. Clim. Dynam. 17, 933–946 (2001). Wise, J. Liquid-in-glass Thermometer Calibration Service (National Inst. of Standards and Technology - Temperature and Pressure Div., 1988). Wood, S. D., Mangum, B. W., Filliben, J. J. & Tillett, S. B. An investigation of the stability of thermistors. J. Res. Natl Bur. Stand. (1934). 83, 247–263 (1978). Lawton, K. M. & Patterson, S. R. Long-term relative stability of thermistors: Part 2. Precis Eng. 26, 340–345 (2002). French, H. M. The Periglacial Environment 3rd edn (John Wiley & Sons. Ltd., 2007). This research would not have been possible without the long-term commitment of all observers to site maintenance, data collection, and their willingness to share permafrost borehole data. All data were compiled by the Global Terrestrial Network for Permafrost (GTN-P). We thank the International Permafrost Association for financial support. We thank Jerry Brown for initiating the borehole metadata collection and Christina Roolfs for mathematical review. 
This research was supported by grants from (in alphabetical order) AGAUR ANTALP #2017-SGR-1102 (Catalonia); BMBF PALMOD #01LP1510D (Germany); ERC PETA-CARB #338335 (EU); FCT #PERMANTAR2017-18/PROPOLAR (Portugal); Formas #214-2014-562 (Sweden); HGF COPER #VH-NG-801 (Germany); Horizon 2020 Nunataryuk #773421 (EU); JSPS KAKENHI #25350416, #21310001 (Japan); MESC #RFMEFI58718X0048, #14.587.21.0048-SODEEP (Russia); MeteoSwiss in the framework of GCOS Switzerland, FOEN and SCNAT for the Swiss Permafrost Monitoring Network PERMOS (Switzerland); Natural Resources Canada; NNSF #41690144, #41671060 (China); NRC TSP #176033/S30, #157837/V30, #176033/S30, #185987/V30 (Norway); NSERC #2014-04084, #2015-05411 (Canada); NSF OPP #1304271, #1304555 #1836377; ICER #1558389, #1717770 (USA); PNRA #16_00194 (Italy); Ramon y Cajal #RYC-2015-17597 (Spain); RAS PP #15, #51, #55, GP #AAAA-A18-118022190065-1, #18-218012490093-1 (Russia); RFBR #18-05-60004, #18-55-11003, #16-05-00249, #16-45-890257-YaNAO, #18-55-11005 AF_t(ClimEco), #18-05-60222-Arctica (Russia); RSCF #16-17-00102 (Russia); National Research Foundation, SNA #14070874451 (South Africa). Alfred Wegener Institute Helmholtz Centre for Polar and Marine Research, Potsdam, 14473, Germany Boris K. Biskaborn, Heidrun Matthes, Julia Boike, William L. Cable, Bernhard Diekmann, Guido Grosse & Hugues Lantuit Geological Survey of Canada, Natural Resources Canada, Ottawa, ON-K1A 0E8, Canada Sharon L. Smith WSL Institute for Snow and Avalanche Research SLF, Davos, CH-7260, Switzerland Jeannette Noetzli & Marcia Phillips CEG/IGOT, Universidade de Lisboa, Lisbon, 1600-276, Portugal Gonçalo Vieira George Washington University, Washington DC, 20052, USA Dmitry A. Streletskiy Institut de Géographie Alpine, Grenoble, F-38100, France Philippe Schoeneich University of Alaska Fairbanks, Fairbanks, AK-99775, USA Vladimir E. Romanovsky, Alexander Kholodov & Kenji Yoshikawa University of Ottawa, Ottawa, K1N 6N5, Canada Antoni G. Lewkowicz Institute of Physicochemical and Biological Problems of Soil Science, RAS, Moscow, 142290, Russia Andrey Abramov & Alexander Kholodov Université Laval, Centre d'études nordiques, Québec, G1V 0A6, Canada Michel Allard Humboldt-Universität, Geography Department, Berlin, 10099, Germany Julia Boike The University Center in Svalbard, Longyearbyen, N-9171, Norway Hanne H. Christiansen University of Fribourg, Fribourg, CH-1700, Switzerland Reynald Delaloye University of Potsdam, Potsdam, 14469, Germany Bernhard Diekmann, Guido Grosse & Hugues Lantuit Earth Cryosphere Institute, Tyumen Scientific Centre SB RAS, Tyumen, 625000, Russia Dmitry Drozdov, Galina Malkova, Natalia Moskalenko & Alexander Vasiliev University of Oslo, Department of Geosciences, Oslo, N-0316, Norway Bernd Etzelmüller Insubria University, Department of Theoretical and Applied Sciences, Varese, 21100, Italy Mauro Guglielmin Technical University of Denmark, Department of Civil Engineering, Kgs. 
Lyngby, DK-2800, Denmark Thomas Ingeman-Nielsen Norwegian Meteorological Institute, Oslo, 0313, Norway Ketil Isaksen Hokkaido University, Sapporo, 060-0810, Japan Mamoru Ishikawa Lund University, Lund, 22362, Sweden Margareta Johansson Arctic Portal, Akureyri, 600, Iceland Halldor Johannsson, Anseok Joo & Jean-Pierre Lanckman Komi Science Centre, RAS, Syktyvkar, 167972, Russia Dmitry Kaverin Melnikov Permafrost Institute, RAS, Yakutsk, 677010, Russia Pavel Konstantinov, Pavel Skryabin & Mikhail Zheleznyak Free University Berlin, Geography Department, Berlin, 12249, Germany Tim Kröger University of Lausanne, Lausanne, 1015, Switzerland Christophe Lambiel Northwest Institute of Eco-environment and Resource, CAS, Lanzhou, 730000, China Dongliang Luo & Qingbai Wu Rhodes University, Grahamstown, 6140, South Africa Ian Meiklejohn University of Barcelona, Barcelona, 08001, Spain Marc Oliva Universidad de Alcalá, Madrid, 28801, Spain Miguel Ramos Stockholm University, Stockholm, SE-106 91, Sweden A. Britta K. Sannel Institute of Environmental Geoscience, RAS, Moscow, 101000, Russia Dmitrii Sergeev National Soil Survey Center, Lincoln, NE-68508, USA Cathy Seybold Tyumen State University, Tyumen, 625003, Russia Alexander Vasiliev Boris K. Biskaborn Jeannette Noetzli Heidrun Matthes Vladimir E. Romanovsky Andrey Abramov William L. Cable Bernhard Diekmann Dmitry Drozdov Guido Grosse Halldor Johannsson Anseok Joo Alexander Kholodov Pavel Konstantinov Jean-Pierre Lanckman Dongliang Luo Galina Malkova Natalia Moskalenko Marcia Phillips Pavel Skryabin Qingbai Wu Kenji Yoshikawa Mikhail Zheleznyak Hugues Lantuit The study was initially conceived during a GTN-P workshop in 2015. B.K.B. led the analyses and writing of the manuscript. S.L.S., J.N., H.M., G.V., D.S., P.S., V.E.R. and A.G.L. are principal co-authors. A.A., M.A., J.B., W.L.C., H.H.C., B.D., R.D., D.D., B.E., G.G., M.G., T.I.-N., K.I., M.I., M.J., D.K., A.K., P.K., H.L., C.L., D.L., G.M., I.M., N.M., M.O., M.P., M.R., A.B.K.S., D.S., C.S., P.S., A.V., Q.W., K.Y. and M.Z. contributed with data collection and expert assessment of borehole data. H.J., A.J., T.K. and J.-P.L. performed database coding, data processing, and data analyses. All authors contributed to analysis of the results and revision of the manuscript. Correspondence to Boris K. Biskaborn. The authors declare no competing interests. Journal peer review information: Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work. Peer reviewer reports are available. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Peer Review File Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. 
To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. Biskaborn, B.K., Smith, S.L., Noetzli, J. et al. Permafrost is warming at a global scale. Nat Commun 10, 264 (2019). https://doi.org/10.1038/s41467-018-08240-4
BIT Numerical Mathematics, pp. 1–29
Linear gradient structures and discrete gradient methods for conservative/dissipative differential-algebraic equations
Shun Sato

In this paper, the use of discrete gradients is considered for differential-algebraic equations (DAEs) with a conservation/dissipation law. As one of the most popular numerical methods for conservative/dissipative ordinary differential equations, the framework of the discrete gradient method has been intensively developed over recent decades. Although discrete gradients have been applied to several specific DAEs, no unified framework has yet been constructed. In this paper, the author moves toward the establishment of such a framework, and introduces concepts including an appropriate linear gradient structure for DAEs. Then, it is revealed that the simple use of discrete gradients does not imply the discrete conservation/dissipation laws. Fortunately, however, for the case of index-1 DAEs, an appropriate reformulation and a new discrete gradient enable us to successfully construct a novel scheme, which satisfies both the discrete conservation/dissipation law and the constraint. This first attempt may provide an indispensable basis for constructing a unified framework of discrete gradient methods for DAEs.

Keywords: Discrete gradient method; Differential-algebraic equations; Linear gradient form; Conservation law; Dissipation law
Communicated by Antonella Zanna Munthe-Kaas. The author is supported by the Research Fellowship of the Japan Society for the Promotion of Science for Young Scientists. Mathematics Subject Classification 65L80. The author is grateful to Kensuke Aishima and Takayasu Matsuo for valuable comments. The author thanks the anonymous reviewers for many helpful comments.

Appendix A: Continuity of \( {\overline{\nabla }}_{\mathrm {P}}V \)

1. V is quadratic, i.e., \( V(z) = (1/2) z^{\top } B z \) for a symmetric matrix B: in this case, since \( {\overline{\nabla }}_{\mathrm {P}}V(z,z') = B (z+z')/2 \) holds, it is clearly a continuous map. Note that, in this case, \( {\overline{\nabla }}_{\mathrm {P}}V = {\overline{\nabla }}_{\mathrm {AVF}} V\).

2. V is strictly convex: in this case, the inequality $$\begin{aligned} 0< V(z) - V(z') - \langle \nabla V(z') , z - z' \rangle < \left\langle \nabla V(z) - \nabla V(z'), z - z' \right\rangle \end{aligned}$$ holds for any \( z \ne z' \). It implies that \( \theta (z,z') \in (0,1) \) holds for \( z \ne z'\). As \( {\overline{\nabla }}_{\mathrm {P}}V \) can also be written in the form $$\begin{aligned} {\overline{\nabla }}_{\mathrm {P}}V(z,z') = \nabla V(z') + \theta (z,z') \left( \nabla V(z) - \nabla V(z') \right) , \end{aligned}$$ (A.1) the boundedness of \( \theta \) proves the continuity of \( {\overline{\nabla }}_{\mathrm {P}}V\).

3. V is convex and \(L_V\)-smooth: in this case, we also use (A.1), but we prove that \( \theta (z,z') \) is bounded when \( \nabla V(z) \ne \nabla V(z') \). Since V is a convex function, we see $$\begin{aligned} 0 \le V(z) - V(z') - \langle \nabla V(z') , z - z' \rangle \le \left\langle \nabla V(z) - \nabla V(z'), z - z' \right\rangle . \end{aligned}$$ This implies that \( \theta (z,z') \in [0,1] \) holds when the denominator of \( \theta (z,z') \) does not vanish.
Moreover, \(L_V\)-smoothness provides us with the inequality $$\begin{aligned} \left\langle \nabla V(z) - \nabla V(z'), z - z' \right\rangle \ge \frac{1}{L_V} \left\| \nabla V(z) - \nabla V(z') \right\| ^2 \end{aligned}$$ (note that the conjugate function \( V^{*} \) is \( 1 / L_V\)-strong convex). Summing up, we see that \( \theta \in [0,1] \) holds when \( \nabla V(z) \ne \nabla V(z') \). Appendix B: Non-convex case: sine-Gordon equation Here we consider the sine-Gordon equation \( u_{tx} = \sin u \) which has the conservation law \( {\mathscr {H}} (u) = - \int _{{\mathbb {S}}} \cos u \, {\mathrm {d}}x = \text {const.} \) and the implicit constraint \( {\mathscr {F}} (u) = \int _{{\mathbb {S}}} \sin u \, {\mathrm {d}}x = 0 \). The spatial discretization \( D {\dot{u}} = M \sin u \) satisfies the discrete counterparts \( H(u) = - \sum _{k=1}^K \cos u_k \) and \( F(u) = \sum _{k=1}^K \sin u_k \) of the conservation law and the implicit constraint. However, since H is not convex (and not quadratic), we do not know whether \( {\overline{\nabla }}_{\mathrm {P}}H \) is continuous. Therefore, we use a trick to construct a discrete gradient which is compatible with properness, i.e., belongs to \( {\mathrm {car}}(D) \). Let us consider the expression \( H(u) = H_1 (u) + H_2 (u) \), where \( H_1 (u) = H(u) + (\alpha / 2) \langle u , P u \rangle \) and \( H_2 (u) = - (\alpha / 2) \langle u , P u \rangle \) (\( \alpha \in {\mathbb {R}}\) is a constant and P is the orthogonal projector on the set of zero-mean vectors). In the expression, (a) \( H_1\) and \(H_2\) are proper, and (b) \( H_2 \) is quadratic so that \( {\overline{\nabla }}_{\mathrm {P}}H_2 \) is continuous. Thus, roughly speaking, it is sufficient to choose the constant \( \alpha \) such that \(H_1\) is convex. This itself cannot be done since P is a singular matrix, but a sufficiently large \( \alpha \) provides us with the similar result as shown in Lemma B.1. To this end, we assume \( H_0 := H(u_0) < 0 \) (the case \( H_0 > 0 \) can be treated similarly). Lemma B.1 Let \( H_0 < 0 \) be a constant. Then, if \( \alpha > 1 - K / H_0 \), the Hessian \( \nabla ^2 H_1 (u ) = \mathrm {diag} ( \cos u_k ) + \alpha P \) of \( {\widetilde{H}} \) is positive definite on the domain \( \{ u \in {\mathbb {R}}^K \mid H(u) = H_0 \} \). It is sufficient to prove \( \langle {\widetilde{v}} , \nabla ^2 H_1 (u) {\widetilde{v}} \rangle > 0 \) holds for any \( {\widetilde{v}} \in \{ w \mid \Vert w \Vert = 1 \} = \{ v \pm \sqrt{ (1-x^2)/K } {\mathbf {1}} \mid v \in {\mathbb {R}}^K , \ \Vert v \Vert = x, \, \langle v,\ {\mathbf {1}} \rangle = 0, x \in [0,1] \} \). Since $$\begin{aligned} 2 x \sqrt{1-x^2}&= 2 \sqrt{ \left( - \frac{1-x^2}{K} H_0 \right) \left( -\frac{K}{H_0} x^2 \right) } \le - \frac{1-x^2}{K} H_0 - \frac{K}{H_0} x^2 \end{aligned}$$ holds due to the inequality of arithmetic and geometric means, we see $$\begin{aligned} \langle {\widetilde{v}} , \nabla ^2 H_1 (u) {\widetilde{v}} \rangle&= \sum _{k=1}^K (\cos u_k )(v_k)^2 \pm 2 \sqrt{ \frac{1-x^2}{K} } \langle v, \cos u \rangle - \frac{1-x^2}{K} H_0 + \alpha x^2 \\&\ge - \Vert v \Vert ^2 - 2 \sqrt{ \frac{1-x^2}{K} } \Vert v \Vert \Vert \cos u \Vert - \frac{1-x^2}{K} H_0 + \alpha x^2\\&\ge - x^2 - 2 x \sqrt{1-x^2} - \frac{1-x^2}{K} H_0 + \alpha x^2 \\&\ge - x^2 + \frac{1-x^2}{K} H_0 + \frac{K}{H_0} x^2 - \frac{1-x^2}{K} H_0 \\&\quad + \alpha x^2 = \left( - 1 + \frac{K}{H_0} + \alpha \right) x^2. 
\end{aligned}$$ Thus, for \( x \in (0,1] \), \( \langle {\widetilde{v}} , \nabla ^2 H_1 (u) {\widetilde{v}} \rangle > 0\) holds under the assumptions of this lemma. On the other hand, for the case \( x = 0\), i.e., \( {\widetilde{v}} = \sqrt{1/K} {\mathbf {1}} \), \( \langle {\widetilde{v}} , \nabla ^2 H_1 (u) {\widetilde{v}} \rangle = - H_0/K > 0 \) holds. \(\square \) By using \( H_1 \) and \(H_2\), we can construct $$\begin{aligned} {\widetilde{\nabla }} H(u,u') = {\overline{\nabla }}_{\mathrm {P}}H_1 (u,u') - \alpha P \frac{u + u'}{2} \end{aligned}$$ (B.1) which can be used like a discrete gradient (see Proposition B.1). Proposition B.1 Suppose that \( \alpha > 1 - K / H_0 \). The function \( {\widetilde{\nabla }} H :{\mathbb {R}}^K \times {\mathbb {R}}^K \rightarrow {\mathbb {R}}^K \) defined by (B.1) satisfies \( H(u) - H(u') = \langle {\widetilde{\nabla }} H (u,u'), u - u' \rangle \), \( {\widetilde{\nabla }} H(u,u) = \nabla H(u) \), and \( {\widetilde{\nabla }} H(u,u+\epsilon ) \) is continuous in the neighborhood of \( \epsilon = 0\) as a function of \(\epsilon \) for each \( u \in \{ u \in {\mathbb {R}}^K \mid H(u) = H_0 \} \). The first two conditions are immediate. The third condition also holds since the denominator of \( \theta \) with respect to \(H_1\) is positive in the neighborhood due to the positive definiteness of the Hessian. \(\square \) Since we do not know the continuity of \( {\widetilde{\nabla }} H \) on the whole domain \( {\mathbb {R}}^K \), it cannot be referred to as a discrete gradient (recall Definition 1.1). Fortunately, however, we can construct a consistent numerical scheme by using \( {\widetilde{\nabla }} H \), because the continuity of the discrete gradient is just a sufficient condition for the consistency of the resulting numerical scheme. In fact, the scheme (6.3) using \( {\widetilde{\nabla }} H \) defined by (B.1) was found to be solvable by "fsolve" of MATLAB R2016b. Since the behavior is quite similar to the case of the sinh-Gordon equation, we omit the numerical results.
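The defining identity in Proposition B.1, \( H(u) - H(u') = \langle {\widetilde{\nabla }} H(u,u'), u - u' \rangle \), is easy to verify numerically. The sketch below (in R, for concreteness) uses a standard Gonzalez-type midpoint discrete gradient for \( H(u) = -\sum_k \cos u_k \) as a stand-in; it is not the \( {\overline{\nabla }}_{\mathrm {P}} \)/\( {\widetilde{\nabla }} \) construction of the paper, and serves only to illustrate the identity that any discrete gradient must satisfy:

    H  <- function(u) -sum(cos(u))    # discrete sine-Gordon energy
    gH <- function(u) sin(u)          # its exact gradient

    # Gonzalez-type midpoint discrete gradient (illustrative stand-in, not the paper's construction)
    dgH <- function(u, up) {
      m <- (u + up) / 2
      d <- u - up
      gH(m) + ((H(u) - H(up) - sum(gH(m) * d)) / sum(d * d)) * d
    }

    set.seed(1)
    u  <- runif(8, -pi, pi)
    up <- runif(8, -pi, pi)
    # discrete-gradient identity: <dgH(u,u'), u - u'> equals H(u) - H(u') up to round-off
    sum(dgH(u, up) * (u - up)) - (H(u) - H(up))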
Sato, S. BIT Numer. Math. (2019). https://doi.org/10.1007/s10543-019-00759-2. Received 14 May 2018; accepted 05 June 2019; first online 14 June 2019.
Prove this inequality with $a+b+c=3$

Question (partofsha, May 5 '16): Let $a,b,c>0$ and $a+b+c=3$. Show that $$\dfrac{a}{2b^3+c}+\dfrac{b}{2c^3+a}+\dfrac{c}{2a^3+b}\ge 1.$$ Using the Cauchy-Schwarz inequality we have $$\left(\dfrac{a}{2b^3+c}+\dfrac{b}{2c^3+a}+\dfrac{c}{2a^3+b}\right)\left(a(2b^3+c)+b(2c^3+a)+c(2a^3+b)\right)\ge (a+b+c)^2=9,$$ so it would suffice to prove that $$(2ab^3+2bc^3+2ca^3)+(ab+bc+ca)\le 9.$$ However, this last inequality doesn't hold: for $a=1,b=1.9$ we already have $2ab^3>9$. I just noticed this now.

Tags: inequality, cauchy-schwarz-inequality

Answer (Michael Rozenberg): By C-S $$\sum\limits_{cyc}\frac{a}{2b^3+c}=\sum\limits_{cyc}\frac{a^2(a+c)^2}{a(a+c)^2(2b^3+c)}\geq\frac{\left(\sum\limits_{cyc}(a^2+ab)\right)^2}{\sum\limits_{cyc}a(a+c)^2(2b^3+c)}.$$ Hence, it remains to prove that $$(a+b+c)^2\left(\sum\limits_{cyc}(a^2+ab)\right)^2\geq9\sum\limits_{cyc}a(a+c)^2(2b^3+c),$$ which is $$\sum\limits_{cyc}(a^6+3a^5b+3a^5c+4a^4b^2+4a^4c^2-14a^3b^3+10a^4bc-a^3b^2c-19a^3c^2b+9a^2b^2c^2)\geq0,$$ which is obvious. For example, $$LS\geq\sum\limits_{cyc}(a^6-a^5b-a^5c+a^4bc)+\sum\limits_{cyc}(3a^5b+3a^5c+4a^4b^2+4a^4c^2-14a^3b^3)+abc\sum\limits_{cyc}(11a^3-a^2b-19a^2c+9abc)\geq0.$$

Comments:
– partofsha (May 5 '16): Nice! Thank you very much.
– partofsha: Maybe $\sum\dfrac{a}{nb^n+c}\ge \dfrac{3}{n+1}$?
– cxz (May 7 '16): Sorry, but I want to ask where the term $9a^3c$ is.
– Michael Rozenberg: @cxz $9c=(a+b+c)^2c$.
– cxz: @MichaelRozenberg Yeah, I got it now, but how did you come up with this? Even given the long formula, I don't think it's obvious; one can't tell whether it is true at first glance.
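For readers who want to sanity-check the numbers, a small R snippet (purely illustrative) confirms the remark in the question: at $a=1$, $b=1.9$, $c=0.1$ the auxiliary bound from the first Cauchy-Schwarz attempt fails, while the original inequality still holds there.

    a <- 1; b <- 1.9; c <- 3 - a - b
    lhs <- a/(2*b^3 + c) + b/(2*c^3 + a) + c/(2*a^3 + b)   # left-hand side of the original inequality
    aux <- 2*(a*b^3 + b*c^3 + c*a^3) + (a*b + b*c + c*a)   # quantity the first attempt would need to be <= 9
    lhs   # about 1.99, so the original inequality holds at this point
    aux   # about 16.1 > 9, so the attempted auxiliary bound indeed fails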
Stimulants are the smart drugs most familiar to people, starting with the widely used psychostimulants caffeine and nicotine, and the more ill-reputed subclass of amphetamines. Stimulant drugs generally function as smart drugs in the sense that they promote general wakefulness and put the brain and body "on alert" in a ready-to-go state. Basically, any drug whose effects reduce drowsiness will increase the functional IQ, so long as the user isn't so over-stimulated they're shaking or driven to distraction. In terms of legal status, Adrafinil is legal in the United States but is unregulated. You need to purchase this supplement online, as it is not a prescription drug at this time. Modafinil, on the other hand, is heavily regulated throughout the United States. It is used as a narcolepsy drug, but isn't available over the counter. You will need to obtain a prescription from your doctor, which is why many turn to Adrafinil use instead. What worries me about amphetamine is its addictive potential, and the fact that it can cause stress and anxiety. Research says it's only slightly likely to cause addiction in people with ADHD, [7] but we don't know much about its addictive potential in healthy adults. We all know the addictive potential of methamphetamine, and amphetamine is closely related enough to make me nervous about so many people giving it to their children. Amphetamines cause withdrawal symptoms, so the potential for addiction is there. While the mechanism is largely unknown, one commonly proposed mechanism is that light of the relevant wavelengths is preferentially absorbed by the protein cytochrome c oxidase, which is a key protein in mitochondrial metabolism and production of ATP, substantially increasing output, and this extra output presumably can be useful for cellular activities like healing or higher performance. "I think you can and you will," says Sarter, but crucially, only for very specific tasks. For example, one of cognitive psychology's most famous findings is that people can typically hold seven items of information in their working memory. Could a drug push the figure up to nine or 10? "Yes. If you're asked to do nothing else, why not? That's a fairly simple function." The FDA has approved the first smart pill for use in the United States. Called Abilify MyCite, the pill contains a drug and an ingestible sensor that is activated when it comes into contact with stomach fluid to detect when the pill has been taken. The pill then transmits this data to a wearable patch that subsequently transfers the information to an app on a paired smartphone. From that point, with a patient's consent, the data can be accessed by the patient's doctors or caregivers via a web portal. White, Becker-Blease, & Grace-Bishop (2006): data collected in 2002; large sample of university undergraduates and graduates (N = 1,025); 16.2% lifetime prevalence; reasons given: 68.9% improve attention, 65.2% partying, 54.3% improve study habits, 20% improve grades, 9.1% reduce hyperactivity; frequency: 15.5% 2–3 times per week, 33.9% 2–3 times per month, 50.6% 2–3 times per year; availability: 58% easy or somewhat easy to obtain, and write-in comments indicated many obtained stimulants from friends with prescriptions. Fitzgerald 2012 and the general absence of successful experiments suggests not, as does the general historic failure of scores of IQ-related interventions in healthy young adults.
Of the 10 studies listed in the original section dealing with iodine in children or adults, only 2 show any benefit; in lieu of a meta-analysis, a rule of thumb would be 20%, but both those studies used a package of dozens of nutrients - and not just iodine - so if the responsible substance were randomly picked, that suggests we ought to give it a chance of 20% \times \frac{1}{\text{dozens}} of being iodine! I may be unduly optimistic if I give this as much as 10%. It was a productive hour, sure. But it also bore a remarkable resemblance to the normal editing process. I had imagined that the magical elixir coursing through my bloodstream would create towering storm clouds in my brain which, upon bursting, would rain cinematic adjectives onto the page as fast as my fingers could type them. Unfortunately, the only thing that rained down was Google searches that began with the words "synonym for"—my usual creative process. "In 183 pages, Cavin Balaster's new book, How to Feed A Brain provides an outline and plan for how to maximize one's brain performance. The "Citation Notes" provide all the scientific and academic documentation for further understanding. The "Additional Resources and Tips" listing takes you to Cavin's website for more detail than could be covered in 183 pages. Cavin came to this knowledge through the need to recover from a severe traumatic brain injury and he did not keep his lessons learned to himself. This book is enlightening for anyone with a brain. We all want to function optimally, even to take exams, stay dynamic, and make positive contributions to our communities. Bravo Cavin for sharing your lessons learned!" I was contacted by the Longecity user lostfalco, and read through some of his writings on the topic. I had never heard of LLLT before, but the mitochondria mechanism didn't sound impossible (although I wondered whether it made sense at a quantity level14,15,16,17), and there was at least some research backing it; more importantly, lostfalco had discovered that devices for LLLT could be obtained as cheap as $15. (Clearly no one will be getting rich off LLLT or affiliate revenue any time soon.) Nor could I think of any way the LLLT could be easily harmful: there were no drugs involved, physical contact was unnecessary, power output was too low to directly damage through heating, and if it had no LLLT-style effect but some sort of circadian effect through hitting photoreceptors, using it in the morning wouldn't seem to interfere with sleep. In our list of synthetic smart drugs, Noopept may be the genius pill to rule them all. Up to 1000 times stronger than Piracetam, Noopept may not be suitable for everyone. This nootropic substance requires much smaller doses for enhanced cognitive function. There are plenty of synthetic alternatives to Adderall and prescription ADHD medications. Noopept may be worth a look if you want something powerful over the counter. Using the 21mg patches, I cut them into quarters. What I would do is cut out 1 quarter, then seal the two edges with scotch tape, and put the Pac-Man back into its sleeve. Then the next time I would cut another quarter, seal the new edge, and so on. I thought that 5.25mg might be too much since I initially found 4mg gum to be too much, but it's delivered over a long time and it wound up feeling much more like 1mg gum used regularly. I don't know if the tape worked, but I did not notice any loss of potency.
I didn't like them as much as the gum because I would sometimes forget to take off a patch at the end of the day and it would interfere with sleep, and because the onset is much slower and I find I need stimulants more for getting started than for ongoing stimulation so it is better to have gum which can be taken precisely when needed and start acting quickly. (One case where the patches were definitely better than the gum was long car trips where slow onset is fine, since you're most alert at the start.) When I finally ran out of patches in June 2016 (using them sparingly), I ordered gum instead. But though it's relatively new on the scene with ambitious young professionals, creatine has a long history with bodybuilders, who have been taking it for decades to improve their muscle #gains. In the US, sports supplements are a multibillion-dollar industry – and the majority contain creatine. According to a survey conducted by Ipsos Public Affairs last year, 22% of adults said they had taken a sports supplement in the last year. If creatine was going to have a major impact in the workplace, surely we would have seen some signs of this already. The abuse of drugs is something that can lead to large negative outcomes. If you take Ritalin (Methylphenidate) or Adderall (mixed amphetamine salts) but don't have ADHD, you may experience more focus. But what many people don't know is that the drug is very similar to amphetamines. And the use of Ritalin is associated with serious adverse events of drug dependence, overdose and suicide attempts [80]. Taking a drug for another reason than originally intended is stupid, irresponsible and very dangerous. Competitors of importance in the smart pills market have been recorded and analyzed in MRFR's report. These market players include RF Co., Ltd., CapsoVision, Inc., JINSHAN Science & Technology, BDD Limited, MEDTRONIC, Check-Cap, PENTAX Medical, INTROMEDIC, Olympus Corporation, FUJIFILM Holdings Corporation, MEDISAFE, and Proteus Digital Health, Inc. The main area of the brain effected by smart pills is the prefrontal cortex, where representations of our goals for the future are created. Namely, the prefrontal cortex consists of pyramidal cells that keep each other firing. However in some instances they can become disconnected due to chemical imbalances, or due to being tired, stressed, and overworked. Certain pharmaceuticals could also qualify as nootropics. For at least the past 20 years, a lot of people—students, especially—have turned to attention deficit hyperactivity disorder (ADHD) drugs like Ritalin and Adderall for their supposed concentration-strengthening effects. While there's some evidence that these stimulants can improve focus in people without ADHD, they have also been linked, in both people with and without an ADHD diagnosis, to insomnia, hallucinations, seizures, heart trouble and sudden death, according to a 2012 review of the research in the journal Brain and Behavior. They're also addictive. The intradimensional– extradimensional shift task from the CANTAB battery was used in two studies of MPH and measures the ability to shift the response criterion from one dimension to another, as in the WCST, as well as to measure other abilities, including reversal learning, measured by performance in the trials following an intradimensional shift. 
With an intradimensional shift, the learned association between values of a given stimulus dimension and reward versus no reward is reversed, and participants must learn to reverse their responses accordingly. Elliott et al. (1997) reported finding no effects of the drug on ability to shift among dimensions in the extradimensional shift condition and did not describe performance on the intradimensional shift. Rogers et al. (1999) found that accuracy improved but responses slowed with MPH on trials requiring a shift from one dimension to another, which leaves open the question of whether the drug produced net enhancement, interference, or neither on these trials once the tradeoff between speed and accuracy is taken into account. For intradimensional shifts, which require reversal learning, these authors found drug-induced impairment: significantly slower responding accompanied by a borderline-significant impairment of accuracy. Since coffee drinking may lead to a worsening of calcium balance in humans, we studied the serial changes of serum calcium, PTH, 1,25-dihydroxyvitamin D (1,25(OH)2D) vitamin D and calcium balance in young and adult rats after daily administration of caffeine for 4 weeks. In the young rats, there was an increase in urinary calcium and endogenous fecal calcium excretion after four days of caffeine administration that persisted for the duration of the experiment. Serum calcium decreased on the fourth day of caffeine administration and then returned to control levels. In contrast, the serum PTH and 1,25(OH)2D remained unchanged initially, but increased after 2 weeks of caffeine administration…In the adult rat group, an increase in the urinary calcium and endogenous fecal calcium excretion and serum levels of PTH was found after caffeine administration. However, the serum 1,25(OH)2D levels and intestinal absorption coefficient of calcium remained the same as in the adult control group. Price discrimination is aided by barriers such as ignorance and oligopolies. An example of the former would be when I went to a Food Lion grocery store in search of spices, and noticed that there was a second selection of spices in the Hispanic/Latino ethnic food aisle, with unit prices perhaps a fourth of the regular McCormick-brand spices; I rather doubt that regular cinnamon varies that much in quality. An example of the latter would be using veterinary drugs on humans - any doctor to do so would probably be guilty of medical malpractice even if the drugs were manufactured in the same factories (as well they might be, considering economies of scale). Similarly, we can predict that whenever there is a veterinary drug which is chemically identical to a human drug, the veterinary drug will be much cheaper, regardless of actual manufacturing cost, than the human drug because pet owners do not value their pets more than themselves. Human drugs are ostensibly held to a higher standard than veterinary drugs; so if veterinary prices are higher, then there will be an arbitrage incentive to simply buy the cheaper human version and downgrade them to veterinary drugs. When you hear about nootropics, often called "smart drugs," you probably picture something like the scene above from Limitless, where Bradley Cooper's character becomes brilliant after downing a strange pill. The drugs and supplements currently available don't pack that strong of a punch, but the concept is basically the same. 
Many nootropics have promising benefits, like boosting memory, focus, or motivation, and there's research to support specific uses. But the most effective nootropics, like Modafinil, aren't intended for use without a prescription to treat a specific condition. In fact, recreational use of nootropics is hotly-debated among doctors and medical researchers. Many have concerns about the possible adverse effects of long-term use, as well as the ethics of using cognitive enhancers to gain an advantage in school, sports, or even everyday work. When it comes to coping with exam stress or meeting that looming deadline, the prospect of a "smart drug" that could help you focus, learn and think faster is very seductive. At least this is what current trends on university campuses suggest. Just as you might drink a cup of coffee to help you stay alert, an increasing number of students and academics are turning to prescription drugs to boost academic performance. "They're not regulated by the FDA like other drugs, so safety testing isn't required," Kerl says. What's more, you can't always be sure that what's on the ingredient label is actually in the product. Keep in mind, too, that those that contain water-soluble vitamins like B and C, she adds, aren't going to help you if you're already getting enough of those vitamins through diet. "If your body is getting more than you need, you're just going to pee out the excess," she says. "You're paying a lot of money for these supplements; maybe just have orange juice." The above information relates to studies of specific individual essential oil ingredients, some of which are used in the essential oil blends for various MONQ diffusers. Please note, however, that while individual ingredients may have been shown to exhibit certain independent effects when used alone, the specific blends of ingredients contained in MONQ diffusers have not been tested. No specific claims are being made that use of any MONQ diffusers will lead to any of the effects discussed above. Additionally, please note that MONQ diffusers have not been reviewed or approved by the U.S. Food and Drug Administration. MONQ diffusers are not intended to be used in the diagnosis, cure, mitigation, prevention, or treatment of any disease or medical condition. If you have a health condition or concern, please consult a physician or your alternative health care provider prior to using MONQ diffusers. Weyandt et al. (2009) Large public university undergraduates (N = 390) 7.5% (past 30 days) Highest rated reasons were to perform better on schoolwork, perform better on tests, and focus better in class 21.2% had occasionally been offered by other students; 9.8% occasionally or frequently have purchased from other students; 1.4% had sold to other students Following up on the promising but unrandomized pilot, I began randomizing my LLLT usage since I worried that more productive days were causing use rather than vice-versa. I began on 2 August 2014, and the last day was 3 March 2015 (n=167); this was twice the sample size I thought I needed, and I stopped, as before, as part of cleaning up (I wanted to know whether to get rid of it or not). The procedure was simple: by noon, I flipped a bit and either did or did not use my LED device; if I was distracted or didn't get around to randomization by noon, I skipped the day. This was an unblinded experiment because finding a randomized on/off switch is tricky/expensive and it was easier to just start the experiment already. 
The question is simple too: controlling for the simultaneous blind magnesium experiment & my rare nicotine use (I did not use modafinil during this period or anything else I expect to have major influence), is the pilot correlation of d=0.455 on my daily self-ratings borne out by the experiment? Cost-wise, the gum itself (~$5) is an irrelevant sunk cost and the DNB something I ought to be doing anyway. If the results are negative (which I'll define as d<0.2), I may well drop nicotine entirely since I have no reason to expect other forms (patches) or higher doses (2mg+) to create new benefits. This would save me an annual expense of ~$40 with a net present value of <$820; even if we count the time-value of the 20 minutes for the 5 DNB rounds over 48 days (0.2 × 48 × $7.25 ≈ $70), it's still a clear profit to run a convincing experiment. …Phenethylamine is intrinsically a stimulant, although it doesn't last long enough to express this property. In other words, it is rapidly and completely destroyed in the human body. It is only when a number of substituent groups are placed here or there on the molecule that this metabolic fate is avoided and pharmacological activity becomes apparent. For instance, they point to the U.S. Army's use of stimulants for soldiers to stave off sleep and to stay sharp. But the Army cares little about the long-term health effects of soldiers, who come home scarred physically or mentally, if they come home at all. It's a risk-benefit decision for the Army, and in a life-or-death situation, stimulants help. Because these drugs modulate important neurotransmitter systems such as dopamine and noradrenaline, users take significant risks with unregulated use. There has not yet been any definitive research into modafinil's addictive potential, how its effects might change with prolonged sleep deprivation, or what side effects are likely at doses outside the prescribed range. Amongst the brain focus supplements that are currently available in the nootropic drug market, Modafinil is probably the most common focus drug or one of the best focus pills used by people, and it's praised to be the best nootropic available today. It is a powerful cognitive enhancer that is great for boosting your overall alertness with the least side effects. However, to get your hands on this drug, you would require a prescription. …It is without activity in man! Certainly not for the lack of trying, as some of the dosage trials that are tucked away in the literature (as abstracted in the Qualitative Comments given above) are pretty heavy duty. Actually, I truly doubt that all of the experimenters used exactly that phrase, No effects, but it is patently obvious that no effects were found. It happened to be the phrase I had used in my own notes. Government restrictions and difficulty getting approval for various medical devices are expected to impede market growth. The stringency of approval by regulatory authorities is accompanied by the high cost of smart pills to challenge the growth of the smart pills market. However, the demand for speedy diagnosis, and improving reimbursement policies are likely to reveal market opportunities. Absorption of nicotine across biological membranes depends on pH. Nicotine is a weak base with a pKa of 8.0 (Fowler, 1954). In its ionized state, such as in acidic environments, nicotine does not rapidly cross membranes…About 80 to 90% of inhaled nicotine is absorbed during smoking as assessed using C14-nicotine (Armitage et al., 1975).
The efficacy of absorption of nicotine from environmental smoke in nonsmoking women has been measured to be 60 to 80% (Iwase et al., 1991)…The various formulations of nicotine replacement therapy (NRT), such as nicotine gum, transdermal patch, nasal spray, inhaler, sublingual tablets, and lozenges, are buffered to alkaline pH to facilitate the absorption of nicotine through cell membranes. Absorption of nicotine from all NRTs is slower and the increase in nicotine blood levels more gradual than from smoking (Table 1). This slow increase in blood and especially brain levels results in low abuse liability of NRTs (Henningfield and Keenan, 1993; West et al., 2000). Only nasal spray provides a rapid delivery of nicotine that is closer to the rate of nicotine delivery achieved with smoking (Sutherland et al., 1992; Gourlay and Benowitz, 1997; Guthrie et al., 1999). The absolute dose of nicotine absorbed systemically from nicotine gum is much less than the nicotine content of the gum, in part, because considerable nicotine is swallowed with subsequent first-pass metabolism (Benowitz et al., 1987). Some nicotine is also retained in chewed gum. A portion of the nicotine dose is swallowed and subjected to first-pass metabolism when using other NRTs, inhaler, sublingual tablets, nasal spray, and lozenges (Johansson et al., 1991; Bergstrom et al., 1995; Lunell et al., 1996; Molander and Lunell, 2001; Choi et al., 2003). Bioavailability for these products with absorption mainly through the mucosa of the oral cavity and a considerable swallowed portion is about 50 to 80% (Table 1)…Nicotine is poorly absorbed from the stomach because it is protonated (ionized) in the acidic gastric fluid, but is well absorbed in the small intestine, which has a more alkaline pH and a large surface area. Following the administration of nicotine capsules or nicotine in solution, peak concentrations are reached in about 1 h (Benowitz et al., 1991; Zins et al., 1997; Dempsey et al., 2004). The oral bioavailability of nicotine is about 20 to 45% (Benowitz et al., 1991; Compton et al., 1997; Zins et al., 1997). Oral bioavailability is incomplete because of the hepatic first-pass metabolism. Also the bioavailability after colonic (enema) administration of nicotine (examined as a potential therapy for ulcerative colitis) is low, around 15 to 25%, presumably due to hepatic first-pass metabolism (Zins et al., 1997). Cotinine is much more polar than nicotine, is metabolized more slowly, and undergoes little, if any, first-pass metabolism after oral dosing (Benowitz et al., 1983b; De Schepper et al., 1987; Zevin et al., 1997). The leadership position in the market is held by the Americas. The region has favorable reimbursement policies and a high rate of incidence for chronic and lifestyle diseases which has impacted the market significantly. Moreover, the region's developed economies have a strong affinity toward the adoption of highly advanced technology. This falls in line with these countries well-develop healthcare sectors. Speaking of addictive substances, some people might have considered cocaine a nootropic (think: the finance industry in Wall Street in the 1980s). The incredible damage this drug can do is clear, but the plant from which it comes has been used to make people feel more energetic and less hungry, and to counteract altitude sickness in Andean South American cultures for 5,000 years, according to an opinion piece that Bolivia's president, Evo Morales Ayma, wrote for the New York Times. 
The title question, whether prescription stimulants are smart pills, does not find a unanimous answer in the literature. The preponderance of evidence is consistent with enhanced consolidation of long-term declarative memory. For executive function, the overall pattern of evidence is much less clear. Over a third of the findings show no effect on the cognitive processes of healthy nonelderly adults. Of the rest, most show enhancement, although impairment has been reported (e.g., Rogers et al., 1999), and certain subsets of participants may experience impairment (e.g., higher performing participants and/or those homozygous for the met allele of the COMT gene performed worse on drug than placebo; Mattay et al., 2000, 2003). Whereas the overall trend is toward enhancement of executive function, the literature contains many exceptions to this trend. Furthermore, publication bias may lead to underreporting of these exceptions. Racetams, specifically Piracetam, an ingredient popular in over-the-counter nootropics, are synthetic stimulants designed to improve brain function. Patel notes Piracetam is the granddaddy of all racetams, and the term "nootropic" was originally coined to describe its effects. However, despite its popularity and how long it's been around and in use, researchers don't know what its mechanism of action is. Patel explained that the the most prominent hypothesis suggests Piracetam enhances neuronal function by increasing membrane fluidity in the brain, but that hasn't been confirmed yet. And Patel elaborated that most studies on Piracetam aren't done with the target market for nootropics in mind, the young professional: On the other end of the spectrum is the nootropic stack, a practice where individuals create a cocktail or mixture of different smart drugs for daily intake. The mixture and its variety actually depend on the goals of the user. Many users have said that nootropic stacking is more effective for delivering improved cognitive function in comparison to single nootropics. Stimulants are drugs that accelerate the central nervous system (CNS) activity. They have the power to make us feel more awake, alert and focused, providing us with a needed energy boost. Unfortunately, this class encompasses a wide range of drugs, some which are known solely for their side-effects and addictive properties. This is the reason why many steer away from any stimulants, when in fact some greatly benefit our cognitive functioning and can help treat some brain-related impairments and health issues. Somewhat ironically given the stereotypes, while I was in college I dabbled very little in nootropics, sticking to melatonin and tea. Since then I have come to find nootropics useful, and intellectually interesting: they shed light on issues in philosophy of biology & evolution, argue against naive psychological dualism and for materialism, offer cases in point on the history of technology & civilization or recent psychology theories about addiction & willpower, challenge our understanding of the validity of statistics and psychology - where they don't offer nifty little problems in statistics and economics themselves, and are excellent fodder for the young Quantified Self movement4; modafinil itself demonstrates the little-known fact that sleep has no accepted evolutionary explanation. 
(The hard drugs also have more ramifications than one might expect: how can one understand the history of Southeast Asia and the Vietnamese War without reference to heroin, or more contemporaneously, how can one understand the lasting appeal of the Taliban in Afghanistan and the unpopularity & corruption of the central government without reference to the Taliban's frequent anti-drug campaigns or the drug-funded warlords of the Northern Alliance?) Though coffee gives instant alertness, the effect lasts only for a short while. People who drink coffee every day may develop caffeine tolerance; this is the reason why it is still important to control your daily intake. It is advisable that an individual should not consume more than 300 mg of coffee a day. Caffeine, the world's favorite nootropic has fewer side effects, but if consumed abnormally in excess, it can result in nausea, restlessness, nervousness, and hyperactivity. This is the reason why people who need increased sharpness would instead induce L-theanine, or some other Nootropic, along with caffeine. Today, you can find various smart drugs that contain caffeine in them. OptiMind, one of the best and most sought-after nootropics in the U.S, containing caffeine, is considered best brain supplement for adults and kids when compared to other focus drugs present in the market today. Overall, the studies listed in Table 1 vary in ways that make it difficult to draw precise quantitative conclusions from them, including their definitions of nonmedical use, methods of sampling, and demographic characteristics of the samples. For example, some studies defined nonmedical use in a way that excluded anyone for whom a drug was prescribed, regardless of how and why they used it (Carroll et al., 2006; DeSantis et al., 2008, 2009; Kaloyanides et al., 2007; Low & Gendaszek, 2002; McCabe & Boyd, 2005; McCabe et al., 2004; Rabiner et al., 2009; Shillington et al., 2006; Teter et al., 2003, 2006; Weyandt et al., 2009), whereas others focused on the intent of the user and counted any use for nonmedical purposes as nonmedical use, even if the user had a prescription (Arria et al., 2008; Babcock & Byrne, 2000; Boyd et al., 2006; Hall et al., 2005; Herman-Stahl et al., 2007; Poulin, 2001, 2007; White et al., 2006), and one did not specify its definition (Barrett, Darredeau, Bordy, & Pihl, 2005). Some studies sampled multiple institutions (DuPont et al., 2008; McCabe & Boyd, 2005; Poulin, 2001, 2007), some sampled only one (Babcock & Byrne, 2000; Barrett et al., 2005; Boyd et al., 2006; Carroll et al., 2006; Hall et al., 2005; Kaloyanides et al., 2007; McCabe & Boyd, 2005; McCabe et al., 2004; Shillington et al., 2006; Teter et al., 2003, 2006; White et al., 2006), and some drew their subjects primarily from classes in a single department at a single institution (DeSantis et al., 2008, 2009; Low & Gendaszek, 2002). With few exceptions, the samples were all drawn from restricted geographical areas. Some had relatively high rates of response (e.g., 93.8%; Low & Gendaszek 2002) and some had low rates (e.g., 10%; Judson & Langdon, 2009), the latter raising questions about sample representativeness for even the specific population of students from a given region or institution. The effect? 3 or 4 weeks later, I'm not sure. 
When I began putting all of my nootropic powders into pill-form, I put half a lithium pill in each, and nevertheless ran out of lithium fairly quickly (3kg of piracetam makes for >4000 OO-size pills); those capsules were buried at the bottom of the bucket under lithium-less pills. So I suddenly went cold-turkey on lithium. Reflecting on the past 2 weeks, I seem to have been less optimistic and productive, with items now lingering on my To-Do list which I didn't expect to. An effect? Possibly. I take my piracetam in the form of capped pills consisting (in descending order) of piracetam, choline bitartrate, anhydrous caffeine, and l-tyrosine. On 8 December 2012, I happened to run out of them and couldn't fetch more from my stock until 27 December. This forms a sort of (non-randomized, non-blind) short natural experiment: did my daily 1-5 mood/productivity ratings fall during 8-27 December compared to November 2012 & January 2013? The graphed data28 suggests to me a decline: 11:30 AM. By 2:30 PM, my hunger is quite strong and I don't feel especially focused - it's difficult to get through the tab-explosion of the morning, although one particularly stupid poster on the DNB ML makes me feel irritated like I might on Adderall. I initially figure the probability at perhaps 60% for Adderall, but when I wake up at 2 AM and am completely unable to get back to sleep, eventually racking up a Zeo score of 73 (compared to the usual 100s), there's no doubt in my mind (95%) that the pill was Adderall. And it was the last Adderall pill indeed. Increasing incidences of chronic diseases such as diabetes and cancer are also impacting positive growth for the global smart pills market. The above-mentioned factors have increased the need for on-site diagnosis, which can be achieved by smart pills. Moreover, the expanding geriatric population and the resulting increasing in degenerative diseases has increased demand for smart pills One reason I like modafinil is that it enhances dopamine release, but it binds to your dopamine receptors differently than addictive substances like cocaine and amphetamines do, which may be part of the reason modafinil shares many of the benefits of other stimulants but doesn't cause addiction or withdrawal symptoms. [3] [4] It does increase focus, problem-solving abilities, and wakefulness, but it is not in the same class of drugs as Adderall, and it is not a classical stimulant. Modafinil is off of patent, so you can get it generically, or order it from India. It's a prescription drug, so you need to talk to a physician.
CommonCrawl
Dynamic adaptation of service-based applications: a design for adaptation approach Martina De Sanctis1, Antonio Bucchiarone2 & Annapaola Marconi2 A key challenge posed by the Next Generation Internet landscape is that modern service-based applications need to cope with open and continuously evolving environments and to operate under dynamic circumstances (e.g., changes in the users requirements, changes in the availability of resources). Indeed, dynamically discover, select and compose the appropriate services in such environment is a challenging task. Self-adaptation approaches represent effective instruments to tackle this issue, because they allow applications to adapt their behaviours based on their execution environment. Unfortunately, although existing approaches support run-time adaptation, they tend to foresee the adaptation requirements and related solutions at design-time, while working under a "closed-world" assumption. In this article our objective is that of providing a new way of approaching the design, operation and run-time adaptation of service-based applications, by considering the adaptivity as an intrinsic characteristic of applications and from the earliest stages of their development. We propose a novel design for adaptation approach implementing a complete lifecycle for the continuous development and deployment of service-based applications, by facilitating (i) the continuous integration of new services that can easily join the application, and (ii) the operation of applications under dynamic circumstances, to face the openness and dynamicity of the environment. The proposed approach has been implemented and evaluated in a real-world case study in the mobility domain. Experimental results demonstrate the effectiveness of our approach and its practical applicability. The Internet of Services (IoS) is widespread and it is becoming more and more pervasive, due to the trend of delivering everything as a service [1], from applications to infrastructures, passing through platforms [2]. Furthermore, the IoS is envisioned as one of the founding pillars of the Next Generation Internet [3], together with new metaphors, such as those of the Internet of Things (IoT) and the Internet of People (IoP) [4]. In last decades, the aim of service-oriented computing has been that of encouraging the creation and delivery of services. Automated service composition is a powerful technique allowing to compose and reuse the existing services as building blocks for new services (and applications) with higher-level functionalities. To date, service-based applications are employed in a multitude of domains, such as e-Health, smart homes, e-learning, education, smart mobility and many others. Additionally, the role played by companies and organizations is also considerable. They are publicly providing their services to allow third-party developers to exploit them in defining new services, thus enhancing their accessibility [5] (e.g., Google Maps, Paypal). This is of relevant importance in the Future Internet scenario, since it implies the availability of a multitude of reliable services offering even complex functionalities. Different organizations are building on this trend to provide online platforms for the management of well-defined RESTful APIs—REpresentational State Transfer Application Program Interface, through which these services can be accessed. For instance, ProgrammableWeb Footnote 1 has now more than 10,000 APIs in its directory. 
As a consequence, both researchers and practitioners are highly motivated in defining solutions allowing the development of service-based applications, by exploiting existing available services. In this scenario, service-based applications must face the increased flexibility and dynamism offered by modern service-based environments. The number and the quality of available services is continuously increasing and improving. This makes service-based environments open and highly dynamic, since service–oriented computing takes place in an "open world" [6]. These premises demand self-adaptive service-based applications, that is, applications able to both adapt to their context (i.e., the currently available services) and react when facing new contextual situations (e.g., missing services, changes in the user requirements and needs). However, there are still major obstacles that hinder the development and potential realization of service computing in the real world [5]. In fact, the latest Next Generation Internet vision further challenge the IoS paradigm. Service-oriented computing has to face the ultra large scale and heterogeneity of the Future Internet, which are orders of magnitude higher than those of today's service-oriented systems [7]. In this context, self-adaptation is still one of the main concerns. Many service-based methodologies and approaches have been proposed with the aim of increasing the flexibility of applications and supporting their adaptation needs. They span from microservices [8, 9], to DevOps (e.g., [10]), passing through dynamic software product lines [11], to name a few. Nevertheless, none of them is specifically meant for open environments, where the available services might not be known a priori and/or not available at execution time. However, to perform accurately, service-based applications must be aware of the specific execution environment during their execution, thus operating differently for different contextual situations. The openness of the environment makes traditional adaptation mechanisms no longer sufficient. Differently from applications where traditional adaptation mechanisms can be used, the IoS requires applications that are adaptive by design. These premises motivated the work presented in this article about a novel design for adaptation approach of service-based applications. To this aim, the adaptation must be hold by a coherent design approach, supporting both the definition and the application of adaptation. In very general terms, the idea of the approach consists in defining the complete lifecycle for the continuous development and deployment of service-based applications, by facilitating (1) the continuous integration of new services that can easily join the applications, and (2) the applications operation under dynamic circumstances, to face the openness and dynamicity of the environment. This article is an extension of [12, 13] where we have introduced and formalized a design for adaptation approach of service-based applications relying on incremental service composition. 
The novel contributions of this article are: (i) the overall lifecycle of the design for adaptation approach, which gives a complete overview of the different perspectives of the approach (i.e., modeling, adaptation, interaction), the involved components (e.g., artefacts, performed activities, engines) and the connections among them, while also considering the role played by the potentially involved actors; (ii) the presentation of the approach as a whole, which gave us the possibility to clearly position it in the literature on existing approaches for the design of service-based applications and their dynamic adaptation; (iii) further details about previously unpublished constructs of the approach, and extended experimental results that include new elements on the approach's efficiency. The article is organized as follows: a motivating scenario and research challenges are described in Section 2. In Section 3 a high-level overview of the whole approach is introduced. The subsequent two sections present the novel design for adaptation approach, in Section 4, and how the defined applications operate at run-time, in Section 5. Validation results are reported in Section 6, where the approach is applied to a real case study in the Smart Mobility domain. Section 7 describes the overall lifecycle of the design for adaptation approach. Related work is discussed in Section 8. Section 9 discusses the open issues raised by the approach, while Section 10 concludes the article with final considerations and directions for future work.
Motivating scenario and research challenges
In this section we introduce the travel assistant scenario, in Section 2.1, and the research challenges arising from these applications, in Section 2.2.
Travel assistant scenario
The travel assistant scenario belongs to the mobility domain, which is particularly suitable to show the challenges of open and dynamic environments. It concerns the management and operation of mobility services, within a smart city as well as among different cities/countries. Nowadays, users have at their disposal a large offer of mobility services that may differ in diverse aspects, such as the offered functionalities, the provider, the geographical applicability scope, etc. In addition, mobility services span from journey planners to specific mobility services, such as those referring to specific transport modes (e.g., bus, train, bikes) or provided by specific transport companies. Moreover, an emerging trend is that of shared mobility services, which are based on the shared use of vehicles, bicycles, or other means. Mobility services can offer disparate functionalities (e.g., journey planning, booking, online ticket payment, seat reservation, check-in and check-out, user profiling, and so on). Some functionalities may be peculiar to specific services and/or require particular devices (e.g., unlocking a bike from a rack is peculiar to bike-sharing services, and a smart-card might be needed to do it). These services are made available through a large variety of technologies (e.g., web pages, mobile applications), with different constraints on their availability (e.g., free vs. pay). A journey organization, from a user perspective, consists of a set of different mandatory and/or optional phases that must be carried out (e.g., planning, booking, check-in, check-out).
While these phases define what should be done, how they can be accomplished strongly depends on the users' requirements and preferences, and on the procedures that need to be followed, as provided by the available mobility services. A user plans her journey by looking for the available (multi-modal) alternatives satisfying her needs. The journey can be either local, in the context of a city, or global, involving different cities/countries. A multi-modal solution can involve different transportation means, each requiring different procedures to be followed. During the execution, if extraordinary events affect the journey, it can be re-planned and recovery solutions can be suggested to the user. Thus, users need support for the whole travel duration. To this aim, different mobility services need to synergistically cooperate. While the idea of an intelligent travel assistant has already been figured out in the past, as for instance in [14], our opinion is that we are still far from making it happen. Modern service-based applications need to satisfy different requirements to deal with the features of modern execution environments, which gives rise to the following research challenges:
Applicability in open environments. Applications must be capable of operating in open environments where services continuously enter and leave. Nonetheless, traditional approaches work under a "closed-world" assumption, although today's scenario is that service-oriented computing happens in an "open world" [6].
Autonomy and heterogeneity of services. Applications must take into account the autonomous nature of the services involved as well as the heterogeneity among services.
Context-awareness. The application must take into account the state of the environment in which it operates, and behave accordingly.
Services interoperability. Applications must be capable of proposing complex solutions that take advantage of the variety of services. Moreover, different solutions can be applied for the same goal (e.g., a user goal), depending on, e.g., the available services or the user requirements. This means that the composition of services must be performed dynamically.
Adaptivity and scalability. The application must be able to react and adapt to changes in the environment that might occur and affect its operations. Moreover, due to the dynamicity of the environment, the adaptation must be postponed as much as possible to the runtime execution of applications, when the environment is known.
User centricity and personalization. Applications must take into account the nature of users, who are proactively involved in the applications they use and increasingly demanding. Applications must provide users with personalized solutions.
Portability. Modern applications should be deployable in different environments without an ad-hoc reconfiguration from the developers.
Overview of the approach
The work presented in this article has been inspired by the work presented in [15], where the authors argue that mechanisms enabling adaptation should be introduced in the lifecycle of applications, both in the design and in the run-time phases. In other words, applications must be adaptive by design. They should rely on a dynamic set of autonomous and heterogeneous services that are composed dynamically without any a-priori knowledge between the applications and the exploited services. To this aim, three conditions are required, as depicted in Fig.
1: the models adopted for the applications' design must allow the definition of dynamically customizable application behaviors, through the adoption of adequate constructs. This is done in the Modeling phase of Fig. 1, where specific models are used to wrap up in a uniform way existing or new services in a given domain (Real services wrapping & Value Added Services (VAS) modeling activity).
Fig. 1. High level overview of the design for adaptation approach
The approach must implement or exploit adaptation mechanisms and strategies whose application allows for a context-aware and dynamic adaptation during execution. To this aim, during the Adaptation phase of Fig. 1, adaptation strategies must be implemented (Adaptation mechanisms & strategies configuration activity), while the adaptation logic of the defined applications must be configured accordingly (Adaptation logic configuration activity). In an open world, the adaptation must be postponed as much as possible to the Execution phase of applications (Application interaction activity), when the environment is known, without any a-priori definition of adaptive solutions. Finally, we note that the approach is domain-independent and can be applied in multiple domains (e.g., logistics, traveling, entertainment, smart environments). Notably, in [16] it has been applied in the IoT domain. Nonetheless, in this article we only focus on a scenario belonging to the IoS domain. In the following sections we illustrate in more depth the models of the approach, in Section 4, and the adaptation mechanisms and strategies, in Section 5.
Adaptive service-based applications: modeling
Central to our approach is the use of two separate models, namely the domain model and the domain objects model, which implement the separation of concerns principle (adaptation vs. application logic). Keeping the two models separate allows the operational semantics of services (i.e., in the domain model) to be detached from the different implementations that might be provided by a plethora of different concrete services (i.e., in the domain objects model). We start with an overview of the general framework and its models, in Section 4.1. Afterwards we give formal definitions of the models' elements, in Section 4.2.
The design for adaptation approach
In this section, we describe the models, also mapping each element to a corresponding example within the travel assistant application. The travel assistant is modeled through a set of domain objects representing the services provided by the application (e.g. Travel Assistant, Journey Manager). In particular, existing or newly defined services can be wrapped up as domain objects. Wrapping a service as a domain object means shaping it in terms of the domain object components, which we introduce in the following. More precisely, the service's implementation already exists and is made available, e.g., through APIs. The wrapping activity consists in modeling the service in a uniform way, namely as a domain object, in which the provided APIs are exploited. As depicted in Fig. 2, each domain object is characterized by a core process, implementing its own behavior, and a set of process fragments, representing the functionalities it provides.
Fig. 2. Domain Object Model
Fragments [17, 18] are executable processes that can be received and executed by other domain objects to exploit a specific functionality of the provider domain object.
Exposed fragments and the core process communicate through the execution of input/output activities. This is because fragments act as an interface to the internal behavior of a domain object; thus, they need to interact with the core process to eventually accomplish the functionalities they model. Both core processes and fragments are modeled in the Adaptive Pervasive Flow Language (APFL) [19]. Unlike traditional application specifications, where services' behaviors are completely specified before deployment, our approach allows the partial specification of the expected operation of domain objects. Indeed, APFL supports the use of abstract activities labeled with goals and acting as open points enabling the customization and adaptation of processes (see the white activities with goals in Fig. 2). These activities are then refined at run-time according to the fragments offered by the other domain objects in the application. We illustrate this notion with a simple example. In Fig. 3 we show a portion of the travel assistant made up of a subset of its services and their potential dependencies. The Journey Planners Manager can partially define the functionality allowing the planning of a journey. Then, different journey planners can join the application and publish different planning procedures, covering areas of varying size and boundaries (i.e., local and global journey planners). Only at run-time, when the user's source and destination points are known, the Journey Planners Manager will discover those domain objects modeling journey planners, with their fragments, and it will exploit them to refine its abstract activity and to eventually get the list of available multi-modal alternatives for the specified input.
Fig. 3. Portion of the travel assistant application
An important aspect of the design model that strongly supports the application's dynamicity consists in the fact that abstract activities can be used in the core process of a domain object as well as in the fragments it provides. In the first case, the domain object leaves some activities in its own behavior under-specified; these are automatically refined at run-time. The latter case is more complex, and it enables a higher level of dynamicity, since it allows a domain object to expose a partially specified fragment whose execution does not rely only on communications with its core process but also on fragments provided by other domain objects, thus enabling a chain of refinements. This will be shown and discussed in Section 5. These dynamic features rely on a set of domain concepts describing the operational environment of the application, on which each domain object has a partial view. In particular (see Fig. 2), the internal domain knowledge captures the behavior of the domain concept implemented by the domain object, while the external domain knowledge represents domain concepts that are required to accomplish its behavior but for whose implementation it relies on other domain objects. The domain knowledge (both internal and external) makes up the domain model. It is defined by domain properties, each giving a high-level representation of a domain concept (e.g. journey planning, ride-sharing journey). Domain properties are modeled as State Transition Systems (STS) evolving as an effect of the execution of service-based applications, or because of exogenous events in the operational context [20, 21]. At this point we must clarify that even if in Fig.
2 we show the domain properties as part of the domain object, which is indeed the case, domain properties exist independently of the domain objects implementing or relying on them, if any. Indeed, they are identified and defined by domain experts before the application is developed (i.e., before domain objects are designed). Each STS is obtained by analyzing the behavior of those services that will implement it. For instance, the Ride Sharing STS in Fig. 4 comes from an analysis and an abstraction of ride-sharing services.
Fig. 4. Domain properties modeled as state transition systems
In Fig. 4 we provide some examples of (simplified) domain properties, and we give an example of a domain property's evolution in the following. The Travel Assistance domain property models the behavior of a travel assistant. First of all, the journey needs to be planned (JOURNEY PLANNED state), after a specific request from the user arrives (REQUEST RECEIVED state). Then, the user receives the list of possible alternatives (ALTERNATIVES SENT state) and she chooses the preferred solution among them (USER CHOICE RECEIVED state). At this point her plan can be further refined by considering the transportation means effectively composing the chosen alternative (PLAN REFINED state). If required by the involved transportation means, the plan can also be booked (PLAN BOOKED state); otherwise the user can start her journey (JOURNEY EXECUTION state) until she reaches her destination (ASSISTANCE COMPLETE state). During the normal behavior of the application, a domain property may evolve as an effect of the execution of a fragment activity (e.g., if the journey planning activity is successful, the travel assistant moves into the state JOURNEY PLANNED). Otherwise, if something unexpected occurs, a domain property may also evolve as a result of exogenous changes (e.g., because of roadworks the bus is not passing). Finally, a domain configuration is given by a snapshot of the domain at a specific time of the journey, capturing the current status of all its domain properties. The link between the domain model and the domain objects model is given by annotations. Indeed, APFL gives the possibility to relate the execution of processes to the application domain, through the use of annotations on process activities. Annotations represent domain-related information and they implicitly define a mapping between the execution of processes and fragments and corresponding changes in the status of domain properties. Note that, by properly annotating services (i.e., processes in domain objects) and without changing the domain properties, it is easy to add new service implementations (i.e., new domain objects). Annotations can be of different types. In particular, each abstract activity is defined in terms of the goal it needs to achieve, expressed as domain knowledge states to be reached. Then, the annotated abstract activity is automatically refined at run time, by considering (1) the set of fragments currently provided by other domain objects, (2) the current domain knowledge configuration, and (3) the goal to be reached. In particular, goals are defined over the external domain knowledge, since they refer to functionalities which belong to domain properties implemented by other domain objects. They can be defined as disjunctions of conjunctions over states of domain properties, as we will see further on. To show how annotations are defined, in Fig.
5 we report an example of a fragment modeling the functionality of paying for a rideshare (Rideshare Payment fragment), as it might be exposed by a ride-sharing mobility service, such as BlaBlaCar. Moreover, in Fig. 6 we give the (partial) APFL listing for the same fragment.
Fig. 5. Example of an annotated fragment modeling the functionality of paying for a ride-share
Fig. 6. APFL listing of the Rideshare_Payment fragment
The activity Pay for rideshare is an abstract activity, represented with a dotted line, labeled with the goal G1 that is defined over the Payment Management domain property (see lines 25-35 in Fig. 6). Indeed, the BlaBlaCar service does not implement online payment itself, but relies on external payment services for secure payment over the internet. In addition to goal annotations, activities in processes and fragments are annotated with preconditions and effects. Preconditions constrain the activity execution to specific domain knowledge configurations. In Fig. 5, the precondition P1 says that, to execute the fragment Rideshare Payment, the domain property RIDE SHARING (see Fig. 4) must be in the state PICK-UP POINT DEFINED (see lines 10-16 in Fig. 6). This precondition restricts the execution of the Rideshare Payment fragment to those configurations in which the driver and the passenger have already defined the pick-up point. Effects, instead, model the expected impact of the activity execution on the domain and represent its evolution in terms of domain property events. The effect E1 in Fig. 5 models the evolution of the RIDE SHARING domain property (see Fig. 4). It is caused by the event PayRideshare, triggered by the Receive payment ack activity, and it brings the property into the state RIDESHARE PAYED (see lines 38-40 in Fig. 6). Preconditions and effects are used to model how the execution of fragments is constrained by, and evolves, the domain knowledge. This information is used to identify the fragment (or composition of fragments) that can be used to refine an abstract activity in a specific domain knowledge configuration. As shown in Fig. 3, the RIDE SHARING domain property belongs to the internal domain knowledge of the BlaBlaCar domain object and to the external domain knowledge of the Journey Manager. This property can be used to specify goals of abstract activities within the Journey Manager (e.g. to handle a ride-share journey). Similarly, fragments offered by the BlaBlaCar domain object are annotated with preconditions and effects on the RIDE SHARING domain property. Potential dependencies (soft dependencies, from here on) are established between a domain object and all those domain objects in the application whose modeled domain concept (internal domain knowledge) matches one of its required behaviors (a domain property in its external domain knowledge). Figure 3 shows the soft dependencies (dashed arrows) among some of the domain objects modeling the travel assistant application. A soft dependency between two domain objects becomes a strong dependency if, during the application execution, they inter-operate by exchanging their fragments and domain knowledge. In Section 5, which is about the execution of service-based applications, we present a run-time scenario and we show how soft dependencies become strong dependencies after the refinement of abstract activities. Ultimately, the resulting adaptive application can be seen as a dynamic network of interconnected domain objects which dynamically inter-operate.
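To make the design concepts above more concrete, the following minimal Python sketch shows one possible way to represent a domain property as a state transition system and to let it evolve when activity effects or exogenous events occur; all class, state and event names are illustrative assumptions and do not reproduce the approach's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class DomainProperty:
    """A domain property as a state transition system (L, l0, E, T)."""
    name: str
    states: set
    initial: str
    transitions: dict = field(default_factory=dict)  # (state, event) -> next state
    current: str = None                               # set in __post_init__

    def __post_init__(self):
        self.current = self.initial

    def fire(self, event: str) -> bool:
        """Evolve the property if a transition for `event` exists in the current state."""
        nxt = self.transitions.get((self.current, event))
        if nxt is None:
            return False          # irrelevant or unexpected event: no evolution
        self.current = nxt
        return True

# Illustrative domain property, loosely inspired by the RIDE SHARING example.
ride_sharing = DomainProperty(
    name="RIDE_SHARING",
    states={"START", "PICKUP_POINT_DEFINED", "RIDESHARE_PAYED"},
    initial="START",
    transitions={
        ("START", "DefinePickupPoint"): "PICKUP_POINT_DEFINED",
        ("PICKUP_POINT_DEFINED", "PayRideshare"): "RIDESHARE_PAYED",
    },
)

# A domain configuration is a snapshot of the current state of every property.
domain_configuration = {ride_sharing.name: ride_sharing.current}
ride_sharing.fire("DefinePickupPoint")
domain_configuration[ride_sharing.name] = ride_sharing.current
print(domain_configuration)  # {'RIDE_SHARING': 'PICKUP_POINT_DEFINED'}
```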
In particular, the network is structured as a hierarchy of domain objects, where the abstract activity refinement mechanism enables a bottom-up approach allowing fragments, once they are selected for the composition, to climb the domain objects' hierarchy to be injected into the running processes. Notice that the external domain knowledge of a domain object is not static, since it can be extended during the execution of domain objects, due to specific operational cases, as we will see in more detail in Section 5. As regards the entrance/exit of new domain objects, the approach explicitly handles the domain by managing the dynamicity of services, which can enter or leave the application at any moment. This is due to the use of the domain model, which provides an abstract representation of the domain concepts; these can be concretized by different services, each giving its own implementation of a specific concept.
Models formalization
In this section, we give formal definitions of the core elements of our approach. First we define the domain model, in Section 4.2.1, and then we formalize the elements of the domain objects model, in Section 4.2.2.
Domain model
In this section we formalize the domain model through the definition of the domain property concept as its founding element. (Domain Property) A domain property is a state transition system dp=〈L,l0,E,T〉, where: L is a set of states and l0∈L is the initial state; E is a set of events; and T⊆L×E×L is a transition relation. We denote with L(dp), E(dp), T(dp) the corresponding elements of dp. Examples of domain properties are shown in Fig. 4. (Domain model) A domain model is a set of domain properties C={dp1,dp2,…,dpn} with dpi=〈Li,l0i,Ei,Ti〉 for every 1≤i≤n, and such that for every pair 1≤i,j≤n, if i≠j, then Ei∩Ej=∅. The set of all domain states is defined as \({L}_{{C}} = \prod \limits _{i=1}^{n} {L}_{i}\) and the initial context state is l0C=(l01,l02,…,l0n). The set of all domain events is \({E}_{{C}} = \bigcup \limits _{i=1}^{n} {E}_{i}\). Finally, the transition relation in the domain model is given as TC such that for every pair of states (l1,…,ln)∈LC and (l1′,…,ln′)∈LC, and for every event e∈EC, if e∈Ei then ((l1,…,ln),e,(l1′,…,ln′))∈TC iff $$(l_{i},e,l'_{i}) \in {T}_{i}, \text{ and for every } j \neq i \text{ we have } l_{j} = l'_{j}.$$ A domain model consists of a set of domain properties. We assume that two distinct domain properties pi,pj∈C in a domain model do not intersect. The set of states of a domain model is the product of the state sets of its domain properties. A state in a domain model can then be seen as the conjunction of states of domain properties. The set of events of a domain model is the union of the events of its domain properties. Transitions in a domain model are component-wise: each transition changes the state of at most one domain property. Given a domain model C={dp1,dp2,…,dpn}, it will be convenient to denote with \(l_{i}=\bar {l}{\downarrow }_{dp_{i}}\) the projection of state \(\bar {l} \in {L}_{{C}}\) onto the domain property dpi.
Domain objects model
In this section we start by introducing all the elements that form a domain object; then we show how domain objects combine to form an adaptive system. The domain model previously defined is instrumental in the definitions of the internal and external knowledge of domain objects. A domain object has an internal domain knowledge.
(Internal Domain Knowledge) An internal domain knowledge is a domain model \({\mathbb {D}\mathbb {K}}_{I}= \{dp_{I}\}\) where dpI is a domain property that represents the domain concept implemented by the domain object. For instance, let us consider the FLIXBUS domain object in Fig. 3. Its internal domain knowledge is given by the singleton containing the BUS JOURNEY domain property. A domain object also has an external domain knowledge. (External Domain Knowledge) An external domain knowledge is a domain model \({\mathbb {D}\mathbb {K}}_{E}= \{dp_{1},\dots, {dp}_{n} \}\), where each dpi, 1≤i≤n, is a domain property that the domain object uses for its operation but that is not under its own control. For instance, in the FLIXBUS domain object in Fig. 3, its external domain knowledge is given by the singleton containing the PAYMENT MANAGEMENT domain property, since the Flixbus service requires the online booking and payment of tickets, but it does not implement the payment service. Notice that, in general, the external knowledge can contain more than one domain property. The external domain knowledge and the internal domain knowledge are domain models. Hence, they have a set of states, a set of events, and a transition relation, as specified in Definition 2. For convenience, we denote with \({\mathbb {L}}_{E}\) and \({\mathbb {E}}_{E}\) the set of states and the set of events in the external domain knowledge. We also denote with \({\mathbb {L}}_{I}\) and \({\mathbb {E}}_{I}\) the set of states and the set of events in the internal domain knowledge. Both the internal behavior of a domain object, as well as the fragments it provides to others, are modeled as processes. A process is a state transition system, where each transition corresponds to a process activity. In particular, we distinguish four kinds of activities: input and output activities model communications among domain objects; concrete activities model internal operations; and abstract activities correspond to abstract tasks to be refined at run-time. All activities can be annotated with preconditions and effects, while abstract activities are also annotated with goals. For instance, let us consider the example fragment shown in Fig. 5: input/output activities are represented with an entering/outgoing message; abstract activities are drawn with a dotted line, while concrete activities are defined by solid lines. We define a process as follows: (Process) A process defined over an internal domain knowledge \({\mathbb {D}\mathbb {K}}_{I}\) and an external domain knowledge \({\mathbb {D}\mathbb {K}}_{E}\) is a tuple p=〈S,S0,A,T,Ann〉, where: S is a set of states and S0⊆S is a set of initial states; A=Ain∪Aout∪Acon∪Aabs is a set of activities, where Ain is a set of input activities, Aout is a set of output activities, Acon is a set of concrete activities, and Aabs is a set of abstract activities. Ain, Aout, Acon, and Aabs are disjoint sets; T⊆S×A×S is a transition relation; Ann=〈Pre,Eff,Goal〉 is a process annotation, where \({Pre} : {A}\rightarrow {2^{{\mathbb {L}}_{I}}} \cup {2^{{\mathbb {L}}_{E}}}\) is the precondition labeling function, \({Eff}:{A} \rightarrow {2^{{\mathbb {E}}_{I}}} \cup {2^{{\mathbb {E}}_{E}}} \) is the effect labeling function, and \({Goal}:{A_{abs}}\rightarrow {2^{{\mathbb {L}}_{E}}}\) is the goal labeling function. We denote with S(p), A(p), and so on, the corresponding elements of p.
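Before the satisfaction relations below are made precise, the following small Python sketch illustrates, under our own simplifying assumptions, how the annotation functions Pre, Eff and Goal of a process activity could be represented and checked against a domain configuration; activity and state names are hypothetical and not taken from the authors' APFL implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    """A process activity annotated with a precondition, effects and (optionally) a goal."""
    name: str
    kind: str                                          # "in" | "out" | "concrete" | "abstract"
    precondition: list = field(default_factory=list)   # accepted partial configurations
    effects: list = field(default_factory=list)        # events raised on the domain model
    goal: list = field(default_factory=list)           # accepted configurations (abstract only)

def satisfies(configuration: dict, accepted: list) -> bool:
    """A disjunction of conjunctions: the configuration must entail at least one
    accepted partial configuration (a state assignment of some domain properties)."""
    return any(all(configuration.get(dp) == state for dp, state in clause.items())
               for clause in accepted)

pay_for_rideshare = Activity(
    name="Pay for rideshare",
    kind="abstract",
    precondition=[{"RIDE_SHARING": "PICKUP_POINT_DEFINED"}],
    goal=[{"PAYMENT_MANAGEMENT": "PAYMENT_DONE"}],      # hypothetical target state
)

configuration = {"RIDE_SHARING": "PICKUP_POINT_DEFINED", "PAYMENT_MANAGEMENT": "START"}
print(satisfies(configuration, pay_for_rideshare.precondition))  # True: activity enabled
print(satisfies(configuration, pay_for_rideshare.goal))          # False: refinement needed
```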
We say that the precondition of the activity a is satisfied in the domain knowledge state \(\bar {l}\in {\mathbb {L}}_{I} \cup {\mathbb {L}}_{E}\), and denote it with \(\bar {l}\models {Pre}(a)\), if \(\bar {l}\in {Pre}(a)\). Similarly, we say that the goal of the activity a is satisfied in \(\bar {l}\in {\mathbb {L}}_{I} \cup {\mathbb {L}}_{E}\), and denote it with \(\bar {l}\models {Goal}(a)\), if \(\bar {l}\in {Goal}(a)\). Notice that the goal of an abstract activity specifies a subset of states in the external domain knowledge. As mentioned earlier, a goal can thus effectively be seen as a disjunction of conjunctions of states of domain properties. We say that the effects of activity a are applicable in the domain knowledge state \(\bar {l}\in {\mathbb {L}}_{I} \cup {\mathbb {L}}_{E}\), if for each event e∈Eff(a) there exists a \({dp}_{i}\in {\mathbb {D}\mathbb {K}}\) and \(l^{\prime }_{i} \in L({dp}_{i})\) such that \((\bar {l}{\downarrow }_{dp_{i}},e,l'_{i})\in {T}({dp}_{i})\). In particular, in our approach, processes are modeled as Adaptable Pervasive Flows (APF), an extension of traditional workflow languages that makes processes suitable for adaptation and execution in dynamic environments. (Domain Object) A domain object is a tuple \(o= \langle {{\mathbb {D}\mathbb {K}}_{I}, {\mathbb {D}\mathbb {K}}_{E}, p,{\mathbb {F}}}\rangle \), where: \({\mathbb {D}\mathbb {K}}_{I}\) is an internal domain knowledge, \({\mathbb {D}\mathbb {K}}_{E}\) is an external domain knowledge, p is a process, called core process, defined on \({\mathbb {D}\mathbb {K}}_{I}\) and \({\mathbb {D}\mathbb {K}}_{E}\), \(\mathbb {F}=\{f_{1},\ldots,f_{n}\}\) is a set of processes, called fragments, defined on \({\mathbb {D}\mathbb {K}}_{I}\) and \({\mathbb {D}\mathbb {K}}_{E}\), where for each fi∈F, a∈Ain(fi) implies a∈Aout(p) and a∈Aout(fi) implies a∈Ain(p). The latter constraint on fragment specification reflects the fact that input/output activities in fragments represent explicit communication with the provider domain object. Thus fragments, once received by other domain objects and injected in their own process, start a peer-to-peer communication with the core process of the provider, which implements the required functionality. A graphical representation of a domain object is reported in Fig. 2. (Adaptive System) An adaptive system is modeled as a set of domain objects: \(AS=\{o_{1}, \dots, o_{n}\}\). Figure 3, for instance, shows a portion of the travel assistant adaptive system. In it, we say that there is a soft dependency between objects o1 and o2, denoted with \(o_{1} \dashleftarrow o_{2}\), if o1 requires a functionality that is provided by o2. A soft dependency is formally defined as follows: (Soft Dependency) ∀oi,oj∈AS with oi≠oj, \(o_{i} \dashleftarrow o_{j}\) if there exist \({dp}_{E} \in {\mathbb {D}\mathbb {K}}_{E}(o_{i})\) and \({dp}_{I} \in {\mathbb {D}\mathbb {K}}_{I}(o_{j})\) such that dpE=dpI. In the next section we introduce the adaptation mechanisms and strategies exploited and facilitated by our design for adaptation approach, as well as the enablers for the execution and adaptation of service-based applications. Adaptive service-based applications: execution In this section, we first provide an overview of the adaptation mechanisms and strategies exploited by our approach [22], in Section 5.1. In Section 5.2, we give a description of the enablers of the design for adaptation.
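The domain object tuple and the soft-dependency relation can be illustrated with a small sketch; the string-based property identifiers and the Flixbus/PayPal pairing below are our own simplified, hypothetical rendering of Fig. 3, not the actual models.

```python
from dataclasses import dataclass, field

@dataclass
class DomainObject:
    name: str
    internal_dk: set          # names of the domain properties it implements
    external_dk: set          # names of the domain properties it relies on
    core_process: object = None
    fragments: list = field(default_factory=list)

def soft_depends(o1: DomainObject, o2: DomainObject) -> bool:
    # o1 soft-depends on o2 if some external property of o1 is internal to o2
    return o1 is not o2 and bool(o1.external_dk & o2.internal_dk)

# Hypothetical fragment of the travel-assistant network.
flixbus = DomainObject("Flixbus", internal_dk={"Bus Journey"},
                       external_dk={"Payment Management"})
paypal = DomainObject("PayPal", internal_dk={"Payment Management"},
                      external_dk=set())
print(soft_depends(flixbus, paypal))   # True
```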
For illustration purposes, we provide a running scenario of the travel assistant example in Section 5.3. The execution model is formalized in Section 5.4. Adaptation mechanisms and strategies The adaptation mechanisms and strategies that we employ implement the dynamic adaptation of fragment-based and context-aware business processes proposed in [22], which are in turn based on AI planning [23]. The link between the approach presented in this article and the approaches in [22] is the use of the APFL to model processes. It allows developers to define flexible processes that are particularly suitable for adaptation and execution in dynamic environments. The adaptation mechanisms used deal with three types of adaptation needs. The first, which is the one we mainly use in our scenario, refers to the need for refining an abstract activity. This is done by triggering the refinement mechanism, whose execution automatically finds and composes the fragments available in the application, on the basis of the goal of the abstract activity and the current context. As a result, an executable process is provided whose execution guarantees that the abstract activity's goal is reached (details are given in Section 5.3). The second is called local adaptation mechanism and refers to the violation of the precondition of an activity that has to be performed. It requires a solution that helps restart a faulted process. For instance, booking a place in a ride-share is constrained by a precondition requiring that the user is subscribed to the specific ride-sharing service. The last is called compensation mechanism and allows designers to avoid the explicit definition of activities' compensation procedures, and to dynamically provide a context-aware compensation process (e.g., when a travel ticket refund is needed). Furthermore, the AI planning on which the goal-based adaptation relies is able to deal with stateful and non-deterministic services. In addition, the fragment composition (i.e., a plan) returned by the AI planner in response to an adaptation problem is correct by construction [23]: if a plan is found, its execution is guaranteed to bring the application to a situation in which the goal of the adaptation problem is satisfied. However, dealing with stateful services implies that the planner might not even find a solution to an adaptation problem. For this reason, adaptation strategies have been designed. Indeed, the mechanisms introduced above can be further combined into adaptation strategies allowing the application to handle more complex adaptation needs (e.g., the failure of an abstract activity refinement). The aforementioned mechanisms and strategies have all been implemented in an adaptation engine [24]. This engine is one of the enablers of our design for adaptation approach. Enablers of the design for adaptation approach The run-time operation of service-based applications realized with our approach relies on different execution and adaptation enablers, shown in Fig. 7. Approach Enablers The Execution Enablers, namely the Domain Objects Manager and the Process Engine, leverage the different services wrapped up as domain objects and stored in the application's knowledge base. The execution enablers are in charge of executing the domain objects' processes (i.e., core processes and fragments) during the operation of service-based applications.
The Adaptation Enablers, namely the Refinement Handler, the Adaptation Manager and the AI Planner, instead, leverage the adaptation mechanisms and strategies described in Section 5.1. They are in charge of managing the adaptation needs of applications arising at run-time. Considered as a whole, they represent the adaptation engine. To start, it is required that developers select the available services in a given domain (e.g., mobility) and wrap them up as domain objects. These are stored in the Domain Objects Models repository in Fig. 7. To understand how the execution and adaptation enablers interact, we defined a sequence diagram in Fig. 8. Interaction-flow among the execution and adaptation enablers Domain objects' core processes (simply processes from here on) are executed by the Process Engine. It manages service requests among processes and, when needed, sends requests for domain objects instantiation to the Domain Objects Manager. A request is sent for each demanded service whose corresponding process has not yet been instantiated. The domain objects manager replies by deploying the requested process on the process engine. In this way, a correlation between the two processes is defined. During the normal execution of processes, abstract activities can be encountered. These activities need to be refined with one fragment or a composition of fragments modeling service functionalities. To this aim, the process engine sends a request for abstract activity refinement to the Refinement Handler component. This component is in charge of defining the adaptation problem corresponding to the received request. In particular, the adaptation problem is represented by: (i) a set of fragments that can potentially be part of the final fragment composition, selected on the basis of the goal defined by the abstract activity; (ii) a set of domain properties; and (iii) the adaptation goal. The planning domain is then derived from the adaptation problem by transforming fragments and domain properties into STSs, by applying transformation rules such as those presented in [25]. The adaptation goal is, instead, transformed into a set of configurations of the planning domain. Then, the refinement handler submits the adaptation problem to the Adaptation Manager. This translates the adaptation problem into a planning problem so that it can be solved by the AI Planner component. After the plan generation (the plan itself is an STS), the AI planner sends the plan to the adaptation manager, which transforms it into an executable process. This process can now be sent to the process engine and injected into the abstract activity being refined. At this point, depending on the fragments in the composition, the process engine can request the instantiation of one or more domain objects, whose processes will be deployed. Finally, the refinement process can be executed. Travel assistant: running scenario In this section, we show a concrete example of the execution of the travel assistant. The focus of this section is to show (i) how domain objects dynamically inter-operate by exchanging and injecting (compositions of) fragments, thus enabling a chain of incremental refinements (such as that in Fig. 9); and (ii) how the refinement process allows domain objects to extend their external knowledge of the domain, by establishing new soft dependencies.
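To summarize the enabler interaction described above (Fig. 8) before turning to the running scenario, the following hedged orchestration sketch outlines the refinement flow; every collaborator function passed as an argument is a hypothetical placeholder for one of the enablers (Refinement Handler, Adaptation Manager, AI Planner, Process Engine), not the actual API of the adaptation engine.

```python
def refine_abstract_activity(activity, system_instance, knowledge_base, *,
                             select_fragments,     # Refinement Handler: candidate fragments
                             to_planning_problem,  # Adaptation Manager: STS encoding
                             run_planner,          # AI Planner: returns a plan STS or None
                             to_process,           # Adaptation Manager: plan -> executable process
                             inject):              # Process Engine: replaces the abstract activity
    # Adaptation problem: (i) fragments that can be part of the composition,
    # selected via the activity goal, (ii) the relevant domain properties as seen
    # through the system instance, (iii) the adaptation goal itself.
    fragments = select_fragments(knowledge_base, activity)
    adaptation_problem = (fragments, system_instance, activity.goal)

    plan_sts = run_planner(to_planning_problem(adaptation_problem))
    if plan_sts is None:
        # With stateful services the planner may fail; an adaptation strategy
        # (e.g., retrying the refinement with different fragments) would start here.
        raise RuntimeError("no refinement found for " + activity.name)

    refinement_process = to_process(plan_sts)
    inject(system_instance, activity, refinement_process)
    return refinement_process
```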
A detailed example of the travel assistant execution through incremental and dynamic refinements The main features of the travel assistant are the following: (i) collect the user's requirements (e.g., source and destination points, travel preferences, etc.) and set up a journey planning request; (ii) run a local or a global planning; (iii) identify the transport means in the legs of the journey solution selected by the user. In this way, it moves vertically through the hierarchy to find the proper service(s) to use (e.g., those of the specific transport companies), if they exist in the application. Executing the travel assistant. Our user, Sara, wants to organize a journey from Trento to Vienna. In Fig. 9, we report examples of chains of incremental refinements, as they are dynamically set up and executed after the specific request of SaraFootnote 4. The travel assistant is provided as a mobile application (modeled by the domain object Travel Assistant Application in Fig. 3), through which Sara interacts with it. The execution starts from the core process of this mobile app, modeling the user process. Then, a sequence of three abstract activities (represented with dotted lines and labeled with a goal) needs to be refined (see the top side of Fig. 9). Here we focus on the first one, Plan Journey, whose goal models the situation in which Sara ends up with a specific travel plan. The refinement mechanism is triggered and the following steps are performed (see Fig. 9). Step 1. The fragment PlanJourney provided by the Travel Assistant is selected for the refinement, and injected in the behavior of the mobile app core process. It implements a wider journey-planning functionality, allowing the application to look for available alternatives and to perform a more detailed planning after a specific alternative has been found and selected by the user. To start, it allows Sara to insert the departure and destination locations. Step 2. To identify the proper planning mode (local vs. global), the travel assistant domain object relies on the Planners Management domain property, as shown by the abstract activity Travel Assistant Plan Journey in the PlanJourney fragment in execution. The Journey Planners Manager domain object implements the Planners Management domain property. Its fragment SelectPlanningMode is selected for the refinement. This fragment does not implement any logic. Indeed, its activities Plan Request and Receive Planning Type model the communication with its core process, where the request is effectively handled. In particular, the Journey Planners Manager knows only at runtime whether a local or a global planning is required. In our scenario, given Trento and Vienna, the Journey Planners Manager will reply with a global planning type. This will drive the execution of its fragment through the Plan Global Journey abstract activity. Step 3. At this point, one or more fragments provided by the available global journey planners existing as domain objects in the application's knowledge base can be selected for the refinement. In our scenario, we suppose that the Plan Global Journey abstract activity is refined with the fragment Plan provided by the Rome2RioFootnote 5 domain object, an open global planner service. The execution of this fragment will end up with a list of travel alternatives, if any. Step 4. Once the chain of incremental refinements performed in steps 1, 2 and 3 has been completed, the execution returns to the PlanJourney fragment, continuing with the DataViewerPattern abstract activity.
Indeed, an appropriate data visualization pattern must be selected, based on the data format (e.g., a list, a message). This is defined at run-time, when the data (and its format) is known. The Data Viewer domain object provides the DefineDataViewer Pattern fragment for this purpose. At this point, Sara can receive and visualize on her smartphone the list of the found travel alternatives satisfying her requirements. Step 5. Sara can now select her preferred alternative (we suppose that she selects a multi-modal solution made by a train and a bus travels). Based on the user choice, the Define Journey Legs abstract activity is refined with the HandleJourneyLegs fragment provided by the Journey Manager domain object. It is able to dynamically define the goal for the Refine Journey abstract activity, that will be G: TJ = Response Sent AND BJ = Response Sent, being the selected solutions made by a train and a bus journeys. The refinement of this abstract activity will allow the Travel Assistant to look for and find the proper fragments for each journey leg. Notice that the Refine Journey activity is a so-called higher order abstract activity, that we are going to define in the subsequent paragraph. Step 6. The last step shows a composition of fragments provided by the transport companies involved in the legs of the journey alternative selected by the user (i.e., Sudtirol Alto Adige and Hello). Their execution provides to Sara the proper solutions, from the two companies, that combined together satisfy her need of planning a journey from Trento to Vienna, passing through Bozen. Higher Order Abstract Activities. In step 5 of the running example, we have presented the Refine Journey activity as a Higher Order Abstract Activity (HOAA). This kind of activity is actually a regular abstract activity and it is managed as such, with the only difference that its goal is defined at execution time, within the fragment or core process it belongs to. For instance, in Fig. 9 – Step 5, we can notice that the Receive Goal for Legs Specialization activity, is in charge of receiving the HOAA's goal and labeling the Refine Journey HOAA with it, so that, at the next step, the process engine can execute it. HOAAs are used for those abstract activities whose goal's specification is fully depending from the run-time execution environment. Specifying such a goal (i.e., a composition requirements) at design time, would mean defining all the possible alternatives that the goal could assume. But this is exactly what must be avoided. For this reason, we introduced the HOAA construct allowing for the dynamic definition of goals when the execution domain is known. The HandleJourneyLegs fragment exploited at Step 5 in the running example is exposed by the Journey Manager domain object. Its main task is that of relating a specific travel alternative selected by a user with the proper domain objects able to handle it. It is easy to notice that a travel solution can be made from any possible combination of transport means. This implies that the goal of the Refine Journey HOAA, if defined at design time, should model any possible configuration to cover all the corresponding combination of transport means. To the contrary, the Journey Manager implements the logic to dynamically relate a combination of transport means (e.g., train and bus as in our example) with the right goal to be associated with the HOAA handling it (e.g., the goal G: TJ = Response Sent AND BJ = Response Sent in Fig. 9), which is dynamically generated. 
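The higher order abstract activity mechanism can be illustrated with a minimal sketch: the goal of Refine Journey is computed from the legs the user selected and attached to the activity only at execution time. The property names and the goal encoding below are illustrative assumptions, not the actual goal language.

```python
def goal_for_legs(selected_legs):
    """Map the selected transport means to the goal of the HOAA, e.g.
    ['train', 'bus'] -> {('TrainJourney', 'Response Sent'), ('BusJourney', 'Response Sent')}."""
    prop = {"train": "TrainJourney", "bus": "BusJourney", "flight": "FlightJourney"}
    return {(prop[leg], "Response Sent") for leg in selected_legs}

class AbstractActivity:
    def __init__(self, name, goal=None):
        self.name, self.goal = name, goal

refine_journey = AbstractActivity("RefineJourney")        # HOAA: goal unset at design time
refine_journey.goal = goal_for_legs(["train", "bus"])     # attached at run time (Step 5)
print(refine_journey.goal)
```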
Dynamic knowledge extension An important feature of our approach is the ability of domain objects to extend their knowledge over the whole application domain. The dynamic extension of the knowledge concerns the external domain knowledge and is triggered by the execution of the abstract activity refinement mechanism. In particular, it takes place every time a domain object injects into its own core process one or more fragments containing abstract activities. Indeed, since abstract activities are labeled with a goal, the receiving domain object receives, together with the fragments, also the domain properties on which the fragments' execution relies. These domain properties extend the external domain knowledge. For instance, in Fig. 10 we depict the evolution of the external domain knowledge of the Travel Assistant Application domain object, after the execution of step 1 and step 2 of Fig. 9. Both steps, indeed, are characterized by the injection of fragments, namely PlanJourney and SelectPlanningMode, equipped with abstract activities, whose goals (i.e., G4, G7, G8 – see the table in Fig. 9) rely on domain properties which are automatically inherited by the Travel Assistant Application domain object. More specifically, the Planners Management, Local Planning and Global Planning properties are received. Example of the dynamic extension of a domain object's knowledge This dynamism is now reflected in the soft dependencies of the Travel Assistant Application, because new dependencies are established. In particular, it will establish new dependencies with all the domain objects in the application implementing the three just-inherited domain properties. We can notice how the dynamic knowledge extension allows domain objects to dynamically discover new services that they can, in turn, exploit for the refinement of inherited abstract activities. It is easy to note that the refinement at step 3 in Fig. 9 would not have been possible without the dynamic extension of the knowledge because, in its design-time version, the Travel Assistant Application did not have the Global Planning knowledge required to perform it. Lastly, we want to highlight that if new global planners enter the application, the Travel Assistant Application will be able to discover and exploit them in its further execution, thanks to the establishment of new soft dependencies. Execution model formalization The following definition captures the current status of the execution of a given core process. The process instance is a hierarchical structure, obtained through the refinement of abstract activities into fragments. A process instance is hence modeled as a list of process-activity tuples: the first element in the list describes the fragment currently under execution and the current activity; the other tuples describe the hierarchy of ancestor fragments, each one with the abstract activity currently under refinement. The last element in the list is the process model from which the running instance has been created. A process instance is defined as follows: (Process Instance) We define a process instance Ip of a process p as a non-empty list of tuples Ip=(p1,a1),(p2,a2),…,(pn,an), where: each pi is a process and pn=p; ai∈A(pi) are activities in the corresponding processes, with ai∈Aabs(pi) for i≥2 (i.e., all activities that are refined are abstract). An example of process instance is given by the process of the Travel Assistant Application domain object, shown in Fig.
9, where we reported an example of its execution. A domain object instance, instead, is specified as follows. Definition 10 (Domain Object Instance) A domain object instance δ of a domain object \(o=\langle {{\mathbb {D}\mathbb {K}}_{I}, {\mathbb {D}\mathbb {K}}_{E},p,\mathbb {F}}\rangle \) is a tuple \(\delta = \langle {{\mathbb {D}\mathbb {K}}_{I}, {\mathbb {D}\mathbb {K}}_{E^{+}},\bar {l}_{I},\bar {l}_{E^{+}},I_{p}}\rangle \) where: \({\mathbb {D}\mathbb {K}}_{E^{+}}\supseteq {\mathbb {D}\mathbb {K}}_{E}\), is the current set of domain properties in the external domain knowledge; \(\bar {l}_{I} \in \mathbb {L}_{DK_{I}}\) and \(\bar {l}_{E^{+}} \in \mathbb {L}_{DK_{E^{+}}}\) are the current state of the domain properties in the internal and external domain knowledge; Ip is its process instance. Notice that \({\mathbb {D}\mathbb {K}}_{E^{+}} = {\mathbb {D}\mathbb {K}}_{E}\) when the domain object is instantiated. Then, \({\mathbb {D}\mathbb {K}}_{E^{+}} \) might grow during the domain object execution; this mechanism is formally defined later on. We define now an adaptive system instance. (Adaptive System Instance) An adaptive system instance ASI of an adaptive system \(AS=\{o_{1}, \dots, o_{n}\}\) is a set of domain object instances ASI={δij} where each δij is an instance of domain object oi. For instance, if we consider the running scenario depicted in Fig. 9 of the travel assistant system, we can say that the adaptive system instance, for that specific execution, is made by instances of the Travel Assistant Application, Travel Assistant, Journey Planners Manager, Journey Manager, Data Viewer, Rome2Rio, Train and Bus domain objects. We will now formally define the execution model of domain objects. In the following a refinement need is formalized. (Refinement need) A refinement need is a tuple η=〈ASI,δ,a〉 where: ASI is an adaptive system instance; δ∈ASI is the domain object instance for which the refinement is needed; a is the abstract activity of δ to be refined. For instance, considering the process whose refinement is shown in Fig. 9, the domain object instance for which the refinement is needed is an instance of the Travel Assistant Application, while the abstract activity to be refined is the Plan Journey activity. A refinement is defined as follows. (Refinement) A refinement for a refinement need η=〈ASI,δ,a〉, denoted with REF(η), is a tuple \(\langle {p_{\eta },{\mathbb {D}\mathbb {K}}_{\eta },\bar {l}_{\eta }}\rangle \) where: pη is the process to be injected; \({\mathbb {D}\mathbb {K}}_{\eta }\) is the set of domain properties to be added to the external domain knowledge; for each a∈Aabs(pη), \(Goal(a) \subseteq {2^{{\mathbb {L}}_{DK_{\eta }}}}\); \(\bar {l}_{\eta } \in {\mathbb {L}}_{DK_{\eta }}\) is the current state of the domain properties. The last two items of the previous definition require that, in case the refinement process contains abstract activities, the domain knowledge needed for their refinement is part of the refinement solution. Indeed, this is how the domain knowledge extension is performed. We will now characterize a correct solution for a refinement need η. Intuitively, a refinement \(\langle {p,{\mathbb {D}\mathbb {K}},\bar {l}}\rangle \) is a correct solution to a refinement need η=〈ASI,δ,a〉, if the execution of p brings the external domain knowledge of object δ in a state that satisfies the goal of a. 
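A minimal sketch, with deliberately simplified field types, of the execution-model structures just introduced: a process instance as a stack of (process, activity) pairs, and the refinement-need and refinement tuples exchanged with the adaptation engine. The Plan Journey trace at the end is a hypothetical rendering of Fig. 9.

```python
from dataclasses import dataclass

@dataclass
class ProcessInstance:
    frames: list                      # [(p1, a1), ..., (pn, an)], head = running fragment

    def push_refinement(self, fragment, first_activity):
        # Refining the current abstract activity: the injected fragment becomes the head.
        self.frames.insert(0, (fragment, first_activity))

@dataclass
class RefinementNeed:                 # eta = <ASI, delta, a>
    system_instance: object
    instance: object                  # the domain object instance needing refinement
    abstract_activity: str

@dataclass
class Refinement:                     # REF(eta) = <p_eta, DK_eta, l_eta>
    process: str                      # process to inject
    new_properties: set               # domain properties added to the external knowledge
    properties_state: dict            # current state of the added properties

# Hypothetical trace: the mobile-app core process refines Plan Journey.
ip = ProcessInstance(frames=[("TravelAssistantApp.core", "PlanJourney")])
ref = Refinement(process="TravelAssistant.PlanJourney",
                 new_properties={"Planners Management"},
                 properties_state={"Planners Management": "Initial"})
ip.push_refinement(ref.process, "InsertLocations")
print(ip.frames[0])   # ('TravelAssistant.PlanJourney', 'InsertLocations')
```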
Notice that p, being a composition of fragments provided by other domain objects, might contain abstract activities that will be refined later on, when the refinement is executed. Our definition of correct refinement is based on the assumption that abstract activities, once refined, will behave as declared in their specification (preconditions and effects on their activities). That is, we treat them as all other activities in the process, assuming that their behavior is correctly specified through their annotations in terms of preconditions and effects. In the following we give the definitions of action executability, action impact, and abstract run of a process. These definitions are the basis for the formal characterization of a correct refinement. (Action Executability) An action a of a process p is executable from domain knowledge state \(\bar {l}\in \mathbb {L}_{DK}\), denoted with \({Executable}(a,\bar {l})\), if \(\bar {l}\models {Pre}(a)\) and the effects of action a are applicable in domain knowledge state \(\bar {l}\). In other words, an action is executable from a given domain knowledge state if, in that state, its precondition is verified and its effects can be applied. (Action Impact) The impact of action a belonging to some process p when executed from domain knowledge state \(\bar {l}\in {\mathbb {L}}_{DK}\), denoted with \({Impact}(a,\bar {l})\), is a domain configuration \(\bar {l^{\prime }}\in {\mathbb {L}}_{DK}\) such that for every \({dp}_{i} = \langle {{L}_{i}, {{l}^{0}}_{i}, {E}_{i}, {T}_{i}}\rangle \in {\mathbb {D}\mathbb {K}}\), if exists an e∈Eff(a) such that \(\left (\bar {l}{\downarrow }_{dp_{i}},e,l'_{i}\right)\in {T}_{i}\) then \({\bar {l}^{\prime }}{\downarrow }_{dp_{i}}=l'_{i}\), otherwise \({\bar {l}^{\prime }}{\downarrow }_{dp_{i}}=\bar {l}{\downarrow }_{dp_{i}}\). The action impact is given by the domain configuration in which the domain knowledge of the domain object that is executing the activity evolves. (Abstract Process Run) Given a process p=〈S,S0,A,T,Ann〉 and a domain knowledge state \(\bar {l}\in \mathbb {L}_{DK}\), π=(s1,a1,s2,…,an−1,sn) is an abstract run of p from \(\bar {l}\) if: s1∈S0 and ∀i,∈[1,n]:si∈S; ∀i∈[1,n−1]:ai∈A and (si,ai,si+1)∈T; there exists a domain knowledge evolution of \({\mathbb {D}\mathbb {K}}\), \({\pi }_{DK} = (\bar {l}_{1},\bar {l}_{2},\ldots,\bar {l}_{n})\) such that: \(\bar {l_{1}} = \bar {l}\); \({Impact}(a_{i},\bar {l}_{i})=\bar {l}_{i+1}\) for all i∈[1,n−1]; \({Executable}(a_{i},\bar {l}_{i})\) for all i∈[1,n−1]. A process run that terminates in a state with no outgoing transitions (final state) is called a complete run. We denote with \({\Pi _{ABS}}(p,\bar {l})\) the set of all possible complete abstract runs of process p from domain knowledge state \(\bar {l}\in \mathbb {L}_{DK}\). We can now define a correct refinement. (Correct Refinement) Given a refinement need η=〈ASI,δ,a〉, with \(\delta =\langle {{\mathbb {D}\mathbb {K}}_{I}, {\mathbb {D}\mathbb {K}}_{E^{+}},\bar {l}_{I},\\ \bar {l}_{E^{+}},I_{p}}\rangle \), we say that a refinement \(\langle {p_{\eta },{\mathbb {D}\mathbb {K}}_{\eta },\bar {l}_{\eta }}\rangle \) is a correct solution for η, if for each complete abstract run \(\pi \in {\Pi _{ABS}}(p_{\eta },\bar {l}_{E^{+}})\), its associated domain knowledge evolution \({\pi }_{DK} = (\bar {l}_{1},\bar {l}_{2},\ldots,\bar {l}_{n})\) is such that \(\bar {l}_{n}\models Goal(a)\). 
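Action executability and impact lend themselves to a direct executable sketch. The encoding below (domain knowledge as a map from properties to transition sets, preconditions as predicates) is our own simplification of the definitions above, not the engine's data model.

```python
def _enabled(event, local_state, transitions):
    return any(l == local_state and e == event for (l, e, _) in transitions)

def executable(action, state, dk):
    """Pre(a) holds in the current state and every effect event is applicable."""
    return action["pre"](state) and all(
        any(_enabled(e, state[p], trans) for p, trans in dk.items())
        for e in action["eff"])

def impact(action, state, dk):
    """Each effect event moves the property that has a transition on it from its
    current local state; every other property keeps its current state."""
    new_state = dict(state)
    for p, transitions in dk.items():
        for (l, e, l2) in transitions:
            if l == state[p] and e in action["eff"]:
                new_state[p] = l2
    return new_state

# Hypothetical check: a booking action that requires and consumes "NotPaid".
dk = {"Payment Management": {("NotPaid", "pay", "Paid")}}
book = {"pre": lambda s: s["Payment Management"] == "NotPaid", "eff": {"pay"}}
s0 = {"Payment Management": "NotPaid"}
print(executable(book, s0, dk), impact(book, s0, dk))
# True {'Payment Management': 'Paid'}
```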
Intuitively, a refinement is a correct solution for a refinement need if all its complete abstract runs satisfy the goal of the abstract activity to be refined. As regards the execution of an adaptive system instance, intuitively, it evolves in three different ways. First, through the execution of activities in domain object instances, which will be presented in detail in the following. Second, through the interaction among domain object instances, which happens according to the standard rules of peer-to-peer process communication. Third, through a change in the behavior, or entrance / exit, of domain objects and domain object instances into the system. In the following we formalize the execution model of a domain object, considering also the injection of a refinement solution in the case in which an abstract activity is executed. (Action Execution) Given a domain object instance \(\delta = \langle {{\mathbb {D}\mathbb {K}}_{I}, {\mathbb {D}\mathbb {K}}_{E^{+}},\bar {l}_{I},\\ \bar {l}_{E^{+}},I_{p}}\rangle \), with δ∈ASI and Ip=(p1,a1),(p2,a2),…,(pn,an), the execution of action a1, denoted with exec(δ,ASI), evolves δ to \(\langle {{\mathbb {D}\mathbb {K}}_{I}, {\mathbb {D}\mathbb {K}}'_{E^{+}},\bar {l}'_{I},\bar {l}'_{E^{+}},I_{p}}\rangle \), where: if a1∈Ain(p1)∪Aout(p1)∪Acon(p1) then \({\mathbb {D}\mathbb {K}}'_{E^{+}} = {\mathbb {D}\mathbb {K}}_{E^{+}}\); \(\bar {l}'_{I}={Impact}(a,\bar {l}_{I})\) and \(\bar {l}'_{E^{+}}={Impact}(a,\bar {l}_{E^{+}})\); if next(p1,a1)≠∅ then I′p=(p1,next(p1,a1)),(p2,a2),…,(pn,an), otherwise \(I^{\prime }_{p}=(p_{2}, {next}(p_{2},a_{2})),\ldots, (p_{n}, a_{n})\). if a1∈Aabs(p1), given \(\langle {p_{\eta }, {\mathbb {D}\mathbb {K}}_{\eta }, \bar {l}_{\eta }}\rangle =REF(\eta)\), with η=〈ASI,δ〉, then \({\mathbb {D}\mathbb {K}}'_{E^{+}} = {\mathbb {D}\mathbb {K}}_{E^{+}}\cup {\mathbb {D}\mathbb {K}}_{\eta }\); \(\bar {l}'_{E^{+}}\in {\mathbb {L}}_{DK^{\prime }_{E^{+}}}\) is such that for every \({dp}_{i} = \langle {{L}_{i}, {{l}^{0}}_{i}, {E}_{i}, {T}_{i}}\rangle \in {\mathbb {D}\mathbb {K}}'_{E^{+}}\), if \({dp}_{i}\in {\mathbb {D}\mathbb {K}}_{\eta }\) then \(\bar {l}'_{E^{+}}{\downarrow }_{dp_{i}}=\bar {l}_{\eta }{\downarrow }_{dp_{i}}\), otherwise \(\bar {l}'_{E^{+}}{\downarrow }_{dp_{i}}=\bar {l}_{E^{+}}{\downarrow }_{dp_{i}}\); \(I^{\prime }_{p}=(p_{\eta },a^{0}_{\eta })(p_{1}, a_{1}), (p_{2},a_{2}),\ldots, (p_{n}, a_{n})\). Eventually, we previously said as a soft dependency among two domain objects becomes a strong dependency, denoted with δih←δjk, if the domain object δih injects in its internal process a fragment provided by δjk. This is formally defined as follows: (Strong Dependency) ∀δih,δjk∈ASI with i≠j and h≠k, δih←δjk if \(\exists (f,a) \in I_{p}(\delta _{ih}) | f \in {\mathbb {F}}(o_{j})\). In the next section, we show how the refinement problem previously presented can be solved by applying the automated fragment composition approach based on AI planning [22]. Automated refinement via AI planning Within the approach presented in [23] and summarized in Section 7, we said that a fragment composition problem is transformed into a planning problem. Relevantly to our purposes, such techniques cover uncertainty, in order to allow the composition of services whose dynamics is only partially exposed, and is able to deal with complex goals and data flow [25]. 
In the following we briefly describe how a refinement need η=〈ASI,δ,a〉, with \(\delta =\langle {{\mathbb {D}\mathbb {K}}_{I}, {\mathbb {D}\mathbb {K}}_{E^{+}},\bar {l}_{I},\bar {l}_{E^{+}},I_{p}}\rangle \) is transformed into an AI planning problem. In other words, we say how the approach in [23] is adjusted and used in our framework. First of all, a set of n fragments, \((f_{1},\dots,f_{n})\), is selected from the soft dependencies of δ: for some δ′∈ASI, with \(\delta \dashleftarrow \delta '\), \(f_{i}\in {\mathbb {F}}(\delta ')\). Advanced optimization techniques, as the one described in [26], can be used to further reduce the set of fragments on the basis of the functionalities they provide and of the preconditions satisfiability of their preconditions in current domain knowledge state. Both fragments \((f_{1},\dots,f_{n})\) and the set of domain properties \(({dp}_{1},\dots,{dp}_{m})\in {\mathbb {D}\mathbb {K}}_{E}^{+}\), on which the fragments are annotated, are transformed into state transition systems (STS) using transformation rules similar to those presented in [23]. During this encoding, all goals on abstract activities in fragments are ignored, while preconditions and effects are maintained. With this measure, the refinement plan will be built under the assumption that abstract activities will behave according to their annotation, independently from the way in which they will be refined (see Definition 17). The planning domain Σ is obtained as the product of the STSs \({\Sigma }_{f_{1}}\) …\({\Sigma }_{f_{n}}\) and \({\Sigma }_{dp_{1}}\) …\({\Sigma }_{dp_{m}}\), where STSs of fragments and domain properties are synchronized on preconditions and effects, \( {\Sigma } = {\Sigma }_{f_{1}}\|\dots \|{\Sigma }_{f_{n}}~\|~{\Sigma }_{dp_{1}}\|\linebreak \dots \|{\Sigma }_{dp_{m}}. \) The initial state of the planning domain is derived from the initial state of all fragments and the current state of the domain properties \(\bar {l}_{E^{+}}\), by interpreting it as states of the STSs defining the planning domain. Similarly, the refinement goal Goal(a) is transformed into a planning goal ρ by interpreting the states in \({\mathbb {D}\mathbb {K}}_{E^{+}}\) as states in the planning domain. Finally, the approach of [23] is applied to domain Σ and planning goal ρ to generate a plan Ση that guarantees achieving goal ρ once executed on system Σ. State transition system Ση can be further translated into an executable process pη, which implements the identified solution. Prototype implementation and validation This section is devoted to the architecture of the design for adaptation approach, and the implementation and evaluation of an application on top of it, namely ATLAS, which implements the travel assistant scenario. The aim of this section is to demonstrate the feasibility of the approach for realizing adaptive applications. Design for adaptation architecture From a technical perspective, the architecture is organized in three main layers, as shown in Fig. 11. Domain Object-based Architecture The Enablers leverage on our results on the design for adaptation approach, described in Section 4. Developers can exploit and wrap up as domain objects the available services in the target domain. Besides the design of services, execution and adaptation enablers allow also for their run-time operation, as described in Section 5.2. Moreover, to deal with IoT domains, or more generally with IoT things, the IoT Platform Services has been added, together with the Things States repository. 
The former can connect to any cloud platform providing IoT services (e.g., the Amazon AWS-IoT platformFootnote 6), enabling the management of and interaction with things. The Process Engine can send instructions to things through the IoT Platform Services component (e.g., when executing activities that include calls to things' APIs). The Domain Objects Manager is responsible for answering queries about available IoT things and their capabilities. The latter stores knowledge about things' operational states. The Provided Services layer exposes the functionalities implemented by the Enablers. These services can exploit the services previously wrapped up and made available by the Enablers, and/or combine them into value-added services (e.g., a travel assistant in the mobility domain). The key idea is that the architecture is open to continuous extensions with new services, wrapped as domain objects, whose functionalities can be exploited in a transparent way to provide value-added services to the end-users. All the provided services can eventually be delivered to final users through a range of multi-channel front-end applications that constitute the Front-end layer. These can be mobile or desktop applications, and they can also rely on existing services, such as chat-bots (e.g., a Telegram chat-bot). Case study: ATLAS, a smart travel assistant In this section, we introduce ATLAS – a world-wide personAlized TraveL AssiStant [27]. ATLAS consists of (i) a demonstrator showing the application's models and its execution and evolution through automatic run-time adaptation, and (ii) a Telegram chat-bot for the interaction with the users. The demonstrator is based on a process engine for the execution of automated and adaptable processes. Before implementing ATLAS, we looked for a process engine suitable for integration into our design for adaptation framework, for instance one extensible with abstract activities. We came up with a set of eligible process engines, namely jBPMFootnote 7, CamundaFootnote 8 and ActivitiFootnote 9. However, none of them is designed to deal with (i) the decentralized management of processes and (ii) the correlation among different processes, which are fundamental in our framework. As a consequence, we decided to realize from scratch a process engine implementing the features required by our framework. Essentially, it is a conventional process engine, extended with some adaptation-related constructs. It handles the management of multiple process instances, the dynamic correlation among processes, and the management of abstract activities. The demonstrator also implements the enablers. In this article we focus on the chat-botFootnote 10. We clarify here that the implementation effort for developing ATLAS consists only of the modeling of the involved domain objects and the realization of the dedicated Telegram chat-bot. The enablers shown in Fig. 11 are part of the design for adaptation framework and are reusable in the implementation of any application other than ATLAS. To realize a world-wide travel assistant we selected real-world mobility services exposed as open APIs. We identified their behavior, their functionalities and their input and output data. Then, we wrapped them up as domain objects and stored them in the knowledge base. For instance, we wrapped Rome2Rio and Google TransitFootnote 11 as global journey planners. To overcome the limitations of global planners in terms of accuracy, we wrapped local planners too, such as ViaggiaTrentoFootnote 12.
It can be exploited for journeys located in the city of Trento, which can also be part of a wider inter-modal travel solution provided by a global planner, but for which the global planner does not provide sufficient or accurate information. Combining the geographical coverage of global planners with the accuracy of local planners is a concrete example of the service interoperability promoted by our approach. Other open mobility services we considered are Travel for LondonFootnote 13 as the planner for the city of London, BlaBlaCar as a ride-sharing service, and CityBikesFootnote 14 as a bike-sharing service covering about 400 cities, to give a few examples. We emphasize that the more (mobility) services are wrapped up and stored in the application's knowledge base, the more responsive and accurate the travel assistant will be. At the Provided Services level we defined the Travel Assistant. It has been realized as a value-added service leveraging the services available in the application's knowledge base. Its main features have been described in Section 5.3. Finally, among the multi-channel front-ends that can be exploited, we realized ATLAS as a Telegram chat-bot, exploiting the open API provided by Telegram. The same travel assistant could also be provided via a different front-end. Hereafter, we show how ATLAS runs in the Telegram chat-bot interface. The chain of incremental refinements that is dynamically set up by the execution of the following scenarios is similar to that given in Fig. 9. Local journey organization use case. Sara lives in Trento, Italy, and she wants to find her way to the Christmas markets located in Piazza Fiera. Her departure place is in via Fogazzaro. In Fig. 12, we show the relevant screenshots of the ATLAS chat-bot running on her smartphone. Screenshots of the ATLAS chat-bot – Local Journey Organization Sara enters her departure and destination points (see the screenshot on the left side in Fig. 12). Since both places are in the same city, Trento, a local planning is more appropriate. Thus, the Viaggia Trento journey planner is dynamically selected. The journey planner's response is further handled and parsed to be shown in the chat-bot. The result is shown to Sara as in the central screenshot in Fig. 12. Since she opted for a healthy solution, the Viaggia Trento journey planner replies with a bike-sharing service, whose racks are close to both her source and destination places. At this point, to know whether there are bikes available, the travel assistant continues its execution and identifies the bike-sharing service available in Trento, namely e-motionFootnote 15. It selects its fragment, whose execution allows the application to get information about the available bikes at the closest bike-sharing racks. The result is shown in the right-side screenshot in Fig. 12. Three bikes out of 11 are still available at the rack closest to Sara (first element in the result list). The e-motion bike-sharing service does not allow for the booking of bikes, so the execution of ATLAS stops here. Global journey organization use case. Paolo must organize his work journey from Trento to Torino. The relevant screenshots for his journey are reported in Fig. 13. Screenshots of the ATLAS chat-bot – Global Journey Organization In this case, the travel assistant opts for a global planning solution served by the Rome2Rio global journey planner. The travel alternatives found are shown to Paolo in the central screenshot in Fig. 13.
Different alternatives are available (e.g., rideshare, bus, train, etc.). Paolo selects the rideshare solution, which is also the least expensive. It is provided by the BlaBlaCar ride-sharing service. Further details about the selected solution are shown to Paolo, as in the right-side screenshot in Fig. 13. We highlight here that, to continue with the booking of the ride-share solution, he must be subscribed to the BlaBlaCar service. These execution examples exhibit two important aspects of our approach. Firstly, they show its bottom-up nature, where mobility services' functionalities climb the domain objects' hierarchy (refer to Fig. 3) up to the user process where they are executed. Secondly, this happens in a way that is completely transparent for the user, who interacts with only one application. ATLAS evaluation To evaluate ATLAS, both in terms of effectiveness and efficiency, we have run a set of experiments. The tests are done on real-world problems that were generated by randomly choosing an origin and a destination point. The specification of ATLAS used to evaluate it contains 14 domain object models, 17 fragment models and 12 types of domain properties. We ran ATLAS using a dual-core CPU running at 2.7 GHz, with 8 GB of memory. To show its feasibility, we evaluate the following aspects: (i) how long it takes to wrap up real services as domain objects; (ii) how much automatic refinement (service selection and composition) affects the execution of the travel assistant. To answer the first point, and based on the experience acquired during the development of ATLAS, we can state the following. To wrap a real service as a domain object, the developer needs (i) to master the domain objects modeling notation and (ii) to understand the service behavior, its functionalities, its input/output data format and how to query it. Wrapping time clearly differs between experienced and non-expert developers. From our analysis, it ranges from 4 to 6 hours for services of average complexity. Moreover, it is also relevant to note that this activity is done only once: after its wrapping, the service is seamlessly part of the approach and exploited for automatic composition and refinement. To answer the second point, we collected both the adaptation and the mobility services' execution statistics, to understand how long they take, on average, to be executed. To evaluate the automatic refinement, we carried out an experiment in which we considered 10 runs of ATLAS handling various end-users' requests. We collected adaptation data such as the number of adaptation cases, their complexity and the time required to generate adaptation solutions. For each run, more than 150 refinement cases were generated. Figure 14 shows the distribution of problem complexity considering the 10 runs. Distribution of Problems Complexity The complexity of an adaptation problem is calculated as the total number of transitions in the state transition system representations of the domain properties and fragments present in the problem. For simplicity, in the graph we aggregated the problem complexities into ranges of 20. The majority of the problems have a complexity either between 0 and 19 transitions or between 40 and 59 transitions. Notice that the occurrence of complex problems (complexity ranging from 80 to 100 transitions) is relatively rare (in this real-world battery of tests). Figure 15 shows the percentage of refinement problems solved within a certain time.
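The complexity measure used in Fig. 14 can be reproduced with a few lines of code; the following is our own reading of the description above (total number of transitions across the STS encodings involved in a problem), not the actual evaluation harness.

```python
def problem_complexity(fragment_stss, property_stss):
    """Total number of transitions over the fragments and domain properties of a problem."""
    return sum(len(sts["transitions"]) for sts in list(fragment_stss) + list(property_stss))

# Hypothetical problem: one two-transition fragment plus one one-transition property.
fragments = [{"transitions": [("s0", "a", "s1"), ("s1", "b", "s2")]}]
properties = [{"transitions": [("l0", "e", "l1")]}]
print(problem_complexity(fragments, properties))   # 3
```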
We can see that, for all the runs, 93% of problems are solved within 0.2 s. Only 3% of the problems require more than 0.5 s to be solved, and the worst case is anyhow below 1.5 s. Percentage of problems solved within time t To measure how much automatic refinement influences the execution of ATLAS, we compared the data about the time required for adaptation with the response time of real-world services wrapped in ATLAS. Figure 16 relates the (average) time required to solve a composition problem to the problem complexity. Trend of the Adaptation Time The average time is computed considering in the 10 runs all the refinement problems having the same complexity. As expected, problems with higher number of transitions (and hence the most complex planning domain) take more planning time than problem with less complexity. Figure 17, instead, relates to the (average) response time of (a subset of) real mobility services, which are part of ATLAS. (Average) Services Execution Time We can notice that, in the worst case, the adaptation requires a time close to 1.5 seconds, while the services response time ranges from 0.23 to 3.20 seconds. Moreover, the adaptation takes more time for the most complex problems that, however, are the less frequent to be executed. We can argue that the automatic refinement responsiveness is equivalent to that of mobility services. Eventually, extended experimental results have been obtained in [16] where the presented approach has been used to realize an application in the IoT domain, where devices (e.g., sensors and actuators) act as service providers. To analyze the scalability of the approach, we measured the overall execution time of the application by considering up to 100 devices. Figure 18 shows the execution times (expressed in seconds) when varying the number of device instances. We found that the execution time values vary within a narrow interval, i.e., from 1.96 to 2.10 seconds. Scalability of the approach In conclusion, these results demonstrate the effectiveness and the efficiency of our approach when applied to a real-world complex scenario. Lifecycle of the design for adaptation of service-based applications In this section, we illustrate the overall lifecycle that we envisage for modeling and executing adaptive service-based applications, as depicted in Fig. 19. It gives a complete overview of the different perspectives of the approach (i.e., modelling, adaptation, interaction), the potentially involved actors (i.e., platform provider, service providers, end-users) and an abstraction of the main activities and artifacts. Overall design for adaptation process In the following, each subsection is devoted to a particular perspective of the overall lifecycle (i.e., a row in Fig. 19). For each perspective, we also highlight the view of the different involved actors (i.e., a column in Fig. 19). The modeling perspective From a modeling perspective, each actor has a different view on the models of the application and is differently involved in its development and/or operation. Platform provider view. The Platform Provider, with his team, is in charge to realize, maintain and provide to third parties a comprehensive platform allowing them to build and execute adaptive service-based applications on top of it. The design of the foundations for realizing adaptive service-based applications is made by the two models we have introduced in Section 4, namely the Domain model and the Domain Objects model. 
Given a specific domain (e.g., mobility), the domain model is specified by domain experts and it describes the operational environment of the application (see Domain Analysis activity and Domain Specification document in Fig. 19). Furthermore, another important contribution from domain experts is an accurate analysis of the available services that are part of the targeted domain (see Real Service Analysis activity in Fig. 19). As outcome of the analysis, the domain experts release a high level description of the features, behavior, usage and offered functionalities of the analyzed services (see Services Specification document in Fig. 19). The domain model and the services analysis constitute the input for the activity of defining the domain object model, accomplished by the application's developers (see Real Services Wrapping as Domain Objects activity in Fig. 19). In other words, developers wrap-up the services identified by domain experts as the concrete implementations of the abstract concepts in the domain model. This allows us also to overcome the typical mismatch among services' interfaces. The domain model and the domain objects models contribute to enrich the application knowledge base where they are stored (see Domain Objects and Domain Model artifact in Fig. 19). We highlight that the activity of wrapping services as domain objects (or defining new ones) is not executed only once, and certainly not only during the initial design of an application. To the contrary, it is a continuous running activity, due to the continuous discovery and availability of new services (see the loop arrows on the Real Services Analysis and Domain Analysis activities in Fig. 19). Moreover, this activity can be performed as a collective co-development process [28], in a crowd-sourcing style [29], where each developer contributes to add new interesting services, thus enriching the application knowledge base. For these reasons we say that our approach supports the continuous development of service-based adaptive applications. Service providers view. The role played by service providers is that of using and exploiting the tools, the engines and the models provided by the platform, in order to define, develop and execute their own service-based applications on top of it, such as ATLAS. This can be done by selecting and customizing the already available domain objects (see Domain Objects Selection, Customization & Modeling activity in Fig. 19) and by defining new value-added services as domain objects (see Value-added Services artifact in Fig. 19), together with the corresponding new domain concepts they implement. Also the domain objects defined by service providers can be stored in the application knowledge base and made available to the outside. This way they contribute to the continuous development of adaptive by design services and corresponding applications. Moreover, service providers can decide to develop and release their newly defined applications (see Service-based Application artifact in Fig. 19), by using whatever technologies. End-users view. End-users are the final beneficiaries of the deployed service-based applications. Different application instances will be instantiated for different users (see Application Instances artifact in Fig. 19) and each instance will be characterized by its own network of domain objects instances, dynamically raised from the execution of adaptable process. 
The domain objects network is made by instances of the domain objects corresponding to the services effectively exploited by the user. The adaptation perspective In this section we describe the adaptation perspective in the lifecycle of our approach, as depicted in Fig. 19–adaptation layer. Platform provider view. The platform provider must supply all the tools and enablers (see Adaptation Tools artifact in Fig. 19) allowing the platform users (i.e., service providers) to define adaptive applications on top of it, as well as applications to effectively perform the adaptation, when executed. In other words, the adaptation mechanisms and strategies used by the platform (see Adaptation Mechanisms and Strategies Definition activity in Fig. 19) must be exposed in such a way that external users can benefit from and exploit them. The platform provider must also provide to the platform's users a way to use the available adaptation tools, allowing them to understand and exploit the adaptation techniques, when defining their applications on top of the platform. In conclusion, other adaptation approaches can be exploited, as an alternative or in addition to the AI-based planning approach. Service providers view. Different service providers can exploit the platform for defining their own applications or new value-added services. They have just to configure the adaptation mechanisms provided by the platform (see Adaptation Mechanisms & Strategies Configuration activity in Fig. 19). As a result, service providers will be able to release adaptive service-based applications (see Adaptive Service-based Applications artefact in Fig. 19) that can be customized and executed on top on the platform. End-users view. The end-users use the available adaptive applications. They effectively enact the adaptation techniques (see Adaptation Enactment activity in Fig. 19). Indeed, adapted application instances (see Fig. 19) are dynamically created, customized and run over their requirements, based on their applications usage. This happens thanks to the adaptation mechanisms (e.g., local and refinement). Once the specific user execution environment is known, appropriate services can be selected, composed and exploited to satisfy the different user's goals. The interaction perspective From the operation and usage perspective of the application, each actor differently interacts with it (see the interaction level in Fig. 19). Platform provider view. The platform provider, together with his team, is in charge of realizing the platform and its enablers, and then using it to realize and provide different adaptive service-based applications or simply adaptive services. In order to allow external service providers to exploit these applications through the platform, the platform provider should make available all the tools, the modeling environment and languages, the access to the different engines running in the platform, through an access console (see the Platform Console component in Fig. 19). Service providers view. The service providers play a double role. From one side, they act as platform users. Indeed, they use the platform (i.e., its tools, enablers, engines, services) as a third-party service, or a PaaS (i.e., a platform as a service). To this aim, service providers access to and interact with the platform (see the Back-end Applications Services & Tools component in Fig. 19). From another side, service providers can decide to release their value-added services as applications. 
To this aim and from an interaction point of view, they can decide about the technologies to use for developing their applications (e.g., mobile apps, web applications) and also define the corresponding user interfaces (see the Multi-channel Application Front-end component in Fig. 19). While for the back-end of their applications service providers exploit the platform, for the front-end they are independent from the platform and its console. End-users view. The end-users, finally, are not aware of the platform itself and exploit it in a completely transparent way. End-users just interact with the available applications through their interfaces, also using different devices, such as their smartphones, laptop, tablet and so on, depending on the specific technologies through which the service providers released their applications (see the Multi-channel Application Instances component in Fig. 19). Many modern software systems are increasingly required to offer continuous services [30, 31]. Traditional software maintenance supports software evolution by providing updates that are applied off-line: the system is shut down, updated, and restarted. This solution, however, is not applicable when the system management must be carried out at runtime. This need has motivated two parallel and independent approaches. Software engineers have started conceiving self-adaptive software systems [32, 33] that is, systems able to exploit internal capabilities to diagnose problems or changes in the context, and react accordingly. The advent of virtualized computing resources has also fostered DevOps [34] principles that suggest the idea of continuous evolution and release through the strict collaboration between development and operations. However, both solutions intrinsically embed some weaknesses. Conceiving a purely self-adaptive system means that any possible problem or change should be foreseen beforehand; otherwise the system would not be able to react. While self-adaptation may be extremely effective at solving specific problems, widening their scope can be problematic. The analysis required to foresee potential issues may be expensive, and not always feasible. In contrast, focusing on rapidly changing implementations and on their automated deployment imposes continuous changes, even when they are not required, and can have severe consequences on the quality of released software. Self-adaptation refers to the ability of a system to autonomously adapt at runtime, based on adaptation models, to maintain its non-functional requirements, by reacting to changes in the context it operates in [32, 33]. However, the increased interdependencies between software components and the complexity of execution contexts make the task of fully defining a priori adaptation need and solutions more difficult. A variety of runtime adaptation approaches have been proposed in the literature. Within Dynamic Software Product Lines (DSPL) [11], the notion of software families, used to refer to common and reusable software assets [35, 36] is combined with predefined feature models that specify alternative variations that can be used for adaptation. These solutions have also been combined with aspect-oriented modelling methods for expressing self-adaptive systems at design time by separating implementation concerns [37, 38] and enable both design and runtime adaptations to meet new requirements [38]. Rule-based approaches have also been proposed [39–41] for defining adaptations. 
Cutting points in software models are identified at design time and rules are used to capture adaptations in terms of actions to take at different cutting points. Context-oriented programming [42] has also been suggested as a paradigm for programming adaptable systems. In this case, adaptation relies on a pool of code variants chosen according to a predefined program context. Many languages have been extended (e.g., Lisp, Python, Ruby, Java) to integrate the notion of code fragments (e.g., methods or functions) that can be specialized with respect to each possible context. Adaptations have also been defined as (rule-based) mechanisms that allow the system to pass from one implementation logic, expressed in terms of behavior models, to another [39]. Similar solutions have been proposed for Mode Automata [43], Featured Transition Systems [44], and Labelled Transition System [45, 46] models. More recently, Artificial Intelligence planning frameworks have also been proposed [26, 47], in combination with state transition models of system behaviors, to address software adaptation in terms of a classical planning problem. In our approach, we also make use of AI planning to realize automated service compositions. Similarly, in [48] the authors automatically realize choreography-based service-oriented systems by exploiting the CHOReVOLUTION approach [49]. In particular, the work proposed in [48] represents the concrete implementation of a real case study in the mobility domain employing the CHOReVOLUTION synthesis process, which allows for realizing dynamic choreographies via distributed coordination of services. In the context of this work, we must also discuss microservices and the microservices architectural style [8]. Most existing work on microservices focuses on general architectural principles and migration guidelines [10, 50, 51]. Very few works propose self-adaptation solutions in this context; Sampaio et al. [52] propose an approach to optimize microservice-based applications at runtime. All the above-mentioned approaches for adaptation rely on the underlying assumption of a closed-world context and system. Consequently, adaptations can be pre-defined (e.g., rule-based models for adaptations are developed at design time) and any dynamic change in the software system components and/or functionality would require developers' intervention. Most approaches also provide methods where the implementation and adaptation logic are not clearly separated. This makes the systems' design much more complex and the runtime adaptation execution less flexible in managing dynamic context changes. Our approach aims at providing the following contributions in the area of self-adaptive software systems: (i) definition of models and programming paradigms for the development of software systems that are adaptable "by design", whereby runtime adaptation is not just an exception-handling mechanism but an intrinsic characteristic of the system; (ii) development of adaptation mechanisms and strategies for identifying the best runtime adaptations, without modifying the implementation logic, to make software systems resilient to changes whilst preserving their qualities.

Critical discussion
We hereby discuss a set of limitations.

Data-driven composition requirements. Currently, composition requirements are expressed in terms of goals on abstract activities. In particular, they reflect functional properties of services, allowing the definition of control-flow composition requirements.
As future work, we plan to define an extension of the domain properties so as to consider data variables related to context states.

Monitoring of unexpected events. It may happen that the execution of service-based applications is affected by unexpected events coming from the context that should be handled. These events are not foreseen by domain experts at design time; they are triggered by the operational context. In the current version of our approach, we do not deal with the monitoring of context events. This limitation can be overcome by extending our approach with existing approaches [20] dealing with the monitoring of evolving contexts.

Further adaptation mechanisms and strategies implementation. We highlight that while the Adaptation Engine implements all the adaptation mechanisms and strategies reported in Section 5.1, in the implementation of our design for adaptation approach we currently handle the refinement mechanism. As future work we plan to extend the approach to the management of the other mechanisms and strategies. However, overcoming this limitation requires an implementation effort rather than an extension of the approach, which already includes all the required constructs to handle the local and compensation adaptation mechanisms (e.g., preconditions, effects, goals).

Users involvement and flexible adaptations. When executing ATLAS, we can notice that the selection of the proper services is transparent to the user. On the one hand, the service selection is indirectly affected by the user preferences. Obviously, it can also happen that there are no solutions satisfying the user's preferences, or that the provided service composition represents an undesired outcome for the user. However, this strictly relates to the availability of services in the application, too. On the other hand, the indirect involvement of users in the service selection and composition tasks can be seen as a limitation of the approach. Nevertheless, it can be overcome in different ways, for instance by considering all the different adaptation solutions that might satisfy a specific adaptation need, if more than one solution is available, and involving the user in the selection of the preferred one.

QoS-driven service selection and composition. Although fragment discovery and selection is currently functional, the presented approach has been recently extended to include a QoS-driven service selection and composition [53]. Promising results have been obtained in the IoT scenario of [16].

Threats to validity
A threat to the internal validity of our approach is represented by the Process Engine, which currently operates in a centralized manner. Obviously, and this is part of our future work directions, it should evolve to better deal with the execution of applications running in distributed environments. A threat to the external validity is that the presented results have been obtained on a set of case studies modeled by us, with the support of a group of developers also comprising experts in both the mobility and IoT domains. To increase the representativeness of the input models (i.e., domain model and domain object model) to our approach, further domain experts and software architects should be involved in a wider experimentation. A second threat to the external validity is that each service (or thing) needs to be wrapped up as a domain object to be available in the application.
However, we plan to extend the approach to support the automated wrapping of services/things as domain objects, thus also enabling the definition of new goal types at runtime.

Conclusion and future work
The design for adaptation approach presented in this article is a proposal to solve the current open issues related to the modeling and execution of adaptive service-based applications. Its aim is to provide a complete solution for services management and exploitation, while considering the evolving nature of the environments in which they operate. Thanks to this general approach, we can facilitate services integration and interoperability, via service-based adaptive applications, thus better exploiting their functionalities and meeting users' needs. By applying our approach, a novel ecosystem of customizable services that are easily personalized in different contexts can be designed, deployed, adapted, and made available to the interested stakeholders. Indeed, it offers a lightweight model with respect to the existing languages for service modeling and adaptation, and it can be implemented with any object-oriented language. As already anticipated, our future work agenda includes the following tasks: (i) studying the usability of our approach by exposing the defined models and tools to users with different levels of experience; (ii) experimenting with the approach on real applications coming from industrial experience; and (iii) introducing automation in the initial activity of the design for adaptation process, by devising a technique to wrap up services/things into domain objects. The implementation of the ATLAS travel assistant framework is available at https://github.com/das-fbk/ATLAS-Personalized-Travel-Assistant. A supporting video illustrating the main features and its live demonstration can be found at: https://vimeo.com/357367106.

Notes
http://www.programmableweb.com
For simplifying the graphical representation of complex applications made by different interconnected domain objects, through this article we draw domain properties as part of domain objects.
https://www.blablacar.it/
We remark that more complex alternatives of our scenario can be modeled within our approach. In this article we use a trivial but exhaustive example to highlight the features of the approach. For presentation purposes and without loss of generality, we report only portions of the processes involved in the scenario.
For each fragment, we specify its name and the domain object which it belongs to (e.g., fragmentName@domainObjectName).
https://www.rome2rio.com
https://aws.amazon.com/iot/
https://www.jbpm.org/
https://camunda.org/
https://www.activiti.org/
To see the travel assistant in action (both demonstrator and chat-bot), or simply inspect its full specification, one can freely download ATLAS at: https://bit.ly/2V2JNy8
http://www.google.com/transit
http://www.smartcommunitylab.it/apps/viaggia-trento/
https://api.tfl.gov.uk/
https://www.citybik.es/
http://www.provincia.tn.it/bikesharing

Abbreviations
IoS: Internet of services; IoT: Internet of things; IoP: Internet of people; APFL: Adaptive pervasive flow language; STS: State transition systems; AI: Artificial intelligence; HOAA: Higher order abstract activity; ATLAS: A world-wide personAlized TraveL AssiStant; DSPL: Dynamic software product lines

References
Pallis G. Cloud computing: The new frontier of internet computing. IEEE Internet Comput. 2010; 14(5):70–73.
Moreno-Vozmediano R, Montero RS, Llorente IM. Key challenges in cloud computing: Enabling the future internet of services. IEEE Internet Comput. 2013; 17(4):18–25.
Commission E. Next Generation Internet initiative. 2016. https://ec.europa.eu/digital-single-market/en/policies/next-generation-internet. Accessed 19 Mar 2020. Group C-ETPX-E. Future Internet Strategic Research Agenda, Ver. 1.1. 2010. https://ec.europa.eu/programmes/horizon2020/en/h2020-section/future-internet. Accessed 19 Mar 2020. Bouguettaya A, Singh MP, Huhns MN, Sheng QZ, Dong H, Yu Q, Neiat AG, Mistry S, Benatallah B, Medjahed B, Ouzzani M, Casati F, Liu X, Wang H, Georgakopoulos D, Chen L, Nepal S, Malik Z, Erradi A, Wang Y, Blake MB, Dustdar S, Leymann F, Papazoglou MP. A service computing manifesto: the next 10 years. Commun ACM. 2017; 60(4):64–72. Baresi L, Nitto ED, Ghezzi C. Toward open-world software: Issue and challenges. IEEE Comput. 2006; 39(10):36–43. Issarny V, Georgantas N, Hachem S, Zarras AV, Vassiliadis P, Autili M, Gerosa MA, Hamida AB. Service-oriented middleware for the future internet: state of the art and research directions. J Internet Serv Appl. 2011; 2(1):23–45. Lewis J, Fowler M. Microservices in a Nutshell. 2014. https://www.thoughtworks.com/insights/blog/microservices-nutshell. Accessed 19 Mar 2020. Newman S. Building Microservices – Designing Fine-Grained Systems: O'Reilly Media; 2015. Accessed 19 Mar 2020. Taibi D, Lenarduzzi V, Pahl C. Continuous architecting with microservices and devops: A systematic mapping study. In: Cloud Computing and Services Science - 8th International Conference, CLOSER 2018, Revised Selected Papers. Springer: 2018. p. 126–51. Hinchey M, Park S, Schmid K. Building dynamic software product lines. IEEE Comput. 2012; 45(10):22–26. Bucchiarone A, De Sanctis M, Marconi A, Pistore M, Traverso P. Design for adaptation of distributed service-based systems. In: Service-Oriented Computing - 13th International Conference, ICSOC 2015, Proceedings: 2015. p. 383–93. https://doi.org/10.1007/978-3-662-48616-0_27. Bucchiarone A, De Sanctis M, Marconi A, Pistore M, Traverso P. Incremental composition for adaptive by-design service based systems. In: IEEE International Conference on Web Services, ICWS 2016: 2016. p. 236–43. https://doi.org/10.1109/icws.2016.38. Marchau V, Walker W, van Duin R. An adaptive approach to implementing innovative urban transport solutions. Transp Policy. 2008; 15(6):405–12. Bucchiarone A, Cappiello C, Nitto ED, Kazhamiakin R, Mazza V, Pistore M. Design for adaptation of service-based applications: Main issues and requirements. In: Service-Oriented Computing. ICSOC/ServiceWave 2009 Workshops - International Workshops, ICSOC/ServiceWave 2009, Revised Selected Papers: 2009. p. 467–76. https://doi.org/10.1007/978-3-642-16132-2_44. Alkhabbas F, De Sanctis M, Spalazzese R, Bucchiarone A, Davidsson P, Marconi A. Enacting emergent configurations in the iot through domain objects. In: Service-Oriented Computing - 16th International Conference, ICSOC, Proceedings: 2018. p. 279–94. https://doi.org/10.1007/978-3-030-03596-9_19. Eberle H, Unger T, Leymann F. Process fragments. In: On the Move to Meaningful Internet Systems: OTM 2009, Confederated International Conferences, CoopIS, DOA, IS, and ODBASE 2009, Vilamoura, Portugal, November 1-6, 2009, Proceedings, Part I. Springer: 2009. p. 398–405. Sirbu A, Marconi A, Pistore M, Eberle H, Leymann F, Unger T. Dynamic composition of pervasive process fragments. In: IEEE International Conference on Web Services, ICWS 2011, Washington, DC, USA, July 4-9, 2011: 2011. p. 73–80. https://doi.org/10.1109/icws.2011.70. Bucchiarone A, Lluch-Lafuente A, Marconi A, Pistore M. 
A formalisation of adaptable pervasive flows. In: Web Services and Formal Methods, 6th International Workshop, WS-FM, Revised Selected Papers: 2009. p. 61–75. https://doi.org/10.1007/978-3-642-14458-5_4. Saralaya S, D'Souza R. A review of monitoring techniques for service based applications. In: 2nd International Conference on Advanced Computing, Networking and Security, Mangalore, India, December 15-17: 2013. p. 96–101. https://doi.org/10.1109/adcons.2013.18. Guermah H, Fissaa T, Hafiddi H, Nassar M, Kriouile A. Context modeling and reasoning for building context aware services. In: ACS International Conference on Computer Systems and Applications, AICCSA 2013: 2013. p. 1–7. https://doi.org/10.1109/aiccsa.2013.6616439. Bucchiarone A, Marconi A, Pistore M, Raik H. A context-aware framework for dynamic composition of process fragments in the internet of services. J Internet Serv Appl. 2017; 8(1):6–1623. Bertoli P, Pistore M, Traverso P. Automated composition of web services via planning in asynchronous domains. Artif Intell. 2010; 174(3-4):316–61. Raik H, Bucchiarone A, Khurshid N, Marconi A, Pistore M. Astro-captevo: Dynamic context-aware adaptation for service-based systems. In: Eighth IEEE World Congress on Services, SERVICES 2012, Honolulu, HI, USA, June 24-29, 2012: 2012. p. 385–92. https://doi.org/10.1109/services.2012.14. Marconi A, Pistore M, Traverso P. Automated composition of web services: the ASTRO approach. IEEE Data Eng Bull. 2008; 31(3):23–26. Bucchiarone A, Marconi A, Mezzina CA, Pistore M, Raik H. On-the-fly adaptation of dynamic service-based systems: Incrementality, reduction and reuse. In: Service-Oriented Computing - 11th International Conference, ICSOC, Proceedings: 2013. p. 146–61. https://doi.org/10.1007/978-3-642-45005-1_11. Bucchiarone A, De Sanctis M, Marconi A. ATLAS: A world-wide travel assistant exploiting service-based adaptive technologies. In: Service-Oriented Computing - 15th International Conference, ICSOC 2017, Proceedings: 2017. p. 561–70. https://doi.org/10.1007/978-3-319-69035-3_41. Deck M, Strom M. Model of co-development emerges. Res-Technol Manag. 2002; 45(3):47–53. Estellés-Arolas E, González-Ladrón-de-Guevara F. Towards an integrated crowdsourcing definition. J Inf Sci. 2012; 38(2):189–200. Shahin M, Babar MA, Zhu L. Continuous integration, delivery and deployment: A systematic review on approaches, tools, challenges and practices. IEEE Access. 2017; 5:3909–43. Chen L. Microservices: Architecting for continuous delivery and devops. In: IEEE International Conference on Software Architecture, ICSA 2018: 2018. p. 39–46. https://doi.org/10.1109/icsa.2018.00013. In: Cheng BHC, de Lemos R, Giese H, Inverardi P, Magee J, (eds).Software Engineering for Self-Adaptive Systems [outcome of a Dagstuhl Seminar]. Lecture Notes in Computer Science, vol. 5525: Springer; 2009. de Lemos R, Giese H, Müller HA, Shaw M, (eds).Software Engineering for Self-Adaptive Systems, 24.10. - 29.10.2010. Dagstuhl Seminar Proceedings, vol. 10431. Germany: Schloss Dagstuhl - Leibniz-Zentrum für Informatik; 2010. Bass L, Weber I, Zhu L. DevOps: A Software Architect's Perspective: Addison-Wesley; 2015. Cubo J, Gámez N, Fuentes L, Pimentel E. Composition and self-adaptation of service-based systems with feature models. In: Safe and Secure Software Reuse - 13th International Conference on Software Reuse, ICSR 2013, Proceedings: 2013. p. 326–42. https://doi.org/10.1007/978-3-642-38977-1_25. Murguzur A, Trujillo S, Truong HL, Dustdar S, Ortiz Ó,., Sagardui G. 
Run-time variability for context-aware smart workflows. IEEE Softw. 2015; 32(3):52–60. Popovici A, Alonso G, Gross TR. Just-in-time aspects: efficient dynamic weaving for java. In: AOSD: 2003. https://doi.org/10.1145/643603.643614. Parra C, Romero D, Mosser S, Rouvoy R, Duchien L, Seinturier L. Using constraint-based optimization and variability to support continuous self-adaptation. In: Proceedings of the ACM Symposium on Applied Computing, SAC 2012: 2012. p. 486–91. https://doi.org/10.1145/2245276.2245370. Ehrig H, Ermel C, Runge O, Bucchiarone A, Pelliccione P. Formal analysis and verification of self-healing systems. In: Fundamental Approaches to Software Engineering, 13th International Conference, FASE 2010, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2010, Proceedings: 2010. p. 139–53. https://doi.org/10.1007/978-3-642-12029-9_10. Yu J, Sheng QZ, Swee JKY. Model-driven development of adaptive service-based systems with aspects and rules. In: Web Information Systems Engineering - WISE 2010 - 11th International Conference, Hong Kong, China, December 12-14, 2010. Proceedings: 2010. p. 548–63. https://doi.org/10.1007/978-3-642-17616-6_48. Hussein M, Han J, Yu J, Colman A. Enabling runtime evolution of context-aware adaptive services. In: 2013 IEEE International Conference on Services Computing: 2013. p. 248–55. https://doi.org/10.1109/scc.2013.77. Hirschfeld R, Costanza P, Nierstrasz O. Context-oriented programming. J Object Technol. 2008; 7(3):125–51. Maraninchi F, Rémond Y. Mode-automata: About modes and states for reactive systems. In: Programming Languages and Systems - ESOP'98, 7th European Symposium on Programming, Held as Part of the European Joint Conferences on the Theory and Practice of Software, ETAPS'98, Proceedings: 1998. p. 185–99. https://doi.org/10.1007/bfb0053571. Cordy M, Classen A, Heymans P, Legay A, Schobbens P. Model checking adaptive software with featured transition systems. In: Assurances for Self-Adaptive Systems - Principles, Models, and Techniques: 2013. p. 1–29. https://doi.org/10.1007/978-3-642-36249-1_1. Schaefer I, Poetzsch-Heffter A. Using abstraction in modular verification of synchronous adaptive systems. In: Workshop "Trustworthy Software" 2006, May 18-19, 2006, Saarland University, Saarbrücken, Germany. Germany: Internationales Begegnungs- und Forschungszentrum fuer Informatik (IBFI), Schloss Dagstuhl: 2006. Zhang J, Cheng BHC. Using temporal logic to specify adaptive program semantics. J Syst Softw. 2006; 79(10):1361–9. Marrella A. Automated planning for business process management. J Data Semant. 2019; 8(2):79–98. Autili M, Salle AD, Gallo F, Pompilio C, Tivoli M. A choreography-based and collaborative road mobility system for l'aquila city. Futur Internet. 2019; 11(6):132. Autili M, Salle AD, Gallo F, Pompilio C, Tivoli M. Chorevolution: Automating the realization of highly-collaborative distributed applications. In: Coordination Models and Languages - 21st IFIP WG 6.1 International Conference, COORDINATION 2019, Proceedings: 2019. p. 92–108. https://doi.org/10.1007/978-3-030-22397-7_6. Bucchiarone A, Dragoni N, Dustdar S, Larsen ST, Mazzara M. From monolithic to microservices: An experience report from the banking domain. IEEE Softw. 2018; 35(3):50–55. Francesco PD, Lago P, Malavolta I. Architecting with microservices: A systematic mapping study. Journal of Systems and Software. 2019; 150:77–97. Sampaio AR, Rubin J, Beschastnikh I, Rosa NS. 
Improving microservice-based applications with runtime placement adaptation. J Internet Serv Appl. 2019; 10(1):4–1430.
De Sanctis M, Spalazzese R, Trubiani C. Qos-based formation of software architectures in the internet of things. In: Software Architecture - 13th European Conference, ECSA 2019, Proceedings: 2019. p. 178–94.

Author information
Martina De Sanctis – Gran Sasso Science Institute, Computer Science department, Viale Francesco Crispi, L'Aquila, 67100, Italy.
Antonio Bucchiarone and Annapaola Marconi – Fondazione Bruno Kessler, Via Sommarive, 18, Trento, 38123, Italy.

Contributions
This manuscript is a contribution that originates from the doctoral thesis of M.D.S. All the authors contributed to the definition of the design for adaptation model for adaptive service-based applications. M.D.S. and A.B. carried out the implementation of the framework and of the travel assistant described in the case study. M.D.S. and A.B. wrote the manuscript with input from all authors. All authors read and approved the final manuscript. Correspondence to Martina De Sanctis.

Citation
De Sanctis, M., Bucchiarone, A. & Marconi, A. Dynamic adaptation of service-based applications: a design for adaptation approach. J Internet Serv Appl 11, 2 (2020). https://doi.org/10.1186/s13174-020-00123-6

Keywords
Service-based adaptive applications; Next generation internet; Design for adaptation; Incremental service composition; Automated and verifiable internet services and applications development
B. Parent • AE23815 Heat Transfer 2018
Heat Transfer Final Exam
Sunday June 10th 2018
NO NOTES OR BOOKS; USE HEAT TRANSFER TABLES THAT WERE DISTRIBUTED; ALL QUESTIONS HAVE EQUAL VALUE; FOR EACH PROBLEM STATE ALL ASSUMPTIONS; ANSWER ALL 6 QUESTIONS; LEAVING THE EXAMINATION ROOM ENDS YOUR EXAM.

Consider liquid water flowing over a flat plate of length $L=1$ m. The water has the following properties: $$ \rho=1000~{\rm kg/m^3},~~~c_p=4000~{\rm J/kgK},~~~\mu=10^{-3}~{\rm kg/ms},~~~k=0.6~{\rm W/m\cdot^\circ C} $$ Midway through the plate at $x=0.5~$m, you measure a heat flux to the surface of: $$ q''_{x=0.5~{\rm m}}=3181~{\rm W/m^2} $$ You also measure an average heat flux to the surface over the length of the plate of: $$ \overline{q''}=4500~{\rm W/m^2} $$ Knowing the latter, and knowing that the plate temperature is equal to $20^\circ$C, do the following: (a) Is the flow laminar or turbulent, or a mix of both? You must provide proof of this using the data provided. (b) What is the possible range of the freestream velocity $U_{\infty}$? (c) Find a relationship between $T_\infty$ and $U_\infty$.

Consider a journal bearing with a shaft diameter $D_i$ and a casing diameter $D_o$ as follows: The shaft rotates at a speed $\omega$ (in rad/s), and the oil has a density $\rho$ (in kg/m$^3$), a viscosity $\mu$ (in kg/ms), and a thermal conductivity $k$ (in W/mK). Knowing that there is heat generation inside the shaft of $S$ (in W/m$^3$) and that the temperature of the casing is $T_o$ (in $^\circ$C), do the following: (a) From the momentum equation, derive the velocity distribution within the oil as a function of $D_i$, $D_o$, $\omega$ and the distance from the casing, $y$. (b) From the energy equation, derive the temperature distribution within the oil as a function of $D_i$, $D_o$, $\omega$, $y$, $T_o$, $S$, $\mu$, and $k$.

After graduation, you are working for a natural gas power plant. In a natural gas power plant, the heat generated by burning the natural gas is used to produce high pressure steam. The high pressure steam then passes through a steam turbine generator, hence producing electrical power. One of the most important components of this type of power plant is the condenser located downstream of the turbine. The purpose of the condenser is to transform all of the steam coming out from the turbine into liquid water. The liquid water is afterwards directed to the burner, hence closing the cycle. Your first design project at the power plant consists of improving the performance of the condenser. The condenser is made of a multitude of pipes in which a cooling fluid is flowing. The cooling fluid temperature at the pipe entrance is 50$^\circ$C. To ensure that the cooling fluid flows rapidly enough, one pump is connected to each pipe. Each pipe has a length of 3 m, a diameter of 0.01 m, a relative wall roughness $e/D=0.02$, and should condense at least 0.03 kg/s of steam for the power plant to operate normally. Your task is to determine the minimum amount of power that should be given to each pump in order to obtain the desired amount of steam condensation. Your design should take into consideration the fact that the convective heat transfer coefficient of the cooling fluid may be off by as much as 30%. The saturation temperature and the latent heat of vaporization of the steam are $T_{\rm sat}=100^\circ$C and $\Delta H_{\rm vap}=2260$ kJ/kg, respectively.
The properties of the cooling fluid can be taken as $\rho=1000$ kg/m$^3$, $\mu=0.001$ kg/ms, $k=0.6$ W/m$^\circ$C, $c_p=4000$ J/kgK. Hint: Assuming a pump efficiency of 100%, it can be shown that the pump power is related to the bulk velocity inside the pipe through the following expression: $$ {\cal P}=\frac{\rho u_{\rm b}^3 \pi f L D}{8}$$ where $L$ is the length of the pipe, $D$ is the diameter of the pipe, $f$ the friction factor, and $u_{\rm b}$ the bulk velocity inside the pipe. I will give a bonus to those who can prove the latter from basic principles.

Consider a 0.01 m diameter sphere made of magnesium initially at a uniform temperature of 80$^\circ$C. The sphere is then immersed in a large pool of water with the water being still and at an initial temperature of 20$^\circ$C. Because of the gravitational force, the sphere accelerates towards the bottom of the pool and quickly reaches a constant velocity. Knowing that the drag coefficient of the sphere is 1.1, do the following: (a) When the sphere velocity becomes constant, find the velocity of the sphere with respect to the water. (b) Find the temperature at the center of the sphere after a time of 2 seconds. (c) Find the temperature on the surface of the sphere after a time of 2 seconds. (d) Find the amount of energy (in Joules) lost by the sphere to the water after a time of 2 seconds. Hints: (i) the buoyancy force is equal to the weight of the displaced fluid; (ii) the drag coefficient is equal to $C_D=F_{\rm drag}/ (\frac{1}{2} \rho_\infty u_\infty^2 A)$ with the frontal area $A=\pi R^2$ and $R$ the radius of the sphere.

Use the following data for magnesium and water:
Property | Water | Magnesium
$\rho$, kg/m$^3$ | 1000 | 1700
$c$, kJ/kgK | 4 | 1
$k$, W/m$^\circ$C | 0.6 | 171
$\mu$, kg/ms | 0.001 | --

Consider a rectangular fin with an insulated tip attached to a wall as follows: The fin is made of aluminum with a length $L=0.1~$m, a width $W=1$ m, and a thickness $t=0.03~$m. The wall temperature $T_0$ is fixed at $20^\circ$C. Some water vapor at a temperature $T_\infty=200^\circ$C is blown towards the fin. Because the fin temperature is less than the water vapor saturation temperature ($T_{\rm sat}=100^\circ$C), a thin layer of condensate forms all around the fin. Knowing that $h_{\rm condensate}$ can be assumed constant and equal to 500 W/m$^2$$^\circ$C, do the following: (a) Find the temperature of the fin at $x=L$. (b) Find $h$ (the convective heat transfer coefficient of the incoming water vapor) at $x=0$. (c) Find $h$ at $x=L$.

Use the following data for liquid water, water vapor, and aluminum:
Property | Liquid water | Water vapor | Aluminum
$\rho$, kg/m$^3$ | 1000 | 0.5 | 2700
$c_p$, kJ/kgK | 4 | 2 | 0.9
$k$, W/m$^\circ$C | 0.6 | 0.04 | 200
$\mu$, kg/ms | 0.001 | $2\times 10^{-5}$ | --

Hint: $h$ cannot be assumed constant.

Consider a micro satellite in the shape of a hollow sphere orbiting around the earth in space as follows: Electrical circuits located within the satellite generate power in the amount $q_{\rm gen}$ (in Watts). The temperature within either matter A or matter B cannot exceed 600 K for safety reasons. The incoming radiation heat flux from the sun varies between 0 and $q''_{\rm sun}=1200~$W/m$^2$. The radiation heat flux from the sun may reflect on adjacent solar panels and may thus reach the micro satellite from all directions. The thermal conductivities are $k_{\rm A}=0.5~$W/mK and $k_{\rm B}=0.2~$W/mK, while the contact conductance between matter A and matter B is $h_{\rm c}=24.68~$W/m$^2$K.
Knowing that the outer surface of the micro-satellite is a black body, and that the dimensions are $r_1=8~$cm, $r_2=9~$cm, $r_3=10~$cm, do the following: (a) Indicate where the maximum temperature will occur (i.e. the precise location within either matter A or matter B). (b) Find the maximum allowable $q_{\rm gen}$ that maintains the temperature within both matter A and matter B below 600 K. (c) Find the temperature on the outer surface of the satellite when the maximum temperature within either matter A or B is 600 K.
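For the pump-power expression quoted in the condenser problem above, a brief derivation sketch (assuming, as the hint does, a 100% efficient pump, and taking $f$ as the Darcy friction factor in the Darcy–Weisbach head-loss relation): the pump power equals the pressure drop across the pipe multiplied by the volumetric flow rate, $$ {\cal P}=\Delta p \, Q = \left( f\,\frac{L}{D}\,\frac{\rho u_{\rm b}^2}{2} \right) \left( u_{\rm b}\,\frac{\pi D^2}{4} \right) = \frac{\rho u_{\rm b}^3 \pi f L D}{8} $$ which recovers the hinted expression.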
Analytical and Bioanalytical Chemistry, August 2019, Volume 411, Issue 20, pp 5099–5113
Comparison of electrospray and UniSpray, a novel atmospheric pressure ionization interface, for LC-MS/MS analysis of 81 pesticide residues in food and water matrices
Joseph Hubert Yamdeu Galani, Michael Houbraken, Marijn Van Hulle, Pieter Spanoghe
First Online: 31 May 2019

In mass spectrometry, the type and design of the ionization source play a key role in the performance of a given instrument. Therefore, it is of paramount importance to evaluate newly developed sources for their suitability to analyze food contaminants like pesticide residues. Here, we carried out a head-to-head comparison of key extraction and analytical performance parameters of an electrospray ionization (ESI) source with a new atmospheric pressure ionization source, UniSpray (US). The two interfaces were evaluated in three matrices of different properties (coffee, apple, and water) to determine if multiresidue analysis of 81 pesticides by QuEChERS extraction and LC-MS/MS analysis could be improved. Depending on the matrix and irrespective of the chemical class, US provided a tremendous gain in signal intensity (22- to 32-fold in peak area, 6- to 7-fold in peak height), a threefold to fourfold increase in signal-to-noise ratio, a mild gain in the range of compounds that can be quantified, and up to twofold improvement of recovery. UniSpray offered linearity and precision of the analyses comparable to ESI, and did not affect the ion ratio. A gain in sensitivity of many compounds was observed with US, but in general, the two ionization interfaces did not show a significant difference in LOD and LOQ. UniSpray suffered less signal suppression; the matrix effect values were on average 3 to 4 times higher, and thus more favorable, than with ESI. With no effect on recovery efficiency, US improved the overall process efficiency 3 to 4 times more than ESI.

Keywords: Pesticide residues; Electrospray; UniSpray; Mass spectrometry; Matrix effects; Process efficiency

The online version of this article (https://doi.org/10.1007/s00216-019-01886-z) contains supplementary material, which is available to authorized users.

To gather surveillance data on the occurrence and background levels of both recognized and newly identified contaminants in foods, low limits of quantification (LOQs) are required in order to estimate human daily intake for risk assessment [1]. Therefore, to analyze compounds like pesticide residues in foods and beverages, there is a constant need for more precise and accurate methods and instruments. The ability to quantitatively determine trace levels of residues in samples is essential to monitor and preserve consumers' health in a precise and more effective way. Among the various techniques of analysis of pesticide residues in water and food items, liquid chromatography (LC) coupled by an atmospheric pressure ionization (API) source to tandem mass spectrometric (MS/MS) detection is the technique of choice, because it offers high throughput, selectivity, and sensitivity, as well as suitability for a wide range of compounds in various sample matrices [2, 3, 4]. It has been observed that the type and the design of an ionization source can have a significant influence on the performance of a bioanalytical method like LC-MS/MS [5].
Furthermore, several studies have demonstrated the differences on the ionization of specific classes of compounds and differences effects of the matrix, observed between different sources [6, 7, 8, 9, 10, 11]. It is therefore of high interest for LC-MS/MS pesticide residue analysis, to evaluate the performances of newly introduced ionization sources in order to highlight their benefits and limitations in comparison with the source that is most commonly applied, i.e., electrospray ionization (ESI) [12]. UniSpray (US) ionization or impactor ionization is a novel atmospheric ionization technique developed by Waters Corporation that makes use of a high-velocity spray, created from a grounded nebulizer impacting on a high-voltage target (stainless steel rod), to ionize analytes in a similar fashion to ESI but promotes extra droplet break-up and desolvation via additional Coandă and vortex effects [13]. Comparatively with ESI, US was proven more performant in analysis of various compounds. The US interface showed a fivefold increase in method sensitivity, with an improved signal intensity, linearity, and repeatability on various matrices in comparison with ESI, for the analysis of prostaglandins and thromboxanes [14]. Similarly, for 24 pharmaceutical and biological compounds, US above ESI improved the dynamic range of analytes at lower concentrations and the sensitivity of late eluting compounds [12]. The novel source US generates very similar spectra compared with ESI, predominantly producing protonated or deprotonated species, but improves the intensity of the MS signal by more than twofold on average. The differences in source design between ESI and US have no significant effect on the adduct formation (e.g., proton, sodium, potassium adducts) and up-front fragmentation [6]. However, little is known on the performance of US for routine multiresidue analysis of pesticides in different matrices, as compared with the current largely used ESI. Despite the numerous advantages of LC-API-MS/MS over other analytical techniques, the quantitative analysis of biological samples is complicated by the presence of matrix components that co-elute with the compound(s) of interest and can interfere with the ionization process in the mass spectrometer, causing ionization suppression or enhancement [15]. This phenomenon, called matrix effect (ME), was first described in 1993 [16] and until today, its mechanism is not fully understood. The ME is defined as the change in the signal intensity of an analyte in a matrix solution compared with the signal intensity in the corresponding solvent [17]. Matrix effects cause a compound's response to differ when analyzed in a biological matrix, with signal suppression or enhancement effects, and therefore, must be determined and quantified to ensure acceptable quantitative results in pesticide residue analysis. The extent of ME can be influenced by some instrumental parameters such as the ionization source [18] and ionization mode [7]. Differences were observed in ME percentages of US and ESI analysis of pharmaceutical and biological compounds from plasma and bile [12]. These unpredictable effects are a regular problem for API sources [15], so the ME of novel sources must be investigated for analysis of specific compounds like pesticides in various matrices. Besides, the ME is used to describe the analyte ionization efficiency, while the efficiency of separating analyte from the sample is measured by the recovery. 
The process efficiency (PE) then summarizes the efficiency of sample preparation (extraction recovery) and analyte ionization during LC-MS/MS analysis (ME). Hence, PE is the suitable parameter for assessing the overall performance of an analysis method [2]. Therefore, this study aimed at determining whether multiresidue analysis of pesticides in food and water on the same LC-MS/MS system can be improved with US, compared with the commonly used ESI. The selected active ingredients (a.i.) belong to largely used pesticide classes, i.e., insecticides, fungicides, herbicides, nematicides, and acaricides, and are a good representative selection for such a study because of their variable hydrophobic character and their different physicochemical properties. Matrices with different analytical challenges, textures, and physicochemical properties, and which are also largely consumed, were selected: a dry agricultural product (coffee), a fresh product (apple), and water. Key extraction and analytical performance parameters like signal intensity, signal-to-noise (S/N) ratio, linearity, accuracy, precision, relative abundance (ion ratio), range of a.i., extraction recovery, sensitivity, and ME, as well as process performance parameters like recovery efficiency (RE) and PE, were evaluated and compared.

Analytical grade reagents of above 99% purity were used in the experiments. UPLC-grade acetonitrile was procured from VWR Chemicals (Leuven, Belgium), and anhydrous magnesium sulfate, disodium hydrogen citrate sesquihydrate, trisodium citrate dihydrate, sodium chloride, and pesticide a.i. standards were purchased from Sigma-Aldrich (Bornem, Belgium). The 15-ml d-SPE tubes as well as Sep-Pak C18 cartridges were obtained from Waters (Milford, MA, USA). Water was produced locally through a Milli-Q purification system.

Sample collection and preparation
Raw coffee beans and apples were purchased in organic shops in Ghent, Belgium. Traces of epoxiconazole, imidacloprid, pyraclostrobin, thiamethoxam, and hexythiazox were found in blank coffee samples, as well as pyrimethanil in blank coffee and apple samples. They were used for correction of the corresponding signals obtained in spiked samples. Extraction and clean-up were performed using the QuEChERS method commonly used in the multiresidue analysis of food matrices. Approximately 50 g of sample was ground to powder or paste using a household mill equipped with a stainless steel knife (Krups, Fleurus, Belgium). Precisely 2 g of coffee powder or 10 g of apple paste was weighed into a 50-ml Teflon-capped centrifuge tube, 8 ml of Milli-Q water was added to the coffee powder, then 15 ml of acetonitrile was added to each sample, and the mixture was vigorously shaken for 1 min. A mixture of disodium hydrogen citrate sesquihydrate (0.75 g), trisodium citrate dihydrate (1.5 g), sodium chloride (1.5 g), and anhydrous magnesium sulfate (6 g) was added to the extract in the tube, which was agitated for 3 min at 300 rpm on a shaker (Edmund Bühler, Hechingen, Germany). The tube was centrifuged for 5 min at 10,000 rpm (Eppendorf, Leipzig, Germany) and the supernatant was collected. For clean-up of the coffee extract, 7 ml of the supernatant was pipetted into a 15-ml d-SPE tube packed with primary secondary amines (PSA) and octadecyl (C18). The content of the tube was then shaken for 1 min, centrifuged for 5 min at 3000 rpm, and the supernatant collected.
For LC-MS/MS analysis, 1 ml of the supernatant was diluted 10 times with Milli-Q water, and 2 ml of the diluted solution was sampled into a screw cap autosampler vial for chromatography analysis. For the other sample sets (pre-extraction spiked samples), before the step of addition of 15 ml of acetonitrile, samples were spiked at 0.01 mg/l with each pesticide standard. The spiked samples were left for 1 h at room temperature to allow pesticide absorption into sample before being subjected to the extraction, clean-up process, and analysis as described previously. For water samples, Sep-Pak cartridges were used for extracting the pesticides spiked in Milli-Q water [19]. Methanol (1 ml) and water (1 ml) were consecutively used to activate the cartridge before loading the sample. One liter of Milli-Q water sample was passed through the cartridge and pesticides were retained on the column. The pesticides were then desorbed with 10 mL of acetonitrile; the extract was diluted 10 times with Milli-Q water and sampled for chromatography analysis. The other water sample sets (pre-extraction spiked samples) were spiked at 0.01 mg/l with each pesticide standard before Sep-Pak cartridge extraction as described previously. Liquid chromatography tandem mass spectrometry analysis The protocol from Galani et al. [20] was followed. The equipment consisted of a Waters Acquity UPLC module coupled to a Waters Xevo TQD tandem triple quadrupole mass spectrometer, equipped with ESI or US ion source (Waters, Milford, MA, USA). Separation was carried out through a HSS T3 column (100 mm × 2.1 mm, 1.8 μm) (Waters) maintained at 40 °C. The injection volume was 10 μl; mobile phase A consisted of a 0.1% formic acid solution in water while mobile phase B was acetonitrile with 0.1% formic acid. The flow rate was set at 0.4 ml/min with a run time of 10 min. The separation started with an initial gradient of 98% mobile phase A for 0.25 min, followed by a linear gradient to 98% mobile phase B from 0.25 to 7 min which was maintained for 1 min. Then, a linear gradient was used to 98% mobile phase A and column was reconditioned for 1 min. The analyses were performed with US and ESI consecutively with less than 24-h interval gap between the two interfaces, with the parameters presented in Table 1. The ESI capillary position in relation to the mass spectrometer aperture as well as the US source protrusion of the capillary within the nebulizer tube and the vertical and horizontal position of the probe tip towards the metal rod were optimized for achieving best results. Analyses of pesticides were performed in positive ion mode, except for fludioxonil and 2,4-D, which were analyzed in negative ion mode. The analytes were monitored and quantified using multiple reaction monitoring (MRM). The optimization of the MS/MS conditions, identification of the precursor and product ions, and selection of the cone and collision voltages were performed with direct infusion of their individual standard solutions prepared at 1 mg/ml in acetonitrile/water (10/90). After the optimization of the collision cell energy, two different m/z transitions were selected for each analyte, one for quantification (QIT) and one for confirmation (CIT). The dwell time was calculated automatically. Parameters of acquisition method are summarized in Table 2. MassLynx 4.1 software (Waters, Milford, MA, USA) was used for the LC-MS/MS system control and data acquisition and analysis. 
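To make the gradient program described above easier to follow, here is a minimal sketch restating it as a timetable (the data layout and the `percent_b` helper are ours, for illustration only, and are not part of the MassLynx method; the timing of the return ramp and final re-equilibration is inferred from the stated 10-min run time):

```python
# Compact restatement of the LC gradient program described in the text.
# Mobile phase A: 0.1% formic acid in water; mobile phase B: acetonitrile + 0.1% formic acid.
GRADIENT = [  # (time_min, %A, %B), linear ramps between points
    (0.00, 98, 2),
    (0.25, 98, 2),   # initial hold at 98% A
    (7.00, 2, 98),   # linear ramp to 98% B
    (8.00, 2, 98),   # hold 98% B for 1 min
    (9.00, 98, 2),   # linear return to 98% A (inferred timing)
    (10.00, 98, 2),  # column re-equilibration
]

def percent_b(t_min: float) -> float:
    """Linearly interpolate %B at run time t_min (minutes)."""
    pts = [(t, b) for t, _a, b in GRADIENT]
    if t_min <= pts[0][0]:
        return pts[0][1]
    for (t0, b0), (t1, b1) in zip(pts, pts[1:]):
        if t_min <= t1:
            return b0 + (b1 - b0) * (t_min - t0) / (t1 - t0)
    return pts[-1][1]

if __name__ == "__main__":
    for t in (0.0, 0.25, 3.0, 7.0, 8.5, 10.0):
        print(f"t = {t:5.2f} min -> %B = {percent_b(t):5.1f}")
```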
Table 1 Parameters of the UniSpray and electrospray ionization sources. The parameters set for the two sources were: source temperature (°C), desolvation temperature (°C), US rod voltage/ESI capillary voltage (± 3 kV), cone gas flow (l/h), and desolvation gas flow (l/h).

Table 2 Parameters of the acquisition method for the LC-MS/MS analysis of the 81 pesticide active ingredients: precursor ion (m/z), cone voltage (eV), ionization mode, dwell time (s), product ion 1 (m/z), and collision energy 1 (eV) for each analyte. Analytes listed: Methiocarb, Fenpropimorph, Tebuthiuron, Pirimicarb, Thiodicarb, Prochloraz, Trifloxystrobin, Acetamiprid, Thimetoxam, Difenconazole, Pyrimethanil, Boscalid, Butachlor, Carbaryl, Dimethomorph, Hexaconazole, Propoxur, Spinosad A, Spinosad D, Spiroxamine, Thiabendazole, Thifensulfuron-methyl, Carbofuran, Dimethoate, Ethoprophos, Fenamiphos, Fenbuconazole, Metalaxyl, Metsulfuron methyl, Monocrotophos, Pendimethalin, Pyrazosulfuron-ethyl, Triazophos, Azoxystrobin, Bentazon, Bitertanol, Cadusafos, Chlorotoluron, Cymoxanil, Iprodione, Linuron, Oxamyl, Propanil, Tebuconazole, Terbutryn, Tiofanate-methyl, Kresoxim-methyl, Carbendazim, Diazinon, Imazalil, Metribuzin, Profenofos, Propiconazole, Pyrachlostrobin, Triadimenol, Terbufos, Thiacloprid, Penconazole, Pirimiphos-methyl, Tebufenozide, Spirodiclofen, Cyflufenamid, Temephos, 2,4-D, Cyanazine, Terbutylazine, Propazine, Atrazine, Simazine, Isoproturon, Fenoxycarb, Epoxiconazole, Benalaxyl, Hexythiazox. *Transition used for quantification (QIT).

Evaluation of the performance
Eight replicate injections of each sample were performed. To determine the linearity, five different concentrations of the stock solution (0.1, 0.05, 0.01, 0.005, 0.001 mg/l) were prepared by dilution with acetonitrile/water (10/90) to form a calibration curve. The signal intensity (peak area and peak height), S/N ratio, and relative abundance (ion ratio) of the QIT were calculated by the software. The sensitivity was evaluated by determining the limit of detection (LOD) and the limit of quantification (LOQ), which were statistically calculated based on the t99SLLMV method [21], by multiplying the standard deviation of the detected pesticide concentration at 0.01 mg/l from the eight replicates by 2.998 (for LOD) and 10 (for LOQ). The accuracy (percentage extraction recovery, %recovery) was calculated by dividing the recovered concentrations by the spiked concentration. Finally, the precision (percentage relative standard deviation, %RSD) was obtained by dividing the standard deviation by the average calculated concentration. Matrix effect was determined by post-extraction spike matrix comparison [2]. A set of blank samples was spiked after the procedure of pesticide extraction, at 0.01 mg/l, and thoroughly mixed. These post-extraction spiked samples were then diluted 10 times and analyzed as previously described. The peak area of the pesticide in solvent (A), the peak area of the pesticide in post-extraction spiked samples (B), and the peak area of the pesticide in pre-extraction spiked samples (C) were used to calculate the matrix effect (ME), recovery efficiency (RE), and process efficiency (PE) as follows [22]: $$ \mathrm{ME}\ \left(\%\right)=\mathrm{B}/\mathrm{A}\times 100 $$ $$ \mathrm{RE}\ \left(\%\right)=\mathrm{C}/\mathrm{B}\times 100 $$ $$ \mathrm{PE}\ \left(\%\right)=\mathrm{C}/\mathrm{A}\times 100=\left(\mathrm{ME}\times \mathrm{RE}\right)/100 $$ A value of 100% indicates that there is no absolute ME; if the value is above 100%, there is signal enhancement, and there is signal suppression if the value is < 100%.
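As an illustration of the performance calculations defined above, a minimal sketch of the LOD/LOQ and ME/RE/PE computations (the function names and the example values are ours, for illustration only; the original work used the instrument software and SPSS):

```python
import statistics

def lod_loq(replicate_concs):
    """t99SLLMV-style estimate: LOD = 2.998 * SD and LOQ = 10 * SD of the
    concentrations found in the eight replicates spiked at 0.01 mg/l."""
    sd = statistics.stdev(replicate_concs)
    return 2.998 * sd, 10.0 * sd

def matrix_effect(area_solvent, area_post_spike):
    """ME (%) = B / A * 100; >100 = signal enhancement, <100 = suppression."""
    return area_post_spike / area_solvent * 100.0

def recovery_efficiency(area_post_spike, area_pre_spike):
    """RE (%) = C / B * 100 (efficiency of the extraction step)."""
    return area_pre_spike / area_post_spike * 100.0

def process_efficiency(area_solvent, area_pre_spike):
    """PE (%) = C / A * 100 = (ME * RE) / 100."""
    return area_pre_spike / area_solvent * 100.0

# Hypothetical data for one analyte, for illustration only.
reps = [0.0094, 0.0101, 0.0097, 0.0105, 0.0092, 0.0099, 0.0103, 0.0096]  # mg/l
lod, loq = lod_loq(reps)
print(f"LOD = {lod:.4f} mg/kg   LOQ = {loq:.4f} mg/kg")

# A: solvent standard, B: post-extraction spike, C: pre-extraction spike (peak areas).
A, B, C = 12500.0, 4100.0, 3600.0
print(f"ME = {matrix_effect(A, B):.1f}%   RE = {recovery_efficiency(B, C):.1f}%   "
      f"PE = {process_efficiency(A, C):.1f}%")
```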
The number of times (fold) the US value was higher or lower than the ESI value was obtained by dividing each US value by its counterpart ESI value. To determine statistically whether US improved the performance of the analyses, the means of the different parameters were compared between US and ESI using a one-tailed paired Student's t test; p values less than 0.05, 0.01, and 0.001 were considered significant, highly significant, and very highly significant, respectively. The software SPSS Statistics 19.0 (IBM Corporation, NY, USA) was used.

For the tested concentration range (0.001 to 0.1 mg/l), a very highly significant difference (p = 0.000005) was observed between the values of US and ESI (Electronic Supplementary Material, ESM), but in both cases the r2 values were very good: they ranged from 0.9976 to 0.9999 with US, and from 0.9983 to 0.9999 with ESI. The statistically significant difference between ESI and US may result from the fact that the paired r2 values are very close to each other, so that even a small systematic difference is detected by the paired test. Similar linearity, with r2 values ranging from 0.994 to 0.999 but with no significant difference between US and ESI, was previously reported for pharmaceutical compounds [14].

Signal intensity
There was a very highly significant difference in peak areas obtained with the two interfaces in the three matrices (p = 0.0000002, 0.000035, and 0.000001 in apple, coffee, and water, respectively); US allowed a tremendous gain in intensity, up to 22.4 times in apple (spinosad D), 31.6 times in coffee (spinosad D), and 24.5 times in water (kresoxim-methyl). On average, the gain in peak area with US was 6.4-fold in apple, 7.0-fold in coffee, and 7.2-fold in water (Table 3). Similarly, a highly significant increase of peak height was obtained with US (p = 0.0000001, 0.000033, and 0.000002 in apple, coffee, and water, respectively), and a peak 21.3 times higher was obtained with spinosad D in apple, 21.1 times higher with spiroxamine in coffee, and 20.3 times higher with kresoxim-methyl in water. In general, US allowed a peak height gain of 6.3-fold in apple, 6.8-fold in coffee, and 6.9-fold in water (see ESM). A general increase in peak area ranging from a factor 1.1 to 15, with an average around 2, was observed with US for analysis of prostaglandins and thromboxanes [14]. Likewise, US showed an intensity gain of a factor 2.2 compared with ESI when analyzing a mix of 22 pharmaceutical compounds by infusion. The design of the UniSpray source helps to promote droplet break-up and desolvation, which has a significant effect on signal intensity [6].

Table 3 Comparison of performance parameters between UniSpray and electrospray sources for analysis of 81 pesticide residues in apple, coffee, and water: signal-to-noise ratio, limit of quantification (mg/kg), matrix effect (%), and process efficiency (%) per analyte, with minimum and maximum values and t-test p values per matrix. US UniSpray ionization, ESI electrospray ionization, NQ not quantified. *, **, and *** t test significant, highly significant, and very highly significant, respectively.

A very highly significant increase of S/N ratio of US over ESI was obtained in all the three matrices (p = 0.00000001 in apple and in coffee, p = 0.0000001 in water). The highest increase of S/N ratio was 18.3 times in apple with spinosad D, 29.4 times in coffee with spinosad D, and 11.2 times in water with fludioxonil.
On average, US increased the S/N ratio over that of ESI by 3.4-fold in apple, 3.8-fold in coffee, and 3.3-fold in water (Table 3). Lubin et al. [14] have observed similar S/N ratios between US and ESI for four out of the five prostaglandins and thromboxane compounds investigated; a distinct increase of S/N ratio with US was obtained for 11-dehydro-thromboxane B(2) (11-dTXB2). As a result of this increase in S/N ratio with US, more compounds could be detected and quantified at low level. Table 4 presents the distribution of pesticide active ingredients which could not be recovered from pre-extraction spiked samples by using the UniSpray and/or electrospray interfaces. Depending on the matrix, while imazalil, triadimenol, and methomyl could only be quantified with ESI, only US allowed the quantification of temephos, thifensulfuron, fludioxonil, bentazon, and kresoxim-methyl. A gain in the range of compounds that can be quantified just by changing the ionization source is an important benefit, especially when multiple residues have to be analyzed in a single run.

Table 4 Distribution of the analytes not quantified in all the spiked samples with the UniSpray and electrospray interfaces.

Ion ratio
No significant difference was found between US and ESI in all the three matrices (see ESM). This can be justified by the similarities in the ionization mechanism of the two interfaces. With US, molecules of the studied pharmaceutical compounds were ionized in a similar fashion to ESI, predominantly producing protonated or deprotonated species. Adduct formation (e.g., proton and sodium adducts) and in-source fragmentation were shown to be almost identical between the two sources [6]. Additionally, the spectra generated when using US closely resemble those from ESI analyses, so, although there is no voltage applied to the capillary tip, it is likely that the eluent contains ions formed from solution phase redox reactions and other physical processes. It is also possible that surface-based effects on the US impactor pin, and additional gas phase phenomena, could further contribute to ion formation [13].

Accuracy (%recovery)
The extraction recovery percentage varied largely among the active ingredients, and a recovery as high as 342.9% was recorded with spinosad D in apple. The pesticides pyrimethanil, spinosad A, spinosad D, and spirodiclofen showed recoveries above 120% in most of the matrices with the two interfaces, while low recoveries were mostly obtained with metsulfuron-methyl and imazalil. As compared with ESI, the recovery obtained with US showed a very highly significant increase (p = 0.0000002, 0.001067, and 0.000002 in apple, coffee, and water, respectively), with up to 8.8-fold increase observed in apple (spirodiclofen), up to 10.6-fold increase obtained in coffee (temephos), and up to 6.3-fold increase recorded in water (monocrotophos). However, on average, the gain in recovery percentage with US was 1.4-fold in apple, 1.9-fold in coffee, and 1.5-fold in water (see ESM). High recoveries of spinosad A and D have been previously observed [20] and may result from the reaction of spinosad with the QuEChERS salts, which forms a complex with a strong signal-enhancement matrix effect.

Precision (%RSD)
For the great majority of analyses, the %RSD remained below the acceptable 20% [23], except for bentazon with US, and terbuthylazine, monocrotophos, terbufos, and temephos with ESI.
The difference in %RSD between US and ESI was very highly significant for pesticides analyzed in apple (p = 0.0008) and in water (p = 0.0001), and was highly significant for coffee (p = 0.0012). In general, the two interfaces showed equal precision for pesticide residue analyses in apple, and US was 1.7 times more precise than ESI for analyses in coffee (see ESM). Lubin et al. [14] found that US offers a better precision than ESI, for three out of five prostaglandins and thromboxanes in two matrices, human plasma, and pig colon. The high values of %RSD found indicate that these pesticide chemistries favor high variations among repetitions and therefore require more refinement of the protocol for improving within-laboratory reproducibility. For the analyses of 81 pesticides in the three matrices, lower LOQs were obtained with US; it ranged between 0.0001 and 0.0333 mg/kg, while it was between 0.0001 and 0.0478 mg/kg with ESI. However, the overall LOD and LOQ did not significantly vary between the two ionization interfaces (Table 3). For analysis of prostaglandins and thromboxanes, Lubin et al. [14] reported that sensitivity was improved for three out of five compounds measured on the UniSpray source, with an increase up to factor 5, probably due to the high signal intensity resulting in saturation phenomena. In our study, we have observed a non-significant factor 1.2 to 1.3 improvement of sensitivity with US, although a rather tremendous increase of signal intensity was obtained with this novel interface. In fact, the gain in sensitivity with US was clear for some compounds, with improvement of LOQ as high as 8.9 times with thiofanate-methyl in apple and 6.7 times with metsulfuron-methyl in water (Table 3). This can be explained by the gain in signal intensity but this improvement could not be generalized to the total large number of analytes we screened. This clear gain in signal intensity could however result in better accuracy and precision for lower concentrations of analytes, and thus increase the sensitivity of the method. But, better sensitivity is guaranteed only if selectivity is warranted, and thus depends also on the type of mass spectrometer used (e.g., high-resolution MS, MSn, ion mobility capabilities) and the nature of the sample (background) [6]. Further investigation on a broad set of spiked concentrations is needed to draw clear conclusion on the increase in signal intensity and sensitivity observed with US in multiresidue analysis of large number of pesticides. Matrix effect values of 100 ± 20% are considered suitable values and indicate a small ME [24]. With US, a strong signal enhancement was mostly observed in apple, the highest values were recorded with fenpropimorf in apple (634.1%), pyrimethanil in apple (616.3%), spinosad A in apple (497.3%), pyrimethanil in water (477.4%), spinosad D in apple (451.4%), and pyrimethanil in coffee (312.2%); most of the other analyses showed ME values below the lowest suitable 80% value. With ESI however, none of the value was found within the suitable range, the signal suppression was more pronounced, and the highest ME values were obtained with pyrimethanil in apple (58.8%), fenpropimorf in apple (45.4%), and pyrimethanil in water (41.4%); all the other analyses showed ME values below 30%. The difference in matrix effect between the two interfaces was highly significant in apple (p = 0.0020) and water (p = 0.0014), and very highly significant in coffee (p = 0.0004) (Table 3). Similar ME values were found by Chawla et al. 
[17] who showed that MEs were dependent on the nature of both the commodity and the analyte and observed that most of the pesticides showed signal suppression in tomato, capsicum, and cumin matrixes. They also reported very high MEs of 2360.9 and 1250.8% for quizalofop-p-tefuryl and tebuconazole, respectively. In the case of chromatography coupled with MS, the predominant cause of ME is the presence of undesired components that co-elute in the chromatographic separation and either compete for access to the surface of the droplets and subsequent ion evaporation, or induce changes in eluent properties that are known to affect the ionization process (such as surface tension, viscosity, and volatility) [17]. For most of the analyses in our study, a high signal suppression was observed, but the ME percentages were better with US, suggesting a milder ME with the new interface. In analyzing five pesticides in six matrices, Lucini et al. [25] also observed that ME occurred as ionic suppression and was found in the range of 5 to 22% depending on the compound. For 19 pharmaceutical and biological compounds tested, a quite similar ME was observed between US and ESI, but depending on the matrix and ionization mode, a small but statistically significant lower percentage of ME could be observed for US in plasma and bile in the positive ion mode, and bile in negative ion mode [12]. The difference with our results can be due to the differences of the chemistry of the compounds tested and of the solvents we used. Recovery efficiency The RE varied between 1.9 and 150.0% with US, and between 1.7 and 165.5% with ESI. Irrespective of the matrix, with US, the RE percentage of 21% of the analyses was found between the suitable RE values of 100 ± 20%, while with ESI, 24% of the analyses were suitable. A significant difference was found between the RE of US and ESI in apple (p = 0.023259) and water (p = 0.037114), while in coffee, the two interfaces showed no significant difference. But in average, no difference of RE was found between the two interfaces and in the three matrices (see ESM). Lucini et al. [25] found that REs of five pesticides in six matrices were good and substantially comparable, in the range of 93–96%. The extraction recovery measures the efficiency of the analyte extraction process during sample pre-treatment (QuEChERS extraction), and the RE measures the influence of the analyzing instrument on the recovery. This implies that the two interfaces react similarly irrespective of the analyte extraction; hence, the difference of performance will mostly be based on how the interface deals with ME. Process efficiency The values of PE related to quantitative determination of pesticide residues followed the same pattern as ME. The PE was higher with US over ESI in almost all the analyses. A 3.9-fold increase was observed in apple and coffee, while the increase was 3.4-fold in water. The observed increase of PE with US was significant in apple (p = 0.0108), highly significant in coffee (p = 0.0051), and very highly significant in water (p = 0.0002) (Table 3). Lucini et al. [25] found more closer values (74% to 90%) of PE for analysis of five pesticides in six matrices and suggested that the differences in terms of overall PE of each compound can be ascribed to different MEs, rather than to poor recoveries due to ineffective extraction efficiencies of the QuEChERS procedure. 
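For clarity, the ME, RE, and PE figures discussed here are conventionally derived from three peak areas: a neat standard in solvent (A), a blank extract spiked after extraction (B), and a sample spiked before extraction (C), following the scheme of Matuszewski et al. cited above. The sketch below is illustrative only and assumes this is indeed the scheme applied; the function and variable names are ours.

def matrix_effect(area_post_extraction_spike, area_neat_standard):
    # ME(%) = B / A * 100; below 100% indicates signal suppression,
    # above 100% indicates signal enhancement
    return 100.0 * area_post_extraction_spike / area_neat_standard

def recovery_efficiency(area_pre_extraction_spike, area_post_extraction_spike):
    # RE(%) = C / B * 100; recovery of the extraction step itself,
    # independent of the matrix effect
    return 100.0 * area_pre_extraction_spike / area_post_extraction_spike

def process_efficiency(area_pre_extraction_spike, area_neat_standard):
    # PE(%) = C / A * 100 = ME * RE / 100; combined effect of matrix and extraction
    return 100.0 * area_pre_extraction_spike / area_neat_standard

Written this way, it is explicit why PE follows the same pattern as ME whenever RE is comparable between the two interfaces, as observed above.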
In our study, a tentative correlation of the evaluated performance parameters and chemical class of the active ingredients showed no correlation. Similar results were obtained by [14] who observed no correlation between signal increase and chemical structure or physicochemical data of the pharmaceutical compounds analyzed. Also, no correlation could be found between the different gains obtained with ESI or US and the molecular weight, functional groups, pKa, or logP of the studied pharmaceutical compounds. This implies that complex ionization mechanisms are involved with the UniSpray source [6]. This work reports the first results of pesticide residue analysis with UniSpray, a novel API source for LC-MS, in comparison with ESI, for 81 active ingredients of diverse pesticide classes and physicochemical properties, and in three different matrices, apple, coffee, and water. The new source provided comparable and good linearity; it considerably increased the signal intensity and improved the S/N ratio. No significant effects on precision and ion ratio were found. UniSpray also offered a slight gain in the range of compounds that can be quantified, as well as in the recovery percentage. The US allowed a gain in sensitivity for many compounds, but overall, the LOD and LOQ did not significantly vary between the two ionization interfaces. Signal suppression was less pronounced with US, allowing most of the ME values to be within the acceptable range, while it was more prominent with ESI and none of the value was found within the suitable ME range. The ionization sources did not affect the RE, whereas the PE was higher with US in almost all the analyses. The studied performance parameters varied irrespectively to the chemical class of the active ingredients. For a better understanding of applications and benefits of US over ESI, further analysis of pesticides at different spiked concentrations and deep study of the ionization mechanism should be envisaged. The great laboratory assistance of Lilian Goeteyn is gratefully acknowledged. Galani Y.J.H. designed the study, wrote the protocol, collected the samples, carried out lab experiments, performed data analysis, and wrote the first manuscript draft. Houbraken M. participated in study design, protocol writing, lab analysis, and data analysis. Van Hulle M. participated in study design and protocol writing and contributed with laboratory equipment. Spanoghe P. provided lab facilities and supervised the entire study. All authors read, checked, and approved the final manuscript. This work was financially supported by Islamic Development Bank's (IDB) Merit Scholarship for High Technology programme. The authors declare that they have no conflict of interest. 216_2019_1886_MOESM1_ESM.pdf (710 kb) ESM 1 (PDF 710 kb) 216_2019_1886_MOESM2_ESM.xlsx (138 kb) ESM 2 (XLSX 138 kb) Hird SJ, Lau BP-Y, Schuhmacher R, Krska R. Liquid chromatography-mass spectrometry for the determination of chemical contaminants in food. TrAC Trends Anal Chem. 2014;59:59–72. http://dx.doi.org.remote.library.dcu.ie/10.1016/j.trac.2014.04.005.CrossRefGoogle Scholar Kruve A, Kunnapas A, Herodes K, Leito I. Matrix effects in pesticide multi-residue analysis by liquid chromatography-mass spectrometry. J Chromatogr A. 2008;1187:58–66.CrossRefGoogle Scholar Alder L, Greulich K, Kempe G, Vieth B. Residue analysis of 500 high priority pesticides: better by GC-MS or LC-MS/MS? Mass Spectrom Rev. 2006;25:838–65. 
https://doi.org/10.1002/mas.20091.CrossRefGoogle Scholar Zhang K, Wong JW, Yang P, Tech K, Dibenedetto AL, Lee NS, et al. Multiresidue pesticide analysis of agricultural commodities using acetonitrile salt-out extraction, dispersive solid-phase sample clean-up, and high-performance liquid chromatography-tandem mass spectrometry. J Agric Food Chem. 2011;59:7636–46. https://doi.org/10.1021/jf2010723.CrossRefGoogle Scholar Stahnke H, Kittlaus S, Kempe G, Hemmerling C, Alder L. The influence of electrospray ion source design on matrix effects. J Mass Spectrom. 2012;47:875–84. https://doi.org/10.1002/jms.3047.CrossRefGoogle Scholar Lubin A, Bajic S, Cabooter D, Augustijns P, Cuyckens F. Atmospheric pressure ionization using a high voltage target compared to electrospray ionization. J Am Soc Mass Spectrom. 2017;28:286–93. https://doi.org/10.1007/s13361-016-1537-3.CrossRefGoogle Scholar Thurman EM, Ferrer I, Barceló D. Choosing between atmospheric pressure chemical ionization and electrospray ionization interfaces for the HPLC/MS analysis of pesticides. Anal Chem. 2001;73:5441–9. https://doi.org/10.1021/ac010506f.CrossRefGoogle Scholar Wang R, Zhang L, Zhang Z, Tian Y. Comparison of ESI– and APCI–LC–MS/MS methods: a case study of levonorgestrel in human plasma. J Pharm Anal. 2016;6:356–62. https://doi.org/10.1016/j.jpha.2016.03.006.CrossRefGoogle Scholar Lee H, Kochhar S, Shim S. Comparison of electrospray ionization and atmospheric chemical ionization coupled with the liquid chromatography-tandem mass spectrometry for the analysis of cholesteryl esters. Int J Anal Chem. 2015;2015:1–6. https://doi.org/10.1155/2015/650927.CrossRefGoogle Scholar Eichman HJ, Eck BJ, Lagalante AF. A comparison of electrospray ionization, atmospheric pressure chemical ionization, and atmospheric pressure photoionization for the liquid chromatography/tandem mass spectrometric analysis of bisphenols. Application to bisphenols in thermal paper receipts. Rapid Commun Mass Spectrom. 2017;31:1773–8. https://doi.org/10.1002/rcm.7950.CrossRefGoogle Scholar Garcia-Ac A, Segura PA, Viglino L, Gagnon C, Sauvé S. Comparison of APPI, APCI and ESI for the LC-MS/MS analysis of bezafibrate, cyclophosphamide, enalapril, methotrexate and orlistat in municipal wastewater. J Mass Spectrom. 2011;46:383–90. https://doi.org/10.1002/jms.1904.CrossRefGoogle Scholar Lubin A, De Vries R, Cabooter D, Augustijns P, Cuyckens F. An atmospheric pressure ionization source using a high voltage target compared to electrospray ionization for the LC/MS analysis of pharmaceutical compounds. J Pharm Biomed Anal. 2017;142:225–31. https://doi.org/10.1016/j.jpba.2017.05.003.CrossRefGoogle Scholar Bajic S. U.S. patent no. 8,809,777. Washington, DC: U.S. Patent and Trademark Office; 2014.Google Scholar Lubin A, Geerinckx S, Bajic S, Cabooter D, Augustijns P, Cuyckens F, et al. Enhanced performance for the analysis of prostaglandins and thromboxanes by liquid chromatography-tandem mass spectrometry using a new atmospheric pressure ionization source. J Chromatogr A. 2016;1440:260–5. https://doi.org/10.1016/j.chroma.2016.02.055.CrossRefGoogle Scholar Van Eeckhaut A, Lanckmans K, Sarre S, Smolders I, Michotte Y. Validation of bioanalytical LC-MS/MS assays: evaluation of matrix effects. J Chromatogr B. 2009;877:2198–207. https://doi.org/10.1016/j.jchromb.2009.01.003.CrossRefGoogle Scholar Kebarle P, Tang L. From ions in solution to ions in the gas phase: the mechanism of electrospray mass spectrometry. Anal Chem. 1993;65:972A–86A. 
https://doi.org/10.1021/ac00070a001.Google Scholar Chawla S, Patel HK, Gor HN, Vaghela KM, Solanki PP, Shah PG. Evaluation of matrix effects in multiresidue analysis of pesticide residues in vegetables and spices by LC-MS/MS. J AOAC Int. 2017;100:616–23. https://doi.org/10.5740/jaoacint.17-0048.CrossRefGoogle Scholar King R, Bonfiglio R, Fernandez-Metzler C, Miller-Stein C, Olah T. Mechanistic investigation of ionization suppression in electrospray ionization. J Am Soc Mass Spectrom. 2000;11:942–50. https://doi.org/10.1016/S1044-0305(00)00163-X.CrossRefGoogle Scholar Kouzayha A, Rahman Rabaa A, Al Iskandarani M, Beh D, Budzinski H, Jaber F. Multiresidue method for determination of 67 pesticides in water samples using solid-phase extraction with centrifugation and gas chromatography-mass spectrometry. Am J Anal Chem. 2012;03:257–65. https://doi.org/10.4236/ajac.2012.33034.CrossRefGoogle Scholar Galani JHY, Houbraken M, Wumbei A, Djeugap FJ, Fotio D, Spanoghe P. Evaluation of 99 pesticide residues in major agricultural products from the Western Highlands Zone of Cameroon using QuEChERS method extraction and LC-MS/MS and GC-ECD analyses. Foods. 2018;7:184–201. https://doi.org/10.3390/foods7110184.CrossRefGoogle Scholar Corley J. Best practices in establishing detection and quantification limits for pesticide residues in foods. Handb Residue Anal Methods Agrochem. 2003;409:1–18.Google Scholar Matuszewski BK, Constanzer ML, Chavez-Eng CM. Strategies for the assessment of matrix effect in quantitative bioanalytical methods based on HPLC-MS/MS. Anal Chem. 2003;75:3019–30. https://doi.org/10.1021/ac020361s.CrossRefGoogle Scholar European Commission (2015) Guidance document on analytical quality control and method validation procedures for pesticides residues analysis in food and feed.Google Scholar Cuadros-Rodríguez L, García-Campaña AM, Almansa-López E, Egea-González FJ, Castro Cano ML, Garrido Frenich A, et al. Correction function on biased results due to matrix effects: application to the routine analysis of pesticide residues. Anal Chim Acta. 2003;478:281–301. https://doi.org/10.1016/S0003-2670(02)01508-8.CrossRefGoogle Scholar Lucini L, Pietro MG. Performance and matrix effect observed in QuEChERS extraction and tandem mass spectrometry analyses of pesticide residues in different target crops. J Chromatogr Sci. 2011;49:709–14. https://doi.org/10.1093/chrsci/49.9.709.CrossRefGoogle Scholar Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. 1.School of Food Science and NutritionUniversity of LeedsLeedsUK 2.Department of Plants and Crops, Faculty of Bioscience EngineeringGhent UniversityGhentBelgium 3.Department of Agriculture and Veterinary MedicineUniversité des MontagnesBangangtéCameroon 4.Waters NV/SAZellikBelgium Galani, J.H.Y., Houbraken, M., Van Hulle, M. et al. Anal Bioanal Chem (2019) 411: 5099. https://doi.org/10.1007/s00216-019-01886-z Received 13 February 2019 Revised 23 April 2019 Accepted 30 April 2019 First Online 31 May 2019 DOI https://doi.org/10.1007/s00216-019-01886-z Published in partnership with eight prestigious societies
Context dependency of nucleotide probabilities and variants in human DNA Yuhu Liang1,2, Christian Grønbæk2,3, Piero Fariselli4 & Anders Krogh ORCID: orcid.org/0000-0002-5147-62821,2,5 BMC Genomics volume 23, Article number: 87 (2022) Cite this article A Correction to this article was published on 10 May 2022 Genomic DNA has been shaped by mutational processes through evolution. The cellular machinery for error correction and repair has left its marks in the nucleotide composition along with structural and functional constraints. Therefore, the probability of observing a base in a certain position in the human genome is highly context-dependent. Here we develop context-dependent nucleotide models. We first investigate models of nucleotides conditioned on sequence context. We develop a bidirectional Markov model that use an average of the probability from a Markov model applied to both strands of the sequence and thus depends on up to 14 bases to each side of the nucleotide. We show how the genome predictability varies across different types of genomic regions. Surprisingly, this model can predict a base from its context with an average of more than 50% accuracy. For somatic variants we show a tendency towards higher probability for the variant base than for the reference base. Inspired by DNA substitution models, we develop a model of mutability that estimates a mutation matrix (called the alpha matrix) on top of the nucleotide distribution. The alpha matrix can be estimated from a much smaller context than the nucleotide model, but the final model will still depend on the full context of the nucleotide model. With the bidirectional Markov model of order 14 and an alpha matrix dependent on just one base to each side, we obtain a model that compares well with a model of mutability that estimates mutation probabilities directly conditioned on three nucleotides to each side. For somatic variants in particular, our model fits better than the simpler model. Interestingly, the model is not very sensitive to the size of the context for the alpha matrix. Our study found strong context dependencies of nucleotides in the human genome. The best model uses a context of 14 nucleotides to each side. Based on these models, a substitution model was constructed that separates into the context model and a matrix dependent on a small context. The model fit somatic variants particularly well. The evolution of species can be followed in chromosomal DNA, which has undergone mutations and selection, and mutational processes have been essential for the development of life on earth. On the other hand mutations need to be controlled, because if an essential gene is mutated it may result in severe disease or loss of viability. This balance between plasticity and stability is important for sustaining stable life forms [1]. The question we ask in this study is, how this balance is reflected in the local sequence properties of human DNA and how the sequence context affects mutations. More precisely, we consider models of mutability that depend on the sequence context of e.g. k bases on each side of the position in question. It is well known that the sequence context influences mutational processes. For instance, the mutation of C to T is much more common in CpG dinucleotides than in other contexts in the human genome [2, 3], and previous studies have reported that the immediate neighbouring bases (up to a 7 base context) influence mutation rates [4–7]. 
Another study showed point mutations can be affected by sequence motifs [8]. The cellular machinery includes components for maintaining genome integrity, such as DNA repair mechanisms, which result in mutational biases [9, 10] and other processes may lead to other biases. These mechanisms together govern the intrinsic mutability. Following [11], we use the term mutability rather than mutation rate, because we are not considering the detailed evolutionary process and there is no time in our models, although the same ideas are easily applicable to estimation of context sensitive mutation rates. Models of mutability can be estimated from observed variants by simply estimating the probability of a mutation given a context. However, such models are estimated from fairly small and biased sets of variants without utilizing the mutability foot-print in the genome. Here we propose to split the context dependent mutability into a nucleotide distribution and a variant part. The nucleotide distribution can be estimated from the whole genome and the variant part from variants, thereby allowing the two parts to have different context sizes. Due to the size of the human genome, the context dependent nucleotide distribution can be estimated from a much larger context than the variant part. The variant part can depend on a smaller context and can thus be estimated from a small number of variants. In the first part of the paper, we focus on estimation of the probability of observing a base in the genome, given a context. One measure to quantify the context sensitivity is predictability. In a random sequence of nucleotides with no context sensitivity, we would only be able to predict a given base with an accuracy of 25% (random guessing), so this is the lower boundary of predictability. However, due to the mutational biasses discussed above and the repetitive nature of genomes, we would expect that a genome is more predictable than a random sequence. We show that a human genomic base can be predicted with an average of 51% using our most sophisticated model. In the second part of the paper, we estimate a mutability model based on the context dependent nucleotide distribution found. For a fixed context dependent nucleotide distribution model, we show that the mutability is not very sensitive to the context size of the variant part. We compare to a simple mutability model conditioned on a 7 base context as in [5] and show that they differ between different types of mutations. Knowledge of the background probability is important for a lot of models and the models described in this work can form a basis for other modelling efforts in the future. It has been shown, for instance, that a high-order Markov model can improve motif discovery over a simple background model [12]. Similarly our models of mutability can be useful in future studies of mutations in disease, where the mutability can be used to e.g. identify unexpected mutations. Context modeling of the human genome In our first model, the Central model, (Fig. 1), we simply estimate the conditional probability of a nucleotide given k bases to each side. For base xi at a genomic position i these probabilities are written as $$P(x_{i}|x_{i-k},\ldots,x_{i-1},x_{i+1},\ldots,x_{i+k}). $$ They are estimated from the genomic frequencies of the 4 possible (2k+1)-mers of the given context. A k=3 model corresponds to a neighbourhood of 7 as used in [5], and we use this model as our baseline. 
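As a concrete illustration of how these conditional probabilities are obtained by counting, the sketch below shows the idea in Python; it is for exposition only (the actual implementation is a C program over a Burrows-Wheeler index, counts both strands, and uses the leave-one-out and interpolation refinements described later), and all names are ours.

from collections import Counter, defaultdict

def count_central_contexts(genome, k):
    # counts[context][base]: occurrences of each central base for a given
    # context of k bases on each side (the central base itself is excluded)
    counts = defaultdict(Counter)
    for i in range(k, len(genome) - k):
        context = genome[i - k:i] + genome[i + 1:i + k + 1]
        counts[context][genome[i]] += 1
    return counts

def central_probability(counts, context, base):
    # P(base | context), estimated from the genomic frequencies of the four
    # possible (2k+1)-mers sharing this context
    total = sum(counts[context].values())
    return counts[context][base] / total if total else 0.25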
Since we are estimating frequencies from all positions on both strands, they are automatically strand symmetric. Illustration of the three models used. A DNA sequence is shown with the complement sequence below. The blue histograms illustrate nucleotide probabilities. The central model with k=7 (upper left) predicts the base in the middle from the adjacent nucleotides in the boxes to the left and right. For this illustration, C has highest probability, which happens to coincide with the correct nucleotide at the position. The Markov model (top right) of order k=14 predicts a nucleotide from the previous 14. In this example A has highest probability although G is the actual reference probability. The bidirectional model (bottom right) use the same model on the reverse complement strand. In this example C has the highest probability, which coincides with the complement base at the position. The probabilities are translated to the direct strand and averaged with the forward model One can use other values of k as long as a model can be reliably estimated. As the 4 probabilities sum to one, there are 3∗42k free parameters in the model, so the k=3 model has around 12,000 free parameters, which can easily be estimated from the 6 billion sites of the two strands of the human genome. A k=7 model has approximately 0.8 billion free parameters, and is thus the upper limit of what we can hope to reliably estimate for a genome like the human. Even with k=7 there are many contexts that occur only once or very rarely. To avoid over-fitting, we have used an interpolated Central model in which a model of order k is used to regularize a model of order k+1 and so on (see Methods). For our second model, we have used a central model with k=7 and interpolated from k=4. A Markov model of order k yields probabilities of the four bases conditional on the k previous bases. A Markov model also can be used to estimate from both strands, as above, which means that for base i, it can give two different probabilities: P(xi|xi−1,…,xi−k) on the direct strand and \(P(\hat x_{i}|\hat x_{i+1},\ldots,\hat x_{i+k})\) on the opposite strand, where \(\hat x_{i}\) means the complementary base to base xi. Note that these models are estimated from both strands as the central models, which means that a model estimated using a 5' context is identical to the complementary of a model estimated using a 3' context and therefore, without loss of generality, we always assume 5' models. Our third model is a bidrectional Markov model (Fig. 1) of order k=14, interpolated from k=8. It is called bidirectional, because we use the average between the probability of xi from one strand and the probability of \(\hat x_{i}\) from the opposite strand as explained above. Note that this model with k=14 has the same number of free parameters (3∗1014) as the central model with k=7 described above, because both use 14 bases as context. However, the bidirectional Markov model actually uses a context of 28 bases for prediction, because of the averaging over the two directions. This model is called BM14 in the following. We have developed a program written in C that implements these different models. Instead of saving counts for each context, it dynamically calculates the count based on a Burrows-Wheeler encoded genome [13] to save memory. The performance of our models can be evaluated by the accuracy, which is the fraction of positions, where the most probable base given the context equals the actual base in the reference genome. 
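The strand averaging of the bidirectional Markov model can be written as a small sketch (illustrative Python, our names; markov_prob is assumed to return the order-k conditional probability P(base | k preceding bases) of the strand-symmetrically estimated model):

COMPLEMENT = {"A": "T", "C": "G", "G": "C", "T": "A"}

def reverse_complement(seq):
    return "".join(COMPLEMENT[b] for b in reversed(seq))

def bidirectional_probs(markov_prob, genome, i, k):
    # forward prediction conditions on the k bases 5' of position i on the
    # direct strand; reverse prediction conditions on the k bases 5' of the
    # position on the opposite strand (the reverse complement of the k bases
    # 3' of i) and is translated back to the direct strand via the complement
    fwd_context = genome[i - k:i]
    rev_context = reverse_complement(genome[i + 1:i + 1 + k])
    return {b: 0.5 * (markov_prob(fwd_context, b)
                      + markov_prob(rev_context, COMPLEMENT[b]))
            for b in "ACGT"}

The accuracy reported below is then simply the fraction of positions at which the most probable of these four bases equals the reference base.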
The accuracy on the human genome is shown in Fig. 2 for the different models mentioned above (Supplementary Table S1, S2). Prediction accuracy for the three models. Baseline with k=3, Central model with k=7 and the bidirectional Markov model with k=14 (BM14). The bar-plot shows accuracy for each chromosome and average accuracy on the whole genome. Results using nucleotide-based cross-validation For the baseline model there is a strong correlation between the GC content and the accuracy on each chromosome. In Supplementary Table S3, we show GC content [14] with the accuracy and find a Pearson correlation of 0.90 for the baseline model with the lowest accuracy of around 38% for Chromosome 2–6 that has GC content of 38–40% and the highest accuracy of around 42% for chromosome 19, which has the highest GC content of 48%. For the k=7 central model and BM14, the picture is less clear. Although they have correlations of 0.70 and 0.53 with GC content, the two chromosomes with the best prediction accuracy are chromosome 19 (GC 48%) and chromosome Y (GC 40%) at opposite ends of the GC scale. For estimating the performance shown in Fig. 2, we have used leave-one-out cross-validation at the nucleotide level. It means that when estimating the probabilities for a given site in the genome, that site is excluded in the counts for model estimation. Because the k-mers overlap, one may argue that it is not proper cross-validation, but more fulfilling a minimum requirement that the site itself should not be used for estimating the model. Therefore we have also done a chromosome-based cross-validation for comparison and calculated the overall accuracies for each chromosome using a model estimated from the other chromosomes. The difference between nucleotide-based and chromosome-based cross validation is only 0.5 percentage points (p.p.) on average, but for the Y chromosome, it is more than 3 p.p. (Supplementary Table S1, S2 and Supplementary Fig. S1). Chromosome Y is known to differ from other chromosomes by being more heterochromatic and contain mostly repetitive regions [15], and therefore the model performs poorly on this chromosome when estimated only from other chromosomes. With interpolation it is in principle possible to go beyond k=14, because for contexts with zero counts, the probabilities are equal to a lower order estimate, so it should adapt without over-fitting. We have not explored higher k so much, but in Supplementary Fig. S2, we have run the bi-directional Markov model from k=10 to k=20 for different values of the interpolation constant described in Methods. The figure shows results for chromosome 20 and the model estimated from all the other chromosomes. Up to k≃14 the models steeply improve and are almost insensitive to the interpolation constant. Above k=14 we still see a monotonous improvement that seems to level off at around 52% for the best model. Chromosome 20 was chosen for this experiment, because it is small and has a prediction accuracy similar to the average for the BM14 model. It clearly shows that interpolation improves the model although not by a great deal for k<14. Importantly, interpolation at any strength ensures that zero counts do not occur, which would otherwise result in undefined probabilities. The predictive performance of BM14 on different regions in the human genome is shown in Fig. 3. As expected, the model predicts repetitive sequences very well with an overall accuracy of 64%, but there are quite large differences between different types of repeats. 
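As an aside on the nucleotide-level leave-one-out used for these accuracies: it amounts to removing the site's own contribution from the context counts before normalizing, as formalized in Methods. A minimal sketch (our names; counts as in the counting sketch above):

def leave_one_out_prob(counts, context, ref_base, base):
    # subtract the single count contributed by the site itself (its reference base)
    n = dict(counts[context])
    n[ref_base] = n.get(ref_base, 1) - 1
    total = sum(n.values())
    return n.get(base, 0) / total if total > 0 else 0.25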
The most common type of repeat in the human genome, the ALU sequences, is 87% correctly predicted, whereas LINE1 for instance is only at 63% (Supplementary Table S4). These differences are most likely due to differences in conservation of the different types of repeats. Prediction accuracies for BM14 in different regions across all chromosomes. The accuracy for different features on the chromosome 1 to Y, is indicated by colored dots. The line shows the overall accuracy for each chromosome The probability of the nucleotide in the reference genome given its context varies throughout the genome. The density of this probability, which we call the reference probability, is shown for different genomic regions in Fig. 4. For each feature except for CDS there are two peaks of which one is due to repeats. However, in positions where the reference probability is above 0.4, repeats account for a large proportion compared to other features. (Supplementary Table S5). Density profile of reference probabilities in different genomic regions obtained with BM14 To further elucidate the predictability across different regions, we show in Fig. 5 the reference probabilities across human 3' and 5' splice sites that averaged over all introns annotated in Chromosome 1 (Chr1). The probability shows a large jump from a level of almost random prediction (∼0.28) in the coding region to a fairly high value (∼0.36) in the intron. The conservation plot in the same figure presents an opposite trend. Probabilities (top) and conservation score (bottom) of reference bases across 3' and 5' splice sites. The probabilities of the reference bases by BM14 were averaged for each position for the first/last 100 nt in coding sequence and 500 nt in introns. The conservation score is PhastCons100Way from the UCSC browser To test whether the model can be improved for non-repeat regions, we estimated a restricted model from everything outside coding regions and repeats. There is little difference between the restricted model and the full one in terms of prediction accuracy or reference probability as seen in (Supplementary Fig. S3) and we did not analyze this model further. We briefly examined the performance of a bidirectional Markov model on some other species. Because of the smaller genome sizes, we used an interpolated bidirectional Markov model of order k=10 in this analysis. The density plot of the reference probabilities (Supplementary Fig. S4A) shows that a single main peak occurs for human and E.coli genomes. A. thaliana, C. elegans and S. cerevisiae have two peaks. The peak towards low probability is enriched in coding sequence as can be seen from Supplementary Fig. S4B, where the density is plotted separately for CDS regions and other regions. In positions where the reference probability is above ∼0.55, the density of human is higher than that of other species, which is most likely caused by repeats in human genome. In the other eukaryotic genomes the prediction accuracy of the models were 45% for C. elegans, 40% for A. thaliana, and 38% for S. cerevisiae. We next evaluated BM14 on variant datasets. We assume that our models are valid for all genomes, and variants found in population studies, such as the 1000 Genomes Project (1KGP) [16], should be predicted with the same accuracy as the corresponding positions in the reference genome. We identified ∼73 million bi-allelic single nucleotide polymorphisms (SNPs) in the 1KGP. 
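Returning to the splice-site profiles of Fig. 5, the per-position averages can be reproduced with a straightforward window average. The sketch below is illustrative; it assumes ref_probs is a NumPy array of reference-base probabilities along a chromosome and site_positions holds 0-based splice-site coordinates (both names are ours).

import numpy as np

def mean_profile(ref_probs, site_positions, left=500, right=100):
    # average the reference-base probability in a fixed window around each site,
    # e.g. 500 nt of intron and 100 nt of coding sequence at a 3' splice site
    profile = np.zeros(left + right)
    n_used = 0
    for pos in site_positions:
        if pos - left >= 0 and pos + right <= len(ref_probs):
            profile += ref_probs[pos - left:pos + right]
            n_used += 1
    return profile / n_used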
The probability of the reference (Pref) was plotted against the probability of the alternative (Palt) shown in Fig. 6 for the k=7 central model and BM14. The latter shows a larger concentration of sites in the middle of the plot. Note the unexpected asymmetry between the corners at Pref ≃1 and Palt ≃1 for both models. Triangle plot for probabilities of Ref-Alt alleles. Probabilities of reference and alternative alleles were estimated by the k=7 central model (upper right triangle) and the k=14 bidirectional Markov model (BM14, lower left triangle) on SNPs from the 1000 Genomes Project This asymmetry is also reflected in the fact that the reference allele had the highest probability in 38.82% of cases and the alternative allele in only 24.20% for BM14. The density plot of Pref-Palt in Fig. 7A also shows a peak near 1 when all SNPs are used. However, when rare SNPs are ignored, the right peak decreases in size and a peak in the left side of the plot appears and the density becomes symmetric when only including SNPs with allele frequency above 20%. The far majority of SNPs with a reference probability higher than 0.875 in the 1KGP dataset belong to repeats. Density profiles of Pref - Palt for SNPs on Chromosome 1. A SNPs from 1KGP. The different lines represent SNPs with allele frequencies greater than 0, 0.01, 0.1 and 0.2, respectively. SNP counts are shown in the legend after the dash. B Density profiles show variants of ClinVar, somatic mutations (COSMIC) and 1KGP database in coding regions. C Densities of damaging and benign variants predicted by Polyphen-2 based on HumanVar database and annotated on 1KGP database by ANNOVAR software We also compared Pref and Palt for different types of single nucleotide variants (SNVs) in coding (Fig. 7B) and non-coding regions (Supplementary Fig. S5). Clinically relevant mutations from the Clinvar database are almost indistinguishable from 1KGP in coding regions and indeed a Kolmogorov–Smirnov (KS) test gives a p-value of 0.18 showing an insignificant difference (see Supplementary Table S6). On the contrary, somatic mutations have a clear tendency to mutate towards a more probable base (Palt > Pref) supported by a p <10−15 in the KS test. In non-coding regions, the somatic mutations are also shifted towards a higher probability for the alternative and have the same peak at high reference probability as 1KGP. To see if there is a difference between damaging and benign SNPs, we show the same densities for Polyphen2 predictions [17] on Chr1 in Fig. 7C. On Chr1 there is a total of 32,841 SNPs classified as benign and 15,299 SNPs classified as damaging. There is a small, but significant (KS test (p <10−15, see Supplementary Table S6)), shift of the damaging SNPs towards higher probability of the alternative allele. We saw that for only 21% of damaging SNPs the reference allele had the highest probability whereas for 29% the alternative allele had the highest probability. For benign SNPs, these numbers are 26.5% and 24%. This difference is highly significant (Chi-squared test p ≃10−9, see Supplementary Table S7). Context-dependent models of substitutions It is possible to estimate context dependent models of single nucleotide substitutions from a set of known variants. Since SNV sampling is very biased and variants are not fully observed, the context size needs to be much smaller than for the nucleotide distribution models described above. In the previously mentioned work [5] a seven nucleotide context is used. 
Here we want to explore the possibility of using our genome models to obtain models of substitutions. The rationale is that to maintain the context dependent nucleotide probabilities, they must be reflected in the mutability. We assume the genome has reached approximate equilibrium. To keep this state, the mutability towards a nucleotide should be higher, the higher the probability of that nucleotide is in the given context. Therefore we set the probability of a mutation from a to b to be proportional to the probability of nucleotide b (in that context) with a constant that depends on the nucleotides and which can also depend on the context. This model is inspired by the general time-reversible stationary Markov model [18, 19], in which the off-diagonal rates are μab=αabπb with symmetric αab for nucleotides a≠b and the equilibrium distribution P(a)=πa. The mathematical theory does not apply directly here, because reversibility is too restrictive, so we do not require the α matrix to be symmetric, but we can still estimate an α matrix that best fits a set of variants. For lack of a better term, we call α the "alpha matrix". Whereas the nucleotide distribution can be estimated from the whole genome using large contexts, the αs must be estimated from observed mutations. We hypothesize that the αs are less context dependent, and thus can be estimated from a smaller context than the nucleotide distributions. Details of the estimation procedure is described in Methods. We estimated αs from all chromosomes except Chr1 for symmetrical contexts of size 0, 3, 5, and 7 (k= 0, 1, 2, and 3) using SNPs from the 1KGP and the BM14 model for the nucleotide distribution. The alpha matrix is shown in Table 1 (left) for k=0. Notice that it is essentially strand-symmetric, but not symmetric in normal matrix-sense, so it violates reversibility. Similarly, we estimated a simple conditional model with a 7-mer context (k=3) from the same data, which is called the simple model in the following. The simple model is similar to one of the models in [5], but the variants used for estimation are slightly different. The models were then applied to Chr1 where we calculated the probability of a mutation given the context for all positions with an observed SNP. The total fraction of sites with probability above 0.25 is very small for all models, see Fig. 8A. In Fig. 8B the fraction of sites with a certain mutability that has an observed SNP is plotted against mutability for some of the models. Ideally these should be linear, but we see a significant deviation from linear for the simple model and for the α models with k>0. The models with k= 1–3 behave almost the same, and up to a substitution probability of ∼0.25 they are very close to the simple model. Substitution model. Model substitution probabilities shown for the models with context-insensitive α (k=0), the ones with α depending on 1, 2, and 3 bases to each side (k=1, 2, 3), and the simple model conditioned on the 3 bases to each side. The model substitution probability for a site is the sum of the probabilities for the three possible substitutions. A The cumulative distribution of model substitution probabilities for all sites (solid lines) and for SNPs (dashed) on Chr1 shown for the five models. Note that for all models there are very few sites with substitution probability above 0.3. B The fraction of sites on Chr1 with an observed variant in the 1000 Genomes project (1KGP) plotted against p. 
The y values are SNP counts in small probability intervals (10−4) divided by total counts. The curves are smoothed with splines. Estimates are noisy for larger probabilities due to low counts. C As B for SNPs in 1KGP, Clinvar and COSMIC for the k=1 model and simple only. For latter two, counts are scaled so they sum to the number of SNPs in the 1KGP set for Chr1. For high mutability values there are few SNPs, so the curves are very noisy especially for Clinvar Table 1 α matrixes for k=0 and k=1 estimated by substitution model Above a mutability of 0.25, our models with k>0 deviate significantly from the diagonal line. It turns out that these rare reference genome sites with high substitution probability are mainly CpG sites. The alpha matrix for k=1 is shown in Table 1 for the CG contexts, where it is evident that the C to T values are very large, ranging from 0.48 to 0.72, which should be compared to the largest α of 0.22 that is not a CG context, see (Supplementary Table S8). For contexts where the T has high probability according to the nucleotide distribution, the substitution probabilities will become large, because it is the product of α and the nucleotide probability. It suggests – as expected – that these substitutions are very likely at unselected positions. We applied the model also to SNVs from Clinvar and COSMIC as shown in Fig. 8C for k=1 and for the simple model. The number of variants with mutability values above 0.3 for the k=1 model is relatively small. For Clinvar only 296 SNVs out of 42000 have a mutability larger than 0.3 and for COSMIC this number is 2760 out of 120000. It means that the data are noisy as seen in Fig. 8C, but it is evident that the somatic SNVs from COSMIC follow the model more closely than germline SNPs in this domain. We developed context dependent models of the nucleotide distribution in the human genome. The most advanced one, a bi-directional Markov model with a context of 14 nucleotides to each side, can predict a nucleotide with 51% accuracy. We use interpolation from lower orders, so it is in principle possible to go above k=14, but we saw that this did not change the model very much, and the predictability of just above 50% is close to an upper limit for this type of model. In this work our objective has been to apply simple interpretable models to the problem. Previous studies have applied neural networks to the human genome by sequence context to obtain DNA representations for other tasks. This has been used for prediction of the effect of non-coding variants [20] and the regulatory code of the accessible genome [21], for instance. The DNAbert model [22] is more related to the present work. It is a transformer neural network, which in the pre-training is trained to predict k-mers (k=3-6) from the surrounding sequence context. However, the focus is on using it for other prediction tasks, and direct comparison to our models is not possible. We have used neural networks ourselves for the same task for prediction of bases from the context [23]. Using a larger context in the neural network leads to marginally better prediction accuracy, but more importantly differences in performance depending on context. The high predictability of our model is, to a large extent, due to repeats. It is interesting that approximately half the human genome is said to be repetitive [24], which superficially coincides with the predictability, but an exact definition of repetitive regions is a challenge and some report a higher repetitive fraction (see e.g. [25]). 
For A. thaliana and C. elegans the predicability was 40% and 45%, respectively, and they both have 12-13% repeats [26], and although the model was of lower order, it suggests that predictability could be used as a measure of the repetitiveness of a genome. This, however, would require more extensive analyses. Not surprisingly, the predictability is highly dependent on the type of the genomic region. Coding regions can be predicted with only 36% accuracy, whereas Alu repeat regions are at 87% and simple repeats even higher (Fig. 3). When looking more closely at splice sites we see – as expected – a negative correlation between conservation and the probability of the reference base (Fig. 5), although such a correlation is weak, when looked at genome wide due to the lack of conservation of repeats. There are also differences between chromosomes, where especially the Y chromosome and Chr19 stand out with higher predictability than others, which is likely due to their high repeat content. The model was applied to the genomes of Arabidopsis thaliana, Caenorhabditis elegans, Escherichia coli, and Saccharomyces cerevisiae. Due to the smaller genome sizes a bidirectional Markov model with k=10 was used. The large differences between species observed is an indication of quite different composition of genomes. Interestingly some species have two peaks in the density of the reference probability, which is partly explained by differences between coding regions and non-coding. We compared the probability of the reference allele to the alternative allele on single nucleotide variants from the 1000 Genomes Project. There is a peak with SNPs that have a reference probability close to one, which skews the distribution away from symmetry (Fig. 7A). Almost all SNPs in this peak (with reference probabilities over 0.875) fall in repeat regions and one possibility is that some of them are mapping artefacts. They also have relatively low allele frequencies, and when considering only SNPs with high allele frequency, the plot becomes symmetric. Therefore, another factor that may explain the asymmetry is that the reference genome, which is not a genome of a single individual, contains very few rare alleles. The difference between the probability of the reference allele and the alternative allele for coding SNVs in the 1000 Genomes Project was compared to SNVs from somatic mutations and clinically relevant SNPs from Clinvar (Fig. 7B). Here we see a statistically significant shift of somatic SNVs towards higher probability for the alternative allele, which suggest that somatic mutations tend to favor more probable bases. Similarly, we see a significant difference between damaging and benign SNPs (as classified by ANNOVAR) as seen in Fig. 7C. Surprisingly, the damaging SNPs seem to have a higher probability according to our model than benign ones. The sequence models presented here estimate distributions of the bases for a given context and reflect inherent properties of the cellular machinery responsible for replication, error correction, and so on, as well as the physical properties of DNA, such as curvature and bendability. A mutation that moves a base closer to this distribution is likely to be more probable than one that moves it away, at least if selection is ignored. To explore this, we have derived a model that takes the context dependent nucleotide distribution into account. 
In our model, we are assuming that the variation of a site in the human DNA can be described by a context sensitive continuous Markov model with a rate matrix that is a product between the nucleotide distribution and an "alpha matrix". The alpha matrix can be estimated from known variants and it can depend on a smaller context than the model for the nucleotide distribution and can be estimated from a relatively small number of SNVs. It means that our model for mutability have a very large context due to the context dependent nucleotide distribution even if the alpha matrix uses a smaller context. The model does not depend strongly on the context size for the alpha matrix for contexts of the two neighbours or larger (k≥1). Our models behave very similarly to a simple mutability model, which is estimated from SNPs alone and a context of three nucleotides to each side except in a regime of very high mutability (Fig. 8B). Our models seem to over-estimate the SNP mutability from 1KGP when the values are larger than about 0.25. However, this is not the case for somatic mutations, and the mutations seem to be well-described by these models (Fig. 8C). The model is inspired by the general time-reversible model from evolutionary theory, which has six free parameters corresponding to a symmetric alpha matrix, and with rates depending on the equilibrium distribution. However, although time-reversibility would be desirable, it is not likely that the context dependent nucleotide distribution we estimate is an equilibrium distribution for the entire genome. In fact, when inspecting the estimated alpha matrix for zero context (Table 1) and a context of one nucleotide to each side (Supplementary Table S8), it is evident that it is not symmetric. For the latter there are very large deviations from symmetry for contexts with NCG, where N can be any base. In these contexts, αCT is consistently 10-20 times larger than αTC corresponding to a strong tendency to mutate from CG to TG. Even if the α matrix depends on a small context, the substitution still depends on the full context of the nucleotide distribution. This construction is very attractive, because substitution models estimated from variants alone need to have small contexts due to the limited number of variants and the strong sampling biases. There are strong context dependencies of nucleotides in genomes. We have shown how one can estimate a model of the nucleotide probabilities depending on contexts up to 14 nucleotides to each side. Building on these models, it was shown how it is possible to make models of mutations that combine the context dependent nucleotide probabilities with a mutation matrix, called the alpha matrix, to give mutation probabilities ("mutabilities") that depend on the same large context. It was shown that these models fit observed mutations very well and especially somatic ones. Importantly, the alpha matrix can depend on a much smaller context of just one to three bases to each side and does not depend strongly on this parameter. These models can form the basis for a better understanding of human mutations and we believe it will be possible to use them in a wide range of applications from GWAS studies to analysis of somatic mutations. Conditional probability models for the central base The base at position i (chromosome, coordinate) in the reference genome is called xi and the symmetric sequence context around it is called $$ \begin{aligned} & s_{i}(k) = x_{i-k},x_{i-k+1},\ldots,x_{i-1},x_{i+1},x_{i+2},\ldots,x_{i+k}. 
\end{aligned} $$ If it is clear from the context which k, we call it si to ease notation. To estimate the conditional probability of base b at position i, we use the counts n(b|si) of the occurrences in the same context throughout the reference genome (on both strands): $$ \begin{aligned} & P(b|s_{i}) = \frac{n(b|s_{i})-\delta_{b,x_{i}}}{N(s_{i})-1}, \end{aligned} $$ $$N(s_{i}) = \sum_{b} n(b|s_{i}). $$ We use the Kronecker \(\phantom {\dot {i}\!}\delta _{b,x_{i}}\), which is 1 if xi=b and otherwise 0, to ensure that we only count other contexts, when estimating probabilities at position i. This is leave-one-out cross-validation and is discussed further below. For large contexts, the counts become small and thus the probabilities cannot be reliably estimated. To interpolate between different orders of the model, we use regularization by pseudo-counts obtained from the k−1 model. Specifically, for order k, we define pseudo-counts $$r(b|s_{i}(k)) = \gamma P(b|s_{i}(k-1)), $$ where γ is the strength of pseudo-counts. Now the model of order k is estimated as before, but using the actual counts plus pseudo-counts, $$P(b|s_{i}(k)) = \frac{n(b|s_{i}(k))-\delta_{b,x_{i}}+r(b|s_{i}(k))}{N(s_{i}(k))-1+\gamma}. $$ The advantage of pseudo-counts is that they have minor influence, when there is plenty of data (actual counts are high), but have strong effect at low counts. With k=4 counts are on average 6∗109/49≃23000, so we assume that psudo-counts are not needed. Therefore, our interpolated model starts with unregularized estimates for k=4, and then use the pseudo-counts iteratively for k=5 to k=7 for the interpolated model. We used a strength of γ=100 for the pseudo-counts (a few experiments showed that the model is relatively robust to changes in γ, see below). In a Markov model of order k, the probability of a base is conditioned on the k previous bases. If we redefine the k-context in (1) to be the k previous bases, $$s_{i}(k) = x_{i-k},x_{i-k+1},\ldots,x_{i-1}, $$ we can use exactly the same formulation as above. In this case however, the context size is not 2k letters as above, but only k letters. Therefore, one can estimate Markov models up to sizes around k=14 for the human genome, and we used a model interpolated from k=8 to k=14 analogously to the central interpolated model described above. Due to the interpolation, larger k are possible, and we performed a small experiment with k ranging from 10 to 20 and with four different values of the interpolation constant γ resulting in Supplementary Fig. S2. These tests were done only on chromosome 20 with a model estimated from all chromosomes except 20. Although small gains can be obtained with larger k values and different γ, we decided to stick to our initial choice of k=14 and γ=100. Estimating a "forward" Markov model from both strands of the human genome will automatically make it strand-symmetric. For a given position in the genome, the model can therefore give two sets of base probabilities: one for the forward strand and one for the reverse strand. Our final Markov probabilities are the average between the two as described in the main text and referred to as bidirectional. Cross-validation Our way of estimating the conditional probability of seeing one of the four bases given the surrounding context can be seen as a leave-one-out procedure. In particular, the estimate depends on the reference base at the considered position as well as the context. 
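Returning to the pseudo-count interpolation defined above, the recursion can be sketched for the Markov case as follows (illustrative Python, our names; counts is assumed to give genome-wide occurrence counts for a context of any length, which the BWT index described under Model implementation provides; the leave-one-out subtraction is omitted for brevity, and for the central model the lower-order context would instead drop the outermost base on each side):

def interpolated_prob(counts, context, base, k_min=8, gamma=100.0):
    # P_k(b|s) = (n(b|s) + gamma * P_{k-1}(b|s')) / (N(s) + gamma),
    # where s' drops the most distant 5' base; the recursion bottoms out at
    # k_min (8 for BM14), which is estimated without pseudo-counts
    n = counts[context]
    total = sum(n.values())
    if len(context) == k_min:
        return n.get(base, 0) / total
    lower = interpolated_prob(counts, context[1:], base, k_min, gamma)
    return (n.get(base, 0) + gamma * lower) / (total + gamma)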
To obtain an estimate that is independent of the reference base at the position, a natural way to proceed is to consider the average of the four base-dependent estimates over all occurrences of the given context. This average turns out to be equal to the estimate that includes all positions. To see this, average (2) over all sites (skipping the k dependence for clarity) gives the probability of a base b: $$\bar P(b|s) = \frac{1}{N(s)} \sum_{b'} n(b'|s)\frac{n(b|s)-\delta_{b,b'}}{N(s)-1}. $$ Here the base we are summing over is called b′ to distinguish it from the base b in question. Since \(\sum _{s} n(b'|s) \delta _{b,b'} = n(b|s)\), we get $${}\bar P(b|s) = \frac{1}{N(s)(N(s)-1)}(N(s)n(b|s)-n(b|s))= \frac{n(b|s)}{N(s)}. $$ We also assessed our models by cross-validation by chromosomes. One chromosome was used as test data, and the remaining chromosomes as training data. We repeated this step 24 times to calculate the fraction correct predictions for each chromosome. Substitution models A simple model estimates mutability as the fraction of all sites with context \(\hat s\) having a specific mutation. More specifically, $$ \begin{aligned} & P_{\text{Simple}} (a \rightarrow b| \hat s) = \frac{n(a\rightarrow b|\hat s)}{n(a|\hat s)}. \end{aligned} $$ Here \(n(a\rightarrow b|\hat s)\) is the number of observed mutations a→b in context \(\hat s\) and \(n(a|\hat s)\) is the number of times we see reference base a in context \(\hat s\) (as above). We use \(\hat s\) to indicate that the context may be different from the context s for the genome model above. We have used this model with a symmetric context of three bases to each side, which we call the simple model. We will now derive a continuous time Markov model with context dependent substitution rates μab|s that takes the nucleotide distribution into account. We also assume a constant evolutionary time, which is infinitesimally small compared to the rates, so we can approximate the substitution probability by the first-order term in the Taylor expansion of an exponential $$P(a\rightarrow b|s) \simeq \delta_{a,b} + \mu_{ab|s}, $$ where time is set to 1. The diagonal rates are \(- \sum _{b \neq a} \mu _{ab|s}\), so in the following we will not write the diagonal terms. For a stationary, reversible Markov model with P(a|s) as equilibrium probabilities the rates can be written as $$P(a\rightarrow b|s) \simeq \mu_{ab|s} = \alpha_{ab|s} P(b|s) \text{~~~~~(\(a\neq b\))}. $$ with a symmetric matrix αab. This is the general time-reversible six-parameter model (see e.g. [19]). Inspired by this model, we assume that mutability is given by the same equation, but without requiring that the nucleotide distribution is the equilibrium distribution and without requiring that α is symmetric. The above expression factorizes the rates into the nucleotide distribution and the α-term that encapsulates the mutations. 
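For concreteness, the simple estimate above is just a ratio of counts over the reduced context; a sketch with our data layout (variant_counts would be tabulated from the observed SNVs and context_counts from the genome):

def simple_mutability(variant_counts, context_counts, a, b, s_hat):
    # P_Simple(a -> b | s_hat) = n(a -> b | s_hat) / n(a | s_hat)
    # variant_counts[(a, b, s_hat)]: observed a -> b variants with context s_hat
    # context_counts[(a, s_hat)]:    genome-wide occurrences of a in context s_hat
    return variant_counts.get((a, b, s_hat), 0) / context_counts[(a, s_hat)]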
Now we assume the αs depend on a smaller context \(\hat s\) than the context s for the genome model P(a|s), so the above can be written as $$ \begin{aligned} & P(a\rightarrow b|s) \simeq \mu_{ab|s} = \alpha_{ab|\hat s} P(b|s) \text{~~~~~(\(a\neq b\))} \end{aligned} $$ In analogy with (3), P(a→b|s)=n(a→b|s)/n(a|s) with s instead of \(\hat s\), so combining with the above $$n(a\rightarrow b|s) \simeq n(a|s) \alpha_{ab|\hat s} P(b|s) \text{~~~~~(\(a\neq b\))} $$ To estimate the αs we sum over all contexts that contains \(\hat s\), which we write as \(s|\hat s \subseteq s\), so $$n(a\rightarrow b|\hat s) = \sum_{s|\hat s \subseteq s} n(a\rightarrow b|s) \simeq \alpha_{ab|\hat s} \sum_{s|\hat s \subseteq s} n(a|s) P(b|s) $$ The last sum depends only on the nucleotide distribution. It can be rewritten as a sum over all positions in the genome, where the reference base, ri, equals a and where the context is \(\hat s\). We call this term \(Z_{ab|\hat s}\), $${}Z_{ab|\hat s} \,=\, \frac{1}{n(a|\hat s)} \sum_{s|\hat s \subseteq s} n(a|s) P(b|s)= \frac{1}{n(a|\hat s)} \sum_{i|r_{i} = a \land \hat s \subseteq s_{i}} P(b|s_{i}), $$ For convenience, it is normalized by \(n(a|\hat s)\), so it is the average probability of base b over all positions with reference base a and context \(\hat s\). As an estimate of α we then have $$\alpha_{ab|\hat s} = \frac{1}{Z_{ab|\hat s}} \frac{n(a\rightarrow b|\hat s)}{n(a|\hat s)} = \frac{P_{\text{Simple}} (a\rightarrow b|\hat s)}{Z_{ab|\hat s}} $$ Note that we can rewrite the original probability (4) in terms of the simple model as $$P(a\rightarrow b|s) \simeq \frac{P(b|s)}{Z_{ab|\hat s}} P_{\text{Simple}} (a \rightarrow b| \hat s) $$ for \(\hat s \subseteq s\). The factor is 1 when \(\hat s=s\), so the models are identical as they should be when they use the same context. The equation directly shows how the wider context from the genome model can modulate the simpler estimate. If the probability of base b in context s is larger than the mean \(Z_{ab|\hat s}\), the mutability becomes larger than in the simple model, and if it is smaller, the mutability becomes smaller. The first order approximation assumes the rates are small. When calculating the total mutability of a site, we therefore use the approximation \(\phantom {\dot {i}\!}1-P(a\rightarrow a|s) \simeq 1-e^{\mu _{aa|s}}\). For small α's it makes little difference whether it is the exponentiated form or not. The human reference genome, GRCh38.p13, was downloaded from NCBI (released March 2019 by Genome Reference Consortium). We considered only primary assemblies of chromosomes 1 to 22 and X, Y. Genomic annotation bed files were downloaded from UCSC Table Browser. These are 3'-UTR, 5'-UTR, CDS, Introns, Genes, and Repeats. Conservation scores file (PhastCons100way) was downloaded from the UCSC as well. Variants were downloaded from the 1000 Genomes project (released March 2019, phased 20190312_biallelic_SNV_and_INDEL) in VCF format. The INDELs were filtered from 1KGP dataset. ClinVar (clinvar_20200310.vcf) [27, 28] and somatic mutations (CosmicCodingMuts.vcf and CosmicNonCodingVariants.vcf) [29] data were obtained from NCBI and COSMIC, respectively. The genomes and GFF files of Arabidopsis thaliana (TAIR10.1), Caenorhabditis elegans (WBcel235), Escherichia coli (str. K-12 substr. MG1655), Saccharomyces cerevisiae (R64) were downloaded from NCBI. Model implementation Counting of k-mers and estimation of probabilities is implemented in the C programming language. 
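Before turning to the implementation, the estimation of α and its use for a per-site mutability derived above can be condensed into a short sketch (illustrative Python, our data layout; genome_probs gives P(b|s) from the genome model, e.g. BM14, at each position):

import math

def z_value(genome_probs, positions_with_a_and_shat, b):
    # Z_{ab|s_hat}: average P(b|s_i) over all positions i whose reference base
    # is a and whose reduced context is s_hat
    return (sum(genome_probs[i][b] for i in positions_with_a_and_shat)
            / len(positions_with_a_and_shat))

def alpha(p_simple_ab_shat, z_ab_shat):
    # alpha_{ab|s_hat} = P_Simple(a -> b | s_hat) / Z_{ab|s_hat}
    return p_simple_ab_shat / z_ab_shat

def site_mutability(ref_base, probs_at_site, alphas_for_shat):
    # total mutability of a site: 1 - exp(mu_aa), where
    # mu_aa = -sum over b != a of alpha_{ab|s_hat} * P(b|s)
    rate = sum(alphas_for_shat[(ref_base, b)] * probs_at_site[b]
               for b in "ACGT" if b != ref_base)
    return 1.0 - math.exp(-rate)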
The program counts the contexts for each site using a Burrows-Wheeler transform (BWT) [30] rather than storing the k-mers, because it is much more efficient for the interpolated models. The program is called predictDNA and relies on an index built with the program makeabwt. The program makeabwt is used for construction of an index from a fasta file containing the genome sequences. If there are multiple sequences, they are concatenated with termination symbols in between and the suffixes are sorted. The BWT is constructed from the sorted suffixes and saved. An FM index [31] is constructed to ease the search of the BWT. To limit memory usage, the values are stored in first-level checkpoints for every \(2^{16}\) positions as long integers (8 bytes), and for every 256 positions the difference from the nearest first-level checkpoint is stored as a short integer (two bytes). We used an index containing both the forward and reverse complement strands of the genome. The program predictDNA uses the index to look up k-mers. This is done using the standard backward search of the BWT/FM-index [31]. The size of the resulting suffix interval equals the number of occurrences of the k-mer in the genome, and these counts are used for calculating the conditional probabilities. The advantage of using a BWT is that the index can be used with any k and thus facilitates the interpolated models. A naive approach using table-lookup would require a new table for each value of k and a table of \(4^{15} \simeq 10^{9}\) integers for k=14, which corresponds to 4GB of memory, and this would become 16GB for k=15, etc. The index used for this work uses around 8GB of memory. Model Performance We calculated the probabilities of the four bases for every position in the human genome using the software predictDNA we developed. We tested different k's, but used the same interpolation constant, γ=100, for all models. We counted the correct sites, for which the reference alleles gave the highest probabilities of the four bases, to calculate the fraction correct for each chromosome. Furthermore, we overlapped the bed files with the models' outputs via bedtools [32, 33] to get the feature-specific fraction correct and predicted probabilities. These were used to obtain the performance of our models for different regions of the human genome. Based on the CDS bed file and the human genome fasta file, we calculated average probabilities for the positions around the human 3' and 5' splice sites. We included 500 nucleotides before and 100 after the 3' splice site and, similarly, 500 before and 100 after the 5' splice site. In addition, we extracted the conservation scores of PhastCons100Way for the same regions [34]. These results are shown in Fig. 5. SNP Variants Analysis We kept only single nucleotide bi-allelic variants in the 1KGP, ClinVar and COSMIC databases for the following analysis, and we filtered out INDELs. Based on the central model and BM14 results, reference and alternative allele probabilities for each SNP site in these three databases were extracted. The triangle plots (Fig. 6) were made by plotting reference probabilities against alternative probabilities of all SNPs in the 1KGP database. In order to understand the possible asymmetry shown by the cluster of many sites in the corners of the triangle plot, we separated SNPs with allele frequency greater than 0, 0.01, 0.1 and 0.2. To present the different types of SNPs in coding and non-coding parts, we also made density plots of Pref minus Palt for SNPs in the 1KGP, ClinVar and COSMIC databases.
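As an illustration of the backward search that predictDNA performs, here is a toy FM-index in Python; it builds the BWT naively and scans for ranks instead of using the checkpointed index described above, so it only shows how the suffix interval of a k-mer (and hence its occurrence count) is obtained. All names and the toy sequence are ours, not from the released software.

```python
def build_fm_index(text):
    """Naive FM-index construction for illustration only (the real index uses
    checkpoints for memory efficiency). text must end with a sentinel '$'
    that sorts before all other symbols."""
    sa = sorted(range(len(text)), key=lambda i: text[i:])      # suffix array
    bwt = ''.join(text[i - 1] for i in sa)                     # char preceding each sorted suffix
    # C[c] = number of characters in text lexicographically smaller than c
    c_table, total = {}, 0
    for ch in sorted(set(text)):
        c_table[ch] = total
        total += text.count(ch)
    return bwt, c_table

def occ(bwt, ch, i):
    """Number of occurrences of ch in bwt[:i] (a real index reads checkpoints instead)."""
    return bwt[:i].count(ch)

def count_kmer(bwt, c_table, kmer):
    """Backward search: returns the size of the k-mer's suffix interval,
    i.e. its number of occurrences in the indexed sequence."""
    lo, hi = 0, len(bwt)
    for ch in reversed(kmer):
        if ch not in c_table:
            return 0
        lo = c_table[ch] + occ(bwt, ch, lo)
        hi = c_table[ch] + occ(bwt, ch, hi)
        if lo >= hi:
            return 0
    return hi - lo

# Example: count occurrences of a 3-mer in a toy sequence
bwt, c = build_fm_index("ACGTACGTACGA$")
print(count_kmer(bwt, c, "ACG"))   # -> 3
```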
Additionally, we used the ANNOVAR software [35] to annotate benign and damaging SNPs in 1KGP, as predicted by PolyPhen2 [17]. These are sites associated with single genetic diseases. We developed the substitution model to estimate the mutability of SNVs as described above. We estimated the α matrix for k = 0, 1, 2, 3 for all SNPs in 1KGP outside of Chr1. The model was applied to chromosome 1, where we calculated the probability of a mutation from BM14 and the alpha matrices. These were compared to observed SNVs in 1KGP, ClinVar, and COSMIC on Chr1. Test Bi-directional Markov Model on Other Species The bi-directional Markov model was tested on the chosen species as well as the human genome. We used k=10, γ=100, and interpolated from k=6, instead of using the same parameters as BM14, because of the smaller genome sizes of these species. The densities of the reference base probabilities were plotted (Supplementary Fig. S4A). We separated the CDS and non-coding regions of A. thaliana, C. elegans and S. cerevisiae according to the GFF files and made a density plot to show the distributions of CDS and non-coding regions for these three species. Our software is open source and available at GitHub: https://github.com/AndersKrogh/abwt/releases/tag/v1.2.1a. We wrote several scripts in Perl and Python for data analysis and these are all available in the GitHub release. The usage of these scripts is described in README files. All the figures were made in R, and this code is also available. All data used in this study are publicly available. All data can be downloaded from the NCBI, UCSC, 1KGP and COSMIC databases as mentioned in our methods. The links to the genomes of the species we used: Homo sapiens (https://www.ncbi.nlm.nih.gov/genome/?term=GRCh38.p13), Arabidopsis thaliana (https://www.ncbi.nlm.nih.gov/genome/?term=TAIR10.1), Caenorhabditis elegans (https://www.ncbi.nlm.nih.gov/genome/?term=WBcel235), Escherichia coli (https://www.ncbi.nlm.nih.gov/genome/?term=Escherichia+coli), Saccharomyces cerevisiae (https://www.ncbi.nlm.nih.gov/genome/?term=Saccharomyces+cerevisiae). The CDS, Introns, 3'-UTR, 5'-UTR, Genes, Repeats and Conservation score files were downloaded from the UCSC Table Browser (https://genome.ucsc.edu/cgi-bin/hgTables). 1000 Genomes Project (http://ftp.1000genomes.ebi.ac.uk/vol1/ftp/data_collections/1000_genomes_project/release/20190312_biallelic_SNV_and_INDEL/). clinvar_20200310 was used for clinical SNP analysis (https://ftp.ncbi.nlm.nih.gov/pub/clinvar/vcf_GRCh38/archive_2.0/2020/). Coding and non-coding mutations of COSMIC (https://cancer.sanger.ac.uk/cosmic/download). A Correction to this paper has been published: https://doi.org/10.1186/s12864-022-08490-z BM14: Bidirectional Markov model with 14 bases as context; p.p.: percentage points; CDS: Coding sequence; Chr: Chromosome; Pref: Probability of reference; Palt: Probability of alternative; 1KGP: 1000 Genomes Project; SNP: Single nucleotide polymorphism; SNV: Single nucleotide variant; BWT: Burrows-Wheeler transform. Schubert I, Vu GT. Genome stability and evolution: attempting a holistic view. Trends Plant Sci. 2016; 21:749–57. Cooper DN, Youssoufian H. The CpG dinucleotide and human genetic disease. Hum Genet. 1988; 78:151–5. Hess ST, Blake JD, Blake RD. Wide variations in neighbor-dependent substitution rates. J Mol Biol. 1994; 236:1022–33. Krawczak M, Ball EV, Cooper DN. Neighboring-nucleotide effects on the rates of germ-line single-base-pair substitution in human genes. Am J Hum Genet. 1998; 63:474–88. Aggarwala V, Voight BF.
An expanded sequence context model broadly explains variability in polymorphism levels across the human genome. Nat Genet. 2016; 48:349–55. Carlson J, Locke AE, Flickinger M, Zawistowski M, Levy S, Myers RM, Boehnke M, Kang HM, Scott LJ, Li JZ, et al. Extremely rare variants reveal patterns of germline mutation rate heterogeneity in humans. Nat Commun. 2018; 1:1–13. Forsdyke DR. Complementary oligonucleotides rendered discordant by single base mutations may drive speciation. Biol Theory. 2021; 27:1–5. Zhu Y, Neeman T, Yap VB, Huttley GA. Statistical methods for identifying sequence motifs affecting point mutations. Genetics. 2017; 205:843–56. Lind PA, Andersson DI. Whole-genome mutational biases in bacteria. Proc Natl Acad Sci. 2008; 105:17878–83. Pearson CE, Edamura KN, Cleary JD. Repeat instability: mechanisms of dynamic mutations. Nat Rev Genet. 2005; 6:729–42. Zavolan M, Kepler TB. Statistical inference of sequence-dependent mutation rates. Curr Opin Genet Dev. 2001; 11:612–5. Thijs G, Lescot M, Marchal K, Rombauts S, B. DM, Rouze P, Moreau Y. A higher order background model improves the detection of regulatory elements by Gibbs sampling. Bioinformatics. 2001; 17:1113–22. Li H, Durbin R. Fast and accurate short read alignment with burrows–wheeler transform. Bioinformatics. 2009; 25:1754–60. Piovesan A, Pelleri MC, Antonaros F, Strippoli P, Caracausi M, Vitale L. On the length, weight and gc content of the human genome. BMC Res Notes. 2019; 12(1):1–7. Bachtrog D, Charlesworth B. Towards a complete sequence of the human Y chromosome. Genome Biol. 2001; 2:1016–1. Consortium TGP. A global reference for human genetic variation. Nature. 2015; 526:68–74. Adzhubei IA, Schmidt S, Peshkin L, Ramensky VE, Gerasimova A, Bork P, Kondrashov AS, Sunyaev SR. A method and server for predicting damaging missense mutations. Nat Methods. 2010; 7(4):248–9. Tavaré S. Some probabilistic and statistical problems in the analysis of dna sequences. Lect Math Life Sci. 1986; 17:57–86. Felsenstein J, Felenstein J. Inferring Phylogenies, vol 2. Sunderland: Sinauer Associates; 2004. Zhou J, Troyanskaya OG. Predicting effects of noncoding variants with deep learning–based sequence model. Nat Methods. 2015; 12(10):931–4. Kelley DR, Snoek J, Rinn JL. Basset: learning the regulatory code of the accessible genome with deep convolutional neural networks. Genome Res. 2016; 26:990–9. Ji Y, Zhou Z, Liu H, Davuluri RV. Dnabert: pre-trained bidirectional encoder representations from transformers model for dna-language in genome. Bioinformatics. 2021; 37(15):2112–20. Grønbæk C, Liang Y, Elliott D, Krogh A. Prediction of DNA from context using neural networks. bioRxiv. 2021. Lander ES, Linton LM, Birren B, Nusbaum C, Zody MC, Baldwin J, Devon K, Dewar K, Doyle M, FitzHugh W, Funke R, Gage D, Harris K, Heaford A, Howland J, Kann L, Lehoczky J, LeVine R, McEwan P, McKernan K, et al.Initial sequencing and analysis of the human genome. Nature. 2001; 409:860–921. de Koning APJ, Gu W, Castoe TA, Batzer MA, Pollock DD. Repetitive elements may comprise over two-thirds of the human genome. PLoS genetics. 2011; 7:1002384. Smit AFA, Hubley R, Green P. RepeatMasker Open-4.0. Unknown Month 2013. http://www.repeatmasker.org. Landrum MJ, Lee JM, Benson M, Brown GR, Chao C, Chitipiralla S, Gu B, Hart J, Hoffman D, Jang W, et al. ClinVar: improving access to variant interpretations and supporting evidence. Nucleic Acids Res. 2018; 46:1062–7. Landrum MJ, Lee JM, Riley GR, Jang W, Rubinstein WS, Church DM, Maglott DR. 
ClinVar: public archive of relationships among sequence variation and human phenotype. Nucleic Acids Res. 2014; 4:980–5. Tate JG, Bamford S, Jubb HC, Sondka Z, Beare DM, Bindal N, Boutselakis H, Cole CG, Creatore C, Dawson E, et al. Cosmic: the catalogue of somatic mutations in cancer. Nucleic Acids Res. 2019; 47:941–47. Burrows M, Wheeler DJ. A block-sorting lossless data compression algorithm. Technical report. 1994. Ferragina P, Manzini G. Opportunistic data structures with applications. In: Proceedings 41st Annual Symposium on Foundations of Computer Science. IEEE: 2000. p. 390–8. Quinlan AR, Hall IM. BEDTools: a flexible suite of utilities for comparing genomic features. Bioinformatics. 2010; 26:841–2. Quinlan AR. BEDTools: the Swiss-army tool for genome feature analysis. Curr Protoc Bioinforma. 2014; 47:11–2. Castle JC. SNPs occur in regions with less genomic sequence conservation. PLoS ONE. 2011; 6:20660. Wang K, Li M, Hakonarson H. ANNOVAR: functional annotation of genetic variants from high-throughput sequencing data. Nucleic Acids Res. 2010; 38:164. We thank Hanne Munkholm for her big help and support with compute servers. YL acknowledges China Scholarship Council (Grant 201804910693) for Ph.D. financial support. AK and PF acknowledge visiting fellowship support from the Italian Ministry for Education, University and Research for the programme "Dipartimenti di Eccellenza 20182022D15D18000410001" delivered to University of Torino. The funding bodies played no role in the design of the study and collection, analysis, and interpretation of data and in writing the manuscript Department of Computer Science, University of Copenhagen, Copenhagen, Denmark Yuhu Liang & Anders Krogh Department of Biology, University of Copenhagen, Copenhagen, Denmark Yuhu Liang, Christian Grønbæk & Anders Krogh Present address: Novo Nordisk Foundation Center for Basic Metabolic Research, University of Copenhagen, Copenhagen, Denmark Christian Grønbæk Department of Medical Sciences, University of Torino, Torino, Italy Piero Fariselli Center for Health Data Science, University of Copenhagen, Copenhagen, Denmark Anders Krogh Yuhu Liang AK and PF initiated the project. YL and AK performed most analyses and drafted the paper with assistance from CG and PF. All authors participated in revision and approved the final version. Correspondence to Anders Krogh. The original version of this article was revised: Christian Grønbæk was missing an affiliation in the original publication. Supplementary tables: Table S1, S2, S3, S4, S5, S6, S7, S8. Supplementary figures: Figure S1, S2, S3, S4, S5. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. 
The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data. Liang, Y., Grønbæk, C., Fariselli, P. et al. Context dependency of nucleotide probabilities and variants in human DNA. BMC Genomics 23, 87 (2022). https://doi.org/10.1186/s12864-021-08246-1 DNA context Markov model DNA substitution model
Ocean acidification effects on in situ coral reef metabolism Steve S. Doo, Peter J. Edmunds & Robert C. Carpenter Scientific Reports volume 9, Article number: 12067 (2019) The Anthropocene climate has largely been defined by a rapid increase in atmospheric CO2, causing global climate change (warming) and ocean acidification (OA, a reduction in oceanic pH). OA is of particular concern for coral reefs, as the associated reduction in carbonate ion availability impairs biogenic calcification and promotes dissolution of carbonate substrata. While these trends ultimately affect ecosystem calcification, scaling experimental analyses of the response of organisms to OA to consider the response of ecosystems to OA has proved difficult. The benchmark of ecosystem-level experiments to study the effects of OA is provided through Free Ocean CO2 Enrichment (FOCE), which we use in the present analyses for a 21-d experiment on the back reef of Mo'orea, French Polynesia. Two natural coral reef communities were incubated in situ, with one exposed to ambient pCO2 (393 µatm), and one to high pCO2 (949 µatm). Our results show a decrease in 24-h net community calcification (NCC) under high pCO2, and a reduction in nighttime NCC that attenuated and eventually reversed over 21-d. This effect was not observed in daytime NCC, and it occurred without any effect of high pCO2 on net community production (NCP). These results contribute to previous studies on ecosystem-level responses of coral reefs to the OA conditions projected for the end of the century, and they highlight potential attenuation of high pCO2 effects on nighttime net community calcification. Tropical coral reefs provide coastal protection, livelihoods, and food to millions of people1. However, these goods and services are threatened by rising concentrations of atmospheric carbon dioxide (CO2) from anthropogenic sources2.
A result of rising atmospheric CO2 concentrations is global climate change (GCC), which for tropical coral reefs is causing profound modifications in benthic community structure and function through thermally-driven mass bleaching events3,4. Ocean acidification (OA) is a process through which dissolution of CO2 into seawater lowers the calcium carbonate saturation state of seawater (Ω)5. In the marine environment, OA is expected to negatively affect organisms which produce calcareous shells and skeletons more than non-calcifying taxa6,7,8. These effects will be experienced acutely by coral reefs, where the physical structure of the calcareous substratum is built by scleractinian corals and calcified algae7. OA is predicted to severely impact the underlying biogenic substrata of coral reefs through increased dissolution in response to acidifying seawater conditions9,10. Field-based observations in natural CO2 seeps have shown decreased coral cover, diversity, and function in response to acidified conditions11. However, changes in carbonate chemistry across large spatial scales can interact with local effects to modulate processes that contribute to reef carbonate production (e.g. calcification and bioerosion)12. A common theme across papers addressing this topic is that OA is threatening the underlying biogenic physical structure of reefs and, therefore, understanding these effects has become a central objective of coral reef research in the 21st Century13. Depression of Net Ecosystem Calcification (NEC), the balance of gross calcification and gross dissolution, has been predicted for coral reefs in response to OA and warming14,15. However, the complexities of understanding scaling effects16 as well as feedback loops between the inorganic substrata and living biota14 have resulted in difficulties attributing natural variation in NCC rates, and limited the ability to detect and attribute OA impacts on coral reefs. Laboratory-based studies with constructed communities have shown OA will increase dissolution of coral reefs17,18, as well as decrease their functionality through an alteration of the relationship between photosynthesis and calcification19. While the impacts of OA on corals, other reef organisms, and reef communities are relatively well known7, in situ studies testing community-level responses are needed to predict how marine biota will respond to a changing climate20. Experiments conducted in situ, including those employing a Free Ocean CO2 Enrichment (FOCE) approach, are the benchmark for ecological relevance for determining OA effects on ecosystem function20 yet there are substantial challenges to implementing FOCE approaches on coral reefs. There are great benefits to addressing these challenges, however, as FOCE experiments embrace the natural complexity of ecosystems in terms of evaluating their response to OA, notably through emergent properties of multiple taxa interacting in a chemically and physically complex environment. To date, there have been two projects in which the effects of elevated CO2 have been empirically tested in situ on coral reefs. The first, by Albright et al. 2018, measured NEC of the back reef community at One Tree Islands, Australia, in response to short-term pulses of acidified seawater over multiple days15. 
A second project was conducted on Heron Island, Australia, in which deployment of technology described as a "coral proto-FOCE" (cp-FOCE) showed decreases in coral calcification21, and alteration in the boron isotopic composition of coral skeletons22. Here, we significantly expand on these previous experiments by conducting a FOCE experiment to test the effects of predicted end-of-century OA conditions on NCC and Net Community Production (NCP) of a natural back reef community in Mo'orea, French Polynesia. Our results show how the ecosystem function of coral reefs (NCC and NCP) will be altered by OA during the day and night, resulting in improved accuracy of predictions of the effects of OA on coral reef metabolism. Efficacy of treatment conditions Our study presents the first community metabolism results of the deployment of a FOCE experiment on a shallow, back reef community, and it describes the response of this community to high pCO2 under ecologically relevant environmental conditions23. Our experiment was conducted on two plots of coral reef (5.00 × 0.55 m) (Fig. 1A) that were similar to one another in benthic community composition at the start of the experiment (the cover of corals and crustose coralline algae [CCA] differed <4% between the plots; Fig. S1), and were ~1 km from the shore. Acrylic flumes (1.5-cm wall thickness with UV-transparent tops) without floors were sealed over each plot to allow for measurements of metabolism (Fig. 1B), and the manipulation of seawater pCO2 over 21 days (2–23 May 2018). Unidirectional flow speeds within the flumes were maintained at ~14 cm s−1 using motor-driven propellers, and were similar to average, long-term flow speeds recorded on the back reef (Table S1). Using an autonomous CO2 dosing system deployed on a floating platform adjacent to the flumes, ambient or elevated pCO2 conditions were created for each community (Fig. 1), which were assigned randomly to one of the two flumes. Elevated pCO2 was maintained by CO2 gas-enrichment to in situ seawater, with pCO2 regulated through negative feedback provided by a pH electrode fitted to the flume (Fig. 1C; see methods). (A) Aerial WorldView-2 image (Copyright 2018, Digital Globe, Inc.) of the north shore of Mo'orea, French Polynesia, with the study site shown with a yellow cross. (B) Photograph of the in situ flumes with divers sampling seawater, and the autonomous floating platform above and the north shore of Mo'orea in the background. (C) Schematic of the study site showing the layout of the experiment including the flumes and floating platform. Pre-selected undisturbed plots of reef were used in this study for the community incubated in high CO2 (D) and ambient conditions (E). Our experiment contrasted the effects of ambient pCO2 (393 ± 4 µatm, n = 950) and elevated pCO2 targeted at ~1000 µatm (949 ± 7 µatm, n = 950; corresponding to a seawater pHT of ~7.72; Fig. S2). These conditions reflect global atmospheric CO2 concentrations that equilibrate with seawater to create Ωarag of 3.98 and 2.09, respectively (Fig. S3,B). The elevated pCO2 is predicted to occur by the end of this century under a pessimistic projection (RCP 8.5) of human actions to control CO2 emissions24. NCC and NCP measurements were made over three consecutive days arranged into four sampling blocks equally spread over the 21-d experiment (i.e.,12 incubation days; see methods). Each day of measurements consisted of determinations in the morning, mid-day, afternoon, and once at night. 
Effects of ocean acidification on in situ reef metabolism Following initiation of the experiment, 24-h NCC was depressed by 47% within the first day of the high pCO2 treatment, and remained consistently depressed relative to 24-h NCC under ambient pCO2 (Fig. 2A). This effect corresponded to a 25% decrease in NCC per unit Ωaragonite decline, which is similar to the effect size reported in a previous meta-analysis of the sensitivity of reef corals to OA7. Overall, there was a 49% reduction in daytime NCC under high versus ambient pCO2 (Fig. 2B), which corresponds to a 26% reduction in NCC per unit Ωaragonite decrease. At One Tree Island, Australia, NCC of a lagoon reef was reduced 43% per unit Ωaragonite reduction during the day, with this effect revealed through an experimental decrease in pH of ~0.14 from ambient (0.7 Ωaragonite reduction)15. In contrast, our study employed a greater decrease of pH (i.e., ~0.38), and a reduction in Ωaragonite of 1.89 between treatments, suggesting the decrease of NCC is not a linear function of Ω. On One Tree Reef, there was a higher cover of crustose coralline algae (CCA) (26%), and lower coral cover (12%) compared to the communities in our study (12.2% and 21.9%; CCA and coral, respectively; Fig. S1)15. This contrast in community structure at One Tree Reef (as described in15) versus the back reef of Mo'orea could indicate that the coral reef community studied at One Tree Island was more sensitive to declining seawater pH than the present coral reef community studied in Mo'orea10. Metabolism of back reef communities incubated under ambient and elevated pCO2. (A) Mean (±s.e.m., n = 3 d) change in 24-h NCC of communities exposed to ambient (393 µatm; blue) or high (949 µatm; red) pCO2 over 21-d. NCC was measured three times during the day (average within day) and once at night over four blocks of three days each, beginning on day 1 (initial values) and ending on day 21 (final). Pre-exposure values show NCC in flumes before CO2 dosing began. (B) Scatterplot showing the difference in daytime (three measures [colors] n = 36) NCC between ambient and high CO2 (ΔNCC, mmol CaCO3 m−2 h−1) as a function of days of the experiment (n = 36), with no linear relationship between the two (P = 0.295, no line shown) (C) Scatterplot showing how the difference in nighttime NCC between ambient and high CO2 (ΔNCC) as a function of days of the experiment (n = 12), with the line showing the Model I linear relationship (P = 0.040). On the first day that the treatments were established in the present study, nighttime NCC of the coral reef community maintained under high pCO2 was depressed by 74% relative to ambient conditions. However, the magnitude of the pCO2-mediated depression of nighttime NCC attenuated over time, and after ~14 days the effect of pCO2 reversed, such that nighttime NCC in the elevated pCO2 treatment was ~1% higher than that under ambient conditions. After 21 days, nighttime NCC at elevated pCO2 was 27% greater than nighttime NCC at ambient pCO2 (Fig. 2C). Coral reefs, and in particular, the carbonate sediments packed within their framework, are predicted to transition to net dissolution under RCP 8.5, which was used in the present analysis to scale the pCO2 treatment applied9. This effect potentially is attributed to the role of OA in accelerating the dissolution of CaCO3 produced by calcareous organisms (e.g., high Mg-calcite producing CCAs vs aragonite producing corals21). 
The overall effect would result in a positive slope of NCC differences between ambient and high pCO2, shown as less reef dissolution over the 21-d experiment. While recent work has shown little acclimatization potential of reef calcifiers to OA25, the attenuation of the initially negative effect of high pCO2 on nighttime NCC of a shallow coral reef suggests these communities acquire some resistance to the negative effects of OA on nighttime NCC. The proximal mechanism(s) underlying this response are unknown, but potentially could include a rapid physiological acclimatization of calcification in organisms to high pCO2 as seen in polychaetes transplanted to CO2 vents26. While we did not observe a change in coral cover during the experiment, biological feedback loops caused by changes in relative composition of the microbial benthic community should be considered as a potential mechanism mediating the change in nighttime NCC27. Investigation into this mechanism might benefit from a further understanding of the relationship between biogenic processes that affect NCC and geochemical changes (e.g., mineral composition) within the pore water of the reef framework, as this interaction has been shown to be susceptible to OA28. Further work is required to explore these possibilities, and to determine whether the effects observed for nighttime NCC might also affect daytime NCC and/or 24-h NCC (Fig. S4). Our study provides the first experimental results of an in situ effect of OA on NCP (Fig. S4,A), in which NCP was depressed by 24% at high pCO2 compared to ambient pCO2 when averaged across the entire experiment (Fig. S5,A, S. Table 2). NCP during the second incubation (Days 7–9) was significantly higher than during the initial incubation period (Fig. S5,A, S. Table S2), an effect likely due to increased light levels experienced during this time (Fig. S3,D)19,29. Respiration (nighttime oxygen flux) was 41% higher in the community exposed to ambient pCO2 versus high pCO2 during the first 3-d incubation (Fig. S5,B; S. Table 2E), and community respiration increased by an average of 66% in both communities over the experiment (Fig. S5,B; S. Table 2E). Decreased NCP of a coral reef community exposed to OA differs from the null result recorded for the same response variable when a back reef community from Mo'orea was exposed to high pCO2 (1146 µatm) for four months19. While CCA cover increased in both treatment groups in the present study, a greater proportional increase in algal turf cover and cyanobacteria occurred in the reef community maintained at high versus ambient CO2 (Fig. S1). This outcome suggests fast settling turf/cyanobacteria may drive the changes observed for NCP and R. Functional changes of reef communities in response to OA Functional shifts in coral reef communities that alter the ratio of primary production to calcification are reflected in the slope of the NCP-NCC relationship, with a reduction in this relationship representing degradation in reef function, generally resulting from a shift in the dominant benthic community structure from calcifying organisms to algal-dominated communities30,31. In the present study, we did not observe any change in the NCP-NCC slope under high pCO2, (0.077 ± 0.027 for ambient conditions; 0.086 ± 0.028 mmol O2 m−2 h−1 for high CO2; slope ± 95% CI; χ2 = 0.142, p = 0.706; Fig. 3). 
Previous field studies have documented that a change in the NCP-NCC slope reflects a change in benthic community composition31,32, although the communities in both of the present treatments did not change significantly over time (Fig. S1,A). A previous study by Page et al. (2016) comparing the response of mixed vs homogenous benthic community composition representing reefs found in Kaneohe Bay (Oahu, Hawai'i) to acidified conditions showed similar results to those reported here in which the NCP-NCC slope did not change significantly as a result of increased CO229. However, in the present study, we observed that the intercept was reduced by 58% for the community incubated under high CO2 (0.927 ± 0.402) versus ambient CO2 (2.129 ± 0.478) (both ± 95% CI; WT = 16.913, p < 0.001). While previous studies of reef community metabolism have not reported alteration of the intercept of the NCP-NCC relationship as a metric of change in metabolic function, the alterations of elevation between treatment groups while maintaining similar slopes seen in our study indicate that a shift in overall community function occurred in the community exposed to high CO2, where increased rates of NCP, are required to achieve similar rates of NCC19. For the reef communities in the present experiment, these changes likely are caused by a combination of decreases in the calcification rates of individual organisms and an increase in overall dissolution under high pCO210. Model II regressions of NCP against NCC describing the relationships between primary production and calcification over all sampling times. All measurements corresponding to each time point (see methods) of ambient (393 µatm; blue) and high (949 µatm; red) pCO2 were regressed over the 21-d period. Global climate change is causing large declines in coral cover on reefs worldwide, particularly through the effects of mass bleaching3. Together with experimental analyses of the effects of global climate change and OA on corals, the rapid declines in coral cover have fueled concerns that coral reefs may not persist as calcified entities beyond the end of the current century15. The benthic community structure and ecological functions of coral reefs are tied intrinsically to the success of their ecosystem engineers, scleractinian corals and calcified algae, and these taxa already have been impacted by a wide variety of local disturbances that act in concert with global anthropogenic effects which cause a change in ecosystem functioning33. To date, predictions of how coral reefs will respond to OA have been based largely on species-level experiments testing the sensitivity of coral calcification to high pCO216. Our study highlights that the "organism approach" cannot capture the functional complexity arising from multiple organisms operating in concert in a natural environment subject to routine variation in select environmental conditions (e.g., light and temperature). An important implication of this outcome is that the whole coral reef community is more than the sum of its parts with regard to its response to OA. The present study highlights the importance of these emergent properties through in situ analysis of an undisturbed coral reef community. Our results suggest that these communities have the potential for adjustments to partially alleviate the negative consequences of OA on NCC. 
While this outcome might attenuate the risks of OA-related dissolution of the carbonate framework supporting most corals reefs10, the absence of a comparable effect on daytime NCC underscores that a further understanding of mechanisms that drive changes in NCC are needed. Our study was conducted on the back reef of the north shore of Mo'orea, French Polynesia, which is a high volcanic island in the South Pacific. A custom-designed Shallow Coral Reef Free Ocean CO2 Enrichment (SCoRe-FOCE) system23 was used to enrich pCO2 to levels projected for 2100 under representative concentration pathway (RCP) 8.5, which assumes a "business as usual" scenario with regards to anthropogenic emission of CO224. In February 2016, two plots were identified in the back reef, each 5.00 m long by 0.55 m wide, and they were selected to have a benthic community similar to this back reef in 200718, which consisted of 22% coral, 12% CCA, 23% turf algae, and 43% sand and rubble. Although this community structure differed from that occurring in 2018 when the experiment was conducted, our goal was to explore the response of an average back reef community, which is well represented by the state of the back reef in Mo'orea in 2007. When the study plots first were chosen, coral cover had declined compared to 2007 and, therefore, the community structure was augmented through transplantation of a few coral colonies from the adjacent reef to the study plots. Transplantations were completed >6 months prior to initiation of the incubations. Two-weeks before the start of the experiment (15 April 2018), two, 1.5-cm thick clear acrylic flumes (5.0 × 0.55 × 0.55 m (length × width × height) with UV-transparent tops were secured to fiberglass rails previously attached to the reef at the perimeter of each study plot. The internal volumes of each flume and the return-section was ~2,300 L. The flume was secured to the reef with stainless steel threaded rods drilled and epoxied into the carbonate substratum, and was sealed to the reef using rubber tubing inflated with water and placed between the supporting rails and the reef surface. Rubber matting extending 30 cm from the rails and weighted using 10-kg sand bags augmented the seal. An indicator dye (Rhodamine B, Matheson Coleman & Bell) was injected (~1 mg L−1) into the closed flumes at the start and end of the experiment to evaluate the efficacy of the seal to the reef, and visual inspection was used to detect leaks; none were visible during ~2 h trials. CO2 enrichment The flumes were powered and controlled through an umbilical cable connecting them to a nearby floating platform fitted with solar panels, wind turbines, and batteries (Fig. 1). CO2 dosing to the flume was controlled through an Apex Aquacontroller (Neptune Systems), connected to an Atlas Scientific pH probe (ENV-40-pH), which controlled a solenoid (McMaster-Carr Model 5077T141) that injected CO2 gas into the flume from a 60-L gas cylinder on the floating platform. pH probes that controlled the autonomous dosing system were calibrated against pH values every 3 days using the m-cresol dye method (SOB 6 b Dickson34). Adjustments were made daily to maintain CO2 conditions within the flume (Fig. S2). The seawater in the ambient flow was not manipulated with respect to CO2. 
The pH of the elevated pCO2 flume was set to a daytime target (06:00–18:00 h) of 1000 µatm pCO2 (~7.70 pHTotal units), and the system was programed to decrease by 0.1 pH units (i.e., ~1300 µatm pCO2), at night (18:00–06:00 h) to mimic in situ diel oscillations of pCO2 recorded on the backreef of Mo'orea35 (Fig. S2). A SeaFET pH sensor (Durafet® pH sensor) was deployed in each flume to continually record seawater pH, and these instruments were calibrated every 3 days36. Calculation of carbonate chemistry parameters were performed in CO2SYSv2.1 using pHTotal and AT as the two input parameters. Incubation parameters and calculation of community metabolism During the experiment, each flume was flushed with ~200 L h−1 ambient seawater that was pumped from the reef within 50 m of the study plots. Sampling for NCC and NCP was performed on days 1–3, 7–9, 13–15, and 19–21 of the 21–d experiment (Fig. S3). Each day of sampling consisted of three incubations during the day (06:30 to 09:30 h, 10:00 to 14:00 h, and 14:30 to 17:30 h) and one incubation that extended over the night (18:00 to 06:00 h). During the measurements of metabolism, flushing of the flumes with seawater was halted, but CO2 treatments and flow conditions were maintained. To prevent hyperoxia and hypoxia in the flume during closed-circuit operation, seawater from the surrounding reef was pumped into the flume for 30 min in between each incubation to replace ~25% of the volume. For the flume maintained at high pCO2, flushing with ambient seawater between incubations decreased pCO2 to ambient levels. However, following cessation of flushing and prior to the next incubation, treatment conditions were restored to target pCO2 values (~1000 µatm) within 15–30 min. For each incubation, samples of seawater were collected at the beginning and end of each incubation to quantify seawater carbonate chemistry. Samples were drawn from the flume using a 60-mL syringe that was attached to a vinyl tube fitted with a shut off valve. Samples were transported immediately to the shore lab, where salinity was measured using a Thermo Scientific Orion Star A212 conductivity meter, then potentiometrically titrated following standard operating procedures (SOP 3b of Dickson et al. [2007]) using an automatic titrator (Mettler-Toledo T50) fitted with a Rondolino-sample carousel (Mettler-Toledo). The titrator was fitted with a Mettler pH probe (DGi-115) that was operated with certified HCl (Batch A13 Dickson Laboratory). Certified reference material (Dickson CRM Batch #138) was used to evaluate the accuracy of the total alkalinity (AT) measurements (SOP 3b34). NCC was quantified using the alkalinity anomaly method37 where \(\,{\rm{\Delta }}{A}_{Tfinal-initial}\) (µmol kg−1) was calculated from the difference between AT in final and initial salinity-normalized water samples from each flume: $$NCC=\frac{-({\rm{\Delta }}{A}_{Tfinal-initial})}{2t\times SA}\times \rho V$$ where t is time (h), SA is the planar surface area of the reef enclosed by the working section of the flumes (m2), ρ is the density of seawater (1.023 kg L−1 and calculated from average salinity, and temperature from each daily measurement), and V is the internal volume (L) of the flume (including the return sections). NCP was measured from the rate of change of O2 concentration as a function of time, with dissolved O2 concentrations measured using MiniDOT O2 sensors (Precision Measurement Engineering, Inc.) with one sensor in each flume. 
We chose to measure O2 flux as opposed to measuring DIC changes to maintain treatment conditions through CO2 dosing. Rates of change were determined using least squares linear regression of O2 concentration (mmol L−1) against time (h) (final units of mmol O2 m−2 h−1). All O2 sensors were within factory calibration, which is stable for ~1 year. NCP was calculated from O2 fluxes where DO is the change in O2 concentration (mg L−1), molar mass of O2 (32 g mol−1), SA is benthic surface area enclosed in the flumes (2.5 m2), t is incubation duration (h), and V is the volume of the flume (L). $$NCP=\frac{DO\,}{molar\,mass\,O2\times SA\times t}\times V$$ NCC and NCP To calculate time-integrated values for NCC and NCP (calculated over 06:30–09:30 h, 10:00–14:00 h, 14:30–17:30 h, and 18:00–06:00 h), hourly values were integrated over each 3–12-h incubation period and summed within each day to obtain 24-h values. For 12-hr daytime values, integrated 3 h incubation periods were summed from 06:00–18:00 h, and for 12-hr nighttime values, data were integrated from 18:00–06:00 h Analysis of NCC Twenty four hour NCC was analyzed using a two-way ANOVA in which pCO2 (ambient and high), and sampling period (incubations 1–4 throughout the whole experiment) were fixed factors, and 24-h NCC was the response variable. Each sampling period consisted of 3 consecutive days that were treated as statistical replicates. The effect of time (incubation day) on the difference of 24-h NCC between ambient and high pCO2 was tested using a Model I least squares linear regression, in which a significant slope indicated that the treatment effect (i.e., the difference in 24-h NCC between ambient and high pCO2) changed over time. Separate regressions were completed for daytime and nighttime NCC in order to distinguish between the effects of OA on NCC during the day and night. All analyses were performed in R. Analysis of NCP NCP and night community respiration (R) was analyzed in a similar way to NCC using a two-way ANOVA in which pCO2 (ambient and high), and time (incubations 1–4 throughout each day) were fixed factors, and 12-h NCP or R was the response variable. Similar to NCC, each sampling period consisted of 3 consecutive days that were treated as statistical replicates. Relationship between NCP and NCC A change in community function (sensu38) was evaluated from the relative scaling of calcification as a function of productivity, with decreases in this quotient generally representing a degradation of the reef, leading to increased primary producers, and decreased calcifiers. The relationship between NCP and NCC was calculated as the slope of the latter on the former (expressed as change in mmol CaCO3 per mmol O2) over the 21-d incubation, in which hourly NCP was regressed on hourly NCC using Model II regression31. The slope, elevation, and corresponding error (95% CI) for each of the flumes was calculated using a Major Axis (MA) approach, in which error on both x- and y-axes are accounted for39. Significance of differing slopes between the high and ambient CO2 treatment were evaluated using a Bartlett-corrected likelihood ratio statistic, and the p-value was calculated assuming a chi-squared distribution with 1 degree of freedom39. 
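For concreteness, the following is a minimal sketch of the two flux calculations (the NCC and NCP equations above) and the daily integration, written in Python rather than the R used for the study's statistical analyses; the function names and numerical inputs are illustrative, not values from the experiment.

```python
# Minimal sketch of the flux calculations defined above (illustrative values only).

def ncc(delta_at_umol_kg, hours, area_m2, volume_l, density_kg_l=1.023):
    """Net community calcification from the alkalinity anomaly:
    NCC = -dAT / (2 * t * SA) * rho * V.
    With dAT in umol kg-1, the result is in umol CaCO3 m-2 h-1 (divide by 1000
    for mmol); a decline in AT (negative dAT) gives positive NCC."""
    return -delta_at_umol_kg / (2.0 * hours * area_m2) * density_kg_l * volume_l

def ncp(delta_o2_mg_l, hours, area_m2, volume_l, molar_mass_o2=32.0):
    """Net community production from the O2 flux:
    NCP = dO / (molar mass O2 * SA * t) * V, in mmol O2 m-2 h-1
    when dO is in mg L-1."""
    return delta_o2_mg_l / (molar_mass_o2 * area_m2 * hours) * volume_l

def daily_integral(rates_and_durations):
    """Sum hourly rates weighted by incubation duration (h) to get a 24-h value."""
    return sum(rate * duration for rate, duration in rates_and_durations)

# Example: one 3-h daytime incubation in a ~2300-L flume over 2.5 m2 of reef
print(ncc(delta_at_umol_kg=-12.0, hours=3.0, area_m2=2.5, volume_l=2300.0))
print(ncp(delta_o2_mg_l=0.9, hours=3.0, area_m2=2.5, volume_l=2300.0))
```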
Difference in elevation (y-intercepts) between treatment groups was evaluated using a Wald statistic that tested for no difference among the y-axis intercepts, and similar to slope significance, this statistic was evaluated using a p-value assuming a chi-squared distribution with 1 degree of freedom39. This analysis was performed on an aggregation of all sampling points within the 21-d incubation (4 time points, 3 days per time point, 4 samples per day = 48 paired NCP/NCC measurements per flume), and analyzed using a model II linear regression with the package smatr in R. Spalding, M. et al. Mapping the global value and distribution of coral reef tourism. Marine Policy 82, 104–113 (2017). Peters, G. P. et al. Towards real-time verification of CO2 emissions. Nature Clim Change 7, 848–850 (2017). Hughes, T. P. et al. Global warming and recurrent mass bleaching of corals. Nature 543, 373–377 (2017). Hughes, T. P. et al. Spatial and temporal patterns of mass bleaching of corals in the Anthropocene. Science 359, 80–83 (2018). Orr, J. C. et al. Anthropogenic ocean acidification over the twenty-first century and its impact on calcifying organisms. Nature 437, 681–686 (2005). Kornder, N. A., Riegl, B. M. & Figueiredo, J. Thresholds and drivers of coral calcification responses to climate change. Glob Change Biol 78, 1277–12 (2018). Chan, N. C. S. & Connolly, S. R. Sensitivity of coral calcification to ocean acidification: a meta-analysis. Glob Change Biol 19, 282–290 (2012). Article ADS Google Scholar Kroeker, K. J. et al. Impacts of ocean acidification on marine organisms: quantifying sensitivities and interaction with warming. Glob Change Biol 19, 1884–1896 (2013). Eyre, B. D. et al. Coral reefs will transition to net dissolving before end of century. Science 359, 908–911 (2018). Comeau, S., Lantz, C. A., Edmunds, P. J. & Carpenter, R. C. Framework of barrier reefs threatened by ocean acidification. Glob Change Biol 22, 1225–1234 (2016). Fabricius, K. E. et al. Losers and winners in coral reefs acclimatized to elevated carbon dioxide concentrations. Nature Climate Change 1, 165–169 (2011). Silbiger, N. J., Donahue, M. J. & Brainard, R. E. Environmental drivers of coral reef carbonate production and bioerosion: a multi-scale analysis. Ecology 98, 2547–2560 (2017). Gattuso, J. P. et al. Contrasting futures for ocean and society from different anthropogenic CO2 emissions scenarios. Science 349, aac4722–aac4722 (2015). Andersson, A. J. & Gledhill, D. Ocean Acidification and Coral Reefs: Effects on Breakdown, Dissolution, and Net Ecosystem Calcification. Annu Rev Mar Sci 5, 321–348 (2013). Albright, R. et al. Carbon dioxide addition to coral reef waters suppresses net community calcification. Nature Publishing Group 555, 516–519 (2018). Edmunds, P. J. et al. Integrating the Effects of Ocean Acidification across Functional Scales on Tropical Coral Reefs. BioScience 66, 350–362 (2016). Comeau, S., Carpenter, R. C., Lantz, C. A. & Edmunds, P. J. Ocean acidification accelerates dissolution of experimental coral reef communities. Biogeosciences 12, 365–372 (2015). Comeau, S., Edmunds, P. J., Lantz, C. A. & Carpenter, R. C. Water flow modulates the response of coral reef communities to ocean acidification. Sci Rep 4, 108–6 (2014). Carpenter, R. C., Lantz, C. A., Shaw, E. & Edmunds, P. J. Responses of coral reef community metabolism in flumes to ocean acidification. Mar Biol 165, 66 (2018). Gattuso, J. P. et al. Free-ocean CO2 enrichment (FOCE) systems: present status and future developments. 
Biogeosciences 11, 4057–4075 (2014). Kline, D. I. et al. A short-term in situ CO2 enrichment experiment on Heron Island (GBR). Sci Rep 2, 10288–9 (2012). Georgiou, L. et al. pH homeostasis during coral calcification in a free ocean CO2 enrichment (FOCE) experiment, Heron Island reef flat, Great Barrier Reef. P Natl Acad Sci USA 112, 13219–13224 (2015). Srednick, G. et al. SCoRe FOCE: Novel in situ flumes to manipulate pCO2 on shallow tropical coral reef communities. Limnology and Oceanography Methods IPCC. Summary for Policymakers. In: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. (2013). Comeau, S. et al. Resistance to ocean acidification in coral reef taxa is not gained by acclimatization. Nature Climate Change 1–12, https://doi.org/10.1038/s41558-019-0486-9 (2019). Calosi, P. et al. Adaptation and acclimatization to ocean acidification in marine ectotherms: an in situ transplant experiment with polychaetes at a shallow CO2 vent system. Philos. Trans. R. Soc. Lond., B, Biol. Sci. 368, 20120444–20120444 (2013). O'Brien, P. A., Morrow, K. M., Willis, B. L. & Bourne, D. G. Implications of Ocean Acidification for Marine Microorganisms from the Free-Living to the Host-Associated. Front. Mar. Sci. 3, 1029 (2016). Eyre, B. D., Andersson, A. J. & Cyronak, T. Benthic Coral Reef Calcium Carbonate Sediment Dissolution in an Acidifying Ocean. Nature Publishing Group 4, 969–976 (2014). Page, H. N. et al. Differential modification of seawater carbonate chemistry by major coral reef benthic communities. Coral Reefs 35, 1311–1325 (2016). DeCarlo, T. M. et al. Community production modulates coral reef pH and the sensitivity of ecosystem calcification to ocean acidification. J. Geophys. Res. Oceans 122, 745–761 (2017). Takeshita, Y. et al. Assessment of net community production and calcification of a coral reef using a boundary layer approach. J. Geophys. Res. Oceans 121, 5655–5671 (2016). Cyronak, T. et al. Taking the metabolic pulse of the world's coral reefs. Plos One 13, e0190872–17 (2018). Kroeker, K. J., Kordas, R. L. & Harley, C. D. G. Embracing interactions in ocean acidification research: confronting multiple stressor scenarios and context dependence. Biol. Lett. 13, 20160802–4 (2017). Dickson, A. G., Sabine, C. L. & Christian, J. R. Guide to best practices for ocean CO2 measurements. PICES Special Publication, pp. 1–196 (PICES Special Publication, 2007). Hofmann, G. E. et al. High-Frequency Dynamics of Ocean pH: A Multi-Ecosystem Comparison. Plos One 6, e28983–11 (2011). Rivest, E. B. et al. Beyond the benchtop and the benthos: Dataset management planning and design for time series of ocean carbonate chemistry associated with Durafet®-based pH sensors. Ecological Informatics 36, 209–220 (2016). Kinsey, D. W. Alkalinity changes and coral reef calcification. Limnol. Oceanogr. 23, 989 (1978). Suzuki, A. & Kawahata, H. Carbon budget of coral reef systems: an overview of observations in fringing reefs, barrier reefs and atolls in the Indo-Pacific regions. Tellus B 55, 428–444 (2003). Warton, D. I., Wright, I. J., Falster, D. S. & Westoby, M. Bivariate line-fitting methods for allometry. Biological Reviews 81, 259–33 (2006). This study was funded by the US National Science Foundation (OCE 10-26851, 14-15268, and Mo'orea Coral Reef Long Term Ecological Research Program 12-36905) and California State University Northridge (CSUN). We thank the CSUN Science Shop (J. Ferree, R. Rojas, M. 
Hawthorne, and R. Arias) for design and construction of the floating platform and flumes, and G. Srednick, B. Shakya, J. Bergman, C. Lantz, S. Merolla, S. Ginther, A. Potter, A. Widrick, J. Serrano, L. Perng, A. Isaak, A. Wiryadimejo, for field support and equipment implementation. Research was completed under permits issued by the Haut-Commissariat de la République en Polynésie Francaise (DRRT) (Protocole d'Accueil 2015–2016). This is contribution number 292 of the CSUN Marine Biology Program. Department of Biology, California State University, Northridge, United States Steve S. Doo, Peter J. Edmunds & Robert C. Carpenter S.S.D. collected, analyzed, and wrote the main manuscript text. S.S.D., P.J.E., and R.C.C. designed the study, and revised the manuscript. All authors reviewed the manuscript and gave final approval for publication. Correspondence to Steve S. Doo. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Supplementary Figures and Tables for Manuscript Doo, S.S., Edmunds, P.J. & Carpenter, R.C. Ocean acidification effects on in situ coral reef metabolism. Sci Rep 9, 12067 (2019). https://doi.org/10.1038/s41598-019-48407-7
Co-channel interference management using eICIC/FeICIC with coordinated scheduling for the coexistence of PS-LTE and LTE-R networks Wan Chen, Ishtiaq Ahmad and KyungHi Chang EURASIP Journal on Wireless Communications and Networking 2017, 2017:34 In the Republic of Korea, a Long Term Evolution (LTE)-based public safety (PS)-LTE network is being built using 718~728 MHz for uplink and 773~783 MHz for downlink. However, the same bands are also assigned to the LTE-based high-speed railway (LTE-R) network, so great concerns and practical research on co-channel interference (CCI) management schemes are urgently required. In this paper, performance is analyzed and evaluated by considering the cases of non-RAN (radio access network) sharing and LTE-R RAN sharing by PS-LTE user equipments (UE). Since a train control signal requires high reliability and low latency in order to fulfill its mission-critical service (MCS) requirements, we give higher priority to LTE-R UEs during resource allocation under LTE-R RAN sharing by PS-LTE UEs. In addition, interference management schemes are more effective for the coexistence of PS-LTE and LTE-R networks under a RAN sharing environment. In this paper, we utilize enhanced inter-cell interference coordination (eICIC) and further enhanced ICIC (FeICIC) schemes to mitigate the interference from the PS-LTE network to the LTE-R network while improving LTE-R eNodeB (eNB) resource utilization by offloading more PS-LTE UEs to the LTE-R network. Moreover, a coordinated multipoint (CoMP) transmission scheme is considered among LTE-R eNBs to enhance LTE-R cell edge user performance. By employing FeICIC along with coordinated scheduling (CS) CoMP, the best throughput performance can be achieved under the case of RAN sharing. Keywords: PS-LTE, LTE-R, QoS priority, (F)eICIC In the Republic of Korea, the national disaster safety network project was started in 2014 [1]; it costs over 1.6 billion dollars and is being built using Long Term Evolution (LTE) for public safety on the 700 MHz frequency band. Frequency bands for the next-generation railway network and the e-navigation network over the marine environment are also assigned to the same 700 MHz bands as the public safety network. The target users of the public safety network are police, firefighters, soldiers, emergency medical workers, and so on, and the LTE-based high-speed railway (LTE-R) network will provide communication services to control trains and for train crews. Because railway wireless communication together with train control is crucial for the reliability and safety of railway operation, if both networks use the same frequency, great concerns and practical research on interference management schemes are urgently required. Radio access network (RAN) sharing is currently considered as a candidate for the coexistence of these two networks [2]. For the coexistence of public safety (PS)-LTE and LTE-R networks, which are assumed to use the same spectrum, active RAN sharing should be considered. LTE-R user equipment (UE) usually receives strong downlink (DL) signals from LTE-R eNodeBs (eNBs) due to the short distance to the railway.
Under the assumption of a very reliable LTE-R network deployment, we only consider LTE-R RAN sharing by PS-LTE UEs.

In the 3rd Generation Partnership Project (3GPP) LTE Rel. 8, inter-cell interference coordination (ICIC) was proposed, by which eNBs can communicate via the X2 interface and optimize scheduling for cell edge users. In 3GPP LTE Rel. 10, ICIC was enhanced to better support heterogeneous network (HetNet) deployments, resulting in eICIC [3]. The major difference is the additional time-domain ICIC, realized through almost blank subframes (ABS). During ABS, no traffic is transmitted except the control signaling and cell-specific reference signals, in order to mitigate the interference from macro eNodeBs (MeNB) to the cell edge UEs in small cells, especially the UEs in the cell range expansion (CRE) area [4, 5]. In 3GPP LTE Rel. 11, further enhanced ICIC (FeICIC) was proposed, which allows traffic data to be transmitted during ABS with relatively low power [6, 7].

For the coexistence of PS-LTE and LTE-R networks, due to the different ranges of coverage and the overlapping deployment, it is better to adopt the interference management schemes designed for heterogeneous networks. In this paper, we consider eICIC and FeICIC as candidates. Since the coverage of a PS-LTE eNB is much larger than that of LTE-R eNBs, ABSs and power-reduced ABSs (PR-ABS) are used to protect the cell edge UEs served by LTE-R eNBs against the interference from PS-LTE eNBs. Simultaneously, CRE is used to offload more PS-LTE UEs to LTE-R eNBs to further enhance the efficiency of resource utilization of LTE-R eNBs and to improve system throughput.

Coordinated multipoint (CoMP) transmission is recognized as an effective method to further improve cell edge user performance. In Rel. 10 [8], downlink (DL) CoMP includes the possibility of coordination among different points or cells. If inter-eNB coordination is supported, information needs to be signaled among eNBs. There are various types of CoMP schemes: joint transmission (JT), dynamic point selection (DPS), and coordinated scheduling/beamforming (CS/CB). For JT and DPS, the data for a UE should be available at multiple transmission points, but at only one transmission point for CS/CB. In this paper, CS CoMP is considered for the coexistence of PS-LTE and LTE-R networks.

It should be noted that the LTE-R network can also be built in a centralized RAN (C-RAN) architecture in 5G. Under this assumption, the interference management schemes remain effective. The LTE-R eNBs in our scenario can be regarded as a group of RRUs (remote radio units) connected to one BBU (baseband unit), and the centralized coordinated scheduling algorithm can be executed in the BBU. The combination of eICIC/FeICIC with CoMP can also be used in a 5G ultra-dense network. Similarly, eICIC/FeICIC can be used to prevent interference from macrocells to the small cells, while at the same time CoMP can be applied to a group of small cells [9]. Cooperation among the small cells costs more than in our scenario, since the LTE-R eNBs are deployed along the train track.

In this paper, a typical scenario for the coexistence of PS-LTE and LTE-R is considered, and we evaluate the popular inter-cell interference management schemes while applying a practical channel model, channel quality indicator (CQI) feedback, and scheduling procedure. Effective interference management techniques are then employed with attention to their feasibility and complexity, and the details of how to combine and utilize these techniques are illustrated.
The rest of the paper is organized as follows. In Section 2, we present the system model used to analyze the co-channel interference (CCI) between the PS-LTE and LTE-R networks. Section 3 compares the cases of non-RAN sharing and LTE-R RAN sharing by PS-LTE UEs, and the interference management schemes are introduced in Section 4. In Section 5, the performance of the interference management schemes is evaluated using system-level simulations (SLS). Finally, Section 6 concludes the paper.

2 System model for the coexistence of PS-LTE and LTE-R networks

2.1 PS-LTE and LTE-R network deployment

The inter-site distance (ISD) of the PS-LTE and LTE-R networks is assumed to be 4 km [10] and 1 km, respectively. We consider a scenario with a one-tier deployment of PS-LTE eNBs and four LTE-R eNBs overlapped with the center site of the PS-LTE network, as shown in Fig. 1 (PS-LTE and LTE-R network deployment). The center area is the region of interest (ROI). There are three sectors for each PS-LTE eNB, but only two sectors for each LTE-R eNB to support the coverage over the railway. The LTE-R eNBs are located on both sides of the track.

2.2 Channel model

To calculate the propagation loss for each link, the general equation for the channel gain is given in Eq. (1):
$$ G=\mathrm{Antenna\_Gain}-\mathrm{PathLoss}-\mathrm{Shadowing}-\mathrm{Fading} $$
The Hata model is widely used for path loss [11] and supports carrier frequencies from 150 to 1500 MHz. In this paper, the Hata rural model is considered and given as:
$$ L(R)=69.55+26.16\log_{10}(f)-13.82\log_{10}(h_b)+\left[44.9-6.55\log_{10}(h_b)\right]\log_{10}(R)-4.78\left(\log_{10}(f)\right)^2+18.33\log_{10}(f)-40.94 $$
where $R$ is the distance between the eNB and the UE in km, $f$ is the carrier frequency in MHz, and $h_b$ is the base station antenna height above ground in meters.

Shadowing is modeled by a log-normal distribution with a mean of 0 dB and a standard deviation of 6 dB [12]. For the paths from the same eNB to two UEs at different positions, spaced by a distance $x$, the shadowing correlation coefficient is $r(x) = e^{-\alpha x}$. In addition, the inter-site correlation coefficient is 0.5. Fast fading refers to the rapid variation of the signal levels due to multipath transmission. In this paper, fast fading is generated according to the D1 and D2a scenarios supported by WINNER II [13] for PS-LTE UEs with low mobility and LTE-R UEs with high mobility, respectively.

A 3D antenna pattern is widely used [11], which considers the horizontal and vertical cuts of the antenna gain as follows:
$$ \left\{\begin{array}{l}{\mathrm{A}}_{\mathrm{V}}\left({\theta}_{\mathrm{V}}\right)= \min \left[12{\left(\frac{\theta_{\mathrm{V}}-{\theta}_{\mathrm{etilt}}}{\theta_{3\mathrm{dB}}}\right)}^2,{\mathrm{SLA}}_{\mathrm{v}}\right]\\ {}{\mathrm{A}}_{\mathrm{H}}\left({\varphi}_{\mathrm{H}}\right)= \min \left[12{\left(\frac{\varphi_{\mathrm{H}}}{\varphi_{3\mathrm{dB}}}\right)}^2,\ {A}_m\right]\end{array}\right. $$
where $\mathrm{A_V}$ and $\mathrm{A_H}$ are the vertical and horizontal cuts, respectively. $\theta_V$ and $\varphi_H$ are the angles between the sector antenna direction and the eNB-to-UE transmission path on the vertical and horizontal planes, respectively. $\varphi_{3\mathrm{dB}}$ and $\theta_{3\mathrm{dB}}$ are the 3-dB horizontal and vertical beam widths, respectively, and $A_m$ and $\mathrm{SLA_v}$ are the backward attenuation and side lobe vertical attenuation, respectively. $A(\theta, \varphi)$ is the total attenuation.
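As a quick illustration of the propagation model above (the antenna-gain term of Eq. (1) is completed in the next equation), here is a minimal Python sketch of the Hata rural path loss of Eq. (2) with log-normal shadowing. It is not the authors' simulator; the numerical values used in the example (carrier frequency, antenna height, distances) are assumptions chosen only to show the calculation.

```python
import numpy as np

def hata_rural_path_loss(distance_km, freq_mhz=773.0, h_b=30.0):
    """Hata rural path loss in dB, following Eq. (2).

    distance_km : eNB-to-UE distance R in km
    freq_mhz    : carrier frequency f in MHz (valid for 150-1500 MHz)
    h_b         : base station antenna height above ground in meters
    """
    lf = np.log10(freq_mhz)
    return (69.55 + 26.16 * lf - 13.82 * np.log10(h_b)
            + (44.9 - 6.55 * np.log10(h_b)) * np.log10(distance_km)
            - 4.78 * lf**2 + 18.33 * lf - 40.94)

def shadowing_db(n_samples, sigma_db=6.0, seed=None):
    """Log-normal shadowing: zero-mean Gaussian in dB with 6 dB std. dev.
    (Spatial correlation is omitted in this simple sketch.)"""
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, sigma_db, size=n_samples)

if __name__ == "__main__":
    d = np.array([0.1, 0.5, 1.0, 2.0, 4.0])              # distances in km
    pl = hata_rural_path_loss(d)                          # deterministic part
    total = pl + shadowing_db(d.size, seed=0)             # one shadowing draw
    for di, p, t in zip(d, pl, total):
        print(f"R = {di:4.1f} km  path loss = {p:6.1f} dB  with shadowing = {t:6.1f} dB")
```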
The antenna gain should be calculated as:
$$ \mathrm{AntennaGain}= \max\_\mathrm{AntennaGain}- \min \left[{\mathrm{A}}_{\mathrm{V}}\left({\theta}_V\right)+{\mathrm{A}}_{\mathrm{H}}\left({\varphi}_{\mathrm{H}}\right),{A}_m\right] $$
Typical values for these parameters are $\varphi_{3\mathrm{dB}} = 70^{\circ}$, $A_m = 20\,\mathrm{dB}$, $\theta_{3\mathrm{dB}}=10^{\circ}$, $\mathrm{SLA}_{\mathrm{V}}=20\,\mathrm{dB}$, and $\theta_{\mathrm{etilt}} = (0^{\circ}, 15^{\circ})$.

2.3 Abstraction of the physical layer

The abstraction of the physical (PHY) layer is meant to obtain the block error rate (BLER) for each transport block (TB) under the corresponding modulation and coding scheme (MCS) in order to calculate UE throughput. In the LTE system, there is a loop for the adaptive modulation and coding (AMC) scheme in DL transmission, which requires UE feedback of the channel quality indicator (CQI) index. Each CQI index corresponds to a certain MCS. By using various MCS levels, different spectral efficiencies can be achieved [14]. In addition, the MCS level is one factor in deciding the BLER for the corresponding TB, together with the signal-to-interference-plus-noise ratio (SINR). To get the BLER in SLS without carrying out real signal processing, we use the curves obtained by link-level simulation (LLS) under an additive white Gaussian noise (AWGN) channel with respect to 15 MCS levels. To predict the BLER under a fading channel, an AWGN-equivalent SINR is required. In this paper, mutual information-based exponential SINR mapping (MIESM) [15] is used to map the SINRs of the multiple subcarriers assigned to a TB to the AWGN-equivalent effective SINR.

3 Analysis on co-channel interference for the coexistence of LTE-R and PS-LTE networks

3.1 LTE-R and PS-LTE network coexistence without RAN sharing

In the scenario without RAN sharing, illustrated in Fig. 2 (Coexistence of PS-LTE and LTE-R networks without RAN sharing), PS-LTE UEs are not allowed to access LTE-R eNBs, which have smaller coverage overlapped with the PS-LTE network. This is similar to the coexistence of macrocells and closed subscriber group (CSG) femtocells, in which the low-power nodes (LPN) only allow access by a group of UEs. However, LTE-R eNBs are high-power nodes with limited coverage along the track, which results in more severe co-channel interference, especially when LTE-R eNBs are located at the cell edge of the PS-LTE network, as shown in Fig. 2. The PS-LTE UE_B at the edge of the PS-LTE coverage receives a relatively low-power desired signal, while the interference power from the LTE-R eNBs is very high because the UE is also in the center coverage of the LTE-R network.

The power setting specified in [6] for eICIC/FeICIC schemes protects the macro UEs that are close to the CSG femtocells by decreasing the transmission power of the femtocells, because femto eNBs are LPNs and are usually deployed by users for limited service requirements, which gives the small cells lower priority. Hence, it is rational to protect macro UEs by reducing the transmission power of femto eNBs. However, both PS-LTE and LTE-R eNBs are high-power nodes deployed by operators. In addition, reducing the power of LTE-R eNBs would impact the reliability of the LTE-R network and cause service outage, so the eICIC/FeICIC schemes are not effective in the environment without RAN sharing.
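Returning briefly to the PHY abstraction of Section 2.3, the sketch below illustrates the general idea of effective SINR mapping: per-subcarrier SINRs of one TB are averaged in the mutual-information domain and then mapped back to a single AWGN-equivalent SINR. This is not the calibrated MIESM of [15]; the mutual-information curve (a capacity expression clipped at the modulation order) and the calibration factor beta = 1 are crude placeholders used only to show the structure of the mapping.

```python
import numpy as np

def mi_per_symbol(sinr_linear, mod_order_bits):
    """Placeholder mutual-information curve: Shannon capacity clipped at the
    modulation order. A real MIESM implementation uses tabulated BICM
    mutual-information curves per modulation (QPSK/16QAM/64QAM)."""
    return np.minimum(np.log2(1.0 + sinr_linear), mod_order_bits)

def effective_sinr_db(subcarrier_sinrs_db, mod_order_bits=4, beta=1.0):
    """Map per-subcarrier SINRs of one TB to an AWGN-equivalent SINR (dB).

    1) convert to linear, 2) average mutual information across subcarriers,
    3) invert the MI curve. 'beta' is the usual calibration factor, left at
    1.0 here for illustration only."""
    sinr_lin = 10.0 ** (np.asarray(subcarrier_sinrs_db) / 10.0)
    avg_mi = np.mean(mi_per_symbol(sinr_lin / beta, mod_order_bits))
    eff_lin = beta * (2.0 ** avg_mi - 1.0)   # inverse of log2(1 + s)
    return 10.0 * np.log10(eff_lin)

if __name__ == "__main__":
    sinrs_db = [2.0, 5.0, 9.0, -1.0, 7.5]    # per-subcarrier SINRs of one TB
    print(f"effective SINR = {effective_sinr_db(sinrs_db):.2f} dB")
```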
3.2 LTE-R RAN sharing by PS-LTE UEs

Instead of considering LTE-R eNBs as sources of high interference, they can be regarded as eNBs that enhance the cell edge coverage of the PS-LTE network through active RAN sharing. Under the RAN sharing environment, PS-LTE UEs can connect with LTE-R eNBs, which reduces the co-channel interference and boosts the resource utilization of LTE-R eNBs. However, the LTE-R UE moves along the track and usually receives higher power from LTE-R eNBs, so it is not necessary for the PS-LTE network to support RAN sharing for LTE-R UEs. In this paper, only LTE-R RAN sharing by PS-LTE UEs is considered.

3.2.1 Scheduling for LTE-R RAN sharing by PS-LTE UEs

In the RAN sharing case, we consider predefined rules for scheduling. Since the downlink transmission for the LTE-R UE requires low latency and high reliability, it is necessary to assign the best resources to the LTE-R UE. Thus, in order to fulfill the LTE-R mission-critical service requirements, we always give higher priority to the LTE-R UE during resource allocation. Figure 3 (LTE-R eNB scheduling with RAN sharing for PS-LTE UEs) shows the scheduling process for LTE-R eNBs that offer RAN sharing to PS-LTE UEs, while PS-LTE eNBs schedule their UEs based on proportional fair scheduling.

4 eICIC/FeICIC schemes with coordinated scheduling under the coexistence of PS-LTE and LTE-R networks

In this section, we introduce effective interference mitigation schemes under the RAN sharing environment to improve the LTE-R UE channel quality, while further improving PS-LTE UE performance in terms of both reliability and throughput. It should be noted that our scenario of PS-LTE and LTE-R network coexistence is a unique HetNet scenario with overlapped macrocells, both of which are deployed by operators. Considering the difference in cell sizes and the fact that LTE-R eNBs can offload PS-LTE UEs using RAN sharing, this scenario is similar to the HetNet scenario of macro and picocells, in which UEs can connect with both types of eNBs and pico eNBs with smaller coverage are supposed to offload macro UEs [8]. However, there is an obvious difference: there are two types of UEs in our scenario. PS-LTE UEs can be distributed anywhere and are considered as normal UEs, whereas the LTE-R UE moves along the track located between the LTE-R eNBs. Targeting this scenario with RAN sharing, time-domain eICIC and FeICIC schemes can be employed to restrain the interference from PS-LTE eNBs to the LTE-R network. The LTE-R UE along the track usually receives higher interference power from neighbor LTE-R eNBs than from PS-LTE eNBs. To mitigate this interference, CS CoMP is considered between the neighbor LTE-R eNBs.

4.1 General notations and scenario description

The general notations of the considered network are given as follows:

$M$: set of PS-LTE cells.
$K$: set of LTE-R cells.
$U$: set of UEs, which can be divided into two sets, $U_M$ and $U_K$, served by PS-LTE eNBs and LTE-R eNBs, respectively.
$U_M$: set of PS-LTE UEs served by PS-LTE cells.
$U_K$: set of UEs served by LTE-R eNBs; can be both LTE-R and PS-LTE UEs.
$U_{CK}$: set of cell center UEs of LTE-R cells.
$U_{EK}$: set of CRE UEs of LTE-R eNBs, which will only be scheduled during ABSs.
$U_{CM}$: set of cell center UEs of PS-LTE eNBs, which will be scheduled during PR-ABSs.
$U_{EM}$: set of the remaining UEs of PS-LTE eNBs, which will only be scheduled during normal subframes when FeICIC is applied.
$n$: index of the physical resource block (PRB) for 10 MHz bandwidth (50 PRBs).
$G_{m,u}^n$: channel gain from PS-LTE cell $m$ to UE $u$ on PRB $n$.
$G_{k,u}^n$: channel gain from LTE-R cell $k$ to UE $u$ on PRB $n$.
$I_{i,u}^n$: interference to UE $u$ from neighbor PS-LTE cell $i$ on PRB $n$.
$I_{j,u}^n$: interference to UE $u$ from neighbor LTE-R cell $j$ on PRB $n$.
$\eta$: thermal noise per RB, including the UE noise figure.
$\mathrm{RB}_{p}^{k}$: set of RBs scheduled by LTE-R cell $k$ during ABSs, which are protected resources.
$\mathrm{RB}_{\mathrm{np}}^{k}$: set of RBs scheduled by LTE-R cell $k$ during normal subframes, which are non-protected resources.
$\mathrm{RB}_{\mathrm{fp}}^{m}$: set of RBs scheduled by PS-LTE cell $m$ during normal subframes using full transmission power.
$\mathrm{RB}_{\mathrm{rp}}^{m}$: set of RBs scheduled by PS-LTE cell $m$ during PR-ABSs using reduced transmission power.
$r_u^n$: rate that can be achieved on RB $n$ when it is allocated to UE $u$.
$R$: total throughput of the system, in bits/second.

Figure 4 (Scenarios 3 and 4: LTE-R RAN sharing by PS-LTE UEs with eICIC/FeICIC and CS CoMP) shows the scenarios of RAN sharing with the interference management schemes of eICIC/FeICIC and CS CoMP. For eICIC, during ABSs, no data is transmitted from the PS-LTE eNB to avoid interference towards the PS-LTE UEs served by the LTE-R eNB and towards the LTE-R UE. By using FeICIC, data is transmitted during power-reduced ABSs for the center PS-LTE UEs while keeping the interference to the UEs supported by LTE-R eNBs at a relatively low level. However, the introduction of ABS/PR-ABS only benefits the cell edge UEs of LTE-R eNBs. To protect the PS-LTE UEs that receive high interference from LTE-R eNBs, CRE is introduced by adding a positive bias to the RSRPs of the LTE-R eNBs, so that these UEs are offloaded to the LTE-R eNBs and can then benefit from ABSs/PR-ABSs. Therefore, the criterion for serving cell selection is given as follows:
$$ \mathrm{Serving\_eNB\_ID}=\underset{i\in \left( M\cup K\right)}{\arg\max}\left\{{\mathrm{RSRP}}_i+{\mathrm{bias}}_i\right\} $$
where $\mathrm{bias}_i = 0$ dB if $i \in M$ and $\mathrm{bias}_i > 0$ dB if $i \in K$.

4.2 eICIC/FeICIC schemes between PS-LTE and LTE-R eNBs

4.2.1 eICIC/FeICIC operation rules

When PS-LTE eNBs are under high load, the LTE-R coverage can be extended to allow more PS-LTE UEs to be offloaded, and the remaining PS-LTE UEs with better channel quality can then be well served by the PS-LTE eNBs. In this case, the offloaded UEs will suffer severe co-channel interference from the PS-LTE eNBs. Thus, ABS/PR-ABS is applied in order to mitigate the interference to the offloaded UEs. Figure 5a shows the flow chart of the combination of eICIC/FeICIC and CS CoMP; the details of CS CoMP are introduced in Section 4.3. The overall procedure of eICIC/FeICIC can be divided into three main parts: cell selection with CRE, UE identification, and scheduling. In flow chart (a), the threshold for RAN sharing is the bias used to extend the coverage of LTE-R eNBs. Flow charts (b) and (c) show the rules of UE classification for eICIC and FeICIC, respectively. As for scheduling, during ABS only the LTE-R eNBs are supposed to schedule the UEs when eICIC is applied, while during PR-ABS the center UEs served by PS-LTE eNBs are also scheduled when FeICIC is considered. Much research has been done on eICIC and FeICIC [15–20]; some works prefer to schedule both CRE UEs and center UEs of small cells during ABSs/PR-ABSs, while the rest assume that only the CRE UEs are scheduled.
According to reference [19], to avoid inefficient utilization of resources during ABS and to achieve better fairness, it is better to use proportional fair (PF) scheduling for both groups of UEs during ABSs/PR-ABSs.

Fig. 5. Flow charts for the procedures of eICIC/FeICIC and CS CoMP: (a) overall flow chart, (b) UE identification for eICIC, (c) UE identification for FeICIC.

4.2.2 SINR and throughput calculation

The SINRs observed by UE $u$ on PRB $n$ during a normal subframe are given in Eq. (6):
$$ \left\{\begin{array}{cc} {\mathrm{SINR}}_{m,u}^n=\dfrac{G_{m,u}^n\, p_{m,u}^n}{\sum_{i\in M,\, i\ne m}{I}_{i,u}^n+\sum_{j\in K}{I}_{j,u}^n+\eta}, & u\in {U}_M \\[2ex] {\mathrm{SINR}}_{k,u}^n=\dfrac{G_{k,u}^n\, p_{k,u}^n}{\sum_{i\in M}{I}_{i,u}^n+\sum_{j\in K,\, j\ne k}{I}_{j,u}^n+\eta}, & u\in {U}_K \end{array}\right. $$
where $p_{m,u}^n$ and $p_{k,u}^n$ are the transmission powers of PS-LTE eNBs and LTE-R eNBs on PRB $n$, respectively. The interference received by UE $u$ from neighbor PS-LTE eNB $i$ and neighbor LTE-R eNB $j$ is given as:
$$ \left\{\begin{array}{cc} {I}_{i,u}^n={G}_{i,u}^n\, p_{i,u}^n, & i\in M \\ {I}_{j,u}^n={G}_{j,u}^n\, p_{j,u}^n, & j\in K \end{array}\right. $$
During ABS, the SINR of the UEs served by LTE-R eNBs can be calculated as follows:
$$ {\mathrm{SINR}}_{k,u}^n=\frac{G_{k,u}^n\, p_{k,u}^n}{\sum_{j\in K,\, j\ne k}{I}_{j,u}^n+\eta}, \qquad u\in {U}_K $$
where the interference only comes from other LTE-R eNBs. During PR-ABS, the SINR is expressed as:
$$ {\mathrm{SINR}}_{k,u}^n=\frac{G_{k,u}^n\, p_{k,u}^n}{\sum_{i\in M}\left({I}_{i,u}^n/{10}^{\varDelta}\right)+\sum_{j\in K,\, j\ne k}{I}_{j,u}^n+\eta}, \qquad u\in {U}_K $$
where $\varDelta$ is the power reduction level in dB compared to the maximum transmission power during normal subframes. The overall system throughput for eICIC and FeICIC is expressed in Eqs. (10) and (11), respectively.
$$ {R}_{\mathrm{eICIC}}=\underbrace{\sum_{k\in K}\ \sum_{n\in {\mathrm{RB}}_{\mathrm{np}}^k,\, u\in {U}_{\mathrm{CK}}}{r}_u^n+\sum_{k\in K}\ \sum_{n\in {\mathrm{RB}}_p^k,\, u\in {U}_K}{r}_u^n}_{\text{Throughput of LTE-R cells}} + \underbrace{\sum_{m\in M}\ \sum_{n\in {\mathrm{RB}}_{\mathrm{fp}}^m,\, u\in {U}_{\mathrm{CM}}}{r}_u^n}_{\text{Throughput of PS-LTE cells}} $$

$$ {R}_{\mathrm{FeICIC}}=\underbrace{\sum_{k\in K}\ \sum_{n\in {\mathrm{RB}}_{\mathrm{np}}^k,\, u\in {U}_{\mathrm{CK}}}{r}_u^n+\sum_{k\in K}\ \sum_{n\in {\mathrm{RB}}_p^k,\, u\in {U}_K}{r}_u^n}_{\text{Throughput of LTE-R cells}} + \underbrace{\sum_{m\in M}\ \sum_{n\in {\mathrm{RB}}_{\mathrm{fp}}^m,\, u\in {U}_{\mathrm{CM}}}{r}_u^n+\sum_{m\in M}\ \sum_{n\in {\mathrm{RB}}_{\mathrm{rp}}^m,\, u\in {U}_M}{r}_u^n}_{\text{Throughput of PS-LTE cells}} $$

4.3 CS CoMP between LTE-R eNBs

4.3.1 Dynamic coordinated muting based coordinated scheduling process

In this section, a dynamic coordinated muting (DCM)-based CS CoMP scheme is introduced among LTE-R eNBs using the LTE Rel. 11 framework [8]. In this paper, the PRBs are allocated by one central scheduler [21]. CoMP UE identification is done in the individual eNBs based on UE feedback information. According to the resource allocation and muting decision made in the central scheduler, MCS selection is done in each eNB for the UEs with and without CoMP assistance by coordinated link adaptation, as shown in Fig. 6 (Centralized CS CoMP for LTE-R network). Unlike the cell-specific muting schemes considered elsewhere [22, 23], this paper considers a PRB-specific muting scheme.

Figure 7 (Flow chart for CS CoMP between LTE-R eNBs) shows the details of the CS CoMP procedure. All the LTE-R eNBs are considered to be in one CoMP set, and there is one centralized scheduler that jointly processes the information obtained from each LTE-R eNB. To identify which UEs need CoMP assistance and which neighbor cells should provide the assistance, the interference from the neighbor cell is evaluated using Eq. (12):
$$ {\mathrm{SIR}}_u^j>\mathrm{SIR\_threshold} $$
where $\mathrm{SIR}_u^j$ is the signal-to-interference ratio for UE $u$ and interfering cell $j$. Since the LTE-R UE has higher priority than the rest of the UEs served by the LTE-R eNBs, the LTE-R UE is scheduled first, as shown in Fig. 7, and whenever the LTE-R UE needs CoMP assistance, the neighbor eNB mutes the corresponding RBs. For the rest of the UEs, there are competition rules. The central CoMP scheduler randomly chooses one eNB to make the scheduling decision based on the PF scheduling rule in Eq. (13) in order to assign PRBs:
$$ u= \arg\max \left(\frac{r_u^a}{{\mathrm{avg\_thr}}_u}\right) $$
where $r_u^a$ is the rate that can be achieved by UE $u$ on RB $a$, and $\mathrm{avg\_thr}_u$ is the average UE throughput. Then, the interference measurement results are checked to find out whether this UE needs CoMP assistance. If yes, this RB cannot be assigned in the neighbor eNB again, but the other eNBs can still use the RB. Otherwise, the rest of the eNBs can use this RB.
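The Python sketch below illustrates the flavor of this PRB-specific coordinated scheduling: for each RB, a PF metric (Eq. (13)) selects a UE per sector, and when the selected UE is flagged as needing CoMP assistance, the copies of that RB in the neighboring sectors along the line are muted. It is a simplified toy, not the authors' scheduler; the linear sector topology, the random visiting order, and the CoMP-need flags are assumptions made only for illustration.

```python
import numpy as np

def pf_schedule_with_muting(rates, avg_thr, needs_comp, seed=None):
    """Toy PRB-specific coordinated scheduling with dynamic muting.

    rates      : array [L sectors][K UEs][N RBs] of achievable rates r_u^n
    avg_thr    : array [L][K] of average UE throughputs (PF denominators)
    needs_comp : bool array [L][K], True if the UE needs its neighbor muted
    Returns allocation[L][N]: chosen UE index per sector and RB, -1 if muted.
    """
    rng = np.random.default_rng(seed)
    L, K, N = rates.shape
    alloc = -np.ones((L, N), dtype=int)
    for n in range(N):
        muted = np.zeros(L, dtype=bool)
        # visit sectors in random order, mimicking the random choice of the
        # sector that makes the first decision on this RB
        for s in rng.permutation(L):
            if muted[s]:
                continue
            metric = rates[s, :, n] / np.maximum(avg_thr[s], 1e-9)  # Eq. (13)
            u = int(np.argmax(metric))
            alloc[s, n] = u
            if needs_comp[s, u]:
                # neighbors on the line mute this RB for the CoMP UE
                if s - 1 >= 0 and alloc[s - 1, n] < 0:
                    muted[s - 1] = True
                if s + 1 < L and alloc[s + 1, n] < 0:
                    muted[s + 1] = True
    return alloc

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    L, K, N = 4, 5, 10
    rates = rng.uniform(0.1, 1.0, size=(L, K, N))
    avg_thr = rng.uniform(0.2, 0.8, size=(L, K))
    needs_comp = rng.random((L, K)) < 0.2
    print(pf_schedule_with_muting(rates, avg_thr, needs_comp, seed=2))
```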
Each sector reuses the whole band and needs to make a scheduling decision on all of the RBs. We assume that there are L sectors in total, and that each sector has N RBs and K UEs. According to Fig. 7, for each RB $i$, the selected sector $S_v$ makes a scheduling decision, i.e., it selects a UE $j$. Since the UE selection is based on the PF metric in Eq. (13), at most $K-1$ comparisons need to be made. If the RB is allocated to a UE that does not need CoMP assistance, the scheduling decision is then made for the neighbor sectors of $S_v$, which are $S_{v-1}$ and $S_{v+1}$. It should be noted that in this case the RB can only be scheduled to UEs that do not need CoMP assistance, so if there are UEs needing CoMP assistance in $S_{v-1}$ and $S_{v+1}$, making the scheduling decision on RB $i$ will need fewer than $K-1$ comparisons. If the sector $S_v$ allocates the RB to a UE that needs CoMP assistance, $S_{v-1}$ and $S_{v+1}$ mute their RB $i$, and $S_{v-2}$ and $S_{v+2}$ then schedule RB $i$. Considering that none of their neighbors is using RB $i$, they can allocate it to a UE that either needs CoMP assistance or not. This loop continues until all the sectors have made their scheduling decision for RB $i$. For each RB, at most L sectors make a scheduling decision, and each decision takes at most $K-1$ comparisons. Besides, once an RB is allocated, the number of bits that it can carry for the corresponding UE is obtained by table lookup and subtracted from the total number of bits that the UE needs to transmit. Considering all the calculation steps mentioned above, the complexity of the coordinated scheduling procedure, expressed in O-notation, can be calculated as follows:
$$ O\left(LN\left( K-1\right)+ LN\right)\approx O\left(LNK\right) $$

4.3.2 Multiple CSI processes based coordinated link adaptation

It should be noted that if the neighbor cell mutes certain resources to support the UE that needs CoMP assistance, the MCS level should be selected based on the hypothesis of no interference from the neighbor cell. Hence, a higher spectral efficiency can be achieved on the corresponding RBs. Since the CoMP set only includes the LTE-R eNBs, which are located along the train line, for each UE served by LTE-R eNBs at most one neighbor LTE-R cell will cause high interference to that UE. Therefore, among all the LTE-R sectors, each cell edge UE only needs one LTE-R sector to mute the corresponding RBs, and whether other cells mute the same RBs does not make much difference for the UE. In order to identify whether a UE needs CoMP assistance from a neighbor cell, and to execute adaptive link adaptation when the neighbor cell mutes certain RBs, each UE needs channel state information (CSI) under two hypotheses: one is the CSI when the UE receives interference from the neighbor cell; the other is the CSI when the UE receives no interference from the neighbor cell. Hence, each UE is configured with two CSI processes [14]. CSI process-0 is configured to obtain normal CQI indexes, where all the cells are transmitting on the same resource elements. CSI process-1 reflects the benefit obtained by the UE if the strongest interfering cell is muted. CSI process-1 can be achieved by allowing the neighbor LTE-R eNBs to use orthogonal resource elements for reference signal transmission. For the UEs supported by CoMP assistance, the MCS level selection is performed based on the channel quality measured by CSI process-1.
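A minimal sketch of this coordinated link adaptation idea: each UE reports a CQI under both hypotheses (CSI process-0 with the interferer active, CSI process-1 with the interferer muted), and the eNB picks the MCS from whichever report matches the muting decision. The CQI values and the CQI-to-MCS table below are made-up placeholders, not the 3GPP mapping or the authors' data.

```python
# Hypothetical CQI reports per UE under the two CSI hypotheses
# (in the real system these come from UE feedback; values here are invented).
csi_reports = {
    "ue1": {"process0": 5, "process1": 9},   # CQI with / without interferer
    "ue2": {"process0": 7, "process1": 8},
}

# Placeholder CQI -> (MCS index, spectral efficiency) table, NOT the 3GPP one.
cqi_to_mcs = {cqi: (cqi + 1, 0.3 * cqi) for cqi in range(1, 16)}

def select_mcs(ue_id, neighbor_muted):
    """Pick the MCS from the CSI process that matches the muting decision."""
    report = csi_reports[ue_id]
    cqi = report["process1"] if neighbor_muted else report["process0"]
    return cqi_to_mcs[cqi]

if __name__ == "__main__":
    print("ue1, interferer muted  ->", select_mcs("ue1", True))
    print("ue1, interferer active ->", select_mcs("ue1", False))
```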
In addition, these two CSI processes are also used to support the evaluation of the UE-received interference in Eq. (12).

5 Performance evaluation of the interference management schemes

5.1 Simulation environment and assumptions

To evaluate the performance of the scenario without RAN sharing and of LTE-R RAN sharing by PS-LTE UEs, we perform SLS. Moreover, the performance of the interference management schemes is verified for the coexistence of PS-LTE and LTE-R networks. The main SLS parameters are given in Table 1. Instead of full-buffer traffic, realistic traffic models are considered, e.g., voice over internet protocol (VoIP) and video. For the LTE-R UE, only VoIP is used to model the traffic of the train control signal. In addition, we consider three ABSs/PR-ABSs per frame for eICIC and FeICIC. Moreover, the bias value for LTE-R CRE is 6 dB for both eICIC and FeICIC. To avoid high interference to UEs in the CRE area, a 7 dB power reduction can be used for the PS-LTE eNB to transmit the data for the center UEs during PR-ABSs.

Table 1. System-level simulation parameters
Carrier frequency (DL)
Bandwidth (PS-LTE eNB/LTE-R eNB)
No. of PRBs
RB bandwidth
No. of PS-LTE eNBs: 21 sectors (1-tier, 7 sites) (only 3 inner sectors are the region of interest)
No. of LTE-R eNBs: maximum 2 eNBs/sector beside the railway
Inter-eNB distance: PS-LTE eNBs, 4 km; LTE-R eNBs, 1 km
No. of UEs/sector: PS-LTE UEs, 40; LTE-R UE, 1 (1 terminal for train control)
Transmission power: PS-LTE, 46 dBm; LTE-R, 43 dBm
Maximum antenna gain: PS-LTE, 15 dBi; LTE-R, 17 dBi
Minimum coupling loss
Noise spectral density: −174 dBm/Hz
Path loss model: rural macro (3GPP TR 36.837)
Shadowing: log-normal distribution (mean 0 dB, st. dev. 6 dB) (correlation b/w eNBs/sectors, 0.5/1)
Fast fading: PS-LTE, WINNER II (D1, rural macro); LTE-R, WINNER II (D2a, rural moving networks)
UE mobility: PS-LTE UE, 3 km/h; LTE-R UE, 250 km/h
Transmission modes: SISO (1 × 1)
Effective SINR: MIESM
UE receiver: zero forcing
Traffic models: PS-LTE, VoIP (80%), video (20%); LTE-R, VoIP

5.2 Simulation results and discussion

In Fig. 8 (UE SINR distribution for the coexistence of PS-LTE and LTE-R networks), the SINR distribution under the RAN sharing environment is given. The cell selection for RAN sharing is done based on Eq. (15) below, where $\mathrm{RSRP}_u^z$ is the reference signal received power of UE $u$ from eNB $z$. We can see that the LTE-R cells occupy almost half of the coverage of the two sectors of the center PS-LTE eNB, which implies that, under a uniform distribution of PS-LTE UEs, around half of the UEs will access the LTE-R eNBs. It also shows that the LTE-R UE moving along the track usually enjoys better channel quality by connecting to LTE-R eNBs. However, when the LTE-R UE moves to the cell edge, also close to the PS-LTE eNB, it can suffer high interference from the PS-LTE eNB.
$$ \mathrm{Serving\_eNB\_ID}_u=\underset{z\in \left( M\cup K\right)}{\mathrm{argmax}}\left({\mathrm{RSRP}}_u^z\right) $$
Figure 9 (LTE-R UE SINR without interference management schemes) shows the SINR of the LTE-R UE at different positions, where the x-axis indicates the y-coordinate of the LTE-R UE position, while the x-coordinate is assumed to be constant, because the LTE-R UE moves along the track between LTE-R eNBs. We can see that at the cell edge between neighbor sectors and at the cell edge between neighbor sites, the LTE-R UE suffers high interference and bad channel conditions.
In the range of 200~300 m, there are more chances that the LTE-R UE received SINR goes below −5 dB, because this area is not only the cell edge between neighbor LTE-R sites but is also close to the center PS-LTE eNB, as marked in Fig. 8.

Figure 10 (LTE-R UE SINR with eICIC and CS CoMP) shows the SINR of the LTE-R UE when CS CoMP and eICIC are applied; we can see that the cell edge SINR of the LTE-R UE has greatly increased due to the interference mitigation at the cell edge by CS CoMP and eICIC. It should be noted that, by using the eICIC scheme, the LTE-R UE can be served with better channel quality than with FeICIC, because there is no interference from PS-LTE eNBs during ABSs. However, the curve for FeICIC and CS CoMP is similar to Fig. 10, since most of the time the SINR is measured during normal subframes, while there are only 30% ABSs. Besides, the major improvement is achieved by CS CoMP between neighbor LTE-R eNBs.

As shown in Fig. 11 (UE outage probability), the channel quality for UEs is greatly improved by RAN sharing. When the SINR threshold is −5 dB, around 20% of PS-LTE UEs will be in outage without RAN sharing. The outage probability is given in Eq. (16). For the scenario with LTE-R RAN sharing by PS-LTE UEs, only around 3% of the PS-LTE UEs are in outage. This is due to the conversion of the high-interference source into the desired signal, while at the same time the previously weak desired signal becomes part of the interference, which improves the signal power and decreases the interference power, as shown in Fig. 12 (UE Rx interference). In the scenarios with eICIC/FeICIC and CS CoMP, the channel quality is significantly improved, and at the 2 dB threshold the outage probabilities have decreased to 6.1% and 9.4%, respectively, compared to 51.5% for the scenario with RAN sharing only, due to the benefits of ABS/PR-ABS and CS CoMP.
$$ P\left(\mathrm{outage}\right)=1- P\left(\mathrm{SINR}>\mathrm{SINR\_Threshold}\right) $$
Figure 13 (UE throughput) shows the improvement in the throughput of PS-LTE UEs using RAN sharing. At the 50% point of the cumulative distribution function (CDF) curve, the UE throughput has increased by around 7.3%. There are two reasons for the improvement. The first is that the channel quality for PS-LTE UEs gets better. The other is that more resources are available for PS-LTE UEs with RAN sharing. When a PS-LTE eNB is under high load, the LTE-R eNBs can offload PS-LTE users using RAN sharing and offer better services to these users. In the scenario with RAN sharing and eICIC, the edge throughput performance is further improved because of CRE and ABS. Due to CRE, more PS-LTE UEs can be offloaded to the LTE-R eNBs. Since there are two LTE-R eNBs (four sectors) per PS-LTE sector reusing the same band as the PS-LTE eNB, many PS-LTE UEs should be allowed to access the services provided by LTE-R eNBs. Then, with ABS, these UEs can even enjoy better channel conditions than while being served by PS-LTE eNBs. Applying FeICIC and CoMP under the case of RAN sharing achieves the best performance among all the scenarios, because with PR-ABS the resource utilization of the PS-LTE eNBs is enhanced, compared to the zero-power ABS for eICIC.
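As a small aside, the sketch below shows how an outage statistic such as those quoted above can be estimated from simulated UE SINR samples using Eq. (16). The SINR samples here are synthetic, generated only for demonstration.

```python
import numpy as np

def outage_probability(sinr_samples_db, threshold_db):
    """Empirical outage probability, Eq. (16):
    P(outage) = 1 - P(SINR > SINR_Threshold)."""
    sinr = np.asarray(sinr_samples_db)
    return 1.0 - np.mean(sinr > threshold_db)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sinr_db = rng.normal(5.0, 8.0, size=10_000)   # fake UE SINR samples [dB]
    for thr in (-5.0, 2.0):
        print(f"threshold {thr:5.1f} dB -> outage = {outage_probability(sinr_db, thr):.3f}")
```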
Figure 14 (UE SINR to spectral efficiency) shows the PS-LTE UE SINR to spectral efficiency performance, where the x-axis is the UE SINR and the y-axis is the UE spectral efficiency in effective data bits per channel use (bit/cu). The value of the spectral efficiency is determined by the MCS level and the BLER. By using the AMC scheme abstracted in the PHY layer, the MCS level is selected based on the policy of keeping the BLER within 10%. In the scenario with eICIC, the maximum spectral efficiency is higher than in the other scenarios because the muting of PS-LTE eNBs and CS CoMP result in better channel quality, so higher MCS levels can be selected for the UEs. In the scenario with FeICIC, the maximum spectral efficiency is lower than with eICIC due to the data transmission of PS-LTE eNBs during power-reduced ABS, which causes interference to the UEs served by LTE-R eNBs. Overall, the SINR to spectral efficiency curves for all the scenarios almost overlap, because the same AMC scheme is used in the LTE network for all the UEs.

6 Conclusions

This paper introduces scenarios for the coexistence of PS-LTE and LTE-R networks and evaluates the system performance under the cases of non-RAN sharing and LTE-R RAN sharing by PS-LTE UEs. Simulation results show that there is around a 17% reduction in the outage probability and a 7% improvement in the 50th-percentile UE throughput using RAN sharing. To improve the reliability of the train control signal and to further enhance PS-LTE UE performance, time-domain eICIC and FeICIC along with CoMP are applied. From the results shown in Section 5, with an SINR threshold of 2 dB, the outage probabilities of scenarios 3 and 4 decrease to 6.1% and 9.4%, respectively, compared to 51.5% in scenario 2 without any interference management schemes. The throughput significantly increases due to the load balancing by CRE and the interference coordination by ABS/PR-ABS and CS CoMP for the cell edge UEs. Because data is still transmitted from PS-LTE eNBs during PR-ABS, scenario 4 shows the best throughput performance. In addition, by using the interference coordination schemes, the channel quality for the LTE-R UE shows remarkable improvement, especially for scenario 3, due to the complete muting of PS-LTE eNBs during ABS.

Acknowledgements

This research was supported by the MSIP (Ministry of Science, ICT and Future Planning), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2016-H8501-15-1019) supervised by the IITP (Institute for Information & communications Technology Promotion).

Department of Electronic Engineering, Inha University, Incheon, Korea

References

J.K. Choi, H. Cho, K.H. Kim, Challenges of LTE high-speed railway network to coexist with LTE public safety network, in Proceedings of IEEE International Conference on Advanced Communication Technology (IEEE, Seoul, Korea, 2015), pp. 543–547
T. Guo, R. Arnott, Active LTE RAN sharing with partial resource, in Proceedings of Vehicular Technology Conference, 2013, pp. 2–5
3GPP, R1-105081: Summary of the description of candidate eICIC solutions, in 3GPP TSG RAN WG1 Meeting 62 (3GPP, Madrid, Spain, 2010)
3GPP, Evolved Universal Terrestrial Radio Access (E-UTRA) and Evolved Universal Terrestrial Radio Access Network (E-UTRAN); Overall descriptions (Release 12), 3GPP TS 36.300 (2015)
3GPP, Technical Specification Group Radio Access Network; Coordinated multi-point operation for LTE physical layer aspects (Release 11), 3GPP TR 36.819 (2011)
N. Naganuma, S. Nakazawa, S. Suyama, Adaptive control CRE technique for eICIC in HetNet, in Proceedings of IEEE International Conference on Ubiquitous and Future Networks (ICFUN) (IEEE, Vienna, 2016), pp.
4–6
H. Zhou, Y.S. Ji, X.Y. Wang, eICIC configuration algorithm with service scalability in heterogeneous cellular networks. IEEE/ACM Trans. Networking PP(99), 1–16 (2016)
A. Merwaday, I. Guvenc, Optimization of FeICIC for energy efficiency and spectrum efficiency in LTE-advanced HetNets. Electron. Lett. 52(11), 982–984 (2016)
C. Huang, Q.B. Chen, L. Tang, Hybrid inter-cell interference management for ultra-dense heterogeneous network in 5G. Sci. China Inf. Sci. 59, 082305 (2016). doi:10.1007/s11432-016-5556-2
3GPP, Technical Specification Group Radio Access Network; Public safety broadband high power user equipment (UE) (Release 11), 3GPP TR 36.837 (2012)
3GPP, Evolved Universal Terrestrial Radio Access (E-UTRA); Radio frequency (RF) system scenarios (Release 8), 3GPP TR 36.942 (2014)
H. Claussen, Efficient modeling of channel maps with correlated shadow fading in mobile radio systems, in Proceedings of IEEE 16th International Symposium on Personal Indoor and Mobile Radio Communications (PIMRC) (IEEE, Berlin, Germany, 2005), pp. 512–516
L. Hentila, P. Kyosti, M. Alatossava, MATLAB implementation of the WINNER phase II channel model, v1.1 (Sept. 30, 2007), http://projects.celtic-initiative.org/winner+/WINNER2-Deliverables/D1.1.2v1.1.pdf. Accessed 30 Sept 2007
3GPP, Technical Specification Group Radio Access Network; Physical layer procedures, 3GPP TS 36.213 (2016)
Z. Hanzaz, H.D. Schotten, Analysis of effective SINR mapping models for MIMO OFDM in LTE system, in Proceedings of IEEE International Wireless Communications and Mobile Computing Conference (IEEE, Sardinia, Italy, 2013), pp. 1–5
S. Deb, P. Monogioudis, J. Miernik, Algorithms for enhanced inter-cell interference coordination (eICIC) in LTE HetNets. IEEE/ACM Trans. Networking 22(1), 137–150 (2014)
J.B. Abderrazak, M. Sfar, H. Besbes, Fair scheduling and dynamic ICIC for multi-cellular OFDMA systems, in Proceedings of IEEE International Conference and Workshop on the Network of the Future (NOF) (IEEE, Paris, France, 2014), pp. 1–5
Y. Ikeda, S. Okasaka, M. Hoshino, Proportional fair-based joint optimization of cell association and inter-cell interference coordination for heterogeneous networks, in Proceedings of IEEE 80th Vehicular Technology Conference (VTC Fall) (IEEE, Vancouver, Canada, 2014), pp. 1–5
H.T. Du, W.A. Zhou, X.Q. Lu, An optimized resource allocation and CoMP based interference coordination scheme for LTE-A Het-Net, in Proceedings of IEEE 9th International Conference on Broadband and Wireless Computing, Communication and Applications (BWCCA) (IEEE, Guangdong, China, 2014), pp. 207–211
K. Somasundaram, Proportional fairness in LTE-advanced heterogeneous networks with eICIC, in Proceedings of IEEE Vehicular Technology Conference (VTC Fall) (IEEE, Las Vegas, NV, USA, 2013), pp. 1–6
X.Y. Wang, B. Mondal, A. Ghosh, Coordinated scheduling and network architecture for LTE macro and small cell deployments, in Proceedings of IEEE International Conference on Communication Workshops (ICC) (IEEE, Sydney, Australia, 2014), pp. 604–609
G. Nardini, G. Stea, A. Virdis, Effective dynamic coordinated scheduling in LTE-advanced networks, in Proceedings of IEEE European Conference on Networks and Communications (EuCNC) (IEEE, Bologna, Italy, 2014), pp. 1–5
R. Agrawal, A. Bedekar, N.
Arulselvan, Centralized and decentralized coordinated scheduling with muting, in Proceedings of IEEE Vehicular Technology Conference (VTC Spring) (IEEE, Seoul, Korea, 2014), pp. 1–5
Mathematical Concepts Illustrated by Hamid Naderi Yeganeh

"One of my goals is to create very beautiful images by using mathematical concepts such as trigonometric functions, exponential function, regular polygons, line segments, etc. I create images by running my program on a Linux operating system." (Hamid Naderi Yeganeh)

This image shows 1,000 line segments. For each $i=1,2,3,\ldots,1000$ the endpoints of the $i$-th line segment are: $(-\sin(2\pi i/1000), -\cos(2\pi i/1000))$ and $((-1/2)\sin(8\pi i/1000), (-1/2)\cos(12\pi i/1000))$. I created this image by running my program on a Linux operating system.

This image shows 1,000 line segments. For each $i=1,2,3,\ldots,1000$ the endpoints of the $i$-th line segment are: $(-\sin(4\pi i/1000), -\cos(2\pi i/1000))$ and $((-1/2)\sin(8\pi i/1000), (-1/2)\cos(4\pi i/1000))$. I created this image by running my program on a Linux operating system.

This image shows 1,000 line segments. For each $i=1,2,3,\ldots,1000$ the endpoints of the $i$-th line segment are: $(-\sin(8\pi i/1000), -\cos(2\pi i/1000))$ and $((-1/2)\sin(6\pi i/1000), (-1/2)\cos(2\pi i/1000))$. I created this image by running my program on a Linux operating system.

This image shows 1,000 line segments. For each $i=1,2,3,\ldots,1000$ the endpoints of the $i$-th line segment are: $(-\sin(10\pi i/1000), -\cos(2\pi i/1000))$ and $((-1/2)\sin(12\pi i/1000), (-1/2)\cos(2\pi i/1000))$. I created this image by running my program on a Linux operating system.

This image contains a heart-like figure. It shows 601 line segments. For each $i=1, 2, 3, \ldots, 601$ the endpoints of the $i$-th line segment are: $(\sin(10\pi (i+699)/2000), \cos(8\pi (i+699)/2000))$ and $(\sin(12\pi (i+699)/2000), \cos(10\pi (i+699)/2000))$. I created this image by running my program.

This image is like a bird in flight. It shows 2000 line segments. For each $i=1, 2, 3, \ldots , 2000$ the endpoints of the $i$-th line segment are: $(3(\sin(2\pi i/2000)^3), -\cos(8\pi i/2000))$ and $((3/2)(\sin(2\pi i/2000)^3), (-1/2)\cos(6\pi i/2000))$.

This image shows 10,000 circles. For each $i=1,2,3,\ldots,10{,}000$ the center of the $i$-th circle is: $((\cos(38\pi i/10{,}000))^3, \sin(10\pi i/10{,}000))$ and the radius of the $i$-th circle is: $(1/3)(\sin(16\pi i/10{,}000))^2$.

This image is like a bird in flight. It shows 500 line segments. For each $i=1,2,3,\ldots,500$ the endpoints of the $i$-th line segment are: $((3/2)(\sin((2\pi i/500)+(\pi /3)))^7, (1/4)(\cos(6\pi i/500))^2)$ and $((1/5)\sin((6\pi i/500)+(\pi /5)), (-2/3)(\sin((2\pi i/500)-(\pi /3)))^2).$

This image is like a sailing boat. It shows 2,000 line segments. For each $k=1,2,3,\ldots,2000$ the endpoints of the $k$-th line segment are: $(\cos(6\pi k/2000)-i \cos(12\pi k/2000))e^{3\pi i/4}$ and $(\sin((4\pi k/2000)+(\pi /8))+i \sin((2\pi k/2000)+(\pi /3)))e^{3\pi i/4}.$

This image is like a fish. It shows 1,000 line segments. For $i=1,2,3,\ldots,1000$ the endpoints of the $i$-th line segment are: $(-2\cos(4\pi i/1000), (1/2)(\cos(6\pi i/1000))^3)$ and $(-(2/15)\sin(6\pi i/1000), (4/5)\sin(2\pi i/1000))$.

This image shows 4000 circles.
For $k=1,2,3,\ldots,4000$ the center of the $k$-th circle is $(X(k), Y(k))$ and the radius of the $k$-th circle is $R(k)$, where $\begin{array}{llll}X(k)&=&&(2k/4000)+(1/28)\sin(42\pi k/4000)\\&&+ &(1/9)((\sin(21\pi k/4000))^8)\\&&+ &(1/4)((\sin(21\pi k/4000))^6)*\\ && &\sin((2\pi /5)(k/4000)^{12}),\end{array}$ $\begin{array}{llll}Y(k)&=&&(1/4)(k/4000)^2\\&&+&(1/4)(((\sin(21\pi k/4000))^5) \\&& + &(1/28)\sin(42\pi k/4000))*\\&&&(\cos((\pi /2)(k/4000)^{12})),\end{array}$ $\begin{array}{lll}R(k)& =& (1/170)+(1/67)((\sin(42\pi k/4000))^2)*\\& &(1-((\cos(21\pi k/4000))^4)).\end{array}$ This image shows 40,000 circles. For $k=1,2,3,\ldots,40{,}000$ the center of the $k$-th circle is $(X(k), Y(k))$ and the radius of the $k$-th circle is $R(k)$, where $\begin{array}{lll}X(k)&=&(6/5)((\cos(141\pi k/40{,}000))^9)(1-(1/2)(\sin(\pi k/40{,}000))^3)*\\&&(1-(1/4)((\cos(2\pi k/40{,}000))^{30})(1+(2/3)(\cos(30\pi k/40{,}000))^{20})-\\&&((\sin(2\pi k/40{,}000))^{10})((\sin(6\pi k/40{,}000))^{10})*\\&&((1/5)+(4/5)(\cos(24\pi k/40{,}000))^{20})),\end{array}$ $\begin{array}{lll}Y(k)&=&\cos(2\pi k/40{,}000)((\cos(141\pi k/40{,}000))^2)(1+(1/4)((\cos(\pi k/40{,}000))^{24})*\\&&((\cos(3\pi k/40{,}000))^{24})(\cos(19\pi k/40{,}000))^{24}),\end{array}$ $\begin{array}{lll}R(k)&=&(1/100)+(1/40)(((\cos(2820\pi k/40{,}000))^6)+\\&&(\sin(141\pi k/40{,}000))^2)(1-((\cos(\pi k/40{,}000))^{16})*\\&&((\cos(3\pi k/40{,}000))^{16})(\cos(12\pi k/40{,}000))^{16}).\end{array}$ $\begin{array}{lll}X(k)&=&(3/2)((\cos(141\pi k/40{,}000))^9)*\\&&(1-(1/2)\sin(\pi k/40{,}000))*\\&&(1-(1/4)((\cos(2\pi k/40{,}000))^{30})*\\&&(1+(\cos(32\pi k/40{,}000))^{20}))*\\&&(1-(1/2)((\sin(2\pi k/40{,}000))^{30})*\\&&((\sin(6\pi k/40{,}000))^{10})*\\&&((1/2)+(1/2)(\sin(18\pi k/40{,}000))^{20})),\end{array}$ $\begin{array}{lll}Y(k)&=&\cos(2\pi k/40{,}000)*\\&&((\cos(141\pi k/40{,}000))^2)*\\&&(1+(1/4)((\cos(\pi k/40{,}000))^{24})*\\&&((\cos(3\pi k/40{,}000))^{24})*\\&&(\cos(21\pi k/40{,}000))^{24}),\end{array}$ $\begin{array}{lllcl}R(k)&=&(1/100)&+&(1/40)(((\cos(141\pi k/40{,}000))^{14})+(\sin(141\pi k/40{,}000))^6)*\\&&&&(1-((\cos(\pi k/40{,}000))^{16})((\cos(3\pi k/40{,}000))^{16})*\\&&&&(\cos(12\pi k/40{,}000))^{16}).\end{array}$ This image shows 2500 ellipses. For each $k=1,2,3,\ldots,2500$ the foci of the $k$-th ellipse are: $A(k)+iB(k)+C(k)e^{68\pi i k/2500}$ $A(k)+iB(k)-C(k)e^{68\pi i k/2500}$ and the eccentricity of the $k$-th ellipse is $D(k)$, where $A(k)=(-3/2)((\sin(2\pi k/2500))^3)+(3/10)((\sin(2\pi k/2500))^7),$ $B(k)=\sin((2\pi k/1875)+(\pi /6))+(1/4)(\sin((2\pi k/1875)+(\pi /6)))^3,$ $C(k)=(2/15)-(1/8)\cos(\pi k/625),$ $D(k)=(49/50)-(1/7)(\sin(4\pi k/2500))^4.$ This image shows 8,000 ellipses. 
For each $k=1,2,3,\ldots,8000$ the foci of the $k$-th ellipse are: $A(k)+iB(k)+C(k)e^{300\pi ik/8000}$ $A(k)+iB(k)-C(k)e^{300\pi ik/8000}$ $\begin{array}{llll}A(k)&=&&(3/4)\sin(2\pi k/8000)\cos(6\pi k/8000)\\&&+&(1/4)\sin(28\pi k/8000),\end{array}$ $\begin{array}{llll}B(k)&=&&(3/4)\cos(2\pi k/8000)\cos(8\pi k/8000)\\&&+&(1/4)\cos(28\pi k/8000),\end{array}$ $\begin{array}{lll}C(k)&=&(1/18)+(1/20)\cos(24\pi k/8000),\end{array}$ $ \begin{array}{lll} D(k)&=&(49/50)-(1/7)(\sin(10\pi k/8000))^4.\end{array}$ $A(k)+iB(k)+C(k)e^{44\pi ik/5600}$ $A(k)+iB(k)-C(k)e^{44\pi ik/5600}$ $\begin{array}{lll}A(k)&=&(\cos(28\pi k/5600))^3,\end{array}$ $\begin{array}{llll}B(k)&=&&\sin(28\pi k/5600)\\&&+&(1/4)(\cos((14\pi k/5600)-(7\pi /4)))^{18},\end{array}$ $\begin{array}{lll}C(k)&=&(1/70)+(1/6)+(1/6)\sin(28\pi k/5600),\end{array}$ $\begin{array}{lll}D(k)&=&(399/400)-(1/6)(\sin(28\pi k/5600))^8.\end{array}$ This image shows all circles of the form: $(x-A(k))^2+(y-B(k))^2=(R(k))^2$, for $k=-10000, -9999, \ldots , 9999, 10000$, where $\begin{array}{lllcl}A(k)&=&(3k/20{,}000)&+&\sin((\pi /2)(k/10{,}000)^7)((\cos(41\pi k/10{,}000))^6)\\&&&+&(1/4)((\cos(41\pi k/10{,}000))^{16})((\cos(\pi k/20{,}000))^{12})\sin(6\pi k/10{,}000),\end{array}$ $\begin{array}{lll}B(k)&=&-\cos((\pi /2)(k/10{,}000)^7)*\\&&(1+(3/2)(\cos(\pi k/20{,}000)\cos(3\pi k/20{,}000))^6)*\\&&((\cos(41\pi k/10{,}000))^6)+(1/2)(\cos(3\pi k/100{,}000)\cos(9\pi k/100{,}000)\cos(18\pi k/100{,}000))^{10},\end{array}$ $\begin{array}{lllcl}R(k)&=&(1/50)&+&(1/10)((\sin(41\pi k/10{,}000)\sin(9\pi k/100{,}000))^2)\\&&&+&(1/20)((\cos(41\pi k/10{,}000))^2)((\cos(\pi k/20{,}000))^{10}).\end{array}$
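As an illustration of how images like these can be reproduced (this is not the artist's own program, which is not shown on the page), a minimal Python/matplotlib sketch that draws the first 1,000-line-segment figure described above:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection

# Endpoints of the i-th segment, i = 1..1000, as given in the first description:
# (-sin(2*pi*i/1000), -cos(2*pi*i/1000)) and
# ((-1/2)*sin(8*pi*i/1000), (-1/2)*cos(12*pi*i/1000))
i = np.arange(1, 1001)
p0 = np.column_stack((-np.sin(2 * np.pi * i / 1000), -np.cos(2 * np.pi * i / 1000)))
p1 = np.column_stack((-0.5 * np.sin(8 * np.pi * i / 1000), -0.5 * np.cos(12 * np.pi * i / 1000)))

segments = np.stack((p0, p1), axis=1)          # shape (1000, 2, 2)
fig, ax = plt.subplots(figsize=(6, 6))
ax.add_collection(LineCollection(segments, colors="black", linewidths=0.2))
ax.set_aspect("equal")
ax.set_xlim(-1.1, 1.1)
ax.set_ylim(-1.1, 1.1)
ax.axis("off")
plt.savefig("segments.png", dpi=300)
```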
A ring such that $(a+b)^2=a^2+b^2$ and $(a+b)^3=a^3+b^3$ [abstract-algebra, ring-theory] (asked Sep 17 '18 at 12:12)
Prove that $\det(AB-BA)=0$ [linear-algebra, matrices, determinant, matrix-rank] (asked Dec 26 '17 at 21:45)
A determinant involving a polynomial is $0$ [matrices, derivatives, polynomials, determinant, partial-fractions] (asked Mar 31 '18 at 9:35)
Property of a continuous function in the neighborhood of a point [real-analysis, continuity] (asked Apr 1 '18 at 14:21)
Are there matrices such that $(AB-BA)^{71}=I_{69}$? [linear-algebra, matrices, eigenvalues-eigenvectors, trace, minimal-polynomials] (asked Mar 28 '18 at 21:45)
Complex numbers involving roots of unity [polynomials, complex-numbers, roots-of-unity] (asked Apr 15 '17 at 7:10)
If $f(f(x))+f(x)=x^4+3x^2+3$, prove that $f$ is even [calculus, real-analysis] (asked Mar 20 '18 at 12:29)
Finding $\lim x_n$ when $\left( 1+\frac{1}{n}\right)^{n+x_n}=1+\frac{1}{1!}+\frac{1}{2!}+\dots+\frac{1}{n!}$ [calculus, real-analysis, sequences-and-series, limits, convergence] (asked Feb 11 '18 at 19:21)
$(a_n)_{n \geq 1}=\mathbb{Q}_+$ and $\sqrt[n]{a_n}$ is convergent [real-analysis, sequences-and-series, limits, convergence, rational-numbers] (asked Feb 11 '18 at 22:28)
For which values of $a$ is $f$ primitivable? [calculus, real-analysis, integration, indefinite-integrals] (asked Nov 8 '18 at 14:16)
Astrophysics > Cosmology and Nongalactic Astrophysics
arXiv:2011.03483 (astro-ph)
[Submitted on 6 Nov 2020 (v1), last revised 18 Nov 2020 (this version, v2)]

Title: BICEP / Keck XII: Constraints on axion-like polarization oscillations in the cosmic microwave background

Authors: BICEP/Keck Collaboration: P. A. R. Ade, Z. Ahmed, M. Amiri, D. Barkats, R. Basu Thakur, C. A. Bischoff, J. J. Bock, H. Boenish, E. Bullock, V. Buza, J. R. Cheshire IV, J. Connors, J. Cornelison, M. Crumrine, A. Cukierman, M. Dierickx, L. Duband, S. Fatigoni, J. P. Filippini, S. Fliescher, N. Goeckner-Wald, J. Grayson, G. Hall, M. Halpern, S. Harrison, S. Henderson, S. R. Hildebrandt, G. C. Hilton, J. Hubmayr, H. Hui, K. D. Irwin, J. Kang, K. S. Karkare, E. Karpel, B. G. Keating, S. Kefeli, S. A. Kernasovskiy, J. M. Kovac, C. L. Kuo, K. Lau, E. M. Leitch, K. G. Megerian, L. Moncelsi, T. Namikawa, C. B. Netterfield, H. T. Nguyen, R. O'Brient, R. W. Ogburn IV, S. Palladino, T. Prouve, C. Pryke, B. Racine, C. D. Reintsema, S. Richter, A. Schillaci, B. L. Schmitt, R. Schwarz, C. D. Sheehy, A. Soliman, T. St. Germaine, B. Steinbach, R. V. Sudiwala, G. Teply, K. L. Thompson, J. E. Tolan, C. Tucker, A. D. Turner, C. Umilta, A. G. Vieregg, A. Wandui, A. C. Weber, D. V. Wiebe, J. Willmert, C. L. Wong, W. L. K. Wu, H. Yang, K. W. Yoon, E. Young, C. Yu, L. Zeng, C. Zhang

Abstract: We present a search for axion-like polarization oscillations in the cosmic microwave background (CMB) with observations from the Keck Array. A local axion field induces an all-sky, temporally sinusoidal rotation of CMB polarization. A CMB polarimeter can thus function as a direct-detection experiment for axion-like dark matter. We develop techniques to extract an oscillation signal. Many elements of the method are generic to CMB polarimetry experiments and can be adapted for other datasets. As a first demonstration, we process data from the 2012 observing season to set upper limits on the axion-photon coupling constant in the mass range $10^{-21}$-$10^{-18}~\mathrm{eV}$, which corresponds to oscillation periods on the order of hours to months. We find no statistically significant deviations from the background model. For periods larger than $24~\mathrm{hr}$ (mass $m < 4.8 \times 10^{-20}~\mathrm{eV}$), the median 95%-confidence upper limit is equivalent to a rotation amplitude of $0.68^\circ$, which constrains the axion-photon coupling constant to $g_{\phi\gamma} < \left ( 1.1 \times 10^{-11}~\mathrm{GeV}^{-1} \right ) m/\left (10^{-21}~\mathrm{eV} \right )$, if axion-like particles constitute all of the dark matter. The constraints can be improved substantially with data already collected by the BICEP series of experiments. Current and future CMB polarimetry experiments are expected to achieve sufficient sensitivity to rule out unexplored regions of the axion parameter space.

Comments: 25 pages, 6 figures, 2 tables
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); High Energy Physics - Experiment (hep-ex); High Energy Physics - Phenomenology (hep-ph)
Journal reference: Phys. Rev. D 103, 042002 (2021)
DOI: 10.1103/PhysRevD.103.042002
Cite as: arXiv:2011.03483 [astro-ph.CO] (or arXiv:2011.03483v2 [astro-ph.CO] for this version)
From: Ari Cukierman
[v1] Fri, 6 Nov 2020 17:26:18 UTC (377 KB)
[v2] Wed, 18 Nov 2020 01:25:37 UTC (367 KB)
Identification of a real constant in linear evolution equations in Hilbert spaces

Alfredo Lorenzi and Gianluca Mola, Dipartimento di Matematica "F. Enriques", Università di Milano, via C. Saldini 50, 20133 Milano, Italy

Inverse Problems & Imaging, August 2011, 5(3): 695-714. doi: 10.3934/ipi.2011.5.695

Received November 2010; Revised February 2011; Published August 2011

Let $H$ be a real separable Hilbert space and $A:\mathcal{D}(A) \to H$ be a positive and self-adjoint (unbounded) operator, and denote by $A^\sigma$ its power of exponent $\sigma \in [-1,1)$. We consider the identification problem consisting in searching for a function $u:[0,T] \to H$ and a real constant $\mu$ that fulfill the initial-value problem
$$ u' + Au = \mu \, A^\sigma u, \quad t \in (0,T), \quad u(0) = u_0, $$
and the additional condition
$$ \alpha \|u(T)\|^{2} + \beta \int_{0}^{T}\|A^{1/2}u(\tau)\|^{2}d\tau = \rho, $$
where $u_{0} \in H$, $u_{0} \neq 0$ and $\alpha, \beta \geq 0$, $\alpha+\beta > 0$ and $\rho >0$ are given. By means of a finite-dimensional approximation scheme, we construct a unique solution $(u,\mu)$ of suitable regularity on the whole interval $[0,T]$, and exhibit an explicit continuous dependence estimate of Lipschitz-type with respect to the data $u_{0}$ and $\rho$. Also, we provide specific applications to second and fourth-order parabolic initial-boundary value problems.

Keywords: linear evolution equations in Hilbert spaces, linear parabolic equations, unknown constants, well-posedness results, Faedo-Galerkin approximation, identification problems.

Mathematics Subject Classification: Primary: 35R30, 35K90; Secondary: 35K20, 35K25, 65N3.

Citation: Alfredo Lorenzi, Gianluca Mola. Identification of a real constant in linear evolution equations in Hilbert spaces. Inverse Problems & Imaging, 2011, 5 (3): 695-714. doi: 10.3934/ipi.2011.5.695
CommonCrawl
BMC Pediatrics, December 2019, 19:194
Effects of particulate matter (PM) on childhood asthma exacerbation and control in Xiamen, China
Jinzhun Wu, Taoling Zhong, Yu Zhu, Dandan Ge, Xiaoliang Lin, Qiyuan Li
Part of the following topical collections: Global and public health and healthcare
The short-term effects of particulate matter (PM) exposure on childhood asthma exacerbation and disease control rate have not yet been thoroughly assessed in the Chinese population. Previous assessments of the toxic effects of PM exposure are based either on long-term surveys or on experimental data from cell lines and mouse models, which also need to be validated by real-world evidence. We evaluated the short-term effects of PM exposure on asthma exacerbation in a Chinese population of 3106 pediatric outpatients and on the disease control rate (DCR) in a population of 3344 children using a case-crossover design. All enrolled subjects were non-hospitalized outpatients. All data for this study were collected from the electronic health record (EHR) in the period between January 1, 2016 and June 30, 2018 in Xiamen, China. We found that exposure to PM2.5 and PM10 within the past two weeks was significantly associated with an elevated risk of exacerbation (OR = 1.049, p < 0.001 for PM2.5 and OR = 1.027, p < 0.001 for PM10). In addition, exposure to PM10 was associated with a decreased DCR (OR = 0.976 for PM10, p < 0.001). Our results suggest that exposure to both PM10 and PM2.5 has significant short-term effects on childhood asthma exacerbation and DCR, which serve as useful epidemiological parameters for clinical management of asthma risk in the sensitive population.
Keywords: Childhood asthma; Particulate matter; Exacerbation; Asthma control; Electronic health record; Xiamen
Abbreviations: ACQ, Asthma Control Questionnaire; AQI, air quality index; CCO, case-crossover; DCR, disease control rate; ICS, inhaled corticosteroids; Nrf2, nuclear factor-erythroid 2-related factor 2
The online version of this article (https://doi.org/10.1186/s12887-019-1530-7) contains supplementary material, which is available to authorized users.
Asthma is a chronic allergic respiratory disease with a heterogeneous background involving both genetic and environmental factors. In 2016, 339.4 million people worldwide were affected by asthma [1]. In China, the prevalence of asthma was 3.02% in children under 14 years old (95%CI: 2.97–3.06%) [2]. Corticosteroid therapy can relieve the symptoms of asthma; however, the prevalence of asthma has still increased significantly over the past 20 years [3, 4]. Exposure to allergens in pollutants is one of the major risk factors of asthma in children [5]. Currently available evidence has shown that many environmental factors, including allergens, airborne irritants, unfavorable weather conditions and adverse indoor environments, are associated with asthma progression [6, 7]. Inhalable particulate matter (PM), including PM2.5 and PM10 (inhalable particles with an aerodynamic diameter less than or equal to 2.5 μm and 10 μm, respectively), is known as a major environmental hazard that impacts human health [8, 9, 10, 11]. Previous epidemiological studies have shown that high concentrations of PM2.5 and PM10 are associated with an elevated mortality rate and increased incidence of many diseases, such as respiratory diseases, cardiovascular diseases, central nervous system diseases and inflammation [12, 13, 14]. In China, PM has become a major cause of air pollution due to rapid industrialization and urbanization in recent years [15, 16].
This fact leads to growing concerns on the part of hospitals, government and the public about the health risks associated with PM. In particular, the ability of stakeholders to predict the impact of PM on public health is essential for hospitals to take timely and efficient actions to handle overwhelming outpatient volume caused by hazardous environmental conditions. Most of the studies conducted worldwide have addressed the relationship between PM exposure and asthma in terms of long-term effects, and few have assessed the impact of PM exposure on the asthma control rate [17]. Many studies address the transient effect of indoor and ambient pollutants and allergens on asthma exacerbation, but less is known about particulate matter [18, 19]. On the other hand, current empirical studies examining the toxic effects of PM exposure were mostly conducted in cell lines and mouse models based on a case-control design. Real-world evidence derived from the electronic health record (EHR) is likely to provide a more pragmatic and accurate estimate of the effects of PM exposure [20, 21]. It has been well documented that PM exposure causes specific immune responses in the airway [22, 23, 24]. PM induces inflammation, apoptosis, increased secretion of T-cell cytokines, and DNA damage [25, 26]. Asthmatic symptoms are documented in 14% of children worldwide [27]. Children are more susceptible to PM-related diseases because of higher breathing rates, narrower airways, immature lung tissue, and longer exposure time to outdoor ambient air [28, 29]. Xiamen is located on the southeast coast of China, in a typical subtropical climate zone. No study is available that addresses the short-term effects of PM on childhood asthma exacerbation and control rate in this area. Considering the increasing PM pollution in this area and growing public health concern over PM, there is a need to obtain further epidemiological evidence for public health services to take proper preventive measures to control the risk caused by PM exposure. Therefore, we designed this study to evaluate the effects of PM exposure on childhood asthma exacerbation and control rate.
Childhood asthma data were collected from the electronic health record system of the Pediatric Outpatient Department of the First Affiliated Hospital of Xiamen University (a Joint Commission International accredited hospital). All subjects were outpatients aged zero to 14 years who were diagnosed with asthma exacerbation in the period from January 1, 2016 to June 30, 2018. The diagnosis of childhood asthma is based on respiratory symptoms including wheezing, shortness of breath, chest tightness or cough (Additional file 1 Table 1). Patients with respiratory symptoms caused by other diseases were excluded. The classification of asthma follows the International Classification of Disease 10 (ICD-10-CM) code J45 [27]. The study was designed conforming to the ethical guidance (KY2015–027). For each case of acute exacerbation, the date of the latest asthma exacerbation was determined. Visits at which symptoms reappeared within 14 days were counted as the same exacerbation, and the last exacerbation was selected as the index exacerbation. For asthma control, the outcome was determined upon return visit after four weeks of treatment from the initial visit, based on the Guidelines for the Diagnosis and Treatment of Childhood Bronchial Asthma [30]. The outcome of the disease is defined separately for children aged below and above six (Additional file 1 Table 2).
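To make the 14-day collapsing rule described above concrete, the following is a minimal sketch in R (the study reports that its analyses were run in R 3.5; the data frame and column names used here, such as visits, patient_id and visit_date, are illustrative assumptions and not the authors' actual EHR-processing code):

# Minimal sketch: collapse repeat asthma visits occurring within 14 days of the
# previous visit into one exacerbation episode, and keep the last visit of each
# episode as the index exacerbation. Object and column names are hypothetical.
library(dplyr)

index_exacerbations <- visits %>%
  arrange(patient_id, visit_date) %>%
  group_by(patient_id) %>%
  mutate(
    gap_days    = as.numeric(visit_date - lag(visit_date)),
    new_episode = is.na(gap_days) | gap_days > 14,   # >14 days since the last visit starts a new episode
    episode_id  = cumsum(new_episode)
  ) %>%
  group_by(patient_id, episode_id) %>%
  filter(row_number() == n()) %>%                    # last visit of the episode = index exacerbation
  ungroup()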
We further classified the cohort into two subgroups: well-controlled asthma and uncontrolled or partly controlled asthma [31, 32]. Asthma was managed with budesonide aerosol inhalation, fluticasone MDI with spacer devices, or budesonide or budesonide/formoterol powder inhalation according to the patients' age. The patients were followed up every one to three months. In case of acute exacerbation, salbutamol aerosol or budesonide and aerosolized terbutaline solution for inhalation were added. Appropriate treatment was added if there was comorbidity, such as allergic rhinitis or infection. The outcome of asthma was assessed according to the "Guidelines for the Diagnosis and Prevention of Asthma in Children" [30]. Disease control was rated as well controlled, partly controlled, or uncontrolled according to the daytime and night symptoms in the past 4 weeks.
Air pollution data
Air pollution data were obtained from the Xiamen Department of Environmental Protection. The concentration of pollutants was measured at different sites of the city. Daily average PM10 and PM2.5 concentrations were used to measure the exposure. Meteorological data including daily average ambient temperature, wind speed, cumulative precipitation, humidity and barometric pressure were obtained from the Xiamen Meteorological Bureau. A case-crossover (CCO) design was used to assess the effects of PM on asthma exacerbation. To measure the exposure to PM, we recorded the number of days of AQI (air quality index) level 2 or 3 (24-h average PM2.5 > 35 μg/m3 and PM10 > 50 μg/m3) [33] within the two weeks preceding the onset of the index exacerbation (Fig. 1a). We also measured the four-week exposure before the time point of control evaluation for rating disease control. The outcome of disease control was defined as 1 if asthma was controlled, or 0 if the disease was partly controlled or uncontrolled (Fig. 1b).
Fig. 1 Schematic view of the study design. Panel (a): For patients with acute exacerbation, the day two weeks before the exacerbation was taken as the control day. The PM exposures within the 2 weeks before the exacerbation day and the control day were recorded, respectively. Panel (b): The PM exposure within the 4 weeks before the return visit was recorded. Patients were assessed at the follow-up visit based on the symptoms in the past 4 weeks.
To evaluate the effects of PM exposure on asthma exacerbation and control rate, mixed-effects logistic regression was performed, in which PM exposure was considered a fixed effect and individual patients a random effect. Fever and weather conditions, including average temperature, cumulative precipitation and average wind speed, were covariates in the model. We standardized the estimated odds ratio (OR) for each fixed effect to compare the effects of different factors. The model is described as:
$$ \mathrm{logit}(P)=\log\left(\frac{P}{1-P}\right)=\beta M+\tau T+\gamma R+\omega W+\varphi F+\mu s $$
where P is the probability of asthma exacerbation or control, M is the measure of exposure to PM2.5 or PM10, s is a random grouping variable corresponding to each individual, T is average temperature, R is cumulative precipitation, W is average wind speed and F is fever; β, τ, γ, ω, φ and μ are regression coefficients.
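The exposure measure and the mixed-effects logistic regression described above can be sketched in R as follows. The analysis code itself is not published with this text, so the data frames, column names and the use of lme4::glmer here are illustrative assumptions rather than the authors' implementation:

# Minimal sketch of the exposure measure and the mixed-effects logistic model.
# All object and column names (daily_air, cases, pm25_days, etc.) are hypothetical.
library(dplyr)
library(lme4)

# 1) Exposure: number of days in the 14 days before the index date on which the
#    24-h average exceeded the AQI level-2/3 thresholds (PM2.5 > 35, PM10 > 50 ug/m3).
count_exposure_days <- function(index_date, daily_air, window = 14) {
  win <- filter(daily_air, date >= index_date - window, date < index_date)
  c(pm25_days = sum(win$pm25 > 35), pm10_days = sum(win$pm10 > 50))
}

# 2) Case-crossover data: each row is a case day (y = 1) or its matched control
#    day two weeks earlier (y = 0), with exposure and covariates for that window.
# 3) Mixed-effects logistic regression: PM exposure, mean temperature, cumulative
#    precipitation, mean wind speed and fever as fixed effects; patient as a
#    random intercept. Continuous predictors are scaled so that exp(coefficient)
#    can be read as a standardized odds ratio (per one standard deviation).
fit <- glmer(
  y ~ scale(pm25_days) + scale(temp_mean) + scale(precip_sum) +
      scale(wind_mean) + fever + (1 | patient_id),
  data = cases, family = binomial
)
exp(fixef(fit))   # odds ratios for the fixed effects

As in the study, PM2.5 and PM10 would be entered in separate models rather than together, because of their strong collinearity.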
As there is high collinearity between PM10 and PM2.5, evidenced by a pairwise Pearson correlation coefficient of 0.906, which would lead to instability of the effect estimates in a multivariate regression analysis, separate regression models were built for the two air pollutants. All statistical procedures were conducted using R 3.5.
Results
Summary of patient information
A total of 3106 patients with 4728 cases of acute asthma exacerbation were identified from 16,355 cases of childhood asthma (Table 1). The patients included 2110 (67.9%) males and 996 (32.1%) females. The age of these patients ranged from zero to fourteen years. Patients aged four to six accounted for the largest proportion (39.9%), indicating that preschool children were the most affected by asthma. In the control period, 53 patients (1.1%) had fever, while 832 patients (18.2%) had fever during the 2 weeks before exacerbation. Among the 3443 returning-visit patients, 2292 (66.6%) were males and 1151 (33.4%) were females, and children aged four to six accounted for the largest proportion (44.8%). In the course of the 4 weeks over which we assessed the control level of the patients, 6.6% of the patients had fever. There are nine subtypes of J45 present in the cohort (Additional file 2 Figure 1a). Bronchial asthma (J45.903) makes up the majority of the cohort (41.2%), followed by asthmatic bronchitis (J45.901, 26.2%) and cough variant asthma (J45.005, 19.7%). The other subtypes (J45.004, J45.900, J45.000, J45.904, J45.006 and J45.003) cover 12.9% of the cohort.
[Table 1. Patients' characteristics of the study cohorts, by age group (years), for the exacerbation and return-visit cohorts; only the caption and column headers survived extraction.]
Summary of the exposure measures and covariates
The summary statistics of the environmental variables are given in Table 2. During the study period, the daily levels of PM2.5 ranged from 6 to 110 μg/m3 with an annual mean of 27.44 μg/m3. The mean PM2.5 concentration was 21.6% lower than the Grade II annual PM2.5 standard of the CNAAQS (35 μg/m3), but 2.7 times higher than the annual average PM2.5 guideline value of the WHO (10 μg/m3). Daily levels of PM10 ranged from 11 to 141 μg/m3 with an annual mean of 47.66 μg/m3. The exposure to PM2.5 for the cohort ranged from 0 to 7 days in the one week before the exacerbation, and 0 to 14 days in the two weeks before the exacerbation (Fig. 2). The exposure to PM10 for the cohort ranged from 0 to 7 days in one week and 0 to 14 days in two weeks (Fig. 3). The average exposure to PM was 2 days in one week and 4 days in two weeks for PM2.5, and 3 days in one week and 6 days in two weeks for PM10.
[Table 2. Overview of environmental variables in Xiamen: first and third quartiles of PM2.5 (μg/m3), PM10 (μg/m3), temperature (°C), precipitation (mm) and wind speed (m/s); the numeric entries did not survive extraction.]
Fig. 2 Distribution and exposure level of PM2.5.
Panel (a): Distribution of exposure days of PM2.5 in one week before the exacerbation and the density curve; Panel (b): Distribution of exposure days of PM2.5 in two weeks before the exacerbation and the density curve; Panel (c): Exposure days of PM2.5 in one week (red line) and in two weeks (blue line) before the exacerbation.
Fig. 3 Distribution and exposure level of PM10. Panel (a): Distribution of exposure days of PM10 in one week before the exacerbation and the density curve; Panel (b): Distribution of exposure days of PM10 in two weeks before the exacerbation and the density curve; Panel (c): Exposure days of PM10 in one week (red line) and in two weeks (blue line) before the exacerbation.
As for the weather conditions during the study period, the average daily temperature ranged from 3.9 to 31 °C (annual average 21.3 °C). The average precipitation ranged from 0 to 172.7 mm (annual average 4.07 mm), and the average wind speed ranged from 2 to 9.6 m/s (annual average 2.68 m/s).
PM exposure versus risk of exacerbation
The exposure to PM2.5 in one week (standardized OR = 1.091; 95% CI: [1.029, 1.157]; p = 0.003) and in two weeks (standardized OR = 1.161; 95% CI: [1.084, 1.243]; p < 0.001) was significantly associated with a higher risk of asthma exacerbation (Fig. 4, Table 3a), and the effect of PM2.5 exposure over two weeks was stronger than that over one week.
Fig. 4 Odds ratios of asthma exacerbation estimated for the exposure days to PM2.5 and PM10. Panel (a): Exposure days to PM2.5 within one week. Panel (b): Exposure days to PM2.5 within two weeks. Panel (c): Exposure to PM10 within one week. Panel (d): Exposure to PM10 within two weeks.
[Table 3. Odds ratios of asthma exacerbation for exposure days to (a) PM2.5 and (b) PM10, reporting the standardized OR, % increase, standardized 95% CI and p-value for the one-week and two-week models; most numeric entries did not survive extraction. *P < 0.05, **P < 0.01, ***P < 0.001.]
Like PM2.5, PM10 exposure during one week and two weeks showed a significant increase in the risk of asthma attacks. Each incremental day of exposure increased the risk of asthma onset by 7.12% (p = 0.015; 95% CI: [1.3, 13.2%], in one week) and 10.64% (p < 0.001; 95% CI: [4.2, 17.5%], in two weeks) (Table 3b). As for weather conditions, temperature and wind speed had significant effects on asthma exacerbation: a rise in temperature increased the risk of asthma exacerbation, and an increase in wind speed reduced it. In the PM2.5 models, the standardized OR of temperature over one week was 1.049 (p = 0.079), and over two weeks it was 1.125 (p < 0.001) (Table 3a). In the PM10 model, the standardized OR of temperature over two weeks was 1.079 (p = 0.008) (Table 3b). In addition, in the PM10 models, the standardized OR of wind speed over one week was 0.950 (p = 0.021), and over two weeks it was 0.954 (p = 0.033) (Table 3b). Fever had a significant effect on asthma exacerbation: in the two-week PM2.5 model the standardized OR of fever was 2.402 (p < 0.001), and in the PM10 model it was 2.401 (p < 0.001).
Association between PM exposure and disease control rate of childhood asthma
During the whole period, the exposure to PM2.5 and PM10 was higher in winter and lower in summer, while the control rate peaked in summer and was lowest in winter (Fig. 5a).
As the number of PM exposure days increased, the control rate showed a downward trend (Fig. 5b).
Fig. 5 The association between PM exposure and DCR for childhood asthma. Panel (a): Time series of PM and DCR for childhood asthma during the study period. Panel (b): Distribution of PM exposure and DCR. PM2.5 (blue) and PM10 (red) are indicated.
Among the 3443 returning patients, PM2.5 exposure did not affect the control rate (p = 0.347, Fig. 6a, Table 4); however, exposure to PM10 had a negative effect on the childhood asthma control rate (Fig. 6b, Table 4), as each additional day of exposure to PM10 reduced the odds of childhood asthma control by 15.18% (standardized OR 0.848; 95% CI: [0.786, 0.915], p < 0.001). Fever was associated with a decrease in DCR (standardized OR 0.923 in the PM2.5 model and 0.924 in the PM10 model).
Fig. 6 Odds ratios of each additional day of exposure to PM2.5 (a) and PM10 (b) on the DCR of childhood asthma.
[Table 4. Odds ratios of asthma control for exposure days to PM2.5 and PM10; only fragments of the numeric entries survived extraction.]
Our study confirmed that exposure to PM2.5 and PM10 within one or two weeks posed a significant risk of exacerbation of childhood asthma in Xiamen, China. The risk of PM exposure was independent of the effects of other pollutants, weather conditions, or individual variation. In addition, our data suggested that the effect of exposure to PM lasts for at least weeks. The association between PM exposure and the risk of asthma has been studied in different regions of the world, and a consensus has been reached that high exposure to PM causes an increased risk of exacerbation and admission [34]. For example, one study conducted in Seattle, Washington suggested that for every 11 μg/m3 increase in PM2.5 concentration, the OR of childhood asthma was 1.15 (95% CI: 1.08 to 1.23) [35]. An Australian survey which sampled 36,024 hospitalized patients with asthma showed that the impacts of PM2.5, NO2, PM10 and pollen in the cold season on hospitalization for asthma were 30.2% (95% CI: 13.4 to 49.6%), 12.5% (95% CI: 6.6 to 18.7%), 8.3% (95% CI: 2.5 to 14.4%), and 4.2% (95% CI: 2.2 to 6.1%), respectively [36]. Taiwanese scholars used open government data to investigate air pollution in cities with differing urban patterns, using time-stratified case-crossover studies and conditional logistic regression in 4237 hospitalized children with asthma in Taipei and Kaohsiung from 2001 to 2010 [37]. The results showed that the risk of hospitalization for childhood asthma was significantly correlated with air pollutants. After adjusting for season, air pollution in Kaohsiung City had a greater impact on hospitalization for childhood asthma than that in Taipei. Although many studies have addressed the long-term effects of PM exposure on asthma risk [17], less is known about the transient effects of PM exposure on the scale of weeks. Several recent studies address the short-term effects of PM exposure on asthma exacerbation in Ningbo, Taipei, Seoul and Detroit [38, 39, 40, 41]. According to these reports, the largest effect size of PM exposure on asthma exacerbation occurs at lags of 5 to 10 days. In order to accommodate the lagged effect, we estimated the effect size for one week and two weeks of exposure, respectively. Our data confirm PM exposure as a risk factor for asthma exacerbation, with the effect peaking at two weeks of exposure, which is consistent with prior studies.
Moreover, our results suggest that PM exposure has a negative effect on the disease control rate, which provides additional evidence for the hazardous impact of PM on childhood asthma. In an investigation of commuters, PM2.5 exposure was associated with a lower FEV1% predicted among participants with below-median asthma control (3 h post-commute: -7.2 [95% CI = −11.8, −2.7]) [42]. A study in El Paso, Texas showed positive associations between Asthma Control Questionnaire (ACQ) scores and 96-h effects of PM10, PM2.5, black carbon, NO2 and ozone; in that study, the ACQ was used to evaluate asthma control [43]. Scottish scholars found an exposure-response relationship between indoor PM2.5 concentration and poorer asthma control in children prescribed inhaled corticosteroids (ICS) [44]. The effect of PM2.5 in that study is reported after 5 days of exposure. Prior studies use different measures to quantify the level of exposure to PM [39, 40]. In this study, we used the "Technical Regulation on Ambient Air Quality Index" (AQI, HJ 633–2012) [33] issued by the Chinese government as the official standard for classifying air quality, and used the total number of days at level 2 or 3 as the measure of exposure. The regional AQI is based on air-pollution measurements from different sites and normalized for geographical variation, and is hence more accurate and comprehensive. In addition, the use of the AQI makes our results directly applicable to regulatory policies for pollution control and public health. There are other ways to measure the exposure to PM, such as the average concentration; our results based on exposure days are consistent with and complementary to the prior studies. PM exposure is not a stand-alone risk factor for asthma exacerbation. It has been previously shown that weather conditions, other environmental exposures, infections and self-management all contribute to the exacerbation of asthma. Our study is based on a case-crossover design in which each subject serves as its own control. Such a design can effectively remove inter-subject variation such as self-management. As for the weather conditions, temperature, barometric pressure and humidity are tightly correlated with each other; therefore, we kept only temperature to avoid collinearity. Co-morbid infections are not directly measured in the data we obtained but strongly affect the exacerbation of asthma. Therefore, we used surrogate variables such as the record of fever in the history of the present illness. To estimate the effect of PM exposure on DCR, we combined the uncontrolled and partly controlled subjects into one group. The same classification is used in prior clinical studies of asthma exacerbation [31, 32]. Moreover, around 20% of partly controlled asthma develops into uncontrolled disease and carries a risk of exacerbation (0.1%) [45, 46]. In spite of the growing concern over air pollution caused by PM, hospitals and public health services in China still lack an accurate regional assessment of the risk posed by PM exposure, which is required for risk management and preventative measures. The results of our study provide a basis for preventative and clinical management of the exacerbation risk of asthma. In particular, we also described a method based on a case-crossover design that can be applied to other regions of the country. Real-world evidence (RWE) has become increasingly important in medical and epidemiological research.
Our study, based on information extracted from a local EHR database, provides a plausible pipeline to address environmental risk factors using RWE, which enables a more accurate estimate of the effects in a large population. On the other hand, unknown bias factors can confound analyses based on RWE; therefore, we considered all possible covariates. More importantly, the case-crossover design is based on self-control and is thus less affected by sampling biases. Finally, the biological mechanism of the toxicity of PM is not fully elucidated in humans. However, many studies based on cell lines and animal models confirm that the toxicity of PM is related to its immunogenicity and the consequent immune responses [47]. In OVA-sensitized mice, exposure to PM promotes the proliferation of peribronchial lymph nodes and the activation of T-helper type 2 cells, which provokes inflammation in the airway [48, 49]. Other studies suggest that exposure to PM results in an increase in both neutrophils and eosinophils [50]; it also causes an imbalance of Th1/Th2 activity through activation of TNF-α and suppression of IFN-γ [51, 52]. Moreover, prior studies also demonstrate that exposure to PM affects the activities of monocytes and macrophages [53]. A number of pathological changes, such as inflammatory cell infiltration, bronchial smooth muscle thickening, and bronchial mucosal injury, are observed following exposure to PM [54]. A more recent study shows that certain signaling pathways, such as the Toll-like receptor and nuclear factor-erythroid 2-related factor 2 (Nrf2) pathways, are involved in the inflammatory responses in the airway of asthmatic mice [55]. The physicochemical properties of PM vary substantially with the source of the pollutant as well as the climate, and the exact molecular basis underlying the toxicity of PM is still unclear. Our results are constrained to the local conditions in Xiamen and may differ from other regions due to the different chemical features of PM. To address this question, a systematic chemical description of the PM is needed in future studies.
This study assessed the short-term effects of air pollution and weather conditions on childhood asthma exacerbation and control rate in Xiamen. We confirmed that short-term exposure to PM for one or two weeks increased the risk of exacerbation in asthmatic children and compromised the disease control rate. Our study provides epidemiological data for formulating environmental health policy and for the clinical prevention of asthma in children. Our findings reaffirm the necessity of preventive care for the asthma-susceptible population according to environmental conditions.
We would like to thank Dr. Liyang Zhan for advice on environmental pollutants.
Availability of data and material
Datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. This research was funded by the Natural Science Foundation of Fujian Province (No. 2016J01644). The funding body provided funding for the collection of data and the hardware and software used in this research. J.W., T.Z., and Q.L. designed the study, analyzed and interpreted the data, secured the funding for the study, and wrote the paper. Y.Z., D.G., and X.L. helped to collect and analyze data, and critically revised the manuscript. All authors read and approved the manuscript for submission.
Ethical clearance was obtained from Ethical Review Board of the First Affiliated Hospital of Xiamen University conforming to the institutional ethical guidance (KY2015–027). All private information, including patient ID, residence and contact information is crypted and hashed. As no private information is revealed in this study, the review board agreed to waive the statement of consent. The usage of the patient records was authorized by the director of the Pediatric Department in the scientific management system. Additional file 1: The enrollment criteria of patients in the study. (DOCX 15 kb) 12887_2019_1530_MOESM2_ESM.pdf (343 kb) Additional file 2: Assessment of disease control of asthma for children below and above 6 years old. (PDF 343 kb) Global Asthma Network. The global asthma report 2018. Auckland, New Zealand.Google Scholar The National Cooperative Group on Childhood Asthma. The third nationwide survy of childhood asthma in urban areas of China. China JPediatr. 2013;51:729–36.Google Scholar Asher MI, Montefort S, Bjorksten B, Lai CK, Strachan DP, Weiland SK, Williams H. Worldwide time trends in the prevalence of symptoms of asthma, allergic rhinoconjunctivitis, and eczema in childhood: ISAAC phases one and three repeat multicountry cross-sectional surveys. Lancet. 2006;368:733–43.CrossRefGoogle Scholar Russell G. The childhood asthma epidemic. Thorax. 2006;61:276–8.CrossRefGoogle Scholar Ahmadizar F, Vijverberg SJH, Arets HGM, Boer A, Lang JE, Garssen J, Kraneveld A. Maitland-van Der zee AH early-life antibiotic exposure increases the risk of developing allergic symptoms later in life: a meta-analysis. Allergy. 2018;73:971–86.CrossRefGoogle Scholar Kelly FJ, Fussell JC. Air pollution and airway disease. Clin Exp Allergy. 2011;41:1059–71.CrossRefGoogle Scholar Asher I, Pearce N. Global burden of asthma among children. Int J Tuberc Lung Dis. 2014;18:1269–78.CrossRefGoogle Scholar Li Q, Liu H, Alattar M, Jiang S, Han J, Ma Y, Jiang C. The preferential accumulation of heavy metals in different tissues following frequent respiratory exposure to PM2.5 in rats. Sci Rep. 2015;5.Google Scholar Zhou Z, Liu Y, Duan F, Qin M, Wu F, Sheng W, Yang L, Liu J, He K. Transcriptomic analyses of the biological effects of airborne PM2.5 exposure on human bronchial epithelial cells. PLoS One. 2015;10.CrossRefGoogle Scholar Bunyavanich S, Schadt EE. Systems biology of asthma and allergic diseases: a multiscale approach. J Allergy Clin Immunol. 2015;135:31–42.CrossRefGoogle Scholar Janssen NAH, Fischer P, Marra M, Ameling C, Cassee FR. Short-term effects of PM2.5, PM10 and PM2.5-10 on daily mortality in the Netherlands. Sci Total Environ. 2013;463:20–6.CrossRefGoogle Scholar Kaji DA, Belli AJ, Mccormack MC, Matsui EC, Williams DAL, Paulin L, Putcha N, Peng RD, Diette GB, Breysse PN, et al. Indoor pollutant exposure is associated with heightened respiratory symptoms in atopic compared to non-atopic individuals with COPD. Bmc Pulmonary Medicine. 2014;14.Google Scholar Lee B-J, Kim B, Lee K. Air pollution exposure and cardiovascular disease. Toxicological research. 2014;30:71–5.CrossRefGoogle Scholar Ying Z, Xu X, Bai Y, Zhong J, Chen M, Liang Y, Zhao J, Liu D, Morishita M, Sun Q, et al. Long-term exposure to concentrated ambient PM2.5 increases mouse blood pressure through abnormal activation of the sympathetic nervous system: a role for hypothalamic inflammation. Environ Health Perspect. 2014;122:79–86.Google Scholar Huang F, Li X, Wang C, Xu Q, Wang W, Luo Y, Tao L, Gao Q, Guo J, Chen S, et al. 
PM2.5 spatiotemporal variations and the relationship with meteorological factors during 2013-2014 in Beijing, China. PLoS One. 2015;10.CrossRefGoogle Scholar Wu J, Xie W, Li W, Li J. Effects of urban landscape pattern on PM2.5 pollution-a Beijing case study. PLoS One. 2015;10.CrossRefGoogle Scholar Reid CE, Jerrett M, Tager IB, Petersen ML, Mann JK, Balmes JR. Differential respiratory health effects from the 2008 northern California wildfires: a spatiotemporal approach. Environ Res. 2016;150:227–35.CrossRefGoogle Scholar Ali Abdalla A, Mohammed O, Ghmaird A, Albalawi S, Jad N, Mirghani H, Mursal A, Amirthalingam P. Association of triggering factors with asthma exacerbations among the pediatric population in Tabuk. Kingdom of Saudi Arabia. 2016;5.Google Scholar DePriest K, Butz A. Neighborhood-level factors related to asthma in children living in urban areas: an integrative literature review. J Sch Nurs. 2017;33(1):8–17.CrossRefGoogle Scholar Pablo-Romero MP, Roman R, Gonzalez Limon JM, Praena-Crespo M. Effects of fine particles on children's hospital admissions for respiratory health in Seville, Spain. J Air Waste Manage Assoc. 2015;65:436–44.CrossRefGoogle Scholar Jedrychowski WA, Perera FP, Spengler JD, Mroz E, Stigter L, Flak E, Majewska R, Klimaszewska-Rembiasz M, Jacek R. Intrauterine exposure to fine particulate matter as a risk factor for increased susceptibility to acute broncho-pulmonary infections in early childhood. Int J Hyg Environ Health. 2013;216:395–401.CrossRefGoogle Scholar Iskandar A, Andersen ZJ, Bonnelykke K, Ellermann T, Andersen KK, Bisgaard H. Coarse and fine particles but not ultrafine particles in urban air trigger hospital admission for asthma in children. Thorax. 2012;67:252–7.CrossRefGoogle Scholar Samoli E, Nastos PT, Paliatsos AG, Katsouyanni K, Priftis KN. Acute effects of air pollution on pediatric asthma exacerbation: evidence of association and effect modification. Environ Res. 2011;111:418–24.CrossRefGoogle Scholar Son J, Lee J, Park Y, Bell M. Short-term effects of air pollution on hospital admissions in Korea. Epidemiology. 2013;24:545–54.CrossRefGoogle Scholar Kumar SS, Muthuselvam P, Pugalenthi V, Subramanian N, Ramkumar KM, Suresh T, Suzuki T, Rajaguru P. Toxicoproteomic analysis of human lung epithelial cells exposed to steel industry ambient particulate matter (PM) reveals possible mechanism of PM related carcinogenesis. Environ Pollut. 2018;239:483–92.CrossRefGoogle Scholar Pfeffer PE, Ho TR, Mann EH, Kelly FJ, Sehlstedt M, Pourazar J, Dove RE, Sandstrom T, Mudway IS, Hawrylowicz CM. Urban particulate matter stimulation of human dendritic cells enhances priming of naive CD8 T lymphocytes. Immunology. 2018;153:502–12.CrossRefGoogle Scholar Mallol J, Crane J, Von Mutius E, Odhiambo J, Keil U, Stewart A. The international study of asthma and allergies in childhood (ISAAC) phase three: a global synthesis. Allergol Immunopathol. 2013;41:73–85.CrossRefGoogle Scholar Shannon MW, Best D, Binns HJ, Johnson CL, Kim JJ, Mazur LJ, Reynolds DW, Roberts JR, Weil WB, Balk SJ, et al. Ambient air pollution: health hazards to children. Pediatrics. 2004;114:1699–707.CrossRefGoogle Scholar Bateson TF, Schwartz J. Children's response to air pollutants. Journal of Toxicology and Environmental Health-Part a-Current Issues. 2008;71:238–43.CrossRefGoogle Scholar The Subspecialty Group of Respirology. The Society of Pediatrics, Chinese Medical Association guidelines for the diagnosis and prevention of asthma in children (2016). Chin. J Pediatr. 2016. 
https://doi.org/10.3760/ema.j.issn.0578-1310.2016.03.003. Park HJ, Byun MK, Kwon J-W, Kim WK, Nahm D-H, Lee M-G, Lee S-P, Lee SY, Lee J-H, Jeong YY, et al. Video education versus face-to-face education on inhaler technique for patients with well-controlled or partly-controlled asthma: a phase IV, open-label, non-inferiority, multicenter, randomized, controlled trial. PLoS One. 2018;13(8).CrossRefGoogle Scholar Adachi M, Hozawa S, Nishikawa M, Yoshida A, Jinnai T, Tamura G. Asthma control and quality of life in a real-life setting: a cross-sectional study of adult asthma patients in Japan (ACQUIRE-2). The Journal of asthma : official journal of the Association for the Care of Asthma. 2018:1–10.Google Scholar Ministry of Environmental Protection of P. R. C. Technical requirements for environmental air quality index (AQI) (for Trial Implementation). Journal of China Environmental Management Cadre College. 2012;22:44.Google Scholar Kloog I, Coull BA, Zanobetti A, Koutrakis P. Schwartz JD acute and chronic effects of particles on hospital admissions in new-England. PLoS One. 2012;7(8).CrossRefGoogle Scholar Norris G, Youngpong SN, Koenig JQ, Larson TV, Sheppard L, Stout JW. An association between fine particles and asthma emergency department visits for children in Seattle. Environ Health Perspect. 1999;107:489–93.CrossRefGoogle Scholar Chen K, Glonek G, Hansen A, Williams S, Tuke J, Salter A, Bi P. The effects of air pollution on asthma hospital admissions in Adelaide, South Australia, 2003-2013: time-series and case-crossover analyses. Clin Exp Allergy. 2016;46:1416–30.CrossRefGoogle Scholar Kuo CY, Pan RH, Chan CK, Wu CY, Phan DV, Chan CL. Application of a time-stratified case-crossover design to explore the effects of air pollution and season on childhood asthma hospitalization in cities of differing urban patterns: big data analytics of government open data. Int J Environ Res Public Health. 2018;15(15).CrossRefGoogle Scholar Li G, Huang J, Xu G, Pan X, Qian X, Xu J, Zhao Y, Zhang T, Liu Q, Guo X, et al. The short term burden of ambient fine particulate matter on chronic obstructive pulmonary disease in Ningbo, China. Environ Health. 2017;16.Google Scholar Chang J-H, Hsu S-C, Bai K-J, Huang S-K, Hsu C-W. Association of time-serial changes in ambient particulate matters (PMs) with respiratory emergency cases in Taipei's Wenshan District. PLoS One. 2017;12(7).CrossRefGoogle Scholar Kim H, Kim H, Park YH, Lee JT. Assessment of temporal variation for the risk of particulate matters on asthma hospitalization. Environ Res. 2017;156:542–50.CrossRefGoogle Scholar Martenies SE, Batterman SA. Effectiveness of using enhanced filters in schools and homes to reduce indoor exposures to PM2.5 from outdoor sources and subsequent health benefits for children with asthma. Environmental Science & Technology. 2018;52(18):10767–76.CrossRefGoogle Scholar Mirabelli MC, Golan R, Greenwald R, Raysoni AU, Holguin F, Kewada P, Winquist A, Flanders WD, Sarnat JA. Modification of traffic-related respiratory response by asthma control in a population of Car commuters. Epidemiology. 2015;26:546–55.CrossRefGoogle Scholar Zora JE, Sarnat SE, Raysoni AU, Johnson BA, Li W-W, Greenwald R, Holguin F, Stock TH, Sarnat JA. Associations between urban air pollution and pediatric asthma control in El Paso, Texas. Sci Total Environ. 2013;448:56–65.CrossRefGoogle Scholar Woods KE, Apsley A, Semple S, Turner SW. Domestic airborne fine particulate matter exposure and asthma control among children receiving inhaled steroid treatment. 
Indoor and Built Environment. 2014;23:497–503.CrossRefGoogle Scholar Van Weel C, Bateman ED, Bousquet J, Reid J, Grouse L, Schermer T, Valovirta E, Zhong N. Asthma management pocket reference 2008. Allergy. 2008;63(8):997–1004.CrossRefGoogle Scholar Bateman ED, Reddel HK, Eriksson G, Peterson S, Ostlund O, Sears MR, Jenkins C, Humbert M, Buhl R, Harrison TW, et al. Overall asthma control: the relationship between current control and future risk. J Allergy Clin Immunol. 2010;125(3):600–8.CrossRefGoogle Scholar De Grove KC, Provoost S, Brusselle GG, Joos GF, Maes T. Insights in particulate matter-induced allergic airway inflammation: focus on the epithelium. ClinExp Allergy. 2018;48(7):773–86.CrossRefGoogle Scholar De HC, Hassing I, Bol M, Bleumink R, Pieters R. Ultrafine but not fine particulate matter causes airway inflammation and allergic airway sensitization to co-administered antigen in mice. Clin Exp Allergy. 2010;36(11):1469–79.Google Scholar Haar CD, Hassing I, Bol M, Bleumink R, Pieters R. Ultrafine carbon black particles cause early airway inflammation and have adjuvant activity in a mouse allergic airway disease model. Toxicological Sciences An Official Journal of the Society of Toxicology. 2005;87(2):409.CrossRefGoogle Scholar Mcgee MA, Kamal AS, Mcgee JK, Wood CE, Dye JA, Krantz QT, Landis MS, Gilmour MI, Gavett SH. Differential effects of particulate matter upwind and downwind of an urban freeway in an allergic mouse model. Environmental Science & Technology. 2015;49(6):3930–9.CrossRefGoogle Scholar Xingliang Z, Wenqing Z, Qingqi M, Qianwen L, Chao F, Xiulan H, Chengyan L, Yuge H, Jianxin T. Ambient PM2.5 exposure exacerbates severity of allergic asthma in previously sensitized mice. Journal of Asthma Official Journal of the Association for the Care of Asthma. 2015;52(8):785–94.Google Scholar Wang Y-H, Lin Z-Y, Yang L-W, He H-J, Chen T, Xu W-Y, Li C-Y, Zhou X, Li D-M, Song Z-Q, et al. PM2.5 exacerbate allergic asthma involved in autophagy signaling pathway in mice. Int J Clin Exp Pathol. 2016;9(12):12247–61.Google Scholar Becker S, Mundandhara S, Devlin RB, Madden M. Regulation of cytokine production in human alveolar macrophages and airway epithelial cells in response to ambient air pollution particles: further mechanistic studies. Toxicol Appl Pharmacol. 2005;207(2 Suppl):269–75.CrossRefGoogle Scholar Liu MH, Fan X, Wang N, Zhang Y, Yu J. Exacerbating effects of PM2.5 in OVA-sensitized and challenged mice and the expression of TRPA1 and TRPV1 protein in lung. Journal of Asthma Official Journal of the Association for the Care of Asthma. 2017;54(8):1–11.CrossRefGoogle Scholar Deng X, Rui W, Zhang F, Ding W. PM2.5 induces Nrf2-mediated defense mechanisms against oxidative stress; by activating PIK3/AKT signaling pathway in human lung alveolar; epithelial A549 cells. Cell Biology & Toxicology. 2013;29(3):143–57.CrossRefGoogle Scholar 1.Department of Pediatrics, the First Affiliated Hospital of Xiamen UniversityXiamenChina 2.National Institute for Data Science in Health and Medicine, School of MedicineXiamen UniversityXiamenChina Wu, J., Zhong, T., Zhu, Y. et al. BMC Pediatr (2019) 19: 194. https://doi.org/10.1186/s12887-019-1530-7 Accepted 08 May 2019 Publisher Name BioMed Central
CommonCrawl
PARTICLE AND FIELD THEORY
Observation of $e^+e^- \rightarrow D_s^+ \overline{D}{}^{(*)0} K^-$ and study of the P-wave $D_s$ mesons
M. Ablikim, M. N. Achasov, S. Ahmed, M. Albrecht, M. Alekseev, A. Amoroso, F. F. An, Q. An, Y. Bai, O. Bakina, R. Baldini Ferroli, Y. Ban, K. Begzsuren, D. W. Bennett, J. V. Bennett, N. Berger, M. Bertani, D. Bettoni, F. Bianchi, I. Boyko, R. A. Briere, H. Cai, X. Cai, A. Calcaterra, G. F. Cao, S. A. Cetin, J. Chai, J. F. Chang, W. L. Chang, G. Chelkov, G. Chen, H. S. Chen, J. C. Chen, M. L. Chen, S. J. Chen, Y. B. Chen, W. S. Cheng, G. Cibinetto, F. Cossio, H. L. Dai, J. P. Dai, A. Dbeyssi, D. Dedovich, Z. Y. Deng, A. Denig, I. Denysenko, M. Destefanis, F. De Mori, Y. Ding, C. Dong, J. Dong, L. Y. Dong, M. Y. Dong, Z. L. Dou, S. X. Du, J. Z. Fan, J. Fang, S. S. Fang, Y. Fang, R. Farinelli, L. Fava, F. Feldbauer, G. Felici, C. Q. Feng, M. Fritsch, C. D. Fu, Y. Fu, Q. Gao, X. L. Gao, Y. N. Gao, Y. G. Gao, Z. Gao, B. Garillon, I. Garzia, A. Gilman, K. Goetzen, L. Gong, W. X. Gong, W. Gradl, M. Greco, L. M. Gu, M. H. Gu, S. Gu, Y. T. Gu, A. Q. Guo, L. B. Guo, R. P. Guo, Y. P. Guo, A. Guskov, Z. Haddadi, S. Han,
Studies of $e^+e^- \to D^+_s \overline{D}{}^{(*)0} K^-$ and the $P$-wave charmed-strange mesons are performed based on an $e^+e^-$ collision data sample corresponding to an integrated luminosity of 567 pb$^{-1}$ collected with the BESIII detector at $\sqrt{s}= 4.600$ GeV. The processes $e^+e^-\to D^+_s \overline{D}{}^{*0} K^-$ and $D^+_s \overline{D}{}^{0} K^-$ are observed for the first time and are found to be dominated by the modes $D_s^+ D_{s1}(2536)^-$ and $D_s^+ D^*_{s2}(2573)^-$, respectively. The Born cross sections are measured to be $\sigma^{B}(e^+e^-\to D^+_s \overline{D}{}^{*0} K^-) = (10.1\pm2.3\pm0.8)$ pb and $\sigma^{B}(e^+e^-\to D^+_s \overline{D}{}^{0} K^-) = (19.4\pm2.3\pm1.6)$ pb, and the products of Born cross section and decay branching fraction are measured to be $\sigma^{B}(e^+e^-\to D^+_s D_{s1}(2536)^- + c.c.)\cdot {\cal{B}}( D_{s1}(2536)^- \to \overline{D}{}^{*0} K^-) = (7.5 \pm 1.8 \pm 0.7)$ pb and $\sigma^{B}(e^+e^-\to D^+_s D^*_{s2}(2573)^- + c.c.)\cdot {\cal{B}}( D^*_{s2}(2573)^- \to \overline{D}{}^{0} K^-) = (19.7 \pm 2.9 \pm 2.0)$ pb. For the $D_{s1}(2536)^-$ and $D^*_{s2}(2573)^-$ mesons, the masses and widths are measured to be $M( D_{s1}(2536)^- ) = (2537.7 \pm 0.5 \pm 3.1)\; {\rm{MeV}}/c^2$, $\Gamma( D_{s1}(2536)^- ) = (1.7\pm 1.2 \pm 0.6)$ MeV, and $M( D^*_{s2}(2573)^- ) = (2570.7\pm 2.0 \pm 1.7)\; {\rm{MeV}}/c^2$, $\Gamma( D^*_{s2}(2573)^- ) = (17.2 \pm 3.6 \pm 1.1)$ MeV. The spin-parity of the $D^*_{s2}(2573)^-$ meson is determined to be $J^P=2^{+}$.
In addition, the processes $e^+e^-\to D^+_s \overline{D}{}^{(*)0} K^-$ are searched for using the data samples taken at four (two) center-of-mass energies between 4.416 (4.527) and 4.575 GeV, and upper limits at the 90% confidence level on the cross sections are determined.
M. Ablikim et al. (BESIII Collaboration). Observation of $e^+e^- \rightarrow D_s^+ \overline{D}{}^{(*)0} K^-$ and study of the P-wave $D_s$ mesons[J]. Chinese Physics C. doi: 10.1088/1674-1137/43/3/031001.
Potential of octant degeneracy resolution in JUNO
M.V. Smirnov, Zhoujun Hu, Shuaijie Li, Jiajie Ling
This work extends the idea of using a cyclotron-based antineutrino source for purposes of neutrino physics. Long baseline experiments suffer from degeneracies and correlations between $\Theta_{23}$, $\delta_{\rm CP}$ and the mass hierarchy. However, the combination of a superconducting cyclotron and a big liquid scintillator detector like JUNO in a medium baseline experiment, which does not depend on the mass hierarchy, may allow one to determine whether the mixing angle $\Theta_{23}$ lies in the lower octant or the upper octant. Such an experiment would improve the precision of the $\Theta_{23}$ measurement to a degree which depends on the CP phase.
M.V. Smirnov, Zhoujun Hu, Shuaijie Li and Jiajie Ling. Potential of octant degeneracy resolution in JUNO[J]. Chinese Physics C. doi: 10.1088/1674-1137/43/3/033001.
Leptogenesis via a varying Weinberg operator: a semi-classical approach
Silvia Pascoli, Jessica Turner, Ye-Ling Zhou
In this paper, we introduce leptogenesis via a varying Weinberg operator from a semi-classical perspective. This mechanism is motivated by the breaking of an underlying symmetry which triggers a phase transition that causes the coupling of the Weinberg operator to become dynamical.
Consequently, a lepton anti-lepton asymmetry arises from the interference of the Weinberg operator at two different spacetime points. Using the semi-classical approach, we treat the Higgs as a background field and show that a reflection asymmetry between leptons and anti-leptons is generated in the vicinity of the bubble wall. We solve the equations of motion of the lepton and anti-lepton quasiparticles to obtain the final lepton asymmetry.
Silvia Pascoli, Jessica Turner and Ye-Ling Zhou. Leptogenesis via a varying Weinberg operator: a semi-classical approach[J]. Chinese Physics C. doi: 10.1088/1674-1137/43/3/033101.
Evaluating the topological charge density with the symmetric multi-probing method
Guang-Yi Xiong, Jian-Bo Zhang, You-Hao Zou
We evaluate the topological charge density of SU(3) gauge fields on a lattice by calculating the trace of the overlap Dirac matrix employing the symmetric multi-probing (SMP) method in 3 modes. Since the topological charge Q for a given lattice configuration must be an integer, it is easy to estimate the systematic error (the deviation of Q from the nearest integer). The results demonstrate a high efficiency and accuracy in calculating the trace of the inverse of a large sparse matrix with locality by using the SMP sources when compared to using point sources. We also show the correlation between the errors and the probing scheme parameter $r_{\min}$, as well as the lattice volume $N_{L}$ and lattice spacing $a$. It is found that the computational time for calculating the trace by employing the SMP sources is less dependent on $N_{L}$ than by using point sources. Therefore, the SMP method is very suitable for calculations on large lattices.
Guang-Yi Xiong, Jian-Bo Zhang and You-Hao Zou. Evaluating the topological charge density with the symmetric multi-probing method[J]. Chinese Physics C. doi: 10.1088/1674-1137/43/3/033102.
Probing the QCD phase structure with higher order baryon number susceptibilities within the NJL model
Wenkai Fan, Xiaofeng Luo, Hongshi Zong
Conserved charge fluctuations can be used to probe the phase structure of strongly interacting nuclear matter in relativistic heavy-ion collisions. To obtain the characteristic signatures of the conserved charge fluctuations for the quantum chromodynamics (QCD) phase transition, we study the susceptibilities of dense quark matter up to eighth order in detail, using an effective QCD-based model. We studied two cases, one with the QCD critical end point (CEP) and one without it, owing to an additional vector interaction term. The higher order susceptibilities display rich structures near the CEP and show sign changes as well as large fluctuations. These can provide information about the presence and location of the CEP. Furthermore, we find that the case without the CEP also shows a similar sign change pattern, but with a relatively smaller magnitude compared with the case with the CEP. Finally, we conclude that higher order susceptibilities of conserved charges can be used to probe the QCD phase structure in heavy-ion collisions.
Wenkai Fan, Xiaofeng Luo and Hongshi Zong. Probing the QCD phase structure with higher order baryon number susceptibilities within the NJL model[J]. Chinese Physics C. doi: 10.1088/1674-1137/43/3/033103.
Finite volume effects on the QCD chiral phase transition in the finite size dependent Nambu-Jona-Lasinio model Yonghui Xia, Qingwu Wang, Hongtao Feng, Hongshi Zong The effective Lagrangian of a finite volume system should, in principle, depend on the system size. In the framework of the Nambu-Jona-Lasinio (NJL) model, by considering the influence of quark feedback on the effective coupling, we obtain a modified NJL model so that its Lagrangian depends on the volume. Based on the modified NJL model, we study the influence of finite volume on the chiral phase transition at finite temperature, and find that the pseudo-critical temperature of the crossover is much lower than that obtained in the normal NJL model. This clearly shows that the volume-dependent effective Lagrangian plays an important role in the chiral phase transition at finite temperature. Yonghui Xia, Qingwu Wang, Hongtao Feng and Hongshi Zong. Finite volume effects on the QCD chiral phase transition in the finite size dependent Nambu-Jona-Lasinio model[J]. Chinese Physics C. doi: 10.1088/1674-1137/43/3/034101.
Spectroscopy of light $N^*$ baryons Zalak Shah, Keval Gandhi, Ajay Kumar Rai We present the masses of N baryons up to 3300 MeV. The radial and orbital excited states are determined using the hypercentral constituent quark model with the first-order correction. The obtained masses are compared with the experimental results and other theoretical predictions. The Regge trajectories are also determined in the (n, $M^2$) and (J, $M^2$) planes. Moreover, the magnetic moments with $J^{P}=\frac{1}{2}^{+}, \frac{1}{2}^{-}$ are calculated. We also calculate the $N\pi$ decay width of excited nucleons. Zalak Shah, Keval Gandhi and Ajay Kumar Rai. Spectroscopy of light $N^*$ baryons[J]. Chinese Physics C. doi: 10.1088/1674-1137/43/3/034102.
Chiral phase structure of the sixteen meson states in the SU(3) Polyakov linear-sigma model for finite temperature and chemical potential in a strong magnetic field Abdel Nasser Tawfik, Abdel Magied Diab, M.T. Hussein In characterizing the chiral phase structure of pseudoscalar ($J^{pc}=0^{-+}$), scalar ($J^{pc}=0^{++}$), vector ($J^{pc}=1^{--}$) and axial-vector ($J^{pc}=1^{++}$) meson states and their dependence on temperature, chemical potential, and magnetic field, we utilize the SU(3) Polyakov linear-sigma model (PLSM) in the mean-field approximation. We first determine the chiral (non)strange quark condensates, $\sigma_l$ and $\sigma_s$, and the corresponding deconfinement order parameters, $\phi$ and $\phi^*$, in a thermal and dense (finite chemical potential) medium and finite magnetic field. The temperature and chemical potential characteristics of the nonet meson states normalized to the lowest bosonic Matsubara frequency are analyzed. We note that all normalized meson masses become temperature independent at different critical temperatures. We observe that the chiral and deconfinement phase transitions are shifted to lower quasicritical temperatures with increasing chemical potential and magnetic field.
Thus, we conclude that the magnetic field seems to have almost the same effect as the chemical potential, especially on accelerating the phase transition, i.e. inverse magnetic catalysis. We also find that increasing the chemical potential enhances the mass degeneracy of the various meson masses, while increasing the magnetic field seems to reduce the critical chemical potential, at which the chiral phase transition takes place. Our mass spectrum calculations agree well with the recent PDG compilations and PNJL, lattice QCD calculations, and QMD/UrQMD simulations. Abdel Nasser Tawfik, Abdel Magied Diab and M.T. Hussein. Chiral phase structure of the sixteen meson states in the SU(3) Polyakov linear-sigma model for finite temperature and chemical potential in a strong magnetic field[J]. Chinese Physics C. doi: 10.1088/1674-1137/43/3/034103. Revisiting hidden-charm pentaquarks from QCD sum rules Jia-Bing Xiang, Hua-Xing Chen, Wei Chen, Xiao-Bo Li, Xing-Qun Yao, Shi-Lin Zhu We revisit hidden-charm pentaquark states \begin{document}$ P_c(4380) $\end{document} and \begin{document}$ P_c(4450) $\end{document} using the method of QCD sum rules by requiring the pole contribution to be greater than or equal to 30% in order to better that the one-pole parametrization is valid. We find two mixing currents, and our results suggest that \begin{document}$ P_c(4380) $\end{document} and \begin{document}$ P_c(4450) $\end{document} can be identified as hidden-charm pentaquark states having \begin{document}$ J^P=3/2^- $\end{document} and \begin{document}$ 5/2^+ $\end{document} , respectively. However, there still exist other possible spin-parity assignments, such as \begin{document}$ J^P=3/2^+ $\end{document} and \begin{document}$ J^P=5/2^- $\end{document} , which must be clarified in further theoretical and experimental studies. Jia-Bing Xiang, Hua-Xing Chen, Wei Chen, Xiao-Bo Li, Xing-Qun Yao and Shi-Lin Zhu. Revisiting hidden-charm pentaquarks from QCD sum rules[J]. Chinese Physics C. doi: 10.1088/1674-1137/43/3/034104. Nuclear matter and neutron star properties with the extended Nambu-Jona-Lasinio model Yan-Jun Chen An extended Nambu-Jona-Lasinio (eNJL) model with nucleons as the degrees of freedom is used to investigate properties of nuclear matter and neutron stars (NSs), including the binding energy and symmetry energy of the nuclear matter, the core-crust transition density, and mass-radius relation of NSs. The fourth-order symmetry energy at saturation density is also investigated. When the bulk properties of nuclear matter at saturation density are used to determine the model parameters, the double solutions of parameters are obtained for a given nuclear incompressibility. It is shown that the isovector-vector interaction has a significant influence on the nuclear matter and NS properties, and the sign of isovector-vector coupling constant is critical in the determination of the trend of the symmetry energy and equation of state. The effects of the other model parameters and symmetry energy slope at saturation density are discussed. Yan-Jun Chen. Nuclear matter and neutron star properties with the extended Nambu-Jona-Lasinio model[J]. Chinese Physics C. doi: 10.1088/1674-1137/43/3/035101. Charged scalar fields in a Kerr–Sen black hole: exact solutions, Hawking radiation, and resonant frequencies H. S. Vieira, V. B. Bezerra In this study, we consider charged massive scalar fields around a Kerr–Sen spacetime. 
The radial and angular parts of the covariant Klein–Gordon equation are solved in terms of the confluent Heun function. From the exact radial solution, we obtain the Hawking radiation spectrum and discuss its resonant frequencies. The massless case of the resonant frequencies is also examined. H. S. Vieira and V. B. Bezerra. Charged scalar fields in a Kerr–Sen black hole: exact solutions, Hawking radiation, and resonant frequencies[J]. Chinese Physics C. doi: 10.1088/1674-1137/43/3/035102. Testing the fidelity of Gaussian processes for cosmography Huan Zhou, Zhengxiang Li The dependence of implications from observations on cosmological models is an intractable problem not only in cosmology, but also in astrophysics. Gaussian processes (GPs), a powerful nonlinear interpolating tool without assuming a model or parametrization, have been widely used to directly reconstruct functions from observational data (e.g., expansion rate and distance measurements) for cosmography. However, the fidelity of this reconstructing method has never been checked. In this study, we test the fidelity of GPs for cosmography by mocking observational data sets comprising different number of events with various uncertainty levels. These factors are of great importance for the fidelity of reconstruction. That is, for the expansion rate measurements, GPs are valid for reconstructing the functions of the Hubble parameter versus redshift when the number of observed events is as many as 256 and the uncertainty of the data is ~ 3%. Moreover, the distance-redshift relation reconstructed from the observations of the upcoming Dark Energy Survey type Ia supernovae is credible. Huan Zhou and Zhengxiang Li. Testing the fidelity of Gaussian processes for cosmography[J]. Chinese Physics C. doi: 10.1088/1674-1137/43/3/035103. Fermion scattering by a class of Bardeen black holes Ciprian A. Sporea In this study, the scattering of fermions by a class of Bardeen black holes is investigated. After obtaining the scattering modes by solving the Dirac equation in this geometry, we use the partial wave method to derive an analytical expression for the phase shifts that enter into the definitions of partial amplitudes that define the scattering cross sections and induced polarization. It is shown that, similar to Schwarzschild and Reissner-Nordström black holes, the phenomena of glory and spiral scattering are present. Ciprian A. Sporea. Fermion scattering by a class of Bardeen black holes[J]. Chinese Physics C. doi: 10.1088/1674-1137/43/3/035104. The matrix method for black hole quasinormal modes Kai Lin, Wei-Liang Qian We provide a comprehensive survey of possible applications of the matrix method for black hole quasinormal modes. The proposed algorithm can generally be applied to various background metrics, and in particular, it accommodates both analytic and numerical forms of the tortoise coordinates, as well as black hole spacetimes. We give a detailed account of different types of black hole metrics, master equations, and the corresponding boundary conditions. Besides, we argue that the method can readily be applied to cases where the master equation is a system of coupled equations. By adjusting the number of interpolation points, the present method provides a desirable degree of precision, in reasonable balance with its efficiency. The method is flexible and can easily be adopted to various distinct physical scenarios. Kai Lin and Wei-Liang Qian. The matrix method for black hole quasinormal modes[J]. Chinese Physics C. 
doi: 10.1088/1674-1137/43/3/035105.
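A few entries above, Huan Zhou and Zhengxiang Li describe testing the fidelity of Gaussian-process reconstructions of the expansion rate from mock data. As a rough, self-contained illustration of that general approach only: the scikit-learn regressor, the RBF kernel, the 3% mock uncertainties and the fiducial flat LCDM generator below are all illustrative assumptions, not the authors' configuration.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)

def H_fiducial(z, H0=70.0, Om=0.3):             # flat LCDM, km/s/Mpc
    return H0 * np.sqrt(Om * (1 + z) ** 3 + 1 - Om)

# Mock expansion-rate data: N events with ~3% Gaussian uncertainties
N = 64
z_obs = np.sort(rng.uniform(0.0, 2.0, N))
sigma = 0.03 * H_fiducial(z_obs)
H_obs = H_fiducial(z_obs) + rng.normal(0.0, sigma)

# Model-independent GP regression; alpha carries the per-point measurement variance
kernel = ConstantKernel(100.0, (1e-2, 1e5)) * RBF(length_scale=1.0,
                                                  length_scale_bounds=(0.1, 10.0))
gp = GaussianProcessRegressor(kernel=kernel, alpha=sigma**2,
                              normalize_y=True, n_restarts_optimizer=5)
gp.fit(z_obs[:, None], H_obs)

z_grid = np.linspace(0.0, 2.0, 200)
H_rec, H_std = gp.predict(z_grid[:, None], return_std=True)

# Crude fidelity check: how often does the input cosmology sit inside the 1-sigma band?
inside = np.mean(np.abs(H_rec - H_fiducial(z_grid)) < H_std)
print(f"truth within the 1-sigma band at {inside:.0%} of grid points")

Varying N and the assumed uncertainty level in such a mock pipeline is essentially the fidelity test described in the abstract, albeit with a different GP implementation.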
Results for 'Hans-Jørgen Arendt'
Democracy in practice? The Norwegian public inquiry of the Alexander L. Kielland North-Sea oil platform disaster. Hans-Jørgen Wallin Weihe & Marie Smith-Solbakken - 2021 - Journal of Critical Realism 20 (5):525-541.
In March 1980, the oil-platform Alexander L. Kielland capsized in the North Sea resulting in the death of 123 workers. The Norwegian inquiry into the disaster was closed to the public and the survi...
Det banale og ruinhoben. Hans Jørgen Thomsen - 1991 - Slagmark - Tidsskrift for Idéhistorie 17:126-128.
Spøgelser og spor. Hans Jørgen Thomsen - 2018 - Slagmark - Tidsskrift for Idéhistorie 7:134-136.
Europæisk Idehistorie: Historie, Samfund, Eksistens. Hans-Jørgen Schanz - 2011 - Aarhus Universitetsforlag.
So Denmark finally got its History of Philosophy. Hans-Jørgen Schanz - 2006 - SATS 7 (2):148-155.
Adorno: Auschwitz et la métaphysique. Hans-Jørgen Schanz - 2008 - Les Etudes Philosophiques 87 (4):519.
The article investigates Adorno's idea concerning solidarity towards metaphysics at the moment when it collapses. The position which he indignantly defends could be called metaphysics in secularization. It plays a part in his showdown with late capitalism, nominalism, positivism and, in general, the prohibition against thinking. The article furthermore shows the affinity, but also the essential difference, between Adorno's idea concerning metaphysics after the breakdown of classical metaphysics and Heidegger's criticism of metaphysics.
Idéhistorisk fest-tidsskrift. Hans-Jørgen Schanz - 2016 - Slagmark - Tidsskrift for Idéhistorie 73:284-292.
Steen Ebbesen and Carl Henrik Koch, The Danish History of Philosophy, 5 volumes, Gyldendal 2002-2004. Hans-Jørgen Schanz - 2006 - SATS 7 (2):148-155.
Rebellen Martin Luther. Hans-Jørgen Schanz - 2015 - Slagmark - Tidsskrift for Idéhistorie 72:215-218.
Habermas og modernitetskritikken. Hans-Jørgen Schanz & Hans Jørgen Thomsen - 2018 - Slagmark - Tidsskrift for Idéhistorie 1:21-45.
The present work is, in many respects, an unfinished write-up of the outline, notes, etc. for a talk we gave to the Idehistorisk Forening (Society for the History of Ideas) in the spring of 1983.
Bred, informativ og uoriginal bog om nazismen.Hans-Jørgen Schanz - 2018 - Slagmark - Tidsskrift for Idéhistorie 76.details Naivitet og kynisme - omkring det totale og det totalitære.Hans-Jørgen Schanz - 2018 - Slagmark - Tidsskrift for Idéhistorie 7:122-129.details Nash Equilibrium with Lower Probabilities.Ebbe Groes, Hans Jørgen Jacobsen, Birgitte Sloth & Torben Tranaes - 1998 - Theory and Decision 44 (1):37-66.details We generalize the concept of Nash equilibrium in mixed strategies for strategic form games to allow for ambiguity in the players' expectations. In contrast to other contributions, we model ambiguity by means of so-called lower probability measures or belief functions, which makes it possible to distinguish between a player's assessment of ambiguity and his attitude towards ambiguity. We also generalize the concept of trembling hand perfect equilibrium. Finally, we demonstrate that for certain attitudes towards ambiguity it is possible to explain (...) cooperation in the one-shot Prisoner's Dilemma in a way that is in accordance with some recent experimental findings. (shrink) Prisoner's Dilemma in Philosophy of Action Economic Darwinism.Birgitte Sloth & Hans Jørgen Whitta-Jacobsen - 2011 - Theory and Decision 70 (3):385-398.details We define an evolutionary process of "economic Darwinism" for playing the field, symmetric games. The process captures two forces. One is "economic selection": if current behavior leads to payoff differences, behavior yielding lowest payoff has strictly positive probability of being replaced by an arbitrary behavior. The other is "mutation": any behavior has at any point in time a strictly positive, very small probability of shifting to an arbitrary behavior. We show that behavior observed frequently is in accordance with "evolutionary equilibrium", (...) a static equilibrium concept suggested in the literature. Using this result, we demonstrate that generally under positive (negative) externalities, economic Darwinism implies even more under- (over-)activity than does Nash equilibrium. (shrink) Darwinism in Philosophy of Biology Testing the Intransitivity Explanation of the Allais Paradox.Ebbe Groes, Hans JØrgen Jacobsen, Birgitte Sloth & Torben Tranæs - 1999 - Theory and Decision 47 (3):229-245.details This paper uses a two-dimensional version of a standard common consequence experiment to test the intransitivity explanation of Allais-paradox-type violations of expected utility theory. We compare the common consequence effect of two choice problems differing only with respect to whether alternatives are statistically correlated or independent. We framed the experiment so that intransitive preferences could explain violating behavior when alternatives are independent, but not when they are correlated. We found the same pattern of violation in the two cases. This is (...) evidence against intransitivity as an explanation of the Allais Paradox. The question whether violations of expected utility are mainly due to intransitivity or to violation of independence is important since it is exactly on this issue the main new decision theories differ. (shrink) Philosophy of Economics in Philosophy of Social Science Theory in Economics in Philosophy of Social Science Mindeord.Hans Vejleskov, Jørgen Huggler & Oliver Kauffmann - 2019 - Studier i Pædagogisk Filosofi 8 (2):95-96.details Hannah Arendt 1906 - 1975.Hans J. 
Morgenthau - 1976 - Political Theory 4 (1):5-8.details Hannah Arendt in 20th Century Philosophy Preliminary Material.Finn Collin, Uffe Juul Jensen, Jørgen Mikkelsen, Sven Erik Nordenbo, Stig Andur Pedersen, Erich Klawonn, Hans Siggaard Jensen & Mogens Pahuus - 1997 - Danish Yearbook of Philosophy 32 (1):1-5.details Tanker af en anden verden: Jørgen K. Bukdahl: hans liv, værk og aktualitet.Nils Gunder Hansen - 2019 - København: Gyldendal.details Gabriel Cercel: Hans-Georg Gadamer, Hermeneutische Entwürfe. Vorträge und AufsätzePaul Marinescu: Pascal Michon, Poétique d'une anti-anthropologie: l'herméneutique de GadamerPaul Marinescu: Robert J. Dostal (ed.), The Cambridge Companion to GadamerAndrei Timotin: Denis Seron, Le problème de la métaphysique. Recherches sur l'interprétation heideggerienne de Platon et d'AristoteDelia Popa: Henry Maldiney, Ouvrir le rien. L'art nuCristian Ciocan: Dominique Janicaud, Heidegger en France, I. Récit; II. EntretiensVictor Popescu: Maurice Merleau-Ponty, Fenomenologia percepţieiRadu M. Oancea: Trish Glazebrook, Heidegger's Philosophy of SciencePaul Balogh: Richard Wolin, Heidegger's Children. Hannah Arendt, Karl Löwith, Hans Jonas and Herbert MarcuseBogdan Mincă: Ivo De Gennaro, Logos - Heidegger liest HeraklitRoxana Albu: O. K. Wiegand, R. J. Dostal, L. Embree, J. Kockelmans and J. N. Mohanty (eds.), Phenomenology on Kant, German Idealism, Hermeneutics and LogicAnca Dumitru: James Faulconer an. [REVIEW]Gabriel Cercel, Paul Marinescu, Andrei Timotin, Delia Popa, Cristian Ciocan, Victor Popescu, Radu M. Oancea, Paul Balogh, Bogdan Mincă, Roxana Albu & Anca Dumitru - 2002 - Studia Phaenomenologica 2 (1):261-313.details Hans-Georg GADAMER, Hermeneutische Entwürfe. Vorträge und Aufsätze ; Pascal MICHON, Poétique d'une anti-anthropologie: l'herméneutique deGadamer ; Robert J. DOSTAL, The Cambridge Companion to Gadamer ; Denis SERON, Le problème de la métaphysique. Recherches sur l'interprétation heideggerienne de Platon et d'Aristote ; Henry MALDINEY, Ouvrir le rien. L'art nu ; Dominique JANICAUD, Heidegger en France, I. Récit; II. Entretiens ; Maurice MERLEAU-PONTY, Fenomenologia percepţiei ; Trish GLAZEBROOK, Heidegger's Philosophy of Science ; Richard WOLIN, Heidegger's Children. Hannah Arendt, Karl Löwith, (...) Hans Jonas and Herbert Marcuse ; Ivo DEGENNARO, Logos – Heidegger liest Heraklit ; O. K. WIEGAND, R. J. DOSTAL, L. EMBREE, J. KOCKELMANS and J. N. MOHANTY, Phenomenology on Kant, German Idealism, Hermeneutics and Logic ; James FAULCONER and Mark WRATHALL, Appropriating Heidegger. (shrink) Hans-Georg Gadamer in Continental Philosophy Martin Heidegger in Continental Philosophy Heidegger's Children: Hannah Arendt, Karl Löwith, Hans Jonas, and Herbert Marcuse. [REVIEW]Brian J. Fox - 2002 - Review of Metaphysics 56 (2):469-472.details There seems to be a general consensus that the most important Continental philosopher of the twentieth century was Martin Heidegger. Even Étienne Gilson spoke of him as one of only two real philosophers of his lifetime. Despite the general acknowledgment of his philosophical brilliance, Heidegger remains a highly controversial figure in the history of thought largely on account of his infamous involvement with Nazism. In recent years Richard Wolin has gone to great lengths to document and examine Heidegger's troubling politics (...) and legacy. Wolin claims that Heidegger's Children is his final offering on Heidegger and his flawed politics; it follows upon his books The Politics of Being and The Heidegger Controversy. 
(shrink) Wolin, Richard. Heidegger's Children: Hannah Arendt, Karl Löwith, Hans Jonas, and Herbert Marcuse.Brian J. Fox - 2002 - Review of Metaphysics 56 (2):469-471.details Hans Jonas's Mortality and Morality.Richard J. Bernstein - 1997 - Graduate Faculty Philosophy Journal 19 (2/1):315-321.details Hannah Arendt, who was Hans Jonas's lifelong friend, always stressed the importance and rarity of the independent thinker. The independent thinker is the thinker who has the imagination to break new ground, who does not follow current fashions, and has the courage to pursue thought trains wherever they may lead. Her model was Lessing, but she might have considered Hans Jonas to be an outstanding twentieth century exemplar of the independent thinker. Although Hans Jonas was a (...) student of both Heidegger and Bultmann in the 1920's, he did not become a disciple of anyone. Both of these teachers encouraged him to pursue his research into the history of Gnosticism. Jonas's path-breaking achievement can be compared with what Gershom Scholem did for the study of the Kabbalah. For Jonas literally created a new field of research in the history of religions. His study of Gnosticism became one of those rare twentieth century landmarks that opened up our understanding of Gnosticism and revealed its powerful subterranean influence throughout the history of the West. The first volume of Jonas's study, Gnosis und spätantiker Geist was published in Germany in 1934 only after he fled from Nazi Germany and decided to emigrate to Palestine. If Jonas had never published anything else he would be known today as the major twentieth-century scholar of Gnosticism.. But Jonas was much more than an original scholar. He was a creative thinker—and he remained one until his death in 1993, shortly before his ninetieth birthday. During the Second World War, he fought in the famous Jewish Brigade of the British army. It was during this period, when he faced death all around him on the battlefield, that the phenomenon of life in all its ramifications became his central philosophical preoccupation. Jonas had been compelled to suspend his scholarly research during the war years, but he never suspended his independent thinking. He felt it was his obligation to fight the Nazis, but his dream was to return to his true vocation—philosophical speculation. After fighting in the Israeli War of Independence, he accepted a fellowship at McGill University in Canada in 1949, and eventually accepted a position at the Graduate Faculty of the New School for Social Research in 1955. The line of inquiry that he began when he was able to return to philosophical study resulted in the publication of The Phenomenon of Life. Jonas's project was to understand what is distinctive about living organisms, and the emergence and consequences of life in the cosmos. But in order to do this, Jonas had to engage in a systematic radical critique of the dualisms of matter and mind, body and soul, which have dominated and shaped so much of modern thought. From Jonas's perspective, even those philosophers who had rejected dualism were still tainted by the misguided ontology of dualism. The variety of monisms that arose in reaction to dualism tended to move to the extremes of materialism or idealism. Neither of these extremes is adequate for illuminating what is distinctive about bios. In German idealism there was a failure to do justice to the needs and character of the lived body. 
And in the varieties of "reductive materialism" that have been—and continue to be—so fashionable in twentieth century there is also a failure to appreciate what is distinctive about dynamic metabolic processes. To engage in a critique of dualism and its legacy, it was also necessary to rethink what can be learned from the biological sciences, and especially, the theory of evolution. Here we also witness the philosophical daring of Jonas. A dominant prejudice of the twentieth century has been that philosophy as a discipline has nothing significant to contribute to our understanding of biological processes. All that philosophy can do is to reflect on the methodological and epistemological status of the sciences because, presumably, the only legitimate source of knowledge about living organisms is what we learn from the natural sciences. Jonas argues that this prevailing prejudice has led to disastrous intellectual consequences. Of course, philosophers qua philosophers cannot and should not engage in "armchair" scientific speculation. Furthermore, they must be fully informed about the hypotheses and claims of the best biological research. But at the same time it is a philosophical endeavor to understand critically the meaning of what we learn from the sciences, and to develop an adequate philosophical account of the meaning of nature. Philosophers cannot and should not abandon this task. There is an important distinction to be drawn between the scientific achievements and the philosophical reflection on their meaning—a distinction that too frequently is forgotten or neglected. (shrink) Grue-Sørensen imellem filosofi og pædagogik.Hans Siggaard Jensen - 2018 - Studier i Pædagogisk Filosofi 7 (1):115-122.details The philosophical situation at Copenhagen University in the 1960's was dominated by two positivists. Th elogical positivist Jørgen Jørgensen – who had written the history of the "movement" – and the legal positivistAlf Ross. There were also two "outsiders": Peter Zinkernagel, who did more analytical philosophy of language in the British style, and K. Grue Sørensen who was working in the traditions of neo-Kantianism. In 1955 Grue-Sørensen was hired as the first professor in education – after a long controversy about (...) the scientific status ofeducation as a discipline – but with a focus on the history of education. He had received a doctoral degree in philosophy in 1950 with a dissertation on refl exivity as a philosophical concept and a thesis about the reflexivity of consciousness. He was also an objectivist in ethics, and had been critical of the prevalent moral relativism and subjectivism found in recent philosophy. Jørgensen and Ross had done important work on moral argumentation with more technical work on the logic of imperatives and norms. Moral objectivism was not only wrong but in a way also "immoral" because it undermined their belief in democracy. Especially Jørgensen also thought that the idea of reflexivity was wrong when applied to consciousness. Neither statements nor consciousness could be reflexive – that is refer to themselves/itself. The reflexivity of consciousness is – according to Jørgensen – simply not an empirical psychological fact. Grue-Sørensen tried to establish the foundation of a theory of education based both on conceptions of consciousness and of the relation between scientific knowledge – facts – and moral values – in a neo-Kantian fashion. 
For him the interplay between ethics and knowledge was a central part of a theory of education – a belief due to which he never became a professor of philosophy – having tried many times. These debates in philosophy and in education were superseded in the 1970's by the rise in influence of the German inspiration from Critical Theory and the demise of logical positivism. (shrink) Book reviews (Hans-Georg Gadamer, Hermeneutische Entwürfe. Vorträge und Aufsätze, ..., etc.).Gabriel Cercel, Paul Marinescu, Andrei Timotin, Delia Popa, Cristian Ciocan, Victor Popescu, Radu M. Oancea, Paul Balogh, Bogdan Mincă & Roxana Albu - 2002 - Studia Phaenomenologica 2 (1):261-313.details Phenomenology in Continental Philosophy A Tribute to Hans Morgenthau: [Truth and Tragedy]: With an Intellectual Autobiography by Hans J. Morgenthau.Hans J. Morgenthau & Kenneth W. Thompson (eds.) - 1977 - New Republic Book Co..details Richard J. Bernstein.Brendan Hogan - 2005 - In John Shook (ed.), The Dictionary Of Modern American Philosophers.details This encyclopedia article traces the development of Richard J. Bernstein's philosophical work and provide s short biography. 19th Century American Pragmatism, Misc in Philosophy of the Americas Critical Theory, Misc in Continental Philosophy Hermeneutics, Misc in Continental Philosophy John Dewey in 20th Century Philosophy Preface.Jørgen Albretsen, Per Hasle & Peter Øhrstrøm - 2016 - Synthese 193 (11):3397-3399.details British Philosophy in European Philosophy Mindless coping in competitive sport: Some implications and consequences.J.⊘Rgen W. Eriksen - 2010 - Sport, Ethics and Philosophy 4 (1):66 – 86.details The aim of this paper is to elaborate on the phenomenological approach to expertise as proposed by Dreyfus and Dreyfus and to give an account of the extent to which their approach may contribute to a better understanding of how athletes may use their cognitive capacities during high-level skill execution. Dreyfus and Dreyfus's non-representational view of experience-based expertise implies that, given enough relevant experience, the skill learner, when expert, will respond intuitively to immediate situations with no recourse to deliberate actions (...) or mental representations. The paper will subsequently outline some implications and consequences of such an approach and will also examine to what extent Dreyfus and Dreyfus's skill model is capable to resist different attacks that have been made against their view, and in particular regarding the practical application of their approach to the skill domain of competitive sport. (shrink) Unconscious and Conscious Processes in Philosophy of Cognitive Science Mindless coping in competitive sport: Some implications and consequences.J. ⊘ Rgen W. Eriksen - 2010 - Sport, Ethics and Philosophy 4 (1):66-86.details Diogenes Laertius and His Hellenistic Background.Jørgen Mejer - 1936 - Steiner.details Epicureans, Misc in Ancient Greek and Roman Philosophy Jørgen Jørgensen's Relation to Logical Positivism.Carl Henrik Koch - 2020 - Danish Yearbook of Philosophy 53 (1):17-32.details Between the two World Wars, Jørgen Jørgensen was a central figure in Danish philosophy and internationally recognized, as his teacher Harald Høffding had been before World War 1. 
When in the late 1920s Jørgensen established contact with the movement that would later be called logical positivism, he found a group of philosophers of his own age who advocated empiricism, the tools of formal logic and the Unity of Science, and who shared his anti-metaphysical approach to philosophy. He became one of (...) the movement's organizers and wrote its history, but he was only for a short period influenced by especially Rudolf Carnap's philosophy of logic. Although Jørgensen was never an uncritical member of the movement, he is often considered as a central representative of logical positivism in Scandinavia. (shrink) A Cube of Opposition for Predicate Logic.Jørgen Fischer Nilsson - 2020 - Logica Universalis 14 (1):103-114.details The traditional square of opposition is generalized and extended to a cube of opposition covering and conveniently visualizing inter-sentential oppositions in relational syllogistic logic with the usual syllogistic logic sentences obtained as special cases. The cube comes about by considering Frege–Russell's quantifier predicate logic with one relation comprising categorical syllogistic sentence forms. The relationships to Buridan's octagon, to Aristotelian modal logic, and to Klein's 4-group are discussed.GraphicThe photo shows a prototype sculpture for the cube. Aristotelian Logic in Logic and Philosophy of Logic Constitutional revolution.Gary J. Jacobsohn - 2020 - New Haven: Yale University Press.details Few terms in political theory are as overused, and yet as under-theorized, as constitutional revolution. In this book, Gary Jacobsohn and Yaniv Roznai argue that the most widely accepted accounts of constitutional transformation, such as those found in the work of Hans Kelsen, Hannah Arendt, and Bruce Ackerman, fail adequately to explain radical change. For example, a "constitutional moment" may or may not accompany the onset of a constitutional revolution. The consolidation of revolutionary aspirations may take place over (...) an extended period. The "moment" may have been under way for decades-or there may be no such moment at all. On the other hand, seemingly radical breaks in a constitutional regime actually may bring very little change in constitutional practice and identity. Constructing a clarifying lens for comprehending the many ways in which constitutional revolutions occur, the authors seek to capture the essence of what happens when constitutional paradigms change. (shrink) Constitutional Law in Philosophy of Law Thomas Aastrup Rømer, Lene Tanggaard & Svend Brinkmann (red.), Uren pædagogik.Jørgen Gleerup - 2013 - Studier i Pædagogisk Filosofi 2 (1):93-94.details Mathematically Gifted Accelerated Students Participating in an Ability Group: A Qualitative Interview Study.Jørgen Smedsrud - 2018 - Frontiers in Psychology 9.details Should Soldiers Think before They Shoot?Jørgen Weidemann Eriksen - 2010 - Journal of Military Ethics 9 (3):195-218.details Intuition has increasingly been considered as a legitimate foundation for decision-making, and the concept has started to find its way into military doctrines as a supplement to traditional decision-making procedures, primarily in time-constrained situations. Yet, absent inside the military realm is a critical and level-headed discussion of the ethical implications of intuitive behaviour, understood as an immediate and situational response with no recourse to thoughtful or deliberate activity. In this article the author turns to phenomenological philosophy, and in particular to (...) 
the works of Hubert and Stuart Dreyfus, to elaborate on the ethical implications and consequences of intuitive behaviour. Dreyfus and Dreyfus understand moral behaviour as a skill, and as such they claim that it is possible to develop this capability through practice. They even claim that intuitive behaviour is the hallmark of the way experts respond to situations. The article seeks to investigate if the prerequisites for development of experience-based intuition are fulfilled inside the frames of military operations. The possible implications and consequences of utilizing such a capability are also emphasized. The article's empirical materials are qualitative and build mainly upon extracted information from interviews and informal conversations with Norwegian soldiers and officers serving in Afghanistan under ISAF's Regional Command North in 2007 and 2008. (shrink) Military Ethics in Applied Ethics Jørgen Pedersen: Rettferdig fordelingog rettferdig skatt.Dag Einar Thorsen - 2020 - Norsk Filosofisk Tidsskrift 55 (2-3):214-217.details Cosmopolitanism and Peace in Kant's Essay on 'Perpetual Peace'.Jørgen Huggler - 2009 - Studies in Philosophy and Education 29 (2):129-140.details Immanuel Kant's essay on Perpetual Peace contains a rejection of the idea of a world government. In connexion with a substantial argument for cosmopolitan rights based on the human body and its need for a space on the surface of the Earth, Kant presents the most rigorous philosophical formulation ever given of the limitations of the cosmopolitan law. In this contribution, Kant's essay is analysed and the reasons he gives for these restrictions discussed in relation to his main focus: to (...) project a realistic path to perpetual peace. (shrink) Økonomisk, politisk og ideologisk verdenskrise.Jørgen Sandemose - 2014 - Agora (History Teachers' Association of Victoria) 31 (3-4):303-318.details Om kvalitetsvurderinger modo agorensi.Jørgen Sandemose - 2011 - Agora (History Teachers' Association of Victoria) 29 (2-3):268-269.details Algebraic completion without the axiom of choice.Jørgen Harmse - 2022 - Mathematical Logic Quarterly 68 (4):394-397.details Läuchli and Pincus showed that existence of algebraic completions of all fields cannot be proved from Zermelo-Fraenkel set theory alone. On the other hand, important special cases do follow. In particular, I show that an algebraic completion of Q p $\mathbb {Q}_p$ can be constructed in Zermelo-Fraenkel set theory. The Axiom of Choice in Philosophy of Mathematics Jean-Philippe Deranty, Beyond Communication: A Critical Study of Axel Honneth's Social Philosophy (Leiden: E. J. Brill, 2009) ISBN: 978 90 04 17577 8, 231. [REVIEW]Jørgen Pedersen - 2010 - Critical Horizons 11 (3):497-500.details Continental Political Philosophy in Continental Philosophy Winckelmanns museum.Jørgen Langdalen - 2006 - Agora (History Teachers' Association of Victoria) 24 (3):5-31.details Outcome Uncertainty and Brain Activity Aberrance in the Insula and Anterior Cingulate Cortex Are Associated with Dysfunctional Impulsivity in Borderline Personality Disorder.Jørgen Assar Mortensen, Hallvard Røe Evensmoen, Gunilla Klensmeden & Asta Kristine Håberg - 2016 - Frontiers in Human Neuroscience 10.details Psychopathy, Misc in Philosophy of Cognitive Science Jørgen Bukdahl, Søren Kierkegaard and the Common Man. Translated, revised, edited, and with notes by Bruce H. Kirmmse, Grand Rapids, Michigan, Cambridge, U.K., William B. 
Eerdmans Publishing Co., 2001, xviii-154 p.Jørgen Bukdahl, Søren Kierkegaard and the Common Man. Translated, revised, edited, and with notes by Bruce H. Kirmmse, Grand Rapids, Michigan, Cambridge, U.K., William B. Eerdmans Publishing Co., 2001, xviii-154 p. [REVIEW]Mathieu Lavoie - 2003 - Laval Théologique et Philosophique 59 (1):169-170.details Continental Philosophy of Religion in Continental Philosophy Neils Jørgen Green-Pedersen, "The Tradition of the Topics in the Middle Ages. The Commentaries on Aristole's and Boethius' 'Topics'". [REVIEW]Alan R. Perreiah - 1987 - Journal of the History of Philosophy 25 (3):442.details Subjektivitet og negativitet: Kierkegaard.Jørgen Dehs - 1999 - Kierkegaardiana 20.details Supra-logic: using transfinite type theory with type variables for paraconsistency.Jørgen Villadsen - 2005 - Journal of Applied Non-Classical Logics 15 (1):45-58.details We define the paraconsistent supra-logic Pσ by a type-shift from the booleans o of propositional logic Po to the supra-booleans σ of the propositional type logic P obtained as the propositional fragment of the transfinite type theory Q defined by Peter Andrews as a classical foundation of mathematics. The supra-logic is in a sense a propositional logic only, but since there is an infinite number of supra-booleans and arithmetical operations are available for this and other types, virtually anything can be (...) specified. The supra-logic is a generalization of Lukasiewicz's three-valued logic, with the intermediate value duplicated many times and ordered such that none of the copies of this value imply other ones, but it differs from Lukasiewicz's many-valued logics as well as from logics based on bilattices. There are several automated theorem provers for classical higher order logic and it should be possible to modify these to our needs. (shrink)
Acute effects of active breaks during prolonged sitting on subcutaneous adipose tissue gene expression: an ancillary analysis of a randomised controlled trial
Megan S. Grace1, Melissa F. Formosa1, Kiymet Bozaoglu1,2, Audrey Bergouignan3,4,5, Marta Brozynska1,6, Andrew L. Carey1, Camilla Bertuzzo Veiga1, Parneet Sethi1, Francis Dillon1, David A. Bertovic1, Michael Inouye1,6, Neville Owen1,7, David W. Dunstan1,8 & Bronwyn A. Kingwell1
Scientific Reports volume 9, Article number: 3847 (2019)
Active breaks in prolonged sitting have beneficial impacts on cardiometabolic risk biomarkers. The molecular mechanisms include regulation of skeletal muscle gene and protein expression controlling metabolic, inflammatory and cell development pathways. An active communication network exists between adipose and muscle tissue, but the effect of active breaks in prolonged sitting on adipose tissue has not been investigated. This study characterized the acute transcriptional events induced in adipose tissue by regular active breaks during prolonged sitting. We studied 8 overweight/obese adults participating in an acute randomized three-intervention crossover trial. Interventions were performed in the postprandial state and included: (i) prolonged uninterrupted sitting; or prolonged sitting interrupted with 2-minute bouts of (ii) light- or (iii) moderate-intensity treadmill walking every 20 minutes. Subcutaneous adipose tissue biopsies were obtained after each condition. Microarrays identified 36 differentially expressed genes between the three conditions (fold change ≥0.5 in either direction; p < 0.05). Pathway analysis indicated that breaking up of prolonged sitting led to differential regulation of adipose tissue metabolic networks and inflammatory pathways, increased insulin signaling, modulation of adipocyte cell cycle, and facilitated cross-talk between adipose tissue and other organs. This study provides preliminary insight into the adipose tissue regulatory systems that may contribute to the physiological effects of interrupting prolonged sitting.
Prolonged uninterrupted sitting is positively associated with cardiometabolic risk biomarkers and premature mortality, independent of moderate-to-vigorous intensity physical activity1. There is emerging interest in the underlying biological responses to prolonged sitting that may drive disease pathophysiology, and how breaking up sitting with short, regular bouts of physical activity can potentially act to mitigate these adverse effects. Studies examining beneficial effects of breaking up sedentary time have implicated regulation of postprandial glucose, insulin and lipid metabolism2,3,4, endothelium-mediated arterial vasodilation5,6,7, and anti-inflammatory mechanisms8,9,10,11. The mechanistic underpinnings of the beneficial effects of breaking up prolonged sitting are likely to be multifactorial, and involve peripheral organs that are known to play a key role in metabolism, such as skeletal muscle, liver and adipose tissue.
Our group has observed favourable changes in skeletal muscle gene and protein expression that likely contribute to the improved glucose control associated with breaking up of prolonged sitting via light- or moderate-intensity activities. These changes include some which align with, and others which may be distinct from, the effects of continuous acute exercise12,13. Subcutaneous white adipose tissue is another important metabolic regulatory tissue that may be a central mediator of the cardiometabolic effects of breaking up prolonged sitting time. In addition to its key role in lipid storage, factors secreted from white adipose tissue play pivotal roles in appetite regulation, energy homeostasis, insulin sensitivity, inflammation, and immunological responses14,15. Excessive accumulation of visceral adipose tissue positively associates with increased cardiometabolic disease risk16. While the role of subcutaneous adipose tissue in disease pathophysiology is less clear, it has also been associated with increased disease risk17,18,19,20,21,22,23. Storage of excess lipid in subcutaneous adipocytes has been driven by the vital physiological roles of lipids; including as structural elements in plasma membranes, second messengers and energy substrates. However, upon exceeding storage capacity of subcutaneous adipocytes, lipid accumulates in visceral adipose and eventually ectopically in metabolic regulatory organs17. Ectopic lipid accumulation has been well established as a contributor to low-grade inflammation, endoplasmic reticulum stress and tissue insulin resistance17. In adipose tissue, insulin resistance is characterized by reduced glucose uptake, impairment in both lipogenesis and insulin-stimulated inhibition of lipolysis14,24. Thus, insulin resistance increases lipid availability to visceral and ectopic sites, contributing to increased cardiometabolic risk14. Acute exercise protects against ectopic lipid accumulation by oxidising fatty acids for ATP production and enhancing insulin sensitivity in multiple tissues. In adipose tissue, exercise training also reduces inflammation and increases glucose uptake through pathways which have been well characterized25. To date, studies have typically focused on understanding the impact of continuous exercise bouts on gene and protein expression. Few studies have investigated whether regular active breaks from prolonged sitting modulate similar pathways13,26, and the effects of breaks from prolonged sitting on adipose tissue are unknown. In a previous study we showed that, compared to uninterrupted sitting, frequent brief bouts of either light- or moderate-intensity walking lowered acute postprandial glucose and insulin responses2. An ancillary analysis of vastus lateralis muscle collected from 8 participants in the main study showed that those brief interruptions to sitting time resulted in upregulation of genes involved in cell development, glucose uptake, and anti-inflammatory pathways; and, downregulation of genes associated with protein degradation and muscle atrophy13. In this investigation, we now aim to define the acute transcriptional events induced in subcutaneous adipose tissue by regular brief active interruptions to prolonged sitting time, using subcutaneous abdominal adipose tissue collected from the same subset of 8 participants in the skeletal muscle-tissue study described above2,13. 
We hypothesized that interrupting sitting time with brief, intermittent activity bouts of either light or moderate intensity would change expression of genes involved in substrate metabolism and inflammation, relative to prolonged sitting.
This randomized, three-intervention crossover trial was conducted in accordance with the Declaration of Helsinki and approved by the Alfred Health Human Research Ethics Committee (Melbourne, Australia). The study is registered with the Australian and New Zealand Clinical Trials Registry (ACTRN12609000656235, 04/08/2009) and participants provided written, informed consent. A detailed description of the participant characteristics, screening and testing procedures for the full study has been published previously2, and the relevant aspects of the study design and protocol are summarised in Fig. 1. Of the 19 participants in the main study, eight (7 male, 1 female) consented to subcutaneous adipose biopsies, and are included in the current investigation. The inclusion criteria were: age 45–65 years, body mass index (BMI) 25–45 kg/m2. Participants were excluded if they had a diagnosis of diabetes, were taking glucose and/or lipid-lowering medication, or were regularly engaged in moderate intensity exercise ≥150 minutes/week for at least 3 months. They attended three study visits at the Baker Heart and Diabetes Institute to complete the study conditions in a randomized order2. Study conditions were as follows:
(i) Uninterrupted sitting. Participants remained seated throughout the experimental period and were instructed to minimize excessive movement, only rising from the chair to void.
(ii) Sitting interrupted with light-intensity walking breaks. Participants rose from the seated position every 20 minutes throughout the experimental period (3 breaks per hour) and completed a 2-minute bout of light-intensity walking (3.2 km/h) on a motorized treadmill on a level surface. The participants then returned to the seated position. This procedure was undertaken on 14 occasions, providing a total of 28 minutes of light-intensity activity.
(iii) Sitting interrupted with moderate-intensity walking breaks. Identical to the light-intensity walking breaks condition, but participants completed 2-minute bouts of moderate-intensity walking (5.8–6.4 km/h) on the treadmill, providing a total of 28 minutes of moderate-intensity activity. The speed of walking for this condition was based on individual perception of activity intensity, determined at a familiarisation session, and on a Borg Rating of Perceived Exertion between 12 and 14, as previously described2.
Figure 1: Study design and protocol.
A minimum washout period of 6 days between each condition was imposed to avoid potential confounding effects, since an acute bout of activity may enhance insulin sensitivity for up to 72 hours. Participants were instructed to refrain from any structured moderate-to-vigorous exercise, alcohol and caffeine in the 48 hours prior to each of the trial conditions. During this time, physical activity was monitored using an Actigraph GT1M accelerometer (Actigraph, Pensacola, FL), which was worn around the hip during waking hours. Participants reported to the laboratory between 0700 and 0800 hours, having fasted from at least 2200 hours the night before. A cannula was inserted into an antecubital vein for hourly blood sampling.
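These hourly samples yield the postprandial glucose and insulin curves from which the incremental areas under the curve (iAUC) reported in this and the parent study are computed. A minimal sketch of such a calculation is given below; the time points and values are hypothetical, and both common conventions (net incremental area, and an area that ignores dips below baseline) are shown because studies differ in which they use.

import numpy as np

# Minimal sketch of an incremental area-under-the-curve (iAUC) calculation for
# a postprandial response, using the trapezoidal rule on increments above the
# pre-drink baseline. Values below are illustrative, not trial data.
def iauc(times_h, values, positive_only=False):
    t = np.asarray(times_h, dtype=float)
    inc = np.asarray(values, dtype=float) - values[0]      # increment above baseline
    if positive_only:
        inc = np.clip(inc, 0.0, None)                      # ignore dips below baseline
    return np.sum((inc[1:] + inc[:-1]) / 2.0 * np.diff(t)) # trapezoidal rule

glucose = [5.2, 7.9, 7.1, 6.3, 5.0, 4.8]                   # mmol/L at 0..5 h
hours = [0, 1, 2, 3, 4, 5]
print("net iAUC      =", round(iauc(hours, glucose), 2), "mmol/L*h")
print("positive iAUC =", round(iauc(hours, glucose, positive_only=True), 2), "mmol/L*h")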
For all of the three experimental conditions, and following the initial blood collection (time point: −2 hours), they remained seated for 2 hours to achieve a steady state before consumption of a standardized test drink (time point: 0 hours). The 200 mL test drink consisted of 75 g carbohydrate (100% corn maltodextrin powder, Natural Health) and 50 g fat (Calogen, Nuticia). The specific nutritional components were as follows: energy: 3,195 kJ; total fat: 50 g; saturated fat: 5 g; monounsaturated fat: 30.4 g; polyunsaturated fat: 14.3 g; total carbohydrate: 75 g; total sugars: 12.8 g; protein nil; fiber <1 g; sodium: 46.9 mg; and water: 90 g. Blood was sampled at baseline before drink consumption and hourly post drink consumption. The incremental area under the glucose-time, insulin-time and insulin/glucose-time curves have been previously presented for the whole cohort2, and the 8 participants included in this sub-study13. Adipose tissue biopsy Abdominal subcutaneous adipose tissue biopsies were obtained using standard aseptic technique and local anaesthesia (lignocaine) approximately 40–50 minutes after the last activity bout, and 5 hours after the drink ingestion (Fig. 1). During the first intervention visit, a 0.5–1 cm skin incision was made ~5 cm lateral to the navel/umbilicus, and a Bergstrom biopsy needle passed through to obtain approximately 1–2 cm3 of subcutaneous adipose tissue under suction. Biopsies taken at the second and third intervention trials were obtained from the side opposite that of the preceding trial and at the third trial >5 cm superior or inferior to the first to avoid the potential for sampling prior injured tissue. All biopsies were rinsed of blood in ice-cold sterile saline, the connective tissue removed and cleaned adipose tissue was snap frozen in liquid nitrogen for subsequent storage at −80 °C until further analysis. RNA extraction RNA isolation from 100 mg of adipose tissue was performed using TRIzol Reagent as per manufacturer's instructions (ThermoFisher Scientific, Massachusetts, USA). The RNA phase (the clear upper aqueous layer) was transferred to a new tube without disturbing the interphase. Then 1.5 volumes of 100% ethanol was added to the samples and loaded onto the RNeasy mini spin column (Qiagen, Hilden, Germany). RNA was extracted according to the manufacturer's protocol. The RNA was quantified using a Nanodrop spectrophotometer (ThermoFisher Scientific, Massachusetts, USA) and Qubit fluorometer (ThermoFisher Scientific, Massachusetts, USA). The integrity of the RNA was assessed using a MultiNA Microchip Electrophoresis System (Shimadzu, Kyoto, Japan). Gene Expression Profiling/Microarrays The amplification and labelling of RNA was performed using the TotalPrep amplification kit according to manufacturer's instructions (Ambion; ThermoFisher Scientific Massachusetts, USA). The labelled samples were hybridized to the Human HT-12 v4.0 Expression BeadChip as per the manufacturer's protocol (47,231 probes; Illumina, California, USA). The BeadChips were scanned using the iScan microarray BeadStudio platform (Illumina, California, USA). Quality standards for hybridization, labelling, staining, background signal, and basal level of housekeeping gene expression for each chip were verified. After scanning on the Illumina iScan, the resulting images were background subtracted, quantile normalized and processed using the GenomeStudio software (Illumina, California, USA). 
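The vendor's exact normalization procedure is not reproduced here, but the quantile-normalization step mentioned above can be illustrated generically: every sample (array) is forced to share a common intensity distribution by replacing each probe's value with the across-sample mean of the values holding the same rank. A minimal numpy sketch follows, using a hypothetical probes-by-samples matrix and ignoring tie handling.

import numpy as np

# Generic quantile normalization of a probes x samples intensity matrix.
# Illustration of the technique only, not the GenomeStudio implementation.
def quantile_normalize(raw):
    order = np.argsort(raw, axis=0)                  # per-sample probe ranking
    ranked = np.take_along_axis(raw, order, axis=0)  # each column sorted
    reference = ranked.mean(axis=1)                  # mean distribution across samples
    out = np.empty_like(raw, dtype=float)
    np.put_along_axis(out, order, reference[:, None], axis=0)  # write back by rank
    return out

# Toy example: 5 probes measured on 3 arrays (background-subtracted intensities)
raw = np.array([[5., 4., 3.],
                [2., 1., 4.],
                [3., 4., 6.],
                [4., 2., 8.],
                [1., 3., 9.]])
print(quantile_normalize(raw))

After this step every column has identical quantiles, so between-array intensity differences no longer masquerade as differential expression.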
Data files were deposited into the National Center for Biotechnology Information Gene Expression Omnibus (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc = GSE115645). An Illumina detection P-value of <0.05 was used to determine presence or absence of a probe in each tissue sample. Call rates for each of the three study conditions were then calculated for each gene, per condition using equation 1: $$Call\,rate=\frac{number\,of\,adipose\,samples\,with\,a\,detection\,p\,value < 0.05}{total\,number\,of\,adipose\,samples}$$ A gene was included in the subsequent analysis if a call rate of ≥0.5 was calculated for at least one condition. Analysis of the effects of activity breaks versus prolonged sitting was performed in Stata (StataCorp, College Station, TX). To identify genes with the most marked changes in expression level, we applied an absolute fold change threshold of ≥0.5 (in either direction from 1) for any of the three experimental condition pairs (light intensity breaks versus uninterrupted sitting; moderate intensity breaks versus uninterrupted sitting; moderate versus light intensity breaks), where a fold change <1 refers to downregulation and >1 refers to upregulation of the gene relative to the comparator condition. This approach has been previously validated27, showing close qualitative and quantitative relationships to qPCR, including in skeletal muscle from this same study13. Linear mixed models accounting for dependency in the data (repeated measures) were used to evaluate the differential effects of the trial conditions on expression of genes. All models were adjusted for age and BMI. P-values for the overall condition variable were obtained from post-hoc Wald tests and corrected for multiple comparisons using the False Discovery Rate (FDR) method of Benjamini-Hochberg28. Post hoc, pairwise analyses were performed using post-estimation commands of the linear mixed model, and the resultant pairwise P-values were corrected by the Dunn-Sidak approach. A corrected P-value of <0.05 was considered to be statistically significant. All statistical analyses were performed using Stata 14.1 for Windows (StataCorp, College Station, TX). Pathway analysis Genes were ranked using a signed log10-transformed P-value (from the linear mixed model analysis) with the sign denoting the direction of change, positive for increasing and negative for decreasing29. The rank score for genes that were represented by more than one transcript, was calculated for the transcript with the lowest P-value, and the others were disregarded. The ranked gene list was inputted into GSEA 3.0 software (Broad Institute, Cambridge, MA). A normalized enrichment score (NES), the degree to which a gene set is overrepresented at the top or bottom of a ranked list of genes, was calculated30. NES scores greater than zero indicate upregulation of a pathway, whereas NES values of less than zero represent down regulation. Gene sets with a P-value ≤ 0.1, following false discovery rate (FDR) correction, were considered likely to generate valid hypotheses and drive further research. Leading edge analysis was used to examine the genes that were in the leading-edge subsets of the enriched gene sets. The Reactome pathway database (www.reactome.org) was used to categorize genes by pathways to facilitate biological interpretation. All analyses were performed in a blinded manner. Participant demographics and biochemical analyses for the eight participants recruited for this sub-study have been reported previously13. 
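For readers who want to reproduce the flavour of the statistical pipeline just described outside Stata, the sketch below strings together a per-gene linear mixed model adjusted for age and BMI, Benjamini-Hochberg FDR correction, and the signed -log10(P) ranking metric used as GSEA input. It is an approximate Python analogue, not the authors' code: the data frame, column names and synthetic values are hypothetical, and a likelihood-ratio test stands in for the paper's post-hoc Wald test of the overall condition effect.

import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

# Synthetic long-format data so the sketch runs end-to-end (values are arbitrary):
# one row per gene x participant x condition, with age and BMI as covariates.
rng = np.random.default_rng(0)
rows = []
for gene in ["GENE_A", "GENE_B", "GENE_C"]:
    for i in range(1, 9):                                    # 8 participants
        age, bmi = rng.uniform(45, 65), rng.uniform(25, 45)
        for cond in ["SIT", "LIGHT", "MOD"]:
            rows.append((gene, f"P{i}", cond, age, bmi, rng.normal(8.0, 0.5)))
df = pd.DataFrame(rows, columns=["gene", "participant", "condition",
                                 "age", "bmi", "expr"])

def condition_pvalue(sub):
    # Random intercept per participant; likelihood-ratio test for the overall
    # condition effect (approximating the Wald test reported in the paper).
    full = smf.mixedlm("expr ~ C(condition) + age + bmi", sub,
                       groups=sub["participant"]).fit(reml=False)
    null = smf.mixedlm("expr ~ age + bmi", sub,
                       groups=sub["participant"]).fit(reml=False)
    lr = 2.0 * (full.llf - null.llf)
    return stats.chi2.sf(lr, df=2)                           # 3 conditions -> 2 df

results = []
for gene, sub in df.groupby("gene"):
    p = condition_pvalue(sub)
    # Sign = direction of the activity-break conditions relative to sitting,
    # used only to orient the ranking metric.
    delta = (sub.loc[sub.condition != "SIT", "expr"].mean()
             - sub.loc[sub.condition == "SIT", "expr"].mean())
    results.append((gene, p, np.sign(delta)))

res = pd.DataFrame(results, columns=["gene", "p", "sign"])
res["p_fdr"] = multipletests(res["p"], method="fdr_bh")[1]   # Benjamini-Hochberg
res["rank_metric"] = res["sign"] * -np.log10(res["p"])       # signed -log10(P)
res.sort_values("rank_metric", ascending=False)[["gene", "rank_metric"]].to_csv(
    "ranked_genes.rnk", sep="\t", index=False, header=False)
print(res)

The resulting .rnk file is the kind of pre-ranked input that GSEA consumes to compute the normalized enrichment scores discussed below; with only eight participants, individual gene-level models of this sort should be interpreted cautiously.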
Those recruited for the present sub-study had a mean age of 55 ± 6 years and BMI of 30.9 ± 2.9 kg/m2 (mean ± SD). In the main study (n = 19), interrupting prolonged sitting with both light- and moderate-intensity walking significantly reduced postprandial glucose (by 24% and 30%, respectively) and insulin (by 23% for both conditions) incremental area under the curve (iAUC), relative to uninterrupted sitting2. In the current subset of eight participants, there was a trend toward a decrease in glucose iAUC (−22% and −20%; P = 0.1) and a significant decrease in insulin iAUC (−25% and −23%; P < 0.05) in the light-intensity and moderate-intensity breaks conditions, respectively, relative to uninterrupted sitting13. The microarray analysis identified 18,844 transcripts expressed in the adipose tissue. Of these, 469 satisfied the 0.5-fold change criteria (described in the Methods) in at least one of the three comparisons between the experimental condition pairs. Thirty-six genes were significantly differentially expressed (FDR-adjusted P < 0.05; Table 1, Fig. 2). Of these, 7/36 genes were upregulated and 2/36 genes were downregulated in the light-intensity breaks condition, compared to uninterrupted sitting; 1/36 genes was upregulated and none were downregulated in the moderate-intensity breaks condition compared to uninterrupted sitting; and 29/36 genes were upregulated and none were downregulated in the moderate-intensity breaks condition compared to the light-intensity breaks (Table 1).

Table 1. Genes differentially expressed between the three experimental conditions.

Fig. 2. Genes differentially expressed between the three experimental conditions. Heat map showing those genes up (blue) and down (green) regulated for comparisons of Light vs Sit (light intensity breaks versus uninterrupted sitting, column 1); Mod vs Sit (moderate intensity breaks versus uninterrupted sitting, column 2) and Mod vs Light (moderate intensity breaks versus light intensity breaks, column 3). Comparisons meeting both the 0.5 fold change criteria (in either direction from 1) and the significance threshold are highlighted in Table 1. Refer to Table 1 for gene definitions.

Analysis of the ranked gene list identified 102 differentially (up- or down-) regulated pathways with an FDR-adjusted P ≤ 0.10 (Fig. 3, Supplementary Table 1). In the light-intensity breaks condition, 64/102 pathways were upregulated and 19/102 were downregulated, compared to uninterrupted sitting. In the moderate-intensity breaks condition, 11/102 pathways were upregulated and 0/102 were downregulated, compared to uninterrupted sitting; and 23/102 pathways were upregulated and 42/102 were downregulated, compared to light-intensity breaks. Among the regulated pathways were those involved with metabolism of macronutrients, adenosine triphosphate (ATP) synthesis, immune function, signal transduction, extracellular matrix organisation and cell cycle.

Fig. 3. Pathways differentially regulated between the three experimental conditions. Heat map showing pathways up (blue, Normalized Enrichment Score (NES) >0) and down (green, NES <0) regulated for comparisons of Light vs Sit (light intensity breaks versus uninterrupted sitting, column 1); Mod vs Sit (moderate intensity breaks versus uninterrupted sitting, column 2) and Mod vs Light (moderate intensity breaks versus light intensity breaks, column 3). The significance of these comparisons is indicated in Supplementary Table 1.
ATP = adenosine triphosphate; BCR = B cell receptor; ER = endoplasmic reticulum; MHC = major histocompatibility complex; Resp. electr. transp = respiratory electron transport; TCA = tricarboxylic acid.

The major novel finding of this study is that actively breaking up prolonged sitting time in the postprandial state was associated with changes in the expression of subcutaneous abdominal adipose tissue genes and pathways involved in macronutrient metabolism and ATP synthesis, immune function, signal transduction, and cell cycle regulation. These changes are likely induced by systemic physiological drivers associated with physical activity and known to modulate adipose tissue function through multiple mechanisms, including via the sympathetic nervous system, increased adipose tissue blood flow and changes in circulating factors including adrenaline, insulin and glucagon31. Interestingly, the majority of the differences were observed for the light-intensity activity condition, relative to both the uninterrupted sitting and the moderate-intensity breaks conditions. This may relate to the fact that, of all the interventions studied, light-intensity breaks would be most dependent on fat metabolism (adipose tissue)32,33. By contrast, moderate-intensity breaks, which would have been more reliant on glucose metabolism, had relatively few differentially regulated genes and pathways in comparison to prolonged uninterrupted sitting.

Active breaks during prolonged sitting alter adipose expression of metabolic regulatory networks controlling ATP synthesis

Consistent with fat oxidation being the major pathway for ATP production induced by low-intensity activity, upregulation of genes and pathways associated with enhanced capacity for lipid oxidation was observed for the light-intensity breaks condition in comparison to both the uninterrupted sitting and moderate-intensity breaks conditions (Figs 2 and 3). Mitochondrial (tricarboxylic acid cycle and respiratory electron transport) and peroxisomal β-oxidation pathways were upregulated in the light-intensity breaks condition. The key functional difference between these compartmental processes is that mitochondrial β-oxidation generates acetyl-CoA and reduced cofactors via degradation of energy substrates, primarily long-chain fatty acids, which is coupled to oxidative phosphorylation; whereas peroxisomes degrade a wide variety of lipophilic compounds that have important functions not involved in ATP resynthesis34. Although it did not reach the fold-change cut-off for the main analysis (1.45 fold change compared to uninterrupted sitting), upregulation of the NDUFAB1 (NADH:Ubiquinone Oxidoreductase Subunit AB1) gene was identified in the pathway analysis for the light-intensity breaks condition. This non-catalytic accessory subunit of the mitochondrial membrane Complex I functions in the transfer of electrons from NADH to the respiratory chain, indicating an increased capacity for substrate oxidation. Significantly lower expression of ID1 and NOS3 (endothelial nitric oxide synthase, eNOS) in the light-intensity breaks condition, compared to the uninterrupted sitting and moderate-intensity breaks conditions, is also consistent with higher lipid oxidation. Deletion of the ID1 gene (a transcriptional regulator of helix-loop-helix transcription factors that control cell type-specific gene expression) in mice has been implicated in numerous physiological effects with potential metabolic benefit.
It was found that loss of the ID1 gene leads to increased expression of genes that promote fatty acid oxidation, with id1−/− mice also exhibiting reduced insulin resistance and fat mass while on a high fat diet, and increased oxygen consumption compared to wild type mice35,36. Adipose eNOS expression is increased in obese adults, and has been implicated in the inhibition of lipolysis via downregulation of lipolytic pathways37,38. Downregulation of eNOS (NOS3) expression in the light-intensity breaks condition may therefore also indicate an increase in lipolysis. The differing expression patterns between the two break conditions for genes regulating lipolysis occurred despite similarity in blood glucose and insulin levels, suggesting that mechanisms associated with activity break intensity, such as adrenergic pathways, may have been the predominant influence. Reflecting the preponderance of lipid over carbohydrate metabolism for provision of ATP during light-intensity physical activity32,33, pathways linked to carbohydrate metabolism were downregulated in comparison to both uninterrupted sitting and moderate-intensity breaks. Here, significantly lower adipose tissue NOS3 and MYO1C gene expression in the light-intensity breaks condition may indicate reduced insulin-dependent and -independent GLUT-4 translocation and subsequent glucose uptake39,40. In contrast, significantly higher expression of PFKFB3 was observed in the moderate-intensity breaks condition compared to both the uninterrupted sitting and light-intensity breaks conditions. PFKFB3 is an enzyme whose product (fructose 2,6-bisphosphate) is a powerful activator of the glycolytic enzyme 6-phosphofructo-1-kinase, the rate-limiting step in glycolysis41,42, indicating a greater capacity for glucose oxidation in the moderate-intensity breaks condition (Fig. 3).

Active breaks in prolonged sitting regulate anti-inflammatory and anti-oxidative stress pathways, and improve insulin sensitivity

Inflammatory and metabolic signaling are closely interlinked, such that chronic disturbance of metabolic homeostasis can lead to aberrant immune responses and inflammation; and, vice versa, chronic inflammation can induce insulin resistance and metabolic dysfunction43. While a continuous moderate-to-high intensity exercise bout has been shown to activate inflammatory and oxidative pathways44,45, we found little evidence for modulation of immune function or inflammatory status with the brief moderate-intensity breaks from sitting in the current study, compared to uninterrupted sitting (Fig. 3). In contrast, light-intensity breaks from sitting appeared to upregulate immune function and downregulate inflammatory signals compared to the uninterrupted sitting and moderate-intensity breaks conditions. Tumour necrosis factor alpha (TNFα), derived primarily from resident macrophages, is known to be overexpressed in adipose tissue of obese humans and mice46,47,48. TNFα is a pro-inflammatory cytokine that activates various signal transduction cascades, including pathways involved in inhibiting insulin action43,49,50. TNFα and other inflammatory cytokines impair insulin action by post-translational modification of insulin receptor substrate 1 through serine phosphorylation, which interferes with the ability of this protein to engage in insulin-receptor signaling43.
The ID1 gene has been suggested to be important in mediating the increased expression of TNFα in adipose tissue following a high-fat diet in mice36; the ID1 gene was downregulated in the light-intensity breaks compared to the moderate-intensity breaks condition. Receptors for TNFα play an important role in its downstream effects. Here, we also observed lower expression of the receptor TNFRSF25 in the light-intensity breaks condition; TNFRSF25 is known to stimulate NF-κB activity51. NF-κB inflammatory pathways contribute to the pathology of metabolic disorders52. Indeed, clinical trials have shown amelioration of insulin resistance and improved glucose homeostasis in type 2 diabetes patients treated with salicylates, which inhibit NF-κB activation53. Significantly lower expression of ID1 and TNFRSF25 in the light-intensity breaks condition compared to moderate-intensity breaks is suggestive of reduced production of TNFα and its downstream signaling. Despite not reaching the 1.5 fold change threshold, both the ID1 and TNFRSF25 genes were also significantly downregulated in the light-intensity breaks condition compared to uninterrupted sitting. The potent pro-inflammatory molecule LTB4 is highly expressed in obesity, and has recently been shown to directly induce insulin resistance at least partially through its receptor, LTB4R54,55,56. Here, we observed significantly lower expression of the LTB4R gene in the light-intensity breaks, compared to the moderate-intensity breaks condition. LTB4R was also statistically significantly downregulated in the light-intensity breaks condition compared to uninterrupted sitting though the 0.5 fold change threshold was not reached (0.69 fold change). Furthermore, the Notch signaling pathway was significantly downregulated in the light-intensity breaks condition, compared to uninterrupted sitting and moderate-intensity breaks conditions. Notch signaling is highly conserved in mammals, and rodent studies indicate that inhibition of this pathway improves adipose tissue insulin sensitivity57. Together, these results suggest that downregulation of inflammatory pathways involving TNFα, NF-κB, lipoxygenase and Notch signaling could contribute to the beneficial effects of breaking up sedentary time with light-intensity activities on metabolism by positively influencing adipose tissue insulin sensitivity. Reactive oxygen species (ROS) are continuously produced as by-products of aerobic metabolism. ROS can act as important signaling molecules, but can also cause oxidative damage to cells and tissues, and their accumulation is regulated by a complex system of anti-oxidative defences. In addition to β-oxidation, the peroxisomal lipid metabolism pathway is involved in the dynamic response to cellular stress. Peroxisomes contain a number of antioxidant defences involved in the regulation of ROS, including the synthesis of the ether phospholipid species plasmalogens58,59. We have previously shown that light-intensity breaks during prolonged sitting in adults with type 2 diabetes ameliorates the postprandial reduction in plasma plasmalogen species, compared to prolonged uninterrupted sitting4. The upregulation of peroxisomal lipid metabolism in the light-intensity breaks condition in the current study suggests that this may be partially due to increased biosynthesis of plasmalogen lipid species in the adipose tissue. 
This finding may provide an important mechanistic link between the protective effects of light-intensity breaks during prolonged sitting and risk of cardiometabolic diseases, as plasmalogens have been suggested to be protective against atherosclerosis60,61.

Active breaks in prolonged sitting regulate adipose tissue cell cycle pathways

Coupling of cell cycle progression and programmed cell death pathways is regulated to maintain tissue homeostasis. The cell cycle involves a cascade of events that leads to cell division and DNA replication. Progression of the cell cycle is orchestrated by regulatory (cyclins) and catalytic subunits (cyclin-dependent kinases, CDKs), as well as ubiquitin ligases (such as the anaphase promoting complex, APC) which mark cell cycle proteins for degradation62,63. Here, we observed upregulation of several pathways related to cell cycle and DNA replication, concurrently with upregulation of apoptosis pathways, in response to light-intensity breaks from sitting versus uninterrupted sitting. These data suggest adaptation of the adipocyte cell cycle in the light-intensity breaks condition. Cell cycle and DNA replication pathways also tended to be upregulated in the moderate-intensity breaks condition, in comparison to uninterrupted sitting, but in most cases did not reach statistical significance (P > 0.1, Table 2). Upregulation of the M, G1, and S phases of the cell cycle involved in cell growth and proliferation, as well as pathways involved in the transition between these phases, indicates acute stimulation of adipogenesis pathways. Upregulation of checkpoints also suggests greater quality control to prevent unhealthy cells from proliferating62. Adipose cells dynamically adapt their growth and metabolism to current requirements, and crosstalk between these two biological responses has recently been identified64. Although relatively few studies have been conducted in humans, animal models provide strong evidence that, in addition to the control of cell proliferation and death, cell cycle mediators also play key roles in the biological functions of adipocytes such as insulin sensitivity, lipolysis and glucose transport62,63. Therefore, acute upregulation of cell cycle pathways with active breaks in prolonged sitting (independently of break intensity) may indicate a role for these effectors in metabolic adaptation of adipose tissue to physical activity. This is also characteristic of other general cellular function pathways (e.g. metabolism of RNA and proteins, DNA replication) which were also upregulated with breaks at both intensities.

Light intensity breaks in prolonged sitting may facilitate cross-talk between adipose and other organs

The configuration of adipose tissue is structured so that adipocytes are in close proximity to immune cells, with immediate access to a vast network of blood vessels, which allows continuous and dynamic interaction between immune and metabolic processes and cross-talk with other organs43. This structure and inter-connectivity likely contributes to the key role of adipose tissue in the development of metabolic disease43. Indeed, an active communication network between adipose tissue and muscle regulating glucose homeostasis is now well-established, whereby adipokines can increase muscle insulin sensitivity and glucose uptake via direct and indirect means65,66. Wnt glycoproteins are produced and released from several tissues, including white adipose tissue.
Wnt signaling inhibits adipogenesis and regulates whole-body metabolism by altering the behaviour of multiple cell types and tissues, including a role in promoting insulin sensitivity66,67. Moreover, Wnt signaling activation in adipose progenitors has been shown to promote insulin-independent muscle glucose uptake65. Therefore, in the current study upregulation of the Wnt signaling pathway in the light-intensity breaks condition, compared to uninterrupted sitting, could indicate facilitation of cross-talk between the adipose tissue and other body organs. Our highly controlled study design, including within-participant comparison across three interventions completed in a randomized order, is a strength. The number of participants, albeit relatively small, was sufficient to detect significant changes in gene expression with appropriate FDR correction. A larger cohort may reveal a greater number of regulated genes. Similarly, we investigated only the acute effects of 5 hours of light- and moderate-intensity breaks from sitting and chronic interventions (multiple days/weeks) may show evolution in gene expression patterns. Differential effects are likely to be observed in adipose tissue collected from other subcutaneous (eg, gluteal) or visceral depots, although there is no evidence for site specific relationships in terms of substrate provision to local active muscle (spot reduction)68. It is also relevant to consider the potential contribution of RNA derived from immune and endothelial cells within our adipose tissue samples. Generally, such components are unlikely to have contributed significantly to our observations. Adipose samples used in the present analysis were dissected free of connective and vascular tissue, and washed thoroughly minimizing any vascular component. Immune cells would however remain69, and future studies dissecting their potential contribution to metabolism during breaks from prolonged sitting will be important. The absolute fold-change criteria of 0.5 (in either direction) for selection of significant effects may result in failure to detect important regulated genes; however, all genes meeting the call rate criteria were included in the pathway analysis and therefore contributed to the overall biological interpretation of the results. False positives were minimized by use of repeated measures analysis to allow comparison of all three interventions, and correction for multiple testing with the FDR method by Benjamini-Hochberg28. Our findings provide a basis for future studies to examine the adipose tissue transcriptional networks, in various adipose depots, regulated by breaking up sedentary time over longer periods and their relationship to bioclinical phenotypes. In particular, studies with a larger sample size and detailed measurements of glucose and fat metabolism beyond the simple plasma measurements employed in the current study will be necessary to substantiate the findings of the present analysis. This is the first study to characterize the acute transcriptional events induced in subcutaneous abdominal adipose tissue by breaking up prolonged sitting time with frequent brief activity bouts. We observed distinct and biologically-interpretable effects of what conventionally would be understood as an extremely modest physical activity stimulus. Interestingly, upregulation of pathways involved in oxidative metabolism and immunity were greater for light-intensity breaks compared to sitting, although moderate-intensity breaks showed similar directional trends. 
Our findings highlight some of the regulated mechanisms that potentially underlie the physiological benefits of breaking up prolonged periods of sedentary time. We have identified candidate genes that may contribute to the modulationof postprandial glucose, insulin, lipid and inflammatory responses shown to occur with breaking up of prolonged sitting. These findings further elucidate the potential mechanistic underpinnings of the adverse health consequences of prolonged sitting. They add further weight to the scientific literature describing the potential health effects of regular, brief activity breaks in sitting (even of a light intensity), which could contribute to the development and extension of future public health guidelines. However larger studies with more detailed metabolic measures are required to corroborate the findings of the current investigation. Diaz, K. M. et al. Patterns of Sedentary Behavior and Mortality in U.S. Middle-Aged and Older Adults: A National Cohort Study. Ann. Intern. Med. 167, 465–475, https://doi.org/10.7326/M17-0212 (2017). Dunstan, D. W. et al. Breaking up prolonged sitting reduces postprandial glucose and insulin responses. Diabetes Care 35, 976–983, https://doi.org/10.2337/dc11-1931 (2012). Dempsey, P. C., Owen, N., Yates, T. E., Kingwell, B. A. & Dunstan, D. W. Sitting Less and Moving More: Improved Glycaemic Control for Type 2 Diabetes Prevention and Management. Curr. Diab. Rep. 16, 114, https://doi.org/10.1007/s11892-016-0797-4 (2016). Grace, M. S. et al. Breaking Up Prolonged Sitting Alters the Postprandial Plasma Lipidomic Profile of Adults With Type 2 Diabetes. J. Clin. Endocrinol. Metab. 102, 1991–1999, https://doi.org/10.1210/jc.2016-3926 (2017). Restaino, R. M., Holwerda, S. W., Credeur, D. P., Fadel, P. J. & Padilla, J. Impact of prolonged sitting on lower and upper limb micro- and macrovascular dilator function. Exp. Physiol. 100, 829–838, https://doi.org/10.1113/EP085238 (2015). Restaino, R. M. et al. Endothelial dysfunction following prolonged sitting is mediated by a reduction in shear stress. Am. J. Physiol. Heart Circ. Physiol. 310, H648–653, https://doi.org/10.1152/ajpheart.00943.2015 (2016). Thosar, S. S., Bielko, S. L., Mather, K. J., Johnston, J. D. & Wallace, J. P. Effect of prolonged sitting and breaks in sitting time on endothelial function. Med. Sci. Sports Exerc. 47, 843–849, https://doi.org/10.1249/MSS.0000000000000479 (2015). Henson, J. et al. Sedentary time and markers of chronic low-grade inflammation in a high risk population. PLoS One 8, e78350, https://doi.org/10.1371/journal.pone.0078350 (2013). ADS CAS Article PubMed PubMed Central Google Scholar Howard, B. J. et al. Associations of overall sitting time and TV viewing time with fibrinogen and C reactive protein: the AusDiab study. Br. J. Sports Med. 49, 255–258, https://doi.org/10.1136/bjsports-2013-093014 (2015). Schmid, D. & Leitzmann, M. F. Television viewing and time spent sedentary in relation to cancer risk: a meta-analysis. J. Natl. Cancer Inst. 106, https://doi.org/10.1093/jnci/dju098 (2014). Yates, T. et al. Self-reported sitting time and markers of inflammation, insulin resistance, and adiposity. Am. J. Prev. Med. 42, 1–7, https://doi.org/10.1016/j.amepre.2011.09.022 (2012). Bergouignan, A. et al. Frequent interruptions of sedentary time modulates contraction- and insulin-stimulated glucose uptake pathways in muscle: Ancillary analysis from randomized clinical trials. Sci. Rep. 6, 32044, https://doi.org/10.1038/srep32044 (2016). Latouche, C. et al. 
Effects of breaking up prolonged sitting on skeletal muscle gene expression. J Appl Physiol (1985) 114, 453–460, https://doi.org/10.1152/japplphysiol.00978.2012 (2013). Ali, A. T., Hochfeld, W. E., Myburgh, R. & Pepper, M. S. Adipocyte and adipogenesis. Eur. J. Cell Biol. 92, 229–236, https://doi.org/10.1016/j.ejcb.2013.06.001 (2013). Lafontan, M. Adipose tissue and adipocyte dysregulation. Diabetes Metab. 40, 16–28, https://doi.org/10.1016/j.diabet.2013.08.002 (2014). Despres, J. P. et al. Abdominal obesity and the metabolic syndrome: contribution to global cardiometabolic risk. Arterioscler. Thromb. Vasc. Biol. 28, 1039–1049, https://doi.org/10.1161/ATVBAHA.107.159228 (2008). Gustafson, B. & Smith, U. Regulation of white adipogenesis and its relation to ectopic fat accumulation and cardiovascular risk. Atherosclerosis 241, 27–35, https://doi.org/10.1016/j.atherosclerosis.2015.04.812 (2015). Karcher, H. S. et al. Body fat distribution as a risk factor for cerebrovascular disease: an MRI-based body fat quantification study. Cerebrovasc. Dis. 35, 341–348, https://doi.org/10.1159/000348703 (2013). Lee, M. J., Wu, Y. & Fried, S. K. Adipose tissue heterogeneity: implication of depot differences in adipose tissue for obesity complications. Mol. Aspects Med. 34, 1–11, https://doi.org/10.1016/j.mam.2012.10.001 (2013). Lim, S. & Meigs, J. B. Ectopic fat and cardiometabolic and vascular risk. Int. J. Cardiol. 169, 166–176, https://doi.org/10.1016/j.ijcard.2013.08.077 (2013). Patel, P. & Abate, N. Role of subcutaneous adipose tissue in the pathogenesis of insulin resistance. J. Obes. 2013, 489187, https://doi.org/10.1155/2013/489187 (2013). Porter, S. A. et al. Abdominal subcutaneous adipose tissue: a protective fat depot? Diabetes Care 32, 1068–1075, https://doi.org/10.2337/dc08-2280 (2009). Stanford, K. I., Middelbeek, R. J. & Goodyear, L. J. Exercise Effects on White Adipose Tissue: Beiging and Metabolic Adaptations. Diabetes 64, 2361–2368, https://doi.org/10.2337/db15-0227 (2015). MacLean, P. S., Higgins, J. A., Giles, E. D., Sherk, V. D. & Jackman, M. R. The role for adipose tissue in weight regain after weight loss. Obes. Rev. 16(Suppl 1), 45–54, https://doi.org/10.1111/obr.12255 (2015). Trevellin, E. et al. Exercise training induces mitochondrial biogenesis and glucose uptake in subcutaneous adipose tissue through eNOS-dependent mechanisms. Diabetes 63, 2800–2811, https://doi.org/10.2337/db13-1234 (2014). Zderic, T. W. & Hamilton, M. T. Identification of hemostatic genes expressed in human and rat leg muscles and a novel gene (LPP1/PAP2A) suppressed during prolonged physical inactivity (sitting). Lipids Health Dis. 11, 137, https://doi.org/10.1186/1476-511X-11-137 (2012). Morey, J. S., Ryan, J. C. & Van Dolah, F. M. Microarray validation: factors influencing correlation between oligonucleotide microarrays and real-time PCR. Biol. Proced. Online 8, 175–193, https://doi.org/10.1251/bpo126 (2006). Reiner, A., Yekutieli, D. & Benjamini, Y. Identifying differentially expressed genes using false discovery rate controlling procedures. Bioinformatics 19, 368–375 (2003). Plaisier, S. B., Taschereau, R., Wong, J. A. & Graeber, T. G. Rank-rank hypergeometric overlap: identification of statistically significant overlap between gene-expression signatures. Nucleic Acids Res. 38, e169, https://doi.org/10.1093/nar/gkq636 (2010). Subramanian, A. et al. Gene set enrichment analysis: a knowledge-based approach for interpreting genome-wide expression profiles. Proc. Natl. Acad. Sci. 
USA 102, 15545–15550, https://doi.org/10.1073/pnas.0506580102 (2005). Thompson, D., Karpe, F., Lafontan, M. & Frayn, K. Physical activity and exercise in the regulation of human adipose tissue physiology. Physiol. Rev. 92, 157–191, https://doi.org/10.1152/physrev.00012.2011 (2012). van Loon, L. J., Greenhaff, P. L., Constantin-Teodosiu, D., Saris, W. H. & Wagenmakers, A. J. The effects of increasing exercise intensity on muscle fuel utilisation in humans. J. Physiol. 536, 295–304 (2001). Romijn, J. A. et al. Regulation of endogenous fat and carbohydrate metabolism in relation to exercise intensity and duration. Am. J. Physiol. 265, E380–391, https://doi.org/10.1152/ajpendo.1993.265.3.E380 (1993). Reddy, J. K. & Mannaerts, G. P. Peroxisomal lipid metabolism. Annu. Rev. Nutr. 14, 343–370, https://doi.org/10.1146/annurev.nu.14.070194.002015 (1994). Satyanarayana, A., Klarmann, K. D., Gavrilova, O. & Keller, J. R. Ablation of the transcriptional regulator Id1 enhances energy expenditure, increases insulin sensitivity, and protects against age and diet induced insulin resistance, and hepatosteatosis. FASEB J. 26, 309–323, https://doi.org/10.1096/fj.11-190892 (2012). Zhao, Y. et al. Up-regulation of the Sirtuin 1 (Sirt1) and peroxisome proliferator-activated receptor gamma coactivator-1alpha (PGC-1alpha) genes in white adipose tissue of Id1 protein-deficient mice: implications in the protection against diet and age-induced glucose intolerance. J. Biol. Chem. 289, 29112–29122, https://doi.org/10.1074/jbc.M114.571679 (2014). Elizalde, M. et al. Expression of nitric oxide synthases in subcutaneous adipose tissue of nonobese and obese humans. J. Lipid Res. 41, 1244–1251 (2000). Engeli, S. et al. Dissociation between adipose nitric oxide synthase expression and tissue metabolism. J. Clin. Endocrinol. Metab. 92, 2706–2711, https://doi.org/10.1210/jc.2007-0234 (2007). Chen, X. W., Leto, D., Chiang, S. H., Wang, Q. & Saltiel, A. R. Activation of RalA is required for insulin-stimulated Glut4 trafficking to the plasma membrane via the exocyst and the motor protein Myo1c. Dev. Cell 13, 391–404, https://doi.org/10.1016/j.devcel.2007.07.007 (2007). Tanaka, T. et al. Nitric oxide stimulates glucose transport through insulin-independent GLUT4 translocation in 3T3-L1 adipocytes. Eur. J. Endocrinol. 149, 61–67 (2003). Huo, Y. et al. Disruption of inducible 6-phosphofructo-2-kinase ameliorates diet-induced adiposity but exacerbates systemic insulin resistance and adipose tissue inflammatory response. J. Biol. Chem. 285, 3713–3721, https://doi.org/10.1074/jbc.M109.058446 (2010). Trefely, S. et al. Kinome Screen Identifies PFKFB3 and Glucose Metabolism as Important Regulators of the Insulin/Insulin-like Growth Factor (IGF)-1 Signaling Pathway. J. Biol. Chem. 290, 25834–25846, https://doi.org/10.1074/jbc.M115.658815 (2015). Hotamisligil, G. S. Inflammation and metabolic disorders. Nature 444, 860–867, https://doi.org/10.1038/nature05485 (2006). ADS CAS Article PubMed Google Scholar Moldoveanu, A. I., Shephard, R. J. & Shek, P. N. The cytokine response to physical activity and training. Sports Med. 31, 115–144 (2001). Peake, J. M., Neubauer, O., Walsh, N. P. & Simpson, R. J. Recovery of the immune system after exercise. J Appl Physiol (1985) 122, 1077–1087, https://doi.org/10.1152/japplphysiol.00622.2016 (2017). Hotamisligil, G. S., Arner, P., Caro, J. F., Atkinson, R. L. & Spiegelman, B. M. Increased adipose tissue expression of tumor necrosis factor-alpha in human obesity and insulin resistance. J. Clin. Invest. 
95, 2409–2415, https://doi.org/10.1172/JCI117936 (1995). Hotamisligil, G. S., Shargill, N. S. & Spiegelman, B. M. Adipose expression of tumor necrosis factor-alpha: direct role in obesity-linked insulin resistance. Science 259, 87–91 (1993). ADS CAS Article Google Scholar Ouchi, N., Parker, J. L., Lugus, J. J. & Walsh, K. Adipokines in inflammation and metabolic disease. Nat. Rev. Immunol. 11, 85–97, https://doi.org/10.1038/nri2921 (2011). Krogh-Madsen, R., Plomgaard, P., Moller, K., Mittendorfer, B. & Pedersen, B. K. Influence of TNF-alpha and IL-6 infusions on insulin sensitivity and expression of IL-18 in humans. Am. J. Physiol. Endocrinol. Metab. 291, E108–114, https://doi.org/10.1152/ajpendo.00471.2005 (2006). Uysal, K. T., Wiesbrock, S. M., Marino, M. W. & Hotamisligil, G. S. Protection from obesity-induced insulin resistance in mice lacking TNF-alpha function. Nature 389, 610–614, https://doi.org/10.1038/39335 (1997). So, T. & Croft, M. Regulation of PI-3-Kinase and Akt Signaling in T Lymphocytes and Other Cells by TNFR FamilyMolecules. Front. Immunol. 4, 139, https://doi.org/10.3389/fimmu.2013.00139 (2013). Baker, R. G., Hayden, M. S. & Ghosh, S. NF-kappaB, inflammation, and metabolic disease. Cell Metab. 13, 11–22, https://doi.org/10.1016/j.cmet.2010.12.008 (2011). Goldfine, A. B. et al. The effects of salsalate on glycemic control in patients with type 2 diabetes: a randomized trial. Ann. Intern. Med. 152, 346–357, https://doi.org/10.7326/0003-4819-152-6-201003160-00004 (2010). Johnson, A. M. F., Hou, S. & Li, P. Inflammation and insulin resistance: New targets encourage new thinking: Galectin-3 and LTB4 are pro-inflammatory molecules that can be targeted to restore insulin sensitivity. Bioessays 39, https://doi.org/10.1002/bies.201700036 (2017). Esmaili, S. & George, J. Ltb4r1 inhibitor: A pivotal insulin sensitizer? Trends Endocrinol. Metab. 26, 221–222, https://doi.org/10.1016/j.tem.2015.03.007 (2015). Ying, W. et al. Adipose tissue B2 cells promote insulin resistance through leukotriene LTB4/LTB4R1 signaling. J. Clin. Invest. 127, 1019–1030, https://doi.org/10.1172/JCI90350 (2017). Bi, P. et al. Inhibition of Notch signaling promotes browning of white adipose tissue and ameliorates obesity. Nat. Med. 20, 911–918, https://doi.org/10.1038/nm.3615 (2014). Sandalio, L. M., Rodriguez-Serrano, M., Romero-Puertas, M. C. & del Rio, L. A. Role of peroxisomes as a source of reactive oxygen species (ROS) signaling molecules. Subcell. Biochem. 69, 231–255, https://doi.org/10.1007/978-94-007-6889-5_13 (2013). Wanders, R. J. & Waterham, H. R. Peroxisomal disorders: the single peroxisomal enzyme deficiencies. Biochim. Biophys. Acta 1763, 1707–1720, https://doi.org/10.1016/j.bbamcr.2006.08.010 (2006). Moxon, J. V. et al. Baseline serum phosphatidylcholine plasmalogen concentrations are inversely associated with incident myocardial infarction in patients with mixed peripheral artery disease presentations. Atherosclerosis 263, 301–308, https://doi.org/10.1016/j.atherosclerosis.2017.06.925 (2017). Rasmiena, A. A. et al. Plasmalogen modulation attenuates atherosclerosis in ApoE- and ApoE/GPx1-deficient mice. Atherosclerosis 243, 598–608, https://doi.org/10.1016/j.atherosclerosis.2015.10.096 (2015). Lopez-Mejia, I. C., Castillo-Armengol, J., Lagarrigue, S. & Fajas, L. Role of cell cycle regulators in adipose tissue and whole body energy homeostasis. Cell. Mol. Life Sci. 75, 975–987, https://doi.org/10.1007/s00018-017-2668-9 (2018). Chavey, C., Lagarrigue, S., Annicotte, J.-S. & Fajas, L. 
In Physiology and Physiopathology of Adipose Tissue (eds Bastard, J. P. & Fève, B.) Ch. 2, 17–25 (Springer-Verlag, 2013). Manteiga, S., Choi, K., Jayaraman, A. & Lee, K. Systems biology of adipose tissue metabolism: regulation of growth, signaling and inflammation. Wiley Interdiscip. Rev. Syst. Biol. Med. 5, 425–447, https://doi.org/10.1002/wsbm.1213 (2013). Zeve, D. et al. Wnt signaling activation in adipose progenitors promotes insulin-independent muscle glucose uptake. Cell Metab. 15, 492–504, https://doi.org/10.1016/j.cmet.2012.03.010 (2012). Gauger, K. J. et al. Mice deficient in Sfrp1 exhibit increased adiposity, dysregulated glucose metabolism, and enhanced macrophage infiltration. PLoS One 8, e78320, https://doi.org/10.1371/journal.pone.0078320 (2013). Sherwood, V. WNT signaling: an emerging mediator of cancer cell metabolism? Mol. Cell. Biol. 35, 2–10, https://doi.org/10.1128/MCB.00992-14 (2015). Ramirez-Campillo, R. et al. Regional fat changes induced by localized muscle endurance resistance training. J. Strength Cond. Res. 27, 2219–2224, https://doi.org/10.1519/JSC.0b013e31827e8681 (2013). Ferrante, A. W. Jr. Macrophages, fat, and the emergence of immunometabolism. J. Clin. Invest. 123, 4992–4993, https://doi.org/10.1172/JCI73658 (2013). This work was supported by a National Health and Medical Research Council (NHMRC) of Australia Project Grant [NHMRC #540107] and an NHMRC Program Grant [NHMRC #569940]. This work was also partially supported by an OIS grant from the Victorian State Government and a NHMRC Centre of Research Excellence Grant [NHMRC # 1041056]. Professors Kingwell, Owen, Inouye and Dunstan are NHMRC Research Fellows. Baker Heart & Diabetes Institute, Melbourne, Australia Megan S. Grace, Melissa F. Formosa, Kiymet Bozaoglu, Marta Brozynska, Andrew L. Carey, Camilla Bertuzzo Veiga, Parneet Sethi, Francis Dillon, David A. Bertovic, Michael Inouye, Neville Owen, David W. Dunstan & Bronwyn A. Kingwell Murdoch Children's Research Institute, and Department of Paediatrics, University of Melbourne, Parkville, VIC, Australia Kiymet Bozaoglu Division of Endocrinology, Metabolism, and Diabetes and Anschutz Health and Wellness Center, University of Colorado, School of Medicine, Aurora, Colorado, USA Institut Pluridisciplinaire Hubert Curien, Université de Strasbourg, CNRS, Strasbourg, France UMR 7178 Centre National de la Recherche scientifique (CNRS), Strasbourg, France Department of Public Health and Primary Care, University of Cambridge, Cambridge, CB1 8RN, United Kingdom Marta Brozynska & Michael Inouye Swinburne University of Technology, Melbourne, Australia Neville Owen Mary MacKillop Institute for Health Research, Australian Catholic University, Melbourne, Australia David W. Dunstan Megan S. Grace Melissa F. Formosa Marta Brozynska Andrew L. Carey Camilla Bertuzzo Veiga Parneet Sethi Francis Dillon David A. Bertovic Michael Inouye Bronwyn A. Kingwell Study conception (M.G., D.D. and B.K.), Data collection (M.F., K.B., A.C. and D.B.) data analysis (M.G., K.B., P.S., F.D., M.B. and M.I.), manuscript drafting (M.G. and B.K.), preparation of figures (C.B.V.), critical manuscript revision (M.G., N.O., D.D., K.B., A.B., A.C., M.B., M.I. and B.K.). Correspondence to Bronwyn A. Kingwell. The authors declare no competing interests. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. 
Supplementary Dataset 1

Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).

Grace, M.S., Formosa, M.F., Bozaoglu, K. et al. Acute effects of active breaks during prolonged sitting on subcutaneous adipose tissue gene expression: an ancillary analysis of a randomised controlled trial. Sci Rep 9, 3847 (2019). https://doi.org/10.1038/s41598-019-40490-0
Rigid chains admitting many embeddings
Authors: M. Droste and J. K. Truss
Journal: Proc. Amer. Math. Soc. 129 (2001), 1601-1608
MSC (2000): Primary 06A05
DOI: https://doi.org/10.1090/S0002-9939-00-05702-6
Published electronically: October 31, 2000

Abstract: A chain (linearly ordered set) is rigid if it has no non-trivial automorphisms. The construction of dense rigid chains was carried out by Dushnik and Miller for subsets of $\mathbb {R}$, and there is a rather different construction of dense rigid chains of cardinality $\kappa$, an uncountable regular cardinal, using stationary sets as 'codes', which was adapted by Droste to show the existence of rigid measurable spaces. Here we examine the possibility that, nevertheless, there could be many order-embeddings of the chain, in the sense that the whole chain can be embedded into any interval. In the case of subsets of $\mathbb {R}$, an argument involving Baire category is used to modify the original one. For uncountable regular cardinals, a more complicated version of the corresponding argument is used, in which the stationary sets are replaced by sequences of stationary sets, and the chain is built up using a tree. The construction is also adapted to the case of singular cardinals.

References:
Manfred Droste, The existence of rigid measurable spaces, Topology Appl. 31 (1989), no. 2, 187–195. MR 994410, DOI https://doi.org/10.1016/0166-8641%2889%2990081-3
Manfred Droste, Super-rigid families of strongly Blackwell spaces, Proc. Amer. Math. Soc. 103 (1988), no. 3, 803–808. MR 947662, DOI https://doi.org/10.1090/S0002-9939-1988-0947662-2
M. Droste, D. Kuske, R. McKenzie, R. Pöschel, Complementary closed relational clones are not always Krasner clones, Algebra Universalis, to appear.
Manfred Dugas and Rüdiger Göbel, Applications of abelian groups and model theory to algebraic structures, Infinite groups 1994 (Ravello), de Gruyter, Berlin, 1996, pp. 41–62. MR 1477163
B. Dushnik and E. W. Miller, Concerning similarity transformations of linearly ordered sets, Bull. Amer. Math. Soc. 46 (1940), 322–326.
A. M. W. Glass, Ordered permutation groups, London Mathematical Society Lecture Note Series, vol. 55, Cambridge University Press, Cambridge-New York, 1981. MR 645351
Joseph G. Rosenstein, Linear orderings, Pure and Applied Mathematics, vol. 98, Academic Press, Inc., New York-London, 1982. MR 662564
Robert M. Solovay, Real-valued measurable cardinals, Axiomatic set theory (Proc. Sympos. Pure Math., Vol. XIII, Part I, Univ. California, Los Angeles, Calif., 1967), Amer. Math. Soc., Providence, R.I., 1971, pp. 397–428. MR 0290961
M. Droste. Affiliation: Institut für Algebra, Technische Universität Dresden, D-01062 Dresden, Germany. Email: [email protected]
J. K. Truss. Affiliation: Department of Pure Mathematics, University of Leeds, Leeds LS2 9JT, England. Email: [email protected]
Keywords: Chain, linearly ordered set, rigid, embedding, meagre, stationary
Received by editor(s): July 7, 1999
Received by editor(s) in revised form: September 15, 1999
Additional Notes: Research supported by a grant from the British-German Academic Collaboration Programme.
Communicated by: Alan Dow
Adversarial attacks on fingerprint liveness detection
Jianwei Fei1, Zhihua Xia ORCID: orcid.org/0000-0001-6860-647X1, Peipeng Yu1 & Fengjun Xiao2
EURASIP Journal on Image and Video Processing volume 2020, Article number: 1 (2020)

Deep neural networks are vulnerable to adversarial samples, posing potential threats to applications deployed with deep learning models in practical conditions. A typical example is the fingerprint liveness detection module in fingerprint authentication systems. Inspired by the great progress of deep learning, deep network-based fingerprint liveness detection algorithms have sprung up and now dominate the field. Thus, in this paper we investigate the feasibility of deceiving state-of-the-art deep network-based fingerprint liveness detection schemes by leveraging this property. Extensive evaluations are made with three existing adversarial methods: FGSM, MI-FGSM, and Deepfool. We also propose an adversarial attack method that enhances the robustness of adversarial fingerprint images to various transformations such as rotations and flips. We demonstrate that these outstanding schemes are likely to classify fake fingerprints as live fingerprints when tiny perturbations are added, even without internal details of the model used. The experimental results reveal a big loophole and threats for these schemes from a security point of view, and attention urgently needs to be paid to anti-adversarial measures, not only in fingerprint liveness detection but in all deep learning applications.

The rapid growth of deep learning, and in particular convolutional neural networks (CNNs), brings new solutions to many problems in computer vision, big data [1], and security [2]. These breakthroughs are gradually being put to use in various practical applications such as face identification [3,4,5], pedestrian detection [6, 7], and unmanned vehicles [8, 9]. While deep networks have seen phenomenal success in many domains, Szegedy et al. [10] first demonstrated that by intentionally adding certain tiny perturbations, an image can remain indistinguishable from the original yet cause networks to misclassify it as another class instead of the original prediction. This is called an adversarial attack, and the perturbed image is the so-called adversarial sample. Part of their results is shown in Fig. 1. Interestingly, we notice that the perturbation images show some similarity to encrypted images [12,13,14,15,16], but the former are magnified noise while the latter are sophisticatedly designed encrypted files. Recent researchers have created several methods to craft adversarial samples, which vary greatly in terms of perturbation degree, number of perturbed pixels, and computational complexity.

Fig. 1. a Adversarial samples generated with AlexNet [11] in [10]. The left column shows the correctly predicted samples, and the middle column is the magnified value of the perturbations. The adversarial samples and target labels are shown in the rightmost column. b Fake fingerprints made from different materials and cheating an authentication system or unlocking smartphones with them.

There are several ways to categorize adversarial attacks, depending on how much the attacker knows about the target model and on whether the misclassified label is specified. Generating adversarial samples with access to the architecture and parameters of the target model is referred to as a white-box attack, and without them as a black-box attack.
For an image, if the attack is required not only to succeed but also to make the generated adversarial sample be classified as a specific class, it is called a targeted attack, and otherwise an untargeted attack. Generating adversarial samples is a constrained optimization problem. Given a clean image and a fixed classifier that originally makes the correct classification, our goal is to make the classifier misclassify the clean image. Note that the prediction can be regarded as a function of the clean image, given a classifier whose parameters are fixed. Thus, general adversarial attack methods compute gradients with respect to the clean image that make the prediction deviate from the original result, and modify the clean image accordingly. Since Szegedy et al. [10] explored this property, and with many efficient, robust attack methods being crafted continuously, a potential security threat for practical deep learning applications has come into view. For instance, face recognition systems using CNNs also show vulnerability to adversarial samples [17,18,19]. Such biometric information is usually used for sensitive purposes or in scenarios requiring high security, especially fingerprints, which are unique to each individual. Considering this, we extend similar work to another application, fingerprint liveness detection; to our knowledge, we are the first to introduce adversarial attacks into this area. A fingerprint liveness detection module is commonly deployed in fingerprint authentication systems. This technology aims to distinguish whether the fingerprint is an alive part of a person or a fake one forged with silicone, etc. It is in general divided into hardware- and software-based approaches, depending on whether additional sensors are required. The latter can be easily integrated into most systems and has therefore received more attention; it can be further classified as feature-based and deep learning-based. Among them, deep learning-based solutions have attracted rising interest in recent years thanks to the rise of deep learning. Although they reach much more outstanding performance than feature-based solutions, the vulnerability of CNNs leaves a potential risk. That is, a correctly classified fake fingerprint can pass through the detection module by presenting its adversarial sample. Even if attackers cannot successfully cheat a fingerprint recognition system with a fake fingerprint, they may still defeat the system by supplying an adversarial fingerprint image. In this paper, we thoroughly evaluate the robustness of several state-of-the-art fingerprint liveness detection models against both white-box and black-box attacks in various settings and demonstrate the vulnerability of these models. In our paper, we successfully attack deep learning-based fingerprint liveness detection methods, including the state-of-the-art one, by adversarial attack technology. Sufficient experiments show that once these methods are open source, for almost any fake fingerprint, a malicious party can craft an adversarial sample that poses as a live one and cheats the detection algorithms. Our work also shows that even if the details of these detection algorithms are unknown, there is still a definite possibility of realizing this attack. We also propose an enhanced adversarial attack method to generate adversarial samples that are more robust to various transformations and achieve a higher attack success rate compared to other advanced methods.
In this section, we will review the development of adversarial attack methods and of deep learning-based fingerprint liveness detection models. On the basis of current knowledge, deep neural networks achieve high performance on tasks in computer vision and natural language processing because they can characterize arbitrary continuous functions with a huge number of cascaded nonlinear steps. But as the result is automatically computed by backpropagation via supervised learning, it can be difficult to interpret and can have counterintuitive properties. And with the increasing usage of deep neural networks in the physical world, these properties may be exploited for malicious behavior. Szegedy et al. [10] first revealed that adding a certain hardly perceptible perturbation that increases the prediction error can cause networks to misclassify an image. They also found that this property is not affected by the structure and dimensionality of networks or the data distribution, and even more, the same perturbation can cause misclassifications on different networks for the same original input image. They proposed an equation that searches for the smallest perturbation to be added to cause misclassification:

$$ \min\ {\left\Vert p\right\Vert}_2\quad \text{s.t.}\quad f\left({X}_c+p\right)={y}_{\text{target}};\ {X}_c+p\in \left[0,1\right] \qquad (1) $$

This is a hard problem, hence the authors approximated it using box-constrained L-BFGS [20], turning it into a convex optimization process. This is accomplished by searching for the minimum c > 0 for which the minimizer p of the following problem satisfies f(Xc + p) = ytarget:

$$ \min\ c\left|p\right| +{\text{Loss}}_f\left({X}_c+p,{y}_{\text{target}}\right)\quad \text{s.t.}\quad {X}_c+p\in \left[0,1\right] \qquad (2) $$

As shown in Fig. 1a, by solving this optimization problem, we can compute the perturbation that must be added to a clean image to successfully fool a model, while the adversarial image and the original image remain hardly distinguishable to humans. It was also observed that a considerable number of adversarial examples are misclassified by different networks as well, namely, cross-model generalization. These astonishing discoveries aroused strong interest among researchers in adversarial attacks on computer vision and gave birth to related competitions [21, 22]. In ICLR 2015, Goodfellow et al. [23] proposed a method referred to as the Fast Gradient Sign Method (FGSM) to efficiently compute the perturbation by solving the following problem:

$$ p=\varepsilon\, \operatorname{sign}\left(\nabla J\left(\theta, {X}_c,{y}_{\text{target}}\right)\right) \qquad (3) $$

where ∇J(…) computes the gradient of the cost function of the model with parameters θ w.r.t. Xc, and ε denotes a small coefficient that restricts the infinity norm of the perturbation. They successfully caused a misclassification rate of 99.9% on a shallow softmax classifier trained on MNIST with ε = 0.25, and 87.15% on a convolutional maxout network trained on CIFAR-10 with ε = 0.1. Miyato et al. [24] then normalized the computed perturbation with the L2-norm on this basis. FGSM and its variants are classic one-shot methods that generate an adversarial sample with one step only. Later, in 2017, Kurakin et al. [25] developed an iterative method that takes multiple steps to increase the loss function, namely the Basic Iterative Method (BIM).
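To make the one-step attack concrete, the following is a minimal PyTorch sketch of an untargeted FGSM step in the spirit of equation (3); `model`, `x`, and `label` are assumed placeholders rather than the code or framework used in any of the cited works.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, label, eps=0.03):
    # Untargeted FGSM: move the input in the direction that increases the loss.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), label)
    loss.backward()
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```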
Their approach greatly reduces the size of the perturbation needed to generate an adversarial sample and poses a serious threat to deep architectures such as Inception-v3 [26]. Similarly, Moosavi-Dezfooli et al. [27] proposed Deepfool, which also computes the minimum perturbation iteratively. This algorithm disturbs the image with a small vector, pushing the clean image, initially confined within the decision boundary, out of the boundary step by step until misclassification occurs. Dong et al. [28] introduced momentum into FGSM; in their approach, not only is the current gradient computed at every iteration, but the gradient of the previous iteration is also added, and a decay factor is used to control the influence of the previous gradient. This Momentum Iterative Method (MIM) greatly improves cross-model generalization and black-box attack success rates; their team won first prize in the NIPS 2017 Non-targeted Adversarial Attack and Targeted Adversarial Attack competitions [21]. The above methods all compute the perturbation by solving a gradient-related problem, usually requiring direct access to the target model. To realize a more robust black-box attack, Su et al. [29] proposed the One Pixel Attack, which searches by differential evolution for the perturbation that causes misclassification with the highest confidence, instead of computing gradients. This method places no restraint on the perturbation size; instead, it limits the number of perturbed pixels. With the development of adversarial attack technology, some scholars began to conduct research on attacking real-world systems embedded with deep learning algorithms. Kurakin et al. [25] first proved that the threat of adversarial attacks also exists in the real world. They printed adversarial images and took snapshots with smartphones. The results show that even when captured by a camera, a relatively large portion of adversarial images are still misclassified. Kevin et al. [30] designed Robust Physical Perturbations (RP2), which only perturbs the target objects in the physical world, such as guideposts, and keeps the background unchanged. For instance, sticking several black and white stickers on a stop sign according to RP2's result could prevent YOLO and Faster-RCNN from detecting it correctly. Bose et al. [31] also successfully attack Faster-RCNN with adversarial examples crafted by their proposed adversarial generator network, which solves a constrained optimization problem. In addition to face localization, another key problem in face recognition is liveness detection. Biometrics like faces are usually applied in systems with high-security requirements, thus such systems are usually accompanied by a liveness detection module to detect whether a captured face image comes from a live person or from a photo. We note that fingerprint identification systems also require liveness detection to distinguish live fingers from fake ones [32], and as more and more fingerprint liveness detection algorithms based on deep learning are developed, adversarial attacks have become a potential risk in this domain as well. To our knowledge, Nogueira et al. [33] first detected fake fingerprints using CNNs; later, in [34], they fine-tuned the fully connected layers of VGG and Alexnet with fingerprint datasets, leaving the preceding convolutional and pooling layers unchanged. This work reached astonishing performance compared to feature-based approaches in fingerprint liveness detection. Chugh et al. [35] cut fingerprint patches centered on pre-extracted minutiae and trained a Mobilenet-v1 on them.
Their results were state-of-the-art when we began this work. In the literature, Kim et al. [36] proposed a detection algorithm based on a deep belief network (DBN) constructed layer by layer from restricted Boltzmann machines (RBMs). Nguyen et al. [37] regarded the fingerprint as a global texture feature and designed an end-to-end model following this idea; their experimental results show that networks designed around the inherent characteristics of fingerprints can achieve better performance. Pala et al. [38] constructed a triplet dataset to train their network, where each triplet consists of the fingerprint to be detected, a fingerprint of the same class, and a fingerprint of the other class. This data structure imposes a constraint that minimizes within-class distance while maximizing between-class distance. It is noteworthy that all of these methods are based on CNNs and achieve very competitive performance.

Networks to be attacked

VGG19 and Alexnet-based method

In this section, we briefly introduce the target networks we attempt to attack, including their specific structures and training processes. Before conducting adversarial attacks on state-of-the-art fingerprint liveness detection networks, we first evaluate the finetuned VGG and Alexnet of [34]. Finetuning classical models for new tasks remains a widely used practice; although these models are somewhat dated, they have stood the test of time and more advanced models derive from them. Equally thorough experiments are also carried out on [35]. Following Nogueira's method in [34], both models are finetuned with stochastic gradient descent (SGD) using a batch size of 5, momentum [39] of 0.9, and a learning rate fixed at 1E−6. In both models, the output fully connected layer is replaced by 2 units (1024 in the original networks), as shown in Fig. 2. To keep the figure concise but intuitive, the feature map sizes are not drawn to scale and pooling operations are represented by their shrinkage. In pre-processing, the training set is augmented in a manner similar to [11]: for each fingerprint image, patches covering 80% of each dimension are cut from the four corners and the center, giving five patches, and horizontally reflected versions of them are added. The training set therefore becomes 10 times larger than the original. During the testing phase, the same procedure is applied to the test set, and the predictions of the 10 patches are fused into the final classification result for a single fingerprint image. (Fig. 2: the upper network is Alexnet and the lower is VGG19.)

Mobilenet-v1-based method

Chugh's method also uses an existing architecture, Mobilenet-v1, but trains it from scratch. The last layer is likewise replaced by a 2-unit softmax layer. In pre-processing, minutiae are extracted for each fingerprint image using the algorithm from [40]; a minutia is a key point in a fingerprint image, for instance a ridge ending, a short or independent ridge, or a circle in the ridge pattern. Each extracted minutia provides x, y coordinates and a direction. Patches centered on these coordinates are then cut out and aligned according to the directions so that smaller patches can be extracted. All of the patches are used to train a Mobilenet-v1, and the final result is a fusion of all the patches' scores (Fig. 3).
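As a rough illustration of the minutiae-centered pipeline just described, the sketch below crops fixed-size patches around pre-extracted minutiae, scores each patch with a trained two-class network, and averages the "live" probabilities; the patch size, the single-channel input, the averaging fusion, and the omission of the direction-based alignment step are simplifying assumptions, not details of the original implementation.

```python
import torch

def liveness_score(model, fingerprint, minutiae, patch_size=96):
    """Fuse per-patch liveness scores for one fingerprint image.

    fingerprint -- tensor of shape (1, H, W), grayscale image in [0, 1], H and W >= patch_size
    minutiae    -- non-empty list of (x, y) coordinates from an external minutiae detector
    Returns the mean probability of the 'live' class over all patches.
    """
    _, H, W = fingerprint.shape
    half = patch_size // 2
    scores = []
    model.eval()
    with torch.no_grad():
        for x, y in minutiae:
            # Clamp the crop window so patches near the border stay inside the image.
            top = min(max(y - half, 0), H - patch_size)
            left = min(max(x - half, 0), W - patch_size)
            patch = fingerprint[:, top:top + patch_size, left:left + patch_size]
            # Assumes the network accepts a single-channel patch and outputs [spoof, live] logits.
            logits = model(patch.unsqueeze(0))
            scores.append(torch.softmax(logits, dim=1)[0, 1])
    return torch.stack(scores).mean()
```

Averaging is only one simple fusion choice; the text states that the patch scores are fused but does not fix the rule.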
This series of operations is based on the observation that a fingerprint image has large blank areas surrounding the ridge region; directly resizing such images would lead to a serious loss of discriminative information. The noise introduced during the fingerprint forgery process provides salient cues for distinguishing a spoof fingerprint from a live one, so patches centered at minutiae maximize this difference. To our knowledge, this is the best fingerprint liveness detection method at present. (Fig. 3: flow chart of Chugh's method.)

Methods to generate samples

In this paper, we compare four algorithms in total with respect to success rate, visual impact, and robustness to transformations. FGSM is the first, basic adversarial algorithm we test, using Eq. (3), and its effectiveness is evaluated by adjusting ε. MI-FGSM is an upgraded version of FGSM used in this paper; the number of iterations T and the momentum degree μ are two additional hyperparameters to be controlled besides ε. We also evaluate Deepfool and test our own modified method based on MI-FGSM. Deepfool automatically computes the minimum perturbation without a fixed ε. It has been shown that iterative methods are stronger white-box adversaries than one-step methods, at the cost of worse transferability; our method is designed to retain transferability to a certain extent.

Deepfool

In our case, fingerprint liveness detection is treated as a binary classification problem, so the Deepfool algorithm for binary classifiers is used here. The authors assume \( \hat{k}(\boldsymbol{x}) = \operatorname{sign}(f(\boldsymbol{x})) \), where f is a binary image classification function, and derive a general algorithm applicable to any differentiable binary classifier f, namely an iterative process for estimating Δ(x; f). Specifically, f is linearized around the current point xi at each iteration i, and the minimal perturbation of the linearized f is computed through:
$$ \operatorname{argmin}\ \left\Vert r_i\right\Vert_2 \quad \text{subject to}\quad f(\boldsymbol{x}_i)+\nabla f(\boldsymbol{x}_i)^{T} r_i = 0 $$
The algorithm terminates when the sign of the classifier's output at xi changes or the maximum number of iterations is reached. The Deepfool algorithm for binary classifiers is summarized as follows.

Momentum iterative fast gradient sign method

The momentum iterative fast gradient sign method (MI-FGSM) upgrades the basic FGSM in two respects. I-FGSM applies multiple iterations with a small step size α, and MI-FGSM further introduces momentum [41]. The momentum method is a technique for accelerating and stabilizing the stochastic gradient descent algorithm: gradients from previous iterations are accumulated into the current gradient direction of the loss function and can be viewed as a velocity vector carried through every iteration. Dong et al. first applied momentum to the generation of adversarial samples and obtained substantial gains. MI-FGSM is summarized below (a minimal code sketch is also given below).

Transformation robust attack

During the experiments, we found that adversarial samples generated by these methods are not robust enough to image transformations such as resizing, horizontal flipping, and rotation.
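Before the transformation-robust variant is described in detail, the following sketch shows the MI-FGSM loop outlined above; the optional `noise_std` and `max_rot_deg` arguments roughly indicate where the transformation-robust attack would inject Gaussian noise and a small random rotation at each iteration. All names, the per-image L1 normalization, and the (N, C, H, W) input layout are assumptions, not code from the paper.

```python
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def mi_fgsm(model, x, y_true, epsilon=0.12, iterations=10, decay=0.5,
            noise_std=0.0, max_rot_deg=0.0):
    """Momentum Iterative FGSM (untargeted) on a batch x of shape (N, C, H, W) in [0, 1].

    With noise_std > 0 and max_rot_deg > 0 this becomes a rough sketch of the
    transformation-robust variant: the running sample is perturbed by small Gaussian
    noise and a random rotation at every iteration.
    """
    alpha = epsilon / iterations        # per-step size
    g = torch.zeros_like(x)             # accumulated (momentum) gradient
    x_adv = x.clone().detach()
    for _ in range(iterations):
        if noise_std > 0:
            x_adv = (x_adv + noise_std * torch.randn_like(x_adv)).clamp(0.0, 1.0)
        if max_rot_deg > 0:
            angle = float(torch.empty(1).uniform_(-max_rot_deg, max_rot_deg))
            x_adv = TF.rotate(x_adv, angle)
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y_true)
        grad, = torch.autograd.grad(loss, x_adv)
        # Normalize the current gradient per image (L1 norm), then add the decayed momentum term.
        g = decay * g + grad / grad.abs().sum(dim=(1, 2, 3), keepdim=True).clamp(min=1e-12)
        x_adv = (x_adv + alpha * g.sign()).clamp(0.0, 1.0).detach()
    return x_adv
```

With `noise_std = 0.1`, `max_rot_deg = 5`, and 20 iterations this roughly mirrors the settings reported later for the proposed method; with both extras at 0 it reduces to plain MI-FGSM.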
Such transformations, however, commonly occur in the physical world, and to generate adversarial samples that can successfully attack detection modules under such conditions, we have to take this demand into account. A heuristic and natural idea is to add slight Gaussian noise to disturb the sample at every iteration. By also randomly rotating the sample by a very small angle, we can improve its robustness to rotation and even its transferability to different models. Note that with the added noise, the global perturbation degree is larger than with the original MI-FGSM. In this section, we conduct different adversarial attacks on the above models; details are given in the following parts. In general, we compare the success rates of the different attack methods on the different models and, furthermore, evaluate their robustness to various transformations such as rotation and resizing. The fingerprint datasets used in this paper are from the Liveness Detection Competition (LivDet) of 2013 [42] and 2015 [43], namely LivDet2013 and LivDet2015 (Table 1). The earlier competition datasets are not used because of fingerprint image quality and overlapping data distributions; e.g., fake fingerprints made from the same materials and captured by the same sensors would probably give similar results. LivDet 2013 consists of fingerprint images captured by four different sensors. Each has approximately 2000 images of fake and real fingerprints respectively, and the real/fake ratio is equally distributed between the training and testing sets. The fake fingerprints are made from different materials: Gelatin, Latex, Eco Flex, Wood Glue, and Modasil. Although the image sizes range from 315 × 372 to 700 × 850 pixels depending on the sensor, all images were resized to the input dimensions of the models, which are 224 × 224 pixels for VGG and 227 × 227 pixels for Alexnet. (Table 1: Summary of liveness detection datasets used in our work.) We adjust ε in FGSM to control the perturbation degree; five values (0.03, 0.06, 0.09, 0.12, and 0.15) are tested on all three detection algorithms trained on LivDet2013 in a white-box manner. Since Deepfool automatically searches for the minimum perturbation, it does not restrict the perturbation degree; however, we limit the maximum number of iterations to 100 to keep the time consumption acceptable, and 100 is a moderate value that ensures most fingerprint images can be converted into adversarial samples. For MI-FGSM, we set ε = 0.12, iterations = 10, and decay factor = 0.5 according to the existing literature and our preliminary tests. Our method originates from MI-FGSM, so we apply similar settings but raise the number of iterations to 20. The added noise follows a Gaussian distribution with standard deviation 0.1 and mean 0, and the angle of random rotation is drawn from between −5° and 5°. To evaluate the feasibility of black-box attacks, we trained our own detection models. We first consider two models, one shallow with several layers and the other much deeper. The shallow one consists of 4 convolutional layers with 3 × 3 kernels and stride 2, so no pooling layers are involved. Each layer has twice as many channels as the previous one, starting with 32 channels in the first layer.
The deeper one consists of 5 blocks, each containing 3 convolutional layers and batch normalization layers; the number of kernels doubles from one block to the next and is constant within a block, starting at 32 in the first block. In addition to the models above, we further train two ensemble models with three branches each: one shallow and one relatively deep. The branches differ from each other in kernel size, number of kernels, and pooling methods, an idea that originates from the inception module. The reason we set up the black-box attack models as ensembles is that a successful attack on a collection of models may improve the attack on a single model. This is a natural intuition and has been verified in our work. The specific structures of the above four models differ per dataset and were chosen via an extensive search. Finally, we prepared five kinds of transformations to study their influence. Resizing means that we enlarge the adversarial sample by 2× and restore it to its original size, which is approximately equivalent to adding very small noise, depending on the scaling method. We also horizontally flip the samples and rotate them by a random angle between −30° and 30°; the combinations of resize and flip and of resize and rotation are also considered. We first evaluate the original FGSM; the results are shown in Table 2. This one-step attack method does not produce a satisfactory effect on the target models in the white-box manner at a low perturbation degree. The table shows that at ε = 0.03, roughly half of the inputs can be turned into adversarial samples that lead to misclassifications. As ε increases, the ratio unsurprisingly rises and is nearly saturated at 0.15. We did not increase ε further because it is foreseeable that 100% is reachable with a large enough ε, and we consider this increase in success rate to come at the expense of larger perturbations. We also observe some other notable phenomena in the table. Generally, under the same ε, models with greater complexity (here, depth) are more robust to adversarial attacks even in the white-box setting. This may be because the dimensionality of a complex model is high and the learned decision boundary is complex as well. Another reasonable explanation is that as model complexity increases, its learning ability becomes stronger, so its adversarial samples are harder to craft. We also found that fingerprint images of higher resolution are always harder to turn into adversarial samples, as high resolution provides more discriminative details. (Table 2: Success rate of FGSM attacks with different ε in the white-box manner. Bio2013, Ita2013, and Cro2013 represent the Biometrika, ItalData, and Crossmatch datasets in LivDet2013, respectively.) An overall evaluation of the different attack methods in the white-box manner is given in Table 3. Here we set ε = 0.12 for FGSM, with other settings as described above. It shows that the iterative methods are generally much better than FGSM, even though MI-FGSM and our method also use ε = 0.12. It can also be observed that the attack success rate on the high-resolution dataset is slightly lower than on the lower-resolution dataset. In the white-box manner, our method achieves competitive results compared with the other iterative algorithms. (Table 3: Success rate of different methods in the white-box manner.
Gre2015, Bio2015, and Cro2015 represent the GreenBit, Biometrika, and CrossMatch datasets in LivDet2015, respectively.) To study the average perturbation degree of the adversarial samples generated by the different methods, we compute the "average robustness" as proposed in [27]. It is defined by
$$ \frac{1}{\left|N\right|}\sum_{\boldsymbol{x}\in N}\frac{\left\Vert \hat{\boldsymbol{r}}(\boldsymbol{x})\right\Vert_2}{\left\Vert \boldsymbol{x}\right\Vert_2} $$
where \( \hat{\boldsymbol{r}}(\boldsymbol{x}) \) is the perturbation computed by a given method and N denotes the dataset. This measures the average perturbation amplitude by averaging, over all adversarial samples, the ratio of the perturbation vector's norm to that of the original image. We report in Table 4 the average robustness for each model and method. FGSM requires the largest perturbations to successfully generate an adversarial sample. Our method obtains results similar to Deepfool and MI-FGSM, with a much lower average perturbation degree. This is consistent with our previous observation that a deeper, more complicated network is more robust to adversarial samples and requires larger perturbations for a successful attack. It also shows that the magnitude of the disturbance caused by our method is acceptable and on the same level as other advanced methods. Moreover, TRA is not seriously affected by the complexity of the target model, and its average robustness is stable across different target models. (Table 4: Average robustness computed for different methods. For each model, we randomly pick 200 samples from GreenBit, Biometrika, and CrossMatch in LivDet2015, respectively, and compute their average robustness.) All the above experiments are white-box attacks; we conduct further experiments under black-box conditions. We first trained four models whose structures differ from each other and from the target models. Table 5 shows their performance in detecting fake fingerprints on Biometrika2013 and Biometrika2015. In Table 6, we report the black-box attack success rate with adversarial samples generated from our models, tested on Biometrika2013 and Biometrika2015 to further analyze the influence of image resolution on the attack success rate. The black-box attack success rate is much lower than the white-box one; however, with increasing depth, the success rate improves. Compared with a single CNN, the ensemble models, whether shallow or deep, both achieve considerable performance, and adversarial samples generated by them are more likely to realize black-box attacks. The influence of the target model's complexity on the attack success rate is more significant in this case: Mobilenet-v1 is the hardest to attack while Alexnet is easier. This part of the experiments also shows that higher-resolution fingerprint images provide more discriminative cues for models to learn better features and make the models more robust to adversarial samples. (Table 5: Error rate of different models on Biometrika2013 and Biometrika2015.) (Table 6: Black-box attacks with adversarial samples generated from different models by MI-FGSM and TRA.) For a more comprehensive assessment of the feasibility of attacking deep learning-based fingerprint liveness detection algorithms deployed in the physical world, we also compared our method and MI-FGSM in both white-box and black-box manners with various transformations applied to the adversarial samples.
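A sketch of how such a robustness check could be run: each transformation is applied to the finished adversarial samples and the fraction that still fools the target model is counted. The transformation set mirrors the one described earlier (2× resize and restore, horizontal flip, random rotation within ±30°, and the combinations); the torchvision calls assume a recent torchvision with tensor support, and all names are illustrative.

```python
import torch
import torchvision.transforms.functional as TF

def resize_and_restore(x):
    """Upsample a (N, C, H, W) batch by 2x and shrink back, approximating mild resampling noise."""
    _, _, h, w = x.shape
    return TF.resize(TF.resize(x, [2 * h, 2 * w]), [h, w])

def random_rotate(x, max_deg=30.0):
    angle = float(torch.empty(1).uniform_(-max_deg, max_deg))
    return TF.rotate(x, angle)

TRANSFORMS = {
    "resize":        resize_and_restore,
    "flip":          TF.hflip,
    "rotate":        random_rotate,
    "resize+flip":   lambda x: TF.hflip(resize_and_restore(x)),
    "resize+rotate": lambda x: random_rotate(resize_and_restore(x)),
}

def surviving_attack_rate(model, x_adv, y_true, transform):
    """Fraction of adversarial samples still misclassified after the transformation."""
    model.eval()
    with torch.no_grad():
        preds = model(transform(x_adv)).argmax(dim=1)
    return (preds != y_true).float().mean().item()
```

Here `y_true` is the ground-truth liveness label, so a lower surviving rate means the transformation has undone more of the attack; Table 7 reports results of this kind.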
Table 7 shows that even in the white-box setting, transformations still have a high probability of invalidating the adversarial samples: about half of the adversarial samples are classified correctly after rotation, and most of them become invalid after resizing and rotation. These transformations are even more destructive in black-box attacks; however, a small fraction of the adversarial samples generated by our method can survive. Our method surpasses MI-FGSM by a narrow margin in various situations, indicating that these detection algorithms may still be threatened in complex cases like this. (Table 7: Robustness to transformations of different adversarial attack methods; we randomly pick 300 samples from GreenBit, Biometrika, and CrossMatch in LivDet2015, respectively, and generate their corresponding adversarial samples to attack VGG19.) In this work, we provided extensive experimental evidence that cheating excellent deep learning-based fingerprint liveness detection schemes with adversarial samples is feasible. These detection networks can easily be broken by basic FGSM in the white-box manner at the cost of some perturbation. With more advanced methods such as Deepfool and MI-FGSM, almost any fingerprint image can be turned into an adversarial sample with more imperceptible changes. We note that adversarial samples generated by the above methods are not robust enough to transformations such as resizing, horizontal flipping, and rotation. Thus, we also proposed an algorithm that generates adversarial samples slightly more robust to various transformations by adding noise and random rotations during every iteration. These methods are evaluated on the LivDet2013 and LivDet2015 datasets. According to our results, a small part of the adversarial samples possesses transferability across different models, which indicates that it is also possible to cause misclassification under black-box scenarios. In terms of robustness to transformations, further evaluations demonstrate that the proposed method can also surpass the others slightly. These results highlight the potential risks of existing fingerprint liveness detection algorithms, and we hope our work will encourage researchers to design detection algorithms with innate adversarial robustness to achieve higher security. The datasets used and analyzed during the current study are available from the first author on reasonable request. Notation: Xc: an original clean image; x*: an adversarial sample; y_true: the label of the original clean image; y_target: the target label; p: perturbation; f: the classifier; f(Xc): classification result; J(…): loss function of the classifier; θ: parameters of the classifier; ε: the size of the perturbation; μ: decay factor. Y. Zheng, X. Xu, L. Qi, Deep CNN-assisted personalized recommendation over big data for mobile wireless networks. Wireless Communications and Mobile Computing 2019 (2019) Y. Zheng, J. Zhu, W. Fang, L.-H. Chi, Deep learning hash for wireless multimedia image content security. Security and Communication Networks 2018 (2018) H. Wang et al., in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Cosface: large margin cosine loss for deep face recognition (2018), pp. 5265–5274 K. Cao, Y. Rong, C. Li, X. Tang, C. Change Loy, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Pose-robust face recognition via deep residual equivariant mapping (2018), pp. 5187–5196 Y. Sun, D. Liang, X. Wang, X. Tang, Deepid3: Face recognition with very deep neural networks.
arXiv preprint arXiv 1502, 00873 (2015) Z. Zheng, X. Yang, Z. Yu, L. Zheng, Y. Yang, J. Kautz, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Joint discriminative and generative learning for person re-identification (2019), pp. 2138–2147 Y. Li, C. Huang, C.C. Loy, X. Tang, in European Conference on Computer Vision. Human attribute recognition by deep hierarchical contexts (Springer, 2016), pp. 684–700 P. Li, X. Chen, S. Shen, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Stereo r-cnn based 3d object detection for autonomous driving (2019), pp. 7644–7652 F. Codevilla, M. Miiller, A. López, V. Koltun, A. Dosovitskiy, in 2018 IEEE International Conference on Robotics and Automation (ICRA). End-to-end driving via conditional imitation learning (IEEE, 2018), pp. 1–9 C. Szegedy et al., Intriguing properties of neural networks. arXiv preprint arXiv 1312, 6199 (2013) A. Krizhevsky, I. Sutskever, G.E. Hinton, in Advances in neural information processing systems. Imagenet classification with deep convolutional neural networks (2012), pp. 1097–1105 Z. Xia, L. Jiang, D. Liu, L. Lu, B. Jeon, BOEW: a content-based image retrieval scheme using bag-of-encrypted-words in cloud computing. IEEE Transactions on Services Computing (2019) Z. Xia, L. Lu, T. Qiu, H. Shim, X. Chen, B. Jeon, A privacy-preserving image retrieval based on AC-coefficients and color histograms in cloud environment. Computers, Materials & Continua 58(1), 27–44 (2019) Z. Xia, L. Jiang, X. Ma, W. Yang, P. Ji, N. Xiong, A privacy-preserving outsourcing scheme for image local binary pattern in secure industrial internet of things. IEEE Transactions on Industrial Informatics (2019) Z. Xia, N.N. Xiong, A.V. Vasilakos, X. Sun, EPCBIR: an efficient and privacy-preserving content-based image retrieval scheme in cloud computing. Information Sciences 387, 195–204 (2017) Z. Xia, Y. Zhu, X. Sun, Z. Qin, K. Ren, Towards privacy-preserving content-based image retrieval in cloud computing. IEEE Transactions on Cloud Computing 6(1), 276–286 (2015) M. Sharif, S. Bhagavatula, L. Bauer, and M. K. Reiter, Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition, in Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security 2016, pp. 1528-1540: ACM. M. Sharif, S. Bhagavatula, L. Bauer, M.K. Reiter, Adversarial generative nets: Neural network attacks on state-of-the-art face recognition. arXiv preprint arXiv 1801, 00349 (2017) G. Goswami, N. Ratha, A. Agarwal, R. Singh, M. Vatsa, in Thirty-Second AAAI Conference on Artificial Intelligence. Unravelling robustness of deep learning based face recognition against adversarial attacks (2018) H. Tang, X. Qin, Practical methods of optimization (Dalian University of Technology Press, Dalian, 2004), pp. 138–149 A. Kurakin et al., in The NIPS'17 Competition: Building Intelligent Systems. Adversarial attacks and defences competition (Springer, 2018), pp. 195–231 W. Brendel et al., Adversarial vision challenge. arXiv preprint arXiv 1808, 01976 (2018) I.J. Goodfellow, J. Shlens, C. Szegedy, Explaining and harnessing adversarial examples. arXiv preprint arXiv 1412, 6572 (2014) T. Miyato, S.-i. Maeda, M. Koyama, S. Ishii, Virtual adversarial training: a regularization method for supervised and semi-supervised learning. IEEE transactions on pattern analysis and machine intelligence 41(8), 1979–1993 (2018) A. Kurakin, I. Goodfellow, S. Bengio, Adversarial examples in the physical world. 
arXiv preprint arXiv 1607, 02533 (2016) C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, Z. Wojna, in Proceedings of the IEEE conference on computer vision and pattern recognition. Rethinking the inception architecture for computer vision (2016), pp. 2818–2826 S.-M. Moosavi-Dezfooli, A. Fawzi, P. Frossard, in Proceedings of the IEEE conference on computer vision and pattern recognition. Deepfool: a simple and accurate method to fool deep neural networks (2016), pp. 2574–2582 Y. Dong et al., in Proceedings of the IEEE conference on computer vision and pattern recognition. Boosting adversarial attacks with momentum (2018), pp. 9185–9193 J. Su, D.V. Vargas, K. Sakurai, One pixel attack for fooling deep neural networks. IEEE Transactions on Evolutionary Computation (2019) K. Eykholt et al., Robust physical-world attacks on deep learning models, 2018. A.J. Bose, P. Aarabi, Adversarial attacks on face detectors using neural net based constrained optimization (2018) Z. Xia, C. Yuan, R. Lv, X. Sun, N.N. Xiong, Y.-Q. Shi, A novel weber local binary descriptor for fingerprint liveness detection, IEEE Transactions on Systems, Man, and Cybernetics: Systems (2018) R.F. Nogueira, R. de Alencar Lotufo, R.C. Machado, in 2014 IEEE workshop on biometric measurements and systems for security and medical applications (BIOMS) Proceedings. Evaluating software-based fingerprint liveness detection using convolutional networks and local binary patterns (IEEE, 2014), pp. 22–29 R.F. Nogueira, R. de Alencar Lotufo, R.C. Machado, Fingerprint liveness detection using convolutional neural networks. IEEE transactions on information forensics and security 11(6), 1206–1213 (2016) T. Chugh, K. Cao, A.K. Jain, Fingerprint spoof buster: Use of minutiae-centered patches. IEEE Transactions on Information Forensics and Security 13(9), 2190–2202 (2018) S. Kim, B. Park, B.S. Song, S. Yang, Deep belief network based statistical feature learning for fingerprint liveness detection ☆. Pattern Recognition Letters 77(C), 58–65 (2016) T. Nguyen, E. Park, X. Cui, V. Nguyen, H. Kim, fPADnet: small and efficient convolutional neural network for presentation attack detection. Sensors 18(8), 2532 (2018) F. Pala, B. Bhanu, in Deep Learning for Biometrics. Deep triplet embedding representations for liveness detection (Springer, 2017), pp. 287–307 I. Sutskever, J. Martens, G. Dahl, G. Hinton, in International conference on machine learning. On the importance of initialization and momentum in deep learning (2013), pp. 1139–1147 C. Kai, E. Liul, L. Pangi, J. Liangi, T. Jie, in International Joint Conference on Biometrics. Fingerprint matching by incorporating minutiae discriminability (2011) B.T. Polyak, Some methods of speeding up the convergence of iteration methods. USSR Computational Mathematics and Mathematical Physics 4(5), 1–17 (1964) L. Ghiani et al., in Iapr International Conference on Biometrics. LivDet 2013 Fingerprint Liveness Detection Competition 2013 (2013) L. Ghiani, D.A. Yambay, V. Mura, G.L. Marcialis, F. Roli, S.A. Schuckers, Review of the Fingerprint Liveness Detection (LivDet) competition series: 2009 to 2015. 
Image and Vision Computing 58, 110–128 (2017) This work is supported in part by the Jiangsu Basic Research Programs-Natural Science Foundation under grant number BK20181407, in part by the National Natural Science Foundation of China under grant number 61672294, in part by the Six Peak Talent project of Jiangsu Province (R2016L13), the Qing Lan Project of Jiangsu Province and the "333" project of Jiangsu Province, in part by the National Natural Science Foundation of China under grant numbers U1836208, 61502242, 61702276, U1536206, 61772283, 61602253, 61601236, and 61572258, in part by the National Key R&D Program of China under grant 2018YFB1003205, in part by NRF-2016R1D1A1B03933294, in part by the Jiangsu Basic Research Programs-Natural Science Foundation under grant numbers BK20150925 and BK20151530, in part by the Humanity and Social Science Youth Foundation of the Ministry of Education of China (15YJC870021), in part by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD) fund, and in part by the Collaborative Innovation Center of Atmospheric Environment and Equipment Technology (CICAEET) fund, China. Zhihua Xia is supported by the BK21+ program of the Ministry of Education of Korea. This work is funded by the National Natural Science Foundation of China under grant number 61672294. Jiangsu Engineering Center of Network Monitoring, Jiangsu Collaborative Innovation Center on Atmospheric Environment and Equipment Technology, School of Computer and Software, Nanjing University of Information Science and Technology, Nanjing, 210044, China: Jianwei Fei, Zhihua Xia & Peipeng Yu. Hangzhou Dianzi University, No. 1, 2nd Street, Jianggan District, Hangzhou City, Zhejiang Province, China: Fengjun Xiao. JF and ZX collectively designed the research, performed the research, and wrote the paper. PY partly performed the research, analyzed the data, and partly wrote the paper. FX partly designed the research, wrote the paper, and modified the paper. All authors read and approved the final manuscript. Jianwei Fei received his BE degree in Electronic and Information Engineering from Nanjing Forestry University in 2014. He is currently pursuing a master's degree in Computer Science at Nanjing University of Information Science and Technology. His research interests include artificial intelligence security and multimedia forensics. Zhihua Xia received a BS degree from Hunan City University, China, and a PhD degree in computer science and technology from Hunan University, China, in 2006 and 2011, respectively. He works as an associate professor in the School of Computer and Software, Nanjing University of Information Science and Technology. His research interests include digital forensics and encrypted image processing. He has been a member of the IEEE since 1 March 2014. Peipeng Yu received his BE degree in 2019 and is currently pursuing a master's degree in Computer Science at Nanjing University of Information Science and Technology. His research interests include artificial intelligence security. Fengjun Xiao received his BS degree in Economics from BeiHang University in 2009. He received his Master's degree in Technology Policy in 2014 under the supervision of Prof. Shi Li. He has been a doctoral candidate under the supervision of Prof. Chengzhi Li and began his research on Network Security and Emergency Management in 2015. Correspondence to Zhihua Xia. Fei, J., Xia, Z., Yu, P. et al.
Adversarial attacks on fingerprint liveness detection. J Image Video Proc. 2020, 1 (2020). DOI: https://doi.org/10.1186/s13640-020-0490-z. Keywords: Fingerprint liveness detection; Adversarial attacks. New Advances on Intelligent Multimedia Hiding and Forensics.
As far as anxiety goes, psychiatrist Emily Deans has an overview of why the Kiecolt-Glaser et al 2011 study is nice; she also discusses why fish oil seems like a good idea from an evolutionary perspective. There was also a weaker earlier 2005 study also using healthy young people, which showed reduced anger/anxiety/depression plus slightly faster reactions. The anti-stress/anxiolytic may be related to the possible cardiovascular benefits (Carter et al 2013). The information learned in the tasks reviewed so far was explicit, declarative, and consistent within each experiment. In contrast, probabilistic and procedural learning tasks require the subject to gradually extract a regularity in the associations among stimuli from multiple presentations in which the correct associations are only presented some of the time, with incorrect associations also presented. Findings are mixed in these tasks. Breitenstein and colleagues (2004, 2006) showed subjects drawings of common objects accompanied by nonsense word sounds in training sessions that extended over multiple days. They found faster learning of the to-be-learned, higher probability pairings between sessions (consistent with enhanced retention over longer delays). Breitenstein et al. (2004) found that this enhancement remained a year later. Schlösser et al. (2009) tested subjects' probabilistic learning ability in the context of a functional magnetic resonance imaging (fMRI) study, comparing performance and brain activation with MPH and placebo. MPH did not affect learning performance as measured by accuracy. Although subjects were overall faster in responding on MPH, this difference was independent of the difficulty of the learning task, and the authors accordingly attributed it to response processes rather than learning. Took pill around 6 PM; I had a very long drive to and from an airport ahead of me, ideal for Adderall. In case it was Adderall, I chewed up the pill - by making it absorb faster, more of the effect would be there when I needed it, during driving, and not lingering in my system past midnight. Was it? I didn't notice any change in my pulse, I yawned several times on the way back, my conversation was not more voluminous than usual. I did stay up later than usual, but that's fully explained by walking to get ice cream. All in all, my best guess was that the pill was placebo, and I feel fairly confident but not hugely confident that it was placebo. I'd give it ~70%. And checking the next morning… I was right! Finally. There is an ancient precedent to humans using natural compounds to elevate cognitive performance. Incan warriors in the 15th century would ingest coca leaves (the basis for cocaine) before battle. Ethiopian hunters in the 10th century developed coffee bean paste to improve hunting stamina. Modern athletes ubiquitously consume protein powders and hormones to enhance their training, recovery, and performance. The most widely consumed psychoactive compound today is caffeine. Millions of people use coffee and tea to be more alert and focused. Noopept shows a much greater affinity for certain receptor sites in the brain than racetams, allowing doses as small as 10-30mg to provide increased focus, improved logical thinking function, enhanced short and long-term memory functions, and increased learning ability including improved recall. In addition, users have reported a subtle psychostimulatory effect. 
A number of different laboratory studies have assessed the acute effect of prescription stimulants on the cognition of normal adults. In the next four sections, we review this literature, with the goal of answering the following questions: First, do MPH (e.g., Ritalin) and d-AMP (by itself or as the main ingredient in Adderall) improve cognitive performance relative to placebo in normal healthy adults? Second, which cognitive systems are affected by these drugs? Third, how do the effects of the drugs depend on the individual using them? The difference in standard deviations is not, from a theoretical perspective, all that strange a phenomenon: at the very beginning of this page, I covered some basic principles of nootropics and mentioned how many stimulants or supplements follow a inverted U-curve where too much or too little lead to poorer performance (ironically, one of the examples in Kruschke 2012 was a smart drug which did not affect means but increased standard deviations). "Such an informative and inspiring read! Insight into how optimal nutrients improved Cavin's own brain recovery make this knowledge-filled read compelling and relatable. The recommendations are easy to understand as well as scientifically-founded – it's not another fad diet manual. The additional tools and resources provided throughout make it possible for anyone to integrate these enhancements into their nutritional repertoire. Looking forward to more from Cavin and Feed a Brain!!!!!!" ** = Important note - whilst BrainZyme is scientifically proven to support concentration and mental performance, it is not a replacement for a good diet, moderate exercise or sleep. BrainZyme is also not a drug, medicine or pharmaceutical. It is a natural-sourced, vegan food supplement with ingredients that are scientifically proven to support cognition, concentration, mental performance and reduction of tiredness. You should always consult with your Doctor if you require medical attention. Even party drugs are going to work: Biohackers are taking recreational drugs like LSD, psilocybin mushrooms, and mescaline in microdoses—about a tenth of what constitutes a typical dose—with the goal of becoming more focused and creative. Many who've tried it report positive results, but real research on the practice—and its safety—is a long way off. "Whether microdosing with LSD improves creativity and cognition remains to be determined in an objective experiment using double-blind, placebo-controlled methodology," Sahakian says. So what's the catch? Well, it's potentially addictive for one. Anything that messes with your dopamine levels can be. And Patel says there are few long-term studies on it yet, so we don't know how it will affect your brain chemistry down the road, or after prolonged, regular use. Also, you can't get it very easily, or legally for that matter, if you live in the U.S. It's classified as a schedule IV controlled substance. That's where Adrafinil comes in. CDP-Choline is also known as Citicoline or Cytidine Diphosphocholine. It has been enhanced to allow improved crossing of the blood-brain barrier. Your body converts it to Choline and Cytidine. The second then gets converted to Uridine (which crosses the blood-brain barrier). CDP-Choline is found in meats (liver), eggs (yolk), fish, and vegetables (broccoli, Brussels sprout). Took pill #6 at 12:35 PM. Hard to be sure. 
I ultimately decided that it was Adderall because I didn't have as much trouble as I normally would in focusing on reading and then finishing my novel (Surface Detail) despite my family watching a movie, though I didn't notice any lack of appetite. Call this one 60-70% Adderall. I check the next evening and it was Adderall. Some suggested that the lithium would turn me into a zombie, recalling the complaints of psychiatric patients. But at 5mg elemental lithium x 200 pills, I'd have to eat 20 to get up to a single clinical dose (a psychiatric dose might be 500mg of lithium carbonate, which translates to ~100mg elemental), so I'm not worried about overdosing. To test this, I took on day 1 & 2 no less than 4 pills/20mg as an attack dose; I didn't notice any large change in emotional affect or energy levels. And it may've helped my motivation (though I am also trying out the tyrosine). Historically used to help people with epilepsy, piracetam is used in some cases of myoclonus, or muscle twitching. Its actual mechanism of action is unclear: It doesn't act exactly as a sedative or stimulant, but still influences cognitive function, and is believed to act on receptors for acetylcholine in the brain. Piracetam is used off-label as a 'smart drug' to help focus and concentration or sometimes as a way to allegedly boost your mood. Again, piracetam is a prescription-only drug - any supply to people without a prescription is illegal, and supplying it may result in a fine or prison sentence. When I spoke with Jesse Lawler, who hosts the podcast Smart Drugs Smarts, about breakthroughs in brain health and neuroscience, he was unsurprised to hear of my disappointing experience. Many nootropics are supposed to take time to build up in the body before users begin to feel their impact. But even then, says Barry Gordon, a neurology professor at the Johns Hopkins Medical Center, positive results wouldn't necessarily constitute evidence of a pharmacological benefit. A total of 14 studies surveyed reasons for using prescription stimulants nonmedically, all but one study confined to student respondents. The most common reasons were related to cognitive enhancement. Different studies worded the multiple-choice alternatives differently, but all of the following appeared among the top reasons for using the drugs: "concentration" or "attention" (Boyd et al., 2006; DeSantis et al., 2008, 2009; Rabiner et al., 2009; Teter et al., 2003, 2006; Teter, McCabe, Cranford, Boyd, & Guthrie, 2005; White et al., 2006); "help memorize," "study," "study habits," or "academic assignments" (Arria et al., 2008; Barrett et al., 2005; Boyd et al., 2006; DeSantis et al., 2008, 2009; DuPont et al., 2008; Low & Gendaszek, 2002; Rabiner et al., 2009; Teter et al., 2005, 2006; White et al., 2006); "grades" or "intellectual performance" (Low & Gendaszek, 2002; White et al., 2006); "before tests" or "finals week" (Hall et al., 2005); "alertness" (Boyd et al., 2006; Hall et al., 2005; Teter et al., 2003, 2005, 2006); or "performance" (Novak et al., 2007). However, every survey found other motives mentioned as well. The pills were also taken to "stay awake," "get high," "be able to drink and party longer without feeling drunk," "lose weight," "experiment," and for "recreational purposes." 
Please note: Smart Pills, Smart Drugs or Brain Food Supplements are also known as: Brain Smart Vitamins, Brain Tablets, Brain Vitamins, Brain Booster Supplements, Brain Enhancing Supplements, Cognitive Enhancers, Focus Enhancers, Concentration Supplements, Mental Focus Supplements, Mind Supplements, Neuro Enhancers, Neuro Focusers, Vitamins for Brain Function,Vitamins for Brain Health, Smart Brain Supplements, Nootropics, or "Natural Nootropics" In the largest nationwide study, McCabe et al. (2005) sampled 10,904 students at 119 public and private colleges and universities across the United States, providing the best estimate of prevalence among American college students in 2001, when the data were collected. This survey found 6.9% lifetime, 4.1% past-year, and 2.1% past-month nonmedical use of a prescription stimulant. It also found that prevalence depended strongly on student and school characteristics, consistent with the variability noted among the results of single-school studies. The strongest predictors of past-year nonmedical stimulant use by college students were admissions criteria (competitive and most competitive more likely than less competitive), fraternity/sorority membership (members more likely than nonmembers), and gender (males more likely than females). ATTENTION CANADIAN CUSTOMERS: Due to delays caused by it's union's ongoing rotating strikes, Canada Post has suspended its delivery standard guarantees for parcel services. This may cause a delay in the delivery of your shipment unless you select DHL Express or UPS Express as your shipping service. For more information or further assistance, please visit the Canada Post website. Thank you. In terms of legal status, Adrafinil is legal in the United States but is unregulated. You need to purchase this supplement online, as it is not a prescription drug at this time. Modafinil on the other hand, is heavily regulated throughout the United States. It is being used as a narcolepsy drug, but isn't available over the counter. You will need to obtain a prescription from your doctor, which is why many turn to Adrafinil use instead. A big part is that we are finally starting to apply complex systems science to psycho-neuro-pharmacology and a nootropic approach. The neural system is awesomely complex and old-fashioned reductionist science has a really hard time with complexity. Big companies spends hundreds of millions of dollars trying to separate the effects of just a single molecule from placebo – and nootropics invariably show up as "stacks" of many different ingredients (ours, Qualia , currently has 42 separate synergistic nootropics ingredients from alpha GPC to bacopa monnieri and L-theanine). That kind of complex, multi pathway input requires a different methodology to understand well that goes beyond simply what's put in capsules. (If I am not deficient, then supplementation ought to have no effect.) The previous material on modern trends suggests a prior >25%, and higher than that if I were female. However, I was raised on a low-salt diet because my father has high blood pressure, and while I like seafood, I doubt I eat it more often than weekly. I suspect I am somewhat iodine-deficient, although I don't believe as confidently as I did that I had a vitamin D deficiency. Let's call this one 75%. After I ran out of creatine, I noticed the increased difficulty, and resolved to buy it again at some point; many months later, there was a Smart Powders sale so bought it in my batch order, $12 for 1000g. 
As before, it made Taekwondo classes a bit easier. I paid closer attention this second time around and noticed that as one would expect, it only helped with muscular fatigue and did nothing for my aerobic issues. (I hate aerobic exercise, so it's always been a weak point.) I eventually capped it as part of a sulbutiamine-DMAE-creatine-theanine mix. This ran out 1 May 2013. In March 2014, I spent $19 for 1kg of micronized creatine monohydrate to resume creatine use and also to use it as a placebo in a honey-sleep experiment testing Seth Roberts's claim that a few grams of honey before bedtime would improve sleep quality: my usual flour placebo being unusable because the mechanism might be through simple sugars, which flour would digest into. (I did not do the experiment: it was going to be a fair amount of messy work capping the honey and creatine, and I didn't believe Roberts's claims for a second - my only reason to do it would be to prove the claim wrong but he'd just ignore me and no one else cares.) I didn't try measuring out exact doses but just put a spoonful in my tea each morning (creatine is tasteless). The 1kg lasted from 25 March to 18 September or 178 days, so ~5.6g & $0.11 per day. Table 1 shows all of the studies of middle school, secondary school, and college students that we identified. As indicated in the table, the studies are heterogeneous, with varying populations sampled, sample sizes, and year of data collection, and they focused on different subsets of the epidemiological questions addressed here, including prevalence and frequency of use, motivations for use, and method of obtaining the medication. Poulin (2007) 2002 Canadian secondary school 7th, 9th, 10th, and 12th graders (N = 12,990) 6.6% MPH (past year), 8.7% d-AMP (past year) MPH: 84%: 1–4 times per year; d-AMP: 74%: 1–4 times per year 26% of students with a prescription had given or sold some of their pills; students in class with a student who had given or sold their pills were 1.5 times more likely to use nonmedically A fancier method of imputation would be multiple imputation using, for example, the R library mice (Multivariate Imputation by Chained Equations) (guide), which will try to impute all missing values in a way which mimicks the internal structure of the data and provide several possible datasets to give us an idea of what the underlying data might have looked like, so we can see how our estimates improve with no missingness & how much of the estimate is now due to the imputation: No. There are mission essential jobs that require you to live on base sometimes. Or a first term person that is required to live on base. Or if you have proven to not be as responsible with rent off base as you should be so your commander requires you to live on base. Or you're at an installation that requires you to live on base during your stay. Or the only affordable housing off base puts you an hour away from where you work. It isn't simple. The fact that you think it is tells me you are one of the "dumb@$$es" you are referring to above. Most research on these nootropics suggest they have some benefits, sure, but as Barbara Sahakian and Sharon Morein-Zamir explain in the journal Nature, nobody knows their long-term effects. And we don't know how extended use might change your brain chemistry in the long run. Researchers are getting closer to what makes these substances do what they do, but very little is certain right now. 
If you're looking to live out your own Limitless fantasy, do your research first, and proceed with caution. But how, exactly, does he do it? Sure, Cruz typically eats well, exercises regularly and tries to get sufficient sleep, and he's no stranger to coffee. But he has another tool in his toolkit that he finds makes a noticeable difference in his ability to efficiently and effectively conquer all manner of tasks: Alpha Brain, a supplement marketed to improve memory, focus and mental quickness. Interesting. On days ranked 2 (below-average mood/productivity), nicotine seems to have boosted scores; on days ranked 3, nicotine hurts scores; there aren't enough 4's to tell, but even 5 days seem to see a boost from nicotine, which is not predicted by the theory. But I don't think much of a conclusion can be drawn: not enough data to make out any simple relationship. Some modeling suggests no relationship in this data either (although also no difference in standard deviations, leading me to wonder if I screwed up the data recording - not all of the DNB scores seem to match the input data in the previous analysis). So although the 2 days in the graph are striking, the theory may not be right. Running low on gum (even using it weekly or less, it still runs out), I decided to try patches. Reading through various discussions, I couldn't find any clear verdict on what patch brands might be safer (in terms of nicotine evaporation through a cut or edge) than others, so I went with the cheapest Habitrol I could find as a first try of patches (Nicotine Transdermal System Patch, Stop Smoking Aid, 21 mg, Step 1, 14 patches) in May 2013. I am curious to what extent nicotine might improve a long time period like several hours or a whole day, compared to the shorter-acting nicotine gum which feels like it helps for an hour at most and then tapers off (which is very useful in its own right for kicking me into starting something I have been procrastinating on). I have not decided whether to try another self-experiment.
The amphetamine mix branded Adderall is terribly expensive to obtain even compared to modafinil, due to its tight regulation (a lower schedule than modafinil), popularity in college as a study drug, and reportedly moves by its manufacture to exploit its privileged position as a licensed amphetamine maker to extract more consumer surplus. I paid roughly $4 a pill but could have paid up to $10. Good stimulant hygiene involves recovery periods to avoid one's body adapting to eliminate the stimulating effects, so even if Adderall was the answer to all my woes, I would not be using it more than 2 or 3 times a week. Assuming 50 uses a year (for specific projects, let's say, and not ordinary aimless usage), that's a cool $200 a year. My general belief was that Adderall would be too much of a stimulant for me, as I am amphetamine-naive and Adderall has a bad reputation for letting one waste time on unimportant things. We could say my prediction was 50% that Adderall would be useful and worth investigating further. The experiment was pretty simple: blind randomized pills, 10 placebo & 10 active. I took notes on how productive I was and the next day guessed whether it was placebo or Adderall before breaking the seal and finding out. I didn't do any formal statistics for it, much less a power calculation, so let's try to be conservative by penalizing the information quality heavily and assume it had 25%. So \frac{200 - 0}{\ln 1.05} \times 0.50 \times 0.25 = 512! The experiment probably used up no more than an hour or two total. Turning to analyses related specifically to the drugs that are the subject of this article, reanalysis of the 2002 NSDUH data by Kroutil and colleagues (2006) found past-year nonmedical use of stimulants other than methamphetamine by 2% of individuals between the ages of 18 and 25 and by 0.3% of individuals 26 years of age and older. For ADHD medications in particular, these rates were 1.3% and 0.1%, respectively. Finally, Novak, Kroutil, Williams, and Van Brunt (2007) surveyed a sample of over four thousand individuals from the Harris Poll Online Panel and found that 4.3% of those surveyed between the ages of 18 and 25 had used prescription stimulants nonmedically in the past year, compared with only 1.3% between the ages of 26 and 49. Zach was on his way to being a doctor when a personal health crisis changed all of that. He decided that he wanted to create wellness instead of fight illness. He lost over a 100 lbs through functional nutrition and other natural healing protocols. He has since been sharing his knowledge of nutrition and functional medicine for the last 12 years as a health coach and health educator. Natural and herbal nootropics are by far the safest and best smart drugs to ingest. For this reason, they're worth covering first. Our recommendation is always to stick with natural brain fog cures. Herbal remedies for enhancing mental cognition are often side-effect free. These substances are superior for both long-term safety and effectiveness. They are also well-studied and have deep roots in traditional medicine. Of course, there are drugs out there with more transformative powers. "I think it's very clear that some do work," says Andrew Huberman, a neuroscientist based at Stanford University. In fact, there's one category of smart drugs which has received more attention from scientists and biohackers – those looking to alter their own biology and abilities – than any other. These are the stimulants. 
Amphetamines have a long track record as smart drugs, from the workaholic mathematician Paul Erdös, who relied on them to get through 19-hour maths binges, to the writer Graham Greene, who used them to write two books at once. More recently, there are plenty of anecdotal accounts in magazines about their widespread use in certain industries, such as journalism, the arts and finance. Despite some positive findings, a lot of studies find no effects of enhancers in healthy subjects. For instance, although some studies suggest moderate enhancing effects in well-rested subjects, modafinil mostly shows enhancing effects in cases of sleep deprivation. A recent study by Martha Farah and colleagues found that Adderall (mixed amphetamine salts) had only small effects on cognition but users believed that their performance was enhanced when compared to placebo. Medication can be ineffective if the drug payload is not delivered at its intended place and time. Since an oral medication travels through a broad pH spectrum, the pill encapsulation could dissolve at the wrong time. However, a smart pill with environmental sensors, a feedback algorithm and a drug release mechanism can give rise to smart drug delivery systems. This can ensure optimal drug delivery and prevent accidental overdose. Supplements, medications, and coffee certainly might play a role in keeping our brains running smoothly at work or when we're trying to remember where we left our keys. But the long-term effects of basic lifestyle practices can't be ignored. "For good brain health across the life span, you should keep your brain active," Sahakian says. "There is good evidence for 'use it or lose it.'" She suggests brain-training apps to improve memory, as well as physical exercise. "You should ensure you have a healthy diet and not overeat. It is also important to have good-quality sleep. Finally, having a good work-life balance is important for well-being." Try these 8 ways to get smarter while you sleep. When you hear about nootropics, often called "smart drugs," you probably picture something like the scene above from Limitless, where Bradley Cooper's character becomes brilliant after downing a strange pill. The drugs and supplements currently available don't pack that strong of a punch, but the concept is basically the same. Many nootropics have promising benefits, like boosting memory, focus, or motivation, and there's research to support specific uses. But the most effective nootropics, like Modafinil, aren't intended for use without a prescription to treat a specific condition. In fact, recreational use of nootropics is hotly-debated among doctors and medical researchers. Many have concerns about the possible adverse effects of long-term use, as well as the ethics of using cognitive enhancers to gain an advantage in school, sports, or even everyday work. In addition, the cognitive enhancing effects of stimulant drugs often depend on baseline performance. So whilst stimulants enhance performance in people with low baseline cognitive abilities, they often impair performance in those who are already at optimum. Indeed, in a study by Randall et al., modafinil only enhanced cognitive performance in subjects with a lower (although still above-average) IQ. Speaking of addictive substances, some people might have considered cocaine a nootropic (think: the finance industry in Wall Street in the 1980s). 
The incredible damage this drug can do is clear, but the plant from which it comes has been used to make people feel more energetic and less hungry, and to counteract altitude sickness in Andean South American cultures for 5,000 years, according to an opinion piece that Bolivia's president, Evo Morales Ayma, wrote for the New York Times. In fact, some of these so-called "smart drugs" are already remarkably popular. One recent survey involving tens of thousands of people found that 30% of Americans who responded had taken them in the last year. It seems as though we may soon all be partaking – and it's easy to get carried away with the consequences. Will this new batch of intellectual giants lead to dazzling, space-age inventions? Or perhaps an explosion in economic growth? Might the working week become shorter, as people become more efficient? One symptom of Alzheimer's disease is a reduced brain level of the neurotransmitter called acetylcholine. It is thought that an effective treatment for Alzheimer's disease might be to increase brain levels of acetylcholine. Another possible treatment would be to slow the death of neurons that contain acetylcholine. Two drugs, Tacrine and Donepezil, are both inhibitors of the enzyme (acetylcholinesterase) that breaks down acetylcholine. These drugs are approved in the US for treatment of Alzheimer's disease. 10:30 AM; no major effect that I notice throughout the day - it's neither good nor bad. This smells like placebo (and part of my mind is going how unlikely is it to get placebo 3 times in a row!, which is just the Gambler's fallacy talking inasmuch as this is sampling with replacement). I give it 60% placebo; I check the next day right before taking, and it is. Man! Neuro Optimizer is Jarrow Formula's offering on the nootropic industry, taking a more creative approach by differentiating themselves as not only a nootropic that enhances cognitive abilities, but also by making sure the world knows that they have created a brain metabolizer. It stands out from all the other nootropics out there in this respect, as well as the fact that they've created an all-encompassing brain capsule. What do they really mean by brain metabolizer, though? It means that their capsule is able to supply nutrition… Learn More... While the commentary makes effective arguments — that this isn't cheating, because cheating is based on what the rules are; that this is fair, because hiring a tutor isn't outlawed for being unfair to those who can't afford it; that this isn't unnatural, because humans with computers and antibiotics have been shaping what is natural for millennia; that this isn't drug abuse anymore than taking multivitamins is — the authors seem divorced from reality in the examples they provide of effective stimulant use today. Those who have taken them swear they do work – though not in the way you might think. Back in 2015, a review of the evidence found that their impact on intelligence is "modest". But most people don't take them to improve their mental abilities. Instead, they take them to improve their mental energy and motivation to work. (Both drugs also come with serious risks and side effects – more on those later). When taken as prescribed, Modafinil is safer than Adderall with fewer side effects. Smart pill enthusiasts find a heightened sense of alertness and motivation with Modafinil. In healthy individuals, Modafinil will reliably boost energy levels. If you find that it gives you headaches, add a choline supplement to your stack. 
With that said, you should only use Modafinil in moderation on an as-needed basis. Feeling behind, I resolved to take some armodafinil the next morning, which I did - but in my hurry I failed to recall that 200mg armodafinil was probably too much to take during the day, with its long half life. As a result, I felt irritated and not that great during the day (possibly aggravated by some caffeine - I wish some studies would be done on the possible interaction of modafinil and caffeine so I knew if I was imagining it or not). Certainly not what I had been hoping for. I went to bed after midnight (half an hour later than usual), and suffered severe insomnia. The time wasn't entirely wasted as I wrote a short story and figured out how to make nicotine gum placebos during the hours in the dark, but I could have done without the experience. All metrics omitted because it was a day usage. Government restrictions and difficulty getting approval for various medical devices is expected to impede market growth. The stringency of approval by regulatory authorities is accompanied by the high cost of smart pills to challenge the growth of the smart pills market. However, the demand for speedy diagnosis, and improving reimbursement policies are likely to reveal market opportunities. A television advertisement goes: "It's time to let Focus Factor be your memory-fog lifter." But is this supplement up to task? Focus Factor wastes no time, whether paid airtime or free online presence: it claims to be America's #1 selling brain health supplement with more than 4 million bottles sold and millions across the country actively caring for their brain health. It deems itself instrumental in helping anyone stay focused and on top of his game at home, work, or school. Learn More... "Love this book! Still reading and can't wait to see what else I learn…and I am not brain injured! Cavin has already helped me to take steps to address my food sensitivity…seems to be helping and I am only on day 5! He has also helped me to help a family member who has suffered a stroke. Thank you Cavin, for sharing all your knowledge and hard work with us! This book is for anyone that wants to understand and implement good nutrition with all the latest research to back it up. Highly recommend!" Spaced repetition at midnight: 3.68. (Graphing preceding and following days: ▅▄▆▆▁▅▆▃▆▄█ ▄ ▂▄▄▅) DNB starting 12:55 AM: 30/34/41. Transcribed Sawaragi 2005, then took a walk. DNB starting 6:45 AM: 45/44/33. Decided to take a nap and then take half the armodafinil on awakening, before breakfast. I wound up oversleeping until noon (4:28); since it was so late, I took only half the armodafinil sublingually. I spent the afternoon learning how to do value of information calculations, and then carefully working through 8 or 9 examples for my various pages, which I published on Lesswrong. That was a useful little project. DNB starting 12:09 AM: 30/38/48. (To graph the preceding day and this night: ▇▂█▆▅▃▃▇▇▇▁▂▄ ▅▅▁▁▃▆) Nights: 9:13; 7:24; 9:13; 8:20; 8:31. Qualia Mind, meanwhile, combines more than two dozen ingredients that may support brain and nervous system function – and even empathy, the company claims – including vitamins B, C and D, artichoke stem and leaf extract, taurine and a concentrated caffeine powder. A 2014 review of research on vitamin C, for one, suggests it may help protect against cognitive decline, while most of the research on artichoke extract seems to point to its benefits to other organs like the liver and heart. 
A small company-lead pilot study on the product found users experienced improvements in reasoning, memory, verbal ability and concentration five days after beginning Qualia Mind. Attention-deficit/hyperactivity disorder (ADHD), a behavioral syndrome characterized by inattention and distractibility, restlessness, inability to sit still, and difficulty concentrating on one thing for any period of time. ADHD most commonly occurs in children, though an increasing number of adults are being diagnosed with the disorder. ADHD is three times more… Metabolic function smart drugs provide mental benefits by generally facilitating the body's metabolic processes related to the production of new tissues and the release of energy from food and fat stores. Creatine, a long-time favorite performance-enhancement drug for competitive athletes, was in the news recently when it was found in a double-blind, placebo-controlled crossover trial to have significant cognitive benefits – including both general speed of cognition and improvements in working memory. Ginkgo Biloba is another metabolic function smart drug used to increase memory and improve circulation – however, news from recent studies raises questions about these purported effects. Minnesota-based Medtronic offers a U.S. Food and Drug Administration (FDA)-cleared smart pill called PillCam COLON, which provides clear visualization of the colon and is complementary to colonoscopy. It is an alternative for patients who refuse invasive colon exams, have bleeding or sedation risks or inflammatory bowel disease, or have had a previous incomplete colonoscopy. PillCam COLON allows more people to get screened for colorectal cancer with a minimally invasive, radiation-free option. The research focus for WCEs is on effective localization, steering and control of capsules. Device development relies on leveraging applied science and technologies for better system performance, rather than completely reengineering the pill. The stop-signal task has been used in a number of laboratories to study the effects of stimulants on cognitive control. In this task, subjects are instructed to respond as quickly as possible by button press to target stimuli except on certain trials, when the target is followed by a stop signal. On those trials, they must try to avoid responding. The stop signal can follow the target stimulus almost immediately, in which case it is fairly easy for subjects to cancel their response, or it can come later, in which case subjects may fail to inhibit their response. The main dependent measure for stop-signal task performance is the stop time, which is the average go reaction time minus the interval between the target and stop signal at which subjects inhibit 50% of their responses. De Wit and colleagues have published two studies of the effects of d-AMP on this task. De Wit, Crean, and Richards (2000) reported no significant effect of the drug on stop time for their subjects overall but a significant effect on the half of the subjects who were slowest in stopping on the baseline trials. De Wit et al. (2002) found an overall improvement in stop time in addition to replicating their earlier finding that this was primarily the result of enhancement for the subjects who were initially the slowest stoppers. In contrast, Filmore, Kelly, and Martin (2005) used a different measure of cognitive control in this task, simply the number of failures to stop, and reported no effects of d-AMP. 
The truth is that, almost 20 years ago when my brain was failing and I was fat and tired, I did not know to follow this advice. I bought $1000 worth of smart drugs from Europe, took them all at once out of desperation, and got enough cognitive function to save my career and tackle my metabolic problems. With the information we have now, you don't need to do that. Please learn from my mistakes!
CommonCrawl
Is there a way to use a one-way hash function for both sides to "ratchet" an asymmetric keypair? Is there any way to derive a series of asymmetric keys, such as using a one-way hashing function, where both sides can predict the next in the sequence? Detailed Scenario: Imagine a chat application where two people, Alice and Bob, are sending messages -- each with the agreement to delete any messages received 7 days after they are sent. And let's assume that some attacker is recording all of the encrypted communications between them. The goal is to prevent an attacker who compromises a phone from getting more than 7 days of messages, either from the phone or using any copy of a key on the phone. Protecting against this requires both Alice and Bob to do two things: encrypt each message with a different key, and delete each message after 7 days -- along with the key used to decrypt it. If they don't delete the key, then when the attacker compromises the phone, they can decrypt any previous messages from their historical record of the encrypted communications. Partial Solution: One way this could be done is for each side to publish a new asymmetric public key every day, with the promise that "any message encrypted to this public key (for me to decrypt with its corresponding private key) will be deleted, along with the private key used to decrypt it, on YYYY-MM-DD". So, each side would receive the updated key every day, and encrypt messages to that public key (or, more likely, use that public key to encrypt and deliver an ephemeral symmetric key used for all messages in that 24-hour range). So long as both sides deleted their messages after 7 days, and the recipient deleted their copy of the private key for that day (along with any ephemeral key), then forward secrecy would be maintained: compromising a device would reveal at most 7 days of messages. The problem with that solution is that it requires both sides to be in constant communication in order to receive a daily key update from the other side. If Alice were to go offline for, say, a month, then Bob wouldn't know which of Alice's public keys to encrypt a new message with. Solution / Question: Which brings me to my question. This is easy with a symmetric key: if you have the key for N=1 in the sequence, both sides can apply the same hash function and generate the key for N=2 in the sequence. I'm wondering if there is any equivalent for asymmetric keys, where: Alice applies a one-way hash function to Alice's private key for N=1, to generate a private key for N=2; and Bob applies a one-way hash function to Alice's public key for N=1, to generate a public key for N=2. If so, then both Alice and Bob could anticipate what key will be used for any future date by both "advancing" their side of the asymmetric keypair, even if they lose contact with the other side for extended periods of time. PS: I'm aware there are other ways to solve this, such as by having each side publish a pool of "pre-keys" that are allocated out as necessary. I'm specifically asking about whether there's any way to directly ratchet both the public and private key. Tags: hash public-key. Asked by quinthar. Comment: In the Partial Solution, "each side would receive the updated key every day" can be eased slightly: if we want safety for messages older than T (e.g. the 7 days in the question), (a) if the participants exchange messages bidirectionally, it suffices that they communicate more often than T; (b) if unidirectional (e.g.
email) with a delivery delay D<T, we can renew keys with any period less than T-D. Here is one way, based on elliptic curves (or finite fields based on discrete logs); one such public key encryption method is the Integrated Encryption Scheme. Now, it does have the drawback that if the attacker learns a prior public key, then (with knowledge of the current private key) he can learn the prior private key. However, this limitation may still be acceptable to you. In such a scheme, there is a private key (which is an integer $p$ in the range $[0, q)$), and the public key is a point $P = p \cdot G$ (where $\cdot$ is point multiplication if you're using elliptic curves, and exponentiation, more commonly written as $G^p$, if you're using a finite field group). To ratchet the key forward, the two sides would compute: Private key: $p' = hash(P) \times p \bmod q$ Public key: $P' = hash(P) \cdot P$ (where $hash$ is a cryptographic hash function). This works, in that the relationship between the new public key $P'$ and new private key $p'$ still holds; $p' \cdot G = hash(P) \times p \cdot G = hash(P) \cdot P = P'$. In addition, it is ratcheting; given $p'$, you cannot recover $p$ without knowing the hash of $P$ (and we assume the attacker does not know that). – poncho Thank you for the incredibly fast and helpful explanation! – quinthar @quinthar: be sure to weigh the limitation: w.r.t. the goal of confidentiality of old messages if the current private key leaks, it becomes unsafe to make public keys public, if we assume that what's public gets publicly archived, or adversaries know they are adversaries in advance. Yes, that's a tricky constraint, I won't deny. I'm trying to see if I can find a way to make it work. Can you think of any other solution that doesn't have this constraint? @poncho I'm sure this is obvious to others, but can you explain how you calculate a previous private key given a previous public key and future private key? I believe you, I just don't quite understand how to do it. @quinthar: if you have the previous public key $P$ and the current private key $p'$, then the previous private key is $p = p' \times hash(P)^{-1} \bmod q$; that is, $p'$ multiplied by the modular inverse of $hash(P)$. – poncho
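To make the ratcheting relation above concrete, here is a minimal sketch in a multiplicative group modulo a prime (the finite-field case mentioned in the answer), with toy parameters chosen purely for illustration; a real deployment would use a standard prime-order elliptic-curve group and a vetted library rather than hand-rolled arithmetic, and the group order q here is simply p − 1 rather than a prime subgroup order.

```python
import hashlib

# Toy discrete-log group for illustration only -- NOT cryptographically secure.
# Plain modular exponentiation stands in for the "point multiplication" above.
p = 2 ** 127 - 1   # a Mersenne prime used as the demo modulus
q = p - 1          # order of the multiplicative group mod p (stands in for the answer's q)
g = 3              # fixed public base element ("G" in the answer)

def hash_to_scalar(pub: int) -> int:
    """hash(P) from the answer, reduced into [1, q)."""
    digest = hashlib.sha256(pub.to_bytes(16, "big")).digest()
    return int.from_bytes(digest, "big") % q or 1

def ratchet_private(priv: int, pub: int) -> int:
    # Alice's step:  p' = hash(P) * p  (mod q)
    return (hash_to_scalar(pub) * priv) % q

def ratchet_public(pub: int) -> int:
    # Bob's step:    P' = hash(P) . P  (here: P ** hash(P) mod p)
    return pow(pub, hash_to_scalar(pub), p)

# Initial keypair: private scalar x, public element X = g^x mod p
x = 123456789
X = pow(g, x, p)

# Both sides ratchet independently and stay in sync: g^{x'} == X'
x_next = ratchet_private(x, X)
X_next = ratchet_public(X)
assert pow(g, x_next, p) == X_next
print("ratcheted keypair still consistent:", pow(g, x_next, p) == X_next)
```

As the comments in the thread note, this construction is only forward-ratcheting in one direction: anyone holding the current private key and an archived earlier public key can walk the chain backwards, so the earlier public keys must themselves be kept confidential.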
CommonCrawl
SBLC: a hybrid model for disease named entity recognition based on semantic bidirectional LSTMs and conditional random fields Proceedings from the 2018 Sino-US Conference on Health Informatics Kai Xu1, Zhanfan Zhou2, Tao Gong3,4, Tianyong Hao5 & Wenyin Liu1 Disease named entity recognition (NER) is a fundamental step in information processing of medical texts. However, disease NER involves complex issues in actual practice, such as descriptive modifiers. Accurate disease NER is still an open and essential research problem in medical information extraction and text mining tasks. A hybrid model named Semantics Bidirectional LSTM and CRF (SBLC) for the disease named entity recognition task is proposed. The model leverages word embeddings, Bidirectional Long Short Term Memory networks and Conditional Random Fields. A publicly available NCBI disease dataset is applied to evaluate the model through comparison with nine state-of-the-art baseline methods, including cTAKES, MetaMap, DNorm, C-Bi-LSTM-CRF, TaggerOne and DNER. The results show that the SBLC model achieves an F1 score of 0.862 and outperforms the other methods. In addition, the model does not rely on external domain dictionaries, so it can be applied more conveniently in many aspects of medical text processing. According to the performance comparison, the proposed SBLC model achieved the best performance, demonstrating its effectiveness in disease named entity recognition. Medical named entities are prevalent in biomedical texts, and they play critical roles in boosting scientific discovery and facilitating information access [1]. As a typical category of medical named entities, disease names are widely used in biomedical studies [2], including disease cause exploration, disease relationship analysis, clinical diagnosis, and disease prevention and treatment [3]. Major research tasks in biomedical information extraction depend on accurate disease named entity recognition (NER) [4,5,6,7,8], and how to accurately identify disease named entities is a fundamental and essential research problem in medical information extraction and text mining tasks. Disease NER involves many complex issues, which induce difficulties in actual practice [3]. Disease names are usually generated by combining Greek and Latin roots and affixes, e.g., hemo-chromatosis, and more and more unknown names are difficult to identify from morphology alone. Many disease names also frequently contain descriptive modifiers, e.g., liver cancer. These modifiers may be related to human body parts or to degrees of disease, e.g., recurrent cat-eye syndrome. This may cause difficulties in distinguishing modifiers from other types of medical named entities (e.g., syndrome). Moreover, disease names may have multiple representation forms. For instance, hectical complaint and recurrent fever are the same disease but are represented differently. Finally, there exists a large number of disease name abbreviations in medical texts. Some of them may not be standard, such as user-defined abbreviations listed in the appendices of clinical trial texts. There are large numbers of biomedical texts available, e.g., PubMed abstracts, PMC OA full texts, and Wikipedia. In order to effectively obtain semantic information from these texts, a word embedding training method named Negative Sampling (NEG) Skip-gram [9] was proposed by Mikolov et al. to learn high-quality vector representations from large amounts of unstructured text.
This method could speed up the vector training process and generate better word embeddings. The method simplified the traditional neural network structure, and thus could adapt to a large number of texts. It could also automatically generate semantic representations of words in text context. Recently, many deep neural networks, such as the Long Short Term Memory network (LSTM) model [10], have been widely used to extract text context features. A variety of relevant models that integrate LSTM to train word contextual features and Conditional Random Field (CRF)-based methods to optimize word sequence parameters have been widely used in NER tasks [11]. These models improved the feature extraction process by reducing the work-load of feature selection. In addition, word embeddings have been proved to be effective in NER tasks [12]. Motivated by both the effectively applied LSTM model and the usefulness of word embeddings, this paper combines the word embeddings containing the semantics of disease named entities with LSTM to improve the performance of disease NER tasks. To this purpose, we propose a new model named SBLC for disease NER. The model is based on word embeddings, bidirectional LSTM and CRF. As a multi-layer neural network, the model consists of three layers. The first layer is word embedding, which is generated from medical resources through massive medical text training. The second layer is Bi-LSTM, which is used to obtain the context of semantic structures. The third layer is CRF, which captures relationship among token labels. We evaluate the SBLC model by comparing it with the state-of-the-art methods including NCBI, UMLS, CMT, MeSH, cTAKES, DNorm and TaggerOne. Based on the standard publicly available NCBI disease dataset that contains 6892 disease named entities, the SBLC model achieves an F1 score of 0.862, outperforming all the other baseline methods. The major contributions of this paper lie in the following two aspects. First, the proposed SBLC model systematically combines word embedding, bidirectional LSTM and CRF for disease NER tasks. Second, this revised model by integrating Ab3P improves the current performance compared with state-of-the-art methods on a publically available dataset. The rest of the paper is organized as follows: The section Related Work gives a brief overview of the background of the disease NER and related work. The section Methods introduces the methodology of the SBLC model. The section Result presents the evaluation of the proposed SBLC model. The section Discussion analyzes error cases, discusses properties of medical semantic words, and points out the limitations of our model. Finally, the section Conclusion concludes this study. Disease NER In medical domain, most existing studies on disease NER mainly used machine learning methods with supervised, unsupervised or semi-supervised training. For example, Dogan et al. [2] proposed an inference-based method which linked disease names mentioned in medical texts with their corresponding medical lexical entries. The method, for the first time, used Unified Medical Language System (UMLS) [13] developed by the National Library of Medicine in the NCBI disease corpus. Some similar systems, such as MetaMap [14], cTAKES [15], MedLEE [16], SymText / MPlus [17], KnowledgeMap [18], HiTEX [19] have been developed utilizing UMLS. Although UMLS could cover a wide range of medical mentions, many of these methods failed to identify disease mentions not appearing in the UMLS. 
In addition, the NER efficiency in terms of accuracy was not sufficiently high for practical usage. For example, the F1 in NCBI dataset of official MetaMap was only 0.559 as reported in [2]. DNorm [3] was one of the recent studies using a NCBI disease corpus and a MEDICS vocabulary. It combined MeSH [20] and OMIM [21]. DNorm learned the similarity between disease names directly from training data, which was based on the technology of paired learning to rank (pLTR) strings normalization. Instead of solely relying on medical lexical resources, DNorm adopted a machine learning approach including pattern matching, dictionary searching, heuristic rules. By defining a vector space, it converted disease mentions and concepts into vectors. DNorm achieved an F1 score of 0.809 on the NCBI disease corpus. In 2016, Leaman and Lu proposed the TaggerOne [22]. It was a joint model that combined NER and normalized machine learning during training and predicting to overcome the cascading error of DNorm. TaggerOne consisted of a semi-Markov structured linear classifier for NER and a supervised semantic index for normalization, and ensured high throughput. Based on the same NCBI disease corpus, TaggerOne achieved an F1 score of 0.829. With respect to the methods applying deep learning to NER, some neural network models that could automatically extract word representation characteristics from raw texts have been widely used in the NER field (e.g., [23]). Using deep learning, some sequence annotation methods were also proposed and applied to disease NER tasks (e.g., [24, 25]). As a typical method, Pyysalo et al. [12] used word2vec to train a list of medical resources, and obtained a better performance on a NCBI Disease corpus. Recently, Wei et al. proposed a multi-layer neural network, DNER [24], which used GENIA Tagger [26] to extract a number of word features including words, part-of-speech tags, words chunking information, glyphs, morphological features, word embeddings, and so on. After extraction, the word features were embedded as inputs to a bidirectional Recurrent Neural Network model, and other features like POS tags were used for a CRF model. The normalization method of dictionary matching and the vector space model (VSM) were used together to generate optimized outputs. The overall performance of the model in terms of F1 score was 0.843 on the NCBI disease corpus. To our knowledge, DNER was the best performance deep learning-based method. Motivated by the benefits of word embedding and deep learning from the existing research, we intend to utilize external medical resources for word representation and combine bidirectional LSTM and CRF for NER recognition. We use a large number of medical resources to train the word embeddings model in an unsupervised manner, and combine the deep learning techniques for disease NER tasks. Word embedding training Success of machine learning algorithms usually depended on appropriate data representation, since different representations could capture different features of the data. Distributed word representation proposed by Hinton [27], has been widely used. The word distribution hypothesis held that the words in a similar context have similar meanings, which convey similarities in semantic dimensions. Along with the recent development of machine learning techniques, more and more complex models have been trained on larger datasets and achieved superior performance [28]. Mikolov et al. 
[29] proposed a skip-gram method for calculating vector representations of words in large data sets. The compositions of disease named entities often contained rare medical words. In order to improve the computational efficiency, the Skip-gram model removed the hidden layer so that all words in input layer shared a mapping layer. In the skip-gram method, Negative Sampling (NEG) was used. It was a simplified version of Noise Contrastive Estimation (NCE) [30]. NEG simplified NCE by guaranteeing word vector quality and improving training speed. NEG no longer used a relatively complex Huffman tree, but rather a relatively simple random negative sample, which could be used as an alternative for hierarchical softmax. Motivated by the related work, particularly from Mikolov et al. [9, 29], we apply the NEG skip-gram method for disease NER. The method is described as follows. Given a training text sequence w1, …, wT, at position t, the distribution score s(w, c; θ) for the true probability model was calculated using Eq. (1). The target of w was a set of context words wt − n, …, wt − 1, wt + 1, …, wt + n. $$ s\left({w}_t,{c}_t;\theta \right)={v}_{w_t}^T{v}_{w_{t+j}}^{\prime },-n\le j\le n,j\ne 0 $$ When using the negative sampling method, k negative cases (\( {\tilde{w}}_{t,i},1\le i\le k \)) were randomly sampled in the noise distribution Q(w) for each positive case (wt, ct). σ was a logistic function. The negative function for negative samples was shown in Eq. (2): $$ {\displaystyle \begin{array}{l}{L}_{\theta}\left({w}_t,{c}_t\right)=\log P\left(y=1|{w}_t,{c}_t\right)+\sum \limits_{i=1}^k\log \left(1-P\left(y=1|{\tilde{w}}_{t,i},{c}_t\right)\right)\\ {}\kern3.75em =\log \sigma \left(s\left({w}_t,{c}_t;\theta \right)\right)+\sum \limits_{i=1}^k\log \sigma \left(-s\left({\tilde{w}}_{t,i},{c}_t;\theta \right)\right)\end{array}} $$ The value k was determined by the size of the data. Normally, k ranged within [5, 20] in a small-scale data, while decreased to [2, 5] in a large-scale data [9]. Equation (2) could be solved by a random gradient rise method. Bi-LSTM & CRF As a typical deep learning method, the long and short memory network (LSTM) [10] was usually used for annotation tasks of text sequences. LSTM, as shown in Eq. (3), could capture long distance information by adding several threshold cells which controlled the contribution of each memory cell. Therefore, LSTM enhanced the ability of keeping long distance context information. Longer contextual information could help the model to learn semantics more precisely. $$ {\displaystyle \begin{array}{l}{i}_t=\sigma \left({W}_{xi}{x}_t+{W}_{hi}{h}_{t-1}+{W}_{ci}{c}_{t-1}+{b}_i\right)\\ {}{c}_t=\left(1-{i}_t\right)\odot {c}_{t-1}+{i}_t\odot \tanh \left({W}_{xc}{x}_t+{W}_{hc}{h}_{t-1}+{b}_c\right)\\ {}{o}_t=\sigma \left({W}_{xo}{x}_t+{W}_{ho}{h}_{t-1}+{W}_{co}{c}_t+{b}_o\right)\\ {}{h}_t={o}_t\odot \tanh \left({c}_t\right)\end{array}} $$ Bidirectional LSTM (Bi-LSTM) could simultaneously learn forward and backward information of input sentences and enhance the ability of entity classification. A sentence X containing multiple words could be represented as a set of dimension vectors (x1, x2, …, xn).\( {\overrightarrow{y}}_t \) denoted the forward LSTM and \( {\overleftarrow{y}}_t \) denotes the backward LSTM. \( {\overrightarrow{y}}_t \) and \( {\overleftarrow{y}}_t \) were calculated by capturing from the LSTM the preceding and following information of the word t, respectively. 
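Before the bidirectional LSTM description continues below, here is a minimal numeric sketch of the negative-sampling objective in Eqs. (1)–(2): it scores one (word, context) pair against k sampled noise words. The vocabulary size, embedding dimension, noise distribution and random vectors are illustrative assumptions, not the settings used in this study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder sizes (assumptions for illustration, not the paper's settings)
vocab_size, dim, k = 1000, 50, 5
W_in = rng.normal(scale=0.1, size=(vocab_size, dim))   # input ("target word") vectors v_w
W_out = rng.normal(scale=0.1, size=(vocab_size, dim))  # output ("context word") vectors v'_w

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neg_sampling_loss(target_id, context_id, noise_dist):
    """Negative of L_theta in Eq. (2) for a single (w_t, c_t) pair."""
    negatives = rng.choice(vocab_size, size=k, p=noise_dist)   # k noise words ~ Q(w)
    s_pos = W_in[target_id] @ W_out[context_id]                # s(w_t, c_t; theta), Eq. (1)
    s_neg = W_out[negatives] @ W_in[target_id]                 # scores of the negative samples
    # log sigma(s_pos) + sum_i log sigma(-s_neg_i), negated to give a loss to minimise
    return -(np.log(sigmoid(s_pos)) + np.sum(np.log(sigmoid(-s_neg))))

# The usual unigram^(3/4) noise distribution; the counts here are faked for the demo
counts = rng.integers(1, 100, size=vocab_size).astype(float)
noise = counts ** 0.75
noise /= noise.sum()

print(neg_sampling_loss(target_id=3, context_id=17, noise_dist=noise))
```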
The overall representation was achieved by generating the same backend sequence in LSTM. This pair of forward and backward LSTMs was Bi-LSTM. This representation preserved the context information for the word t. Since there was more and more research focusing on Bi-LSTM and Conditional Random Field (CRF) in NER tasks, the following of this subsection described CRF. It was first introduced as a sequence data tag recognition model by Lafferty et al. [11]. Considering that the target of NER was label sequences, linear chain CRF could compute the global optimal sequence, thus it was widely used to solve NER problems. The objective function of a linear chain CRF was the conditional probability of the state sequence y given the input sequence x, as shown in Eq. (4). $$ P\left(y|x\right)=\frac{1}{z(x)}\exp \left(\sum \limits_{k=1}^K{\lambda}_k{f}_k\left({y}_t,{y}_{t-1},{x}_t\right)\right) $$ fk(yt, yt − 1, xt) was a characteristic function. λk denoted the learning weights of the function features, while yt − 1 and ytreferred to the previous and the current states, respectively. Z(x) was the normalization factor for all state sequences, as shown in Eq. (5). $$ Z(x)=\sum \limits_y\exp \left(\sum \limits_{k=1}^K{\lambda}_k{f}_k\left({y}_t,{y}_{t-1},{x}_t\right)\right) $$ The maximum likelihood method and numerical optimization L-BFGS algorithm were used to solve the parameter vector \( \overrightarrow{\lambda}=\left\{{\lambda}_1,\dots, {\lambda}_k\right\} \) in training process. The viterbi algorithm was used to find the most likely hidden state sequences from observed sequences [31]. This paper presents a new model SBLC for disease named entity recognition based on semantic word embedding, bidirectional LSTM, and CRF. The model consists of three layers: 1) a semantic word embedding layer, 2) a bidirectional LSTM layer, and 3) a CRF and Ab3p layer. The overall architecture of the SBLC model shown in Fig. 1. The overall architecture of the proposed SBLC model including three layers: The first layer is word embedding containing word embeddings trained on three large-scale datasets. The second layer is Bi-LSTM used to learn context information. The third layer is CRF and Ab3p capturing the relationship among word part-of-speech labels In the model, we first train semantic word vectors on three corpora including PubMed, PMC OA full text and Wikipedia. The trained word vectors are then projected to the vectors trained on a standard NCBI corpus. The word vectors containing text semantic information are input to the Bi-LSTM layer. The NCBI training corpus is further used for Bi-LSTM parameter training. We optimize sequence parameters by the CRF layer. Finally, the model identifies disease abbreviations using an Ab3P module. The first layer is word embedding. The Skip-gram model based on Negative Sampling is used to train word embeddings on the three large-scale medical datasets. Based on a previous work [12], we extract the texts from PubMed, PMC Open Access (OA), and Wikipedia. A total of 22,120,000 abstract records from PubMed, 672,000 full-texts from PMC OA, and 3,750,000 articles from Wikipedia are retrieved by the end of 2013. The finally extracted texts as a corpus contain a total of 5.5 billion words. The corpus is then used as the training dataset for word embedding generation. The second layer is Bi-LSTM, which is used to learn context information. LSTM captures long distance information through a threshold unit, thus it can learn more semantic features through longer contextual information. 
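As a small illustration of Eqs. (4)–(5) before the architecture description continues, the sketch below computes the conditional probability of one BIO label sequence for a toy linear-chain CRF, with Z(x) obtained by brute-force enumeration; the emission and transition scores are made-up numbers standing in for the learned feature weights, not parameters estimated in this work.

```python
import itertools
import numpy as np

labels = ["B", "I", "O"]                      # BIO tag set used later in the paper
n_steps = 3                                   # toy sentence of three tokens

rng = np.random.default_rng(1)
# lambda_k * f_k(y_t, y_{t-1}, x_t) collapsed into two made-up score tables:
emission = rng.normal(size=(n_steps, len(labels)))        # score of label y_t at position t
transition = rng.normal(size=(len(labels), len(labels)))  # score of moving y_{t-1} -> y_t

def score(seq):
    """Unnormalised score: sum over t of the (emission + transition) features."""
    s = emission[0, seq[0]]
    for t in range(1, n_steps):
        s += transition[seq[t - 1], seq[t]] + emission[t, seq[t]]
    return s

# Z(x): brute-force sum over every possible label sequence (Eq. 5)
Z = sum(np.exp(score(seq)) for seq in itertools.product(range(len(labels)), repeat=n_steps))

y = (0, 1, 2)                                 # the sequence B, I, O
p_y_given_x = np.exp(score(y)) / Z            # Eq. (4)
print(f"P(y=B,I,O | x) = {p_y_given_x:.4f}")
```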
Using the Bi-LSTM structure can simultaneously learn the context information of preceding and following sentences. From our previous empirical studies, the Bi-LSTM can enhance entity classification performance. The third layer is CRF and Ab3p, which captures the relationship among word part-of-speech labels. We use NLTK toolkit [32], a widely used natural language processing tool, for part-of-speech labeling. In the CRF, the Viterbi algorithm is used to solve the global optimal sequence problem. Finally, the BIO method is used for NER annotation and the Ab3P is used to identify additional disease abbreviations. In general, a disease NER task can be regarded as a process of assigning named entity tags to words. A single named entity may consist of multiple words in order. Accordingly, we use the BIO method for sequenced-word labeling. Each word is marked with BIO labels. A word is tagged with a B label if it is at the beginning of a named entity. If the word is inside the entity but not at the beginning, it is tagged as I. Words that are not named entities are marked as O. The labels of named entities are mutually dependent. For example, an I-PERSON cannot appear after a B-LOCATION label. Therefore, the BIO labels cannot be tagged independently. We use a CRF method to calculate the possibility score of each label from the Bi-LSTM output. The objective function s(X,y), as shown in Eq. (6), is used to calculate the probability of each label. The higher the value, the higher probability of the predicted label to be chosen. $$ s\left(X,y\right)=\sum \limits_{i=1}^n{P^{sem}}_{i,{y}_i}+\sum \limits_{i=0}^n{A}_{y_i,{y}_{i+1}} $$ For an input sentence set X = (x1, x2, …, xn), Psem is a score matrix, which is the output of the bidirectional LSTM network containing the medical semantic features. Psem is of size n × k, where k is the number of different BIO labels and it is set to 3 in this paper. A is a matrix of transition scores and Ai, j represents the transition score from the BIO labeli to labelj. y0 and yn are the beginning and ending labels of a sentence, respectively. We use a softmax function p(y|X) to calculate the probability of sequence y from all possible label sequences, as shown in Eq. (7). $$ p\left(y|X\right)=\frac{\exp \left(s\left(X,y\right)\right)}{\sum_{\tilde{y}\in {Y}_X}\exp \left(s\left(X,\tilde{y}\right)\right)} $$ The final computation task is to find the point estimate y* of all possible outputs y such that the conditional log-likelihood probability P(y|X) is maximized, as shown in Eq. (8). $$ {y}^{\ast }=\arg \max \left(\log P\left(y|X\right)\right) $$ In the task of disease NER, disease abbreviations are often interfered by other non-disease abbreviations. For example, a disease name CT appearing in a clinical text may refer to Computed Tomography (non-disease) or Copper Toxicosis (Wilson disease). Thus, the identification of CT as Computed Tomography is incorrect. The abbreviation recognition is not effective using solely word embeddings generated by the NEG skip-gram training, since the disease abbreviations are easily conflicted with other types of non-disease abbreviations. Taking the same example, CT is expected to be classified as Copper Toxicosis (ID 215600 in OMIM (Online Mendelian Inheritance in Man)). 
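Returning to the abbreviation example just below, the following sketch shows how the sequence score of Eq. (6) can be decoded with the Viterbi algorithm to obtain the best BIO sequence of Eq. (8); the matrices P_sem and A here are random stand-ins for the Bi-LSTM emission scores and the learned transition scores, not the model's actual parameters.

```python
import numpy as np

labels = ["B", "I", "O"]
rng = np.random.default_rng(2)

n, k = 4, len(labels)                 # 4 tokens, 3 BIO labels
P_sem = rng.normal(size=(n, k))       # stand-in for the Bi-LSTM emission scores in Eq. (6)
A = rng.normal(size=(k, k))           # stand-in for the transition scores A_{y_i, y_{i+1}}

def viterbi(P_sem, A):
    """argmax_y s(X, y) from Eq. (8), via dynamic programming."""
    n, k = P_sem.shape
    best = P_sem[0].copy()                    # best score of any path ending in each label
    back = np.zeros((n, k), dtype=int)        # backpointers for path recovery
    for t in range(1, n):
        cand = best[:, None] + A + P_sem[t][None, :]   # previous label -> current label
        back[t] = cand.argmax(axis=0)
        best = cand.max(axis=0)
    path = [int(best.argmax())]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [labels[i] for i in reversed(path)]

print(viterbi(P_sem, A))   # e.g. ['B', 'I', 'O', 'O'], depending on the random scores
```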
However, the most similar vocabularies associated with the word embeddings are the following 5 ranked tuples (noncontrast CT, 0.8745), (MDCT ray, 0.8664), (Computed tomography, 0.8643), (non-contrast, 0.8621), and (unenhanced, 0.8505), where the first tuple element refers to the words relevant to CT and the second element is their similarity values. However, the similarity between CT and target word Copper Toxicosis is as low as 0.003, causing the difficulty in the identification of disease abbreviation Copper Toxicosis. To that end, we use Ab3P [33], available at http://www.ncbi.nlm.nih.gov/CBBresearch/Wilbur/, to identify disease abbreviations. Evident in previously reported results, Ab3P has an F1 score of 0.9 and 0.894 ​​on the Medstract corpus and the MEDLINE annotation set, respectively. It defines short form (SF) as abbreviations and long form (LF) as the full representations of the abbreviations. Ab3P uses relaxed length restrictions and tried to find the best LF candidates by searching for the most reliable strategy out of seventeen strategies. For example, strategy FC denotes that a SF character matches the 1st character of a word in LF. Strategy FCG denotes that a SF character matches the character following a non-alphanumeric and non-space character in LF. The BIO labels for the identified abbreviations by SBLC and Ab3P are SetSBLC and SetAb3P, respectively. The final label sets are computed asSetSBLC ∪ SetAb3P. If there is no identification output for an abbreviation using SBLC, the identified label by Ab3P is applied as the final result. In cases the identified labels from SBLC and Ab3P are different, the labels by Ab3P are taken as the correct identification. In this way, Ab3P in identifying abbreviations of disease named entities is used to supply the SBLC, thus improving the overall NER performance. We use a publicly available dataset, the NCBI disease corpus [2], to evaluate the performance of the proposed SBLC model. The dataset is developed and annotated by the research groups from American National Center for Biotechnology Information (NCBI) and American National Institutes of Health (NIH). It has been frequently used in disease NER tasks [3, 22, 24]. The dataset contains 793 article abstracts from PubMed, and includes over 6000 sentences and 2136 unique disease concepts. The dataset is manually annotated by 14 persons having medical informatics research backgrounds and medical text annotation experiences. The dataset consists of three sub-datasets: a training data set (593 texts), a development data set (100 texts), and a test data set (100 texts). Detailed statistics information of the NCBI dataset is shown in Table 1. Table 1 The statistics of the NCBI dataset for disease NER To evaluate the effectiveness of the SBLC, the following 9 baseline methods are used in performance comparison: Dictionary look-up method [2]. It uses Norm from the SPECIALIST lexical tools to identify disease names in the MEDIC lexicon. cTAKES [15]. The cTAKES NER component implements a dictionary look-up algorithm within a noun-phrase look-up window. The dictionary is a subset of UMLS, including SNOMED CT and RxNORM concepts guided by extensive consultations with clinical researchers and practitioners. Each named entity is mapped to a concept from the terminology. The cTAKES is available at http://ctakes.apache.org/. In the comparison, we use the latest version cTAKES 4.0. MetaMap [14]. MetaMap is based on lexical look-up to identify the UMLS Metathesaurus concepts in biomedical texts. 
In the experiment, we use MetaMap MEDIC filtering to restrict output results to disease names. The Inference Method [2]. It tries to link diseases to their corresponding medical lexical entries. It designs string matching rule combinations that map annotated strings to standard disease dictionaries. The method was tested by the manually annotated AZDC disease corpus and the PubMed abstract texts. DNorm [3]. The method is based on pairwise learning to rank (pLTR), which has been successfully applied to large optimization problems in information retrieval. It learns similarities between mentions and concept names, including synonymy and polysemy. CRF + UMLS, CRF + CMT, CRF + MeSH [34]. These are several hybrid combination strategies involving CRF and UMLS, CRF and Convergent Medical Terminology (CMT), as well as CRF and Medical Subject Headings (MeSH). C-Bi-LSTM-CRF [34]. It extracts the prefix and suffix information for each word at the character-level in training text. The method consists of three layers. The first layer is a character-based Bi-LSTM layer designed to learn character-level expressions of words. The second layer is a word-based Bi-LSTM layer. The third layer is a CRF layer, which captures the relations among labels. TaggerOne [22]. This method is developed by the National Center for Biotechnology Information, USA. It uses a semi-Markov structured linear classifier for NER and normalization, simultaneously performs NER and normalization during training and prediction. DNER [24]. Based on a deep learning method Bi-RNN, this method recognizes named entities using a support vector machine classifier. Dictionary matching and vector space model based normalization method are used to align the recognized mention-level disease named entities in MeSH. We further analyze the functional characteristics of all the baseline methods in terms of using "dictionary look-up", "disease name normalization", "word embedding", "LSTM", and "CRF", as shown in Table 2. "Y" means that a method contains a specific function and "N" means not. As can be seen in the table, most of the methods use disease name normalization approach and half of them use CRF. Only SBLC and C-Bi-LSTM-CRF use LSTM. SBLC is the only method that uses word embedding and it does not rely on dictionary look-up nor disease name normalization. Table 2 Parameter combination comparison Evaluation metrics We use three widely used evaluation metrics, precision, recall and F1-score, in disease NER studies [2, 3, 24, 34, 35] and other types of NER studies [23, 25, 31]. There are four possible outcomes for an instance in a testing data: An instance will be classified as a disease when it is truly a disease (true positive, TP); it will be classified as a disease when it is actually a non-disease (false positive, FP); it will be classified as a non-disease when it is actually a disease (false negative, FN); or it will be classified as a non-disease and it is truly a non-disease (true negative, TN). Based on these 4 possible outcomes, precision, recall and F1-score are defined as follows: Precision: the proportion of instances that are correctly labeled as diseases among those labeled as diseases. $$ Precision=\frac{TP}{TP+ FP} $$ Recall: the proportion of disease instances that are correctly labeled. $$ Recall=\frac{TP}{TP+ FN} $$ F1 score: the harmonic mean of precision and recall. $$ F 1=\frac{2\times Precision\times Recall}{Precision+ Recall} $$ Parameter tuning In SBLC, there are a number of parameters. 
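The parameter-tuning discussion resumes immediately below; first, a minimal sketch of the precision, recall and F1 definitions just given, computed from mention-level TP/FP/FN counts. The counts are invented for illustration and merely chosen so that the output lands near the headline figures reported later.

```python
def precision_recall_f1(tp: int, fp: int, fn: int):
    """Mention-level metrics exactly as defined above."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if precision + recall else 0.0
    return precision, recall, f1

# Invented counts for illustration only (not the study's confusion matrix)
p, r, f1 = precision_recall_f1(tp=820, fp=127, fn=136)
print(f"precision={p:.3f} recall={r:.3f} F1={f1:.3f}")
```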
In the parameter tuning process, we try different combinations of the parameters and record the corresponding performances in terms of F1 scores based on the training dataset. Eventually, we obtain a list of optimized parameter values, as shown in Table 3. Table 3 The optimized parameter settings of the LSTM network In addition, the increase of the hidden layer dimension of Bi-LSTM network may lead to high computational complexity. To optimize the network layers, we have tried different dimensions of hidden layers ranging from 50 to 200 incrementally, with a step of 50, to test the performance of the Bi-LSTM network on the training dataset. From the result shown in Table 4, the F1 score is 0.768 using 50 dimensions of hidden layers and is increased to 0.802 using 100 dimensions of hidden layers. However, the F1 score drops to 0.753 and 0.768 when the dimension number of the hidden layers is increased to 150 and 200, respectively. In order to have a lower computational complexity, we select 100 as the best dimension number of hidden layers for the Bi-LSTM network. Table 4 Effects of dimension settings of hidden layer dimension in Bi-LSTM The number of word embedding dimensions may also affect the method performance and computational complexity. Similarly, we set the word embedding dimensions from 50 to 200, with a step of 50. From the result shown in the Table 5, the highest F1 score is 0.862 when the dimension equals to 200. Consequently, we use 200, which is also commonly used in many other NER tasks as the best dimension setting in word embedding generation. Table 5 Effects of different parameter settings of word embedding dimensions During word embedding training, different training data sources may affect the quality of generated word embedding. We use three datasets: 1) A PubMed dataset composed of 22,120,000 paper abstracts. 2) A PMC dataset containing 672,000 full-text publications, and 3) A Wikipedia dataset containing 3,750,000 articles. We test the performance of disease NER using different combinations of the datasets. As shown in Table 6, with respect to F1 score, using the PubMed (abstract) and the PMC (full text) separately achieve an F1 score of 0.843 and 0.861, respectively. Using the PubMed (abstract) + PMC (full text) obtains the best F1 performance. Table 6 Performance comparison using different combinations of external training datasets From the result, Wikipedia is not effective on both independent usage and combination. This might be caused by our incomplete Wikipedia training dataset, since the dataset contained only part of disease named entries and some disease names were not being covered. Moreover, Wikipedia is not a specialized medical corpus thus much non-medical content were involved. The reason was also reported by [36] similarly. We therefore use the combination of the PubMed (abstract) and the PMC (full text) as the external datasets for word embedding pre-training. In order to verify the robustness of the proposed SBLC model, we evaluate the performance using different sizes of the test dataset increasing from 10 to 100 abstracts with a step of 10. We apply a bootstrap sampling method on the test data set using put-back sampling method for 100 times. After that, we assess the statistical significance of F1 scores by computing confidence intervals at the 95% level. In each round, five different strategies by setting different SBLC parameters are used for comparison. 
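As a companion to the bootstrap procedure just described (repeated resampling with replacement and 95% confidence intervals), here is a minimal sketch of a percentile bootstrap over per-abstract F1 scores; the per-document scores are synthetic placeholders rather than the study's actual outputs, and the comparison of the five strategies continues below.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic per-abstract F1 scores standing in for real system output
doc_f1 = rng.normal(loc=0.86, scale=0.05, size=100).clip(0, 1)

def bootstrap_ci(scores, n_boot=100, alpha=0.05):
    """Percentile bootstrap CI of the mean, resampling documents with replacement."""
    means = [rng.choice(scores, size=len(scores), replace=True).mean()
             for _ in range(n_boot)]
    lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    return float(np.mean(scores)), float(lo), float(hi)

mean_f1, lo, hi = bootstrap_ci(doc_f1)
print(f"mean F1 = {mean_f1:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```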
As mentioned above, SBLC was the method with the full functions; SBLC(− semantic word embedding) represented SBLC without semantic word embedding layer; SBLC(− word embedding) represents the SBLC without word embedding in the training process; SBLC(− Bi-LSTM) denoted SBLC without Bi-LSTM network; and SBLC(− CRF) denoted the SBLC without the CRF layer. Without Bi-LSTM, the model acquires the widest range of variability and poor robustness. It shows that Bi-LSTM contributes a lot to the robustness of the SBLC model. The performances of the models without semantic word embedding nor word embedding are close to each other. The robustness of the SLBC model is generally smoother, compared to the two methods. The F1 scores using different numbers of testing texts are shown in Fig. 2. The performance of SBLC using different numbers of testing texts. The lines are the averaged F1 for 100 times testing and the shaded areas are at the 95% confidence level In addition, we test the performance of SBLC by comparing it with different strategies considering contributions from four parts: Ab3p, CRF, Bi-LSTM, Word Embedding. The comparison results are shown in Table 7. CRF uses the CRF layer structure only for NER. The precision, recall, F1 score is 0.701, 0.675 and 0.688. Bi-LSTM uses the Bi-LSTM layer structure only. The precision, recall, F1 score is 0.600, 0.425 and 0.498. While adding Ab3p on the basis of CRF, Ab3p + CRF obtains a precision and a recall of 0.726 and 0.689, respectively. By adding abbreviations on the basis of Bi-LSTM, Ab3p + Bi-LSTM obtains a precision and a recall of 0.645 and 0.452, respectively. Utilizing both CRF and Bi-LSTM layers, Bi-LSTM + CRF achieves a precision, a recall, and an F1 score of 0.806, 0.800 and 0.803, which improves the overall performance. Combining Ab3p, Bi-LSTM and CRF layers, Ab3p + Bi-LSTM + CRF improves the precision, recall, and F1 score to 0.813, 0.808 and 0.811. Combining Word Embedding and Bi-LSTM layers, Word Embedding + Bi-LSTM achieves a precision, a recall, and an F1 score of 0.675, 0.501 and 0.575. Word Embedding + CRF obtains a precision, a recall, and an F1 score of 0.821, 0.772 and 0.796. Combining Word Embedding, Bi-LSTM and CRF layers, Word Embedding + Bi-LSTM + CRF obtains a precision, a recall, and an F1 score of 0.842, 0.828 and 0.835. Ab3p + Word Embedding + Bi-LSTM, by combining Ab3p, Word Embedding and Bi-LSTM layers, obtains a precision, a recall, and an F1 score of 0.613, 0.689 and 0.648. Combining Ab3p, Word Embedding and CRF layers, Ab3p + Word Embedding + CRF obtains a precision, a recall, and an F1 score of 0.846, 0.786 and 0.815. Ab3p + Word Embedding + Bi-LSTM + CRF (SBLC) obtains the highest precision, recall, and F1 score of 0.866, 0.858 and 0.862. Table 7 Effects of different parameter settings and the final optimized result The fourth experiment compares the performances of the proposed SBLC model with those of the above mentioned 9 baseline methods. For MetaMap, we further consider the usage of two filtering strategies: semantic type filtering and MEDIC filtering. For TaggerOne, we further use normalization leveraging external resource. Comparison results are shown in Table 8. The widely-used cTAKES obtain an F1 score of 0.506 and the MetaMap increased the F1 score to 0.559. The inference method acquires an F1 score of 0.637. The three combinations of CRF strategies CRF + CMT, CRF + MeSH and CRF + UMLS obtain F1 scores of 0.735, 0.746 and 0.756. 
The state-of-the-art methods DNorm and TaggerOne, both developed by NIH, achieve relatively higher F1 scores as 0.798 and 0.829, respectively. The deep learning-based method C-LSTM-CRF obtains an F1 of 0.802, while the recent DNER has an F1 score of 0.843. Our SBLC achieves the highest F1 score of 0.862, outperforming all the baseline methods. The comparison results show the effectiveness of our proposed SBLC method. Table 8 The performance comparison of our SBLC model with the baseline methods on the same NCBI test dataset We analyze all the error cases from our SBLC method, and summarize the error cases as the following three types. 1) The complex compound words cause difficulties in disease NER. For example, the disease name "insulin-dependent diabetes mellitus" (MeSH ID D003922) has a joint mark "-" but SBLC can recognize "diabetes mellitus" only. This might be due to the insufficient amount of training data, which cause the incorrect identification of complex disease named entities and compound words. 2) Long disease mentions might cause NER failures. For example, "demyelination of the cerebral white matter" (D003711) and "disorder of glycoprotein metabolism" (DiseaseClass, D008661) are two long disease names failed to be recognized by SBLC. We further identify the length of these error cases with long disease names, and find that the unidentified disease names usually contain more than 3 words. This is a challenge for disease NER, particularly with the appearance of more and more disease names. 3) Some rare disease names appear in the testing dataset only. For example, Non-Hodgkins lymphoma (D008228) is not appeared in the training dataset, thus it is missed in the NER on the testing dataset. Medical semantic word embedding In a medical NER task, word is a fundamental unit and word semantics is proved to be useful. The trained semantics could be further enhanced as a feature for higher-level neural network training. For example, the disease NER result on a PubMed article (PID 9949209) in the testing dataset is shown in Fig. 3. The words with colored background in purple, blue, gray and yellow denote the four identified unique disease mentions. These mentions are further normalized to standard concepts marked with associated rectangle boxes containing unique concept id. The annotations of the identified disease named entities In SBLC, NEG skip-gram is used to train word embeddings and the trained embeddings could reflect the semantic distances among the learned disease concepts. For example, based on the same example above, SBLC calculates the similarities among all the identified disease concepts using the Cosine similarity measure. The results are reported in Table 9. Words in different capitalization and tense, or synonymy are identified and assigned with a similarity weights. In order to view the similarity among the identified disease concepts, we map the concepts to a two-dimensional space, as shown in Fig. 4. The closer the words, the more semantically similar they become. For example, the closest semantics to the word "liver" are "kidney", "hepatic", "pancreas", "kidneys", and "livers". Table 9 The semantic similarity among the identified disease concepts using Cosine similarity measure The example word embedding projected to a two-dimensional space In this paper, we proposed a new deep learning-based model named as SBLC. The model utilized semantic word embeddings, bidirectional LSTM, CRF, and Ab3P. 
Based on a standard NCBI disease dataset, we compared the SBLC with 9 state-of-the-art methods including MetaMap, cTAKES, DNorm, and TaggerOne. The results showed that the SBLC model achieved the best performance, indicating the effectiveness of SBLC in disease named entity recognition. Bi-LSTM: Bidirectional Long Short Term Memory networks CMT: Convergent Medical Terminology CRF: Conditional Random Fields NER: Named Entity Recognition UMLS: Unified Medical Language System A. Névéol, J. Li, and Z. Lu. Linking multiple disease-related resources through UMLS. ACM SIGHIT International Health Informatics Symposium. New York; 2012. p. 767–772. Dogan RI, Leaman R, Lu Z. NCBI disease corpus: a resource for disease name recognition and concept normalization. J Biomed Inform. 2014;47:1–10. Leaman R, Doğan RI, Lu Z. DNormL: Disease name normalization with pairwise learning to rank. Bioinformatics. 2013;29(22):2909–17. Meystre SM, Savova GK, Kipper-Schuler KC, Hurdle JF, et al. Extracting information from textual documents in the electronic health record: a review of recent research. IMIA Yearbook. 2008;47(Suppl 1):128–44. Eltyeb S, Salim N. Chemical named entities recognition: a review on approaches and applications. J Cheminformatics. 2014;6(1):17. Goulart RRV, de Lima VLS, Xavier CC. A systematic review of named entity recognition in biomedical texts. J Braz Comput Soc. 2011;17(2):103–16. Meystre SM, Friedlin FJ, South BR, Shen S, Samore MH. Automatic de-identification of textual documents in the electronic health record: a review of recent research. BMC Med Res Methodol. 2010;10(1):70. Rzhetsky A, Seringhaus M, Gerstein M. Seeking a new biology through text mining. Cell. 2008;134(1):9–13. T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. in Proc. of the 26th International Conference on Neural Information Processing Systems. Volume 2, USA. 2013. p. 3111–3119. Hochreiter S, Schmidhuber J. Long short-term memory. Neural Comput. 1997;9(8):1735–80. J. Lafferty, A. McCallum, and F. C. Pereira. Conditional random fields: probabilistic models for segmenting and labeling sequence data. In: the Eighteenth International Conference on Machine Learning. 2001; pp. 282–289. S. Pyysalo, F. Ginter, H. Moen, T. Salakoski, and S. Ananiadou. Distributional semantics resources for biomedical text processing. In The 5th international symposium on languages in biology and medicine (LBM 2013), Tokyo, Japan 2013. Bodenreider O. The unified medical language system (UMLS): integrating biomedical terminology. Nucleic Acids Res. 2004;32(suppl 1):267–70. A. R. Aronson. Effective mapping of biomedical text to the UMLS Metathesaurus: the MetaMap program. In: Proc of the AMIA Symposium 2001; p.17. Savova GK, et al. Mayo clinical text analysis and knowledge extraction system (cTAKES): architecture, component evaluation and applications. J Am Med Inform Assoc. 2010;17(5):507–13. Chiang J-H, Lin J-W, Yang C-W. Automated evaluation of electronic discharge notes to assess quality of care for cardiovascular diseases using medical language extraction and encoding system (MedLEE). J Am Med Inform Assoc. 2010;17(3):245–52. L. M. Christensen, P. J. Haug, and M. Fiszman. MPLUS: a probabilistic medical language understanding system. In Proc of the ACL-02 workshop on Natural language processing in the biomedical domain 2002; vol. 3, pp. 29–36. Denny JC, Smithers JD, Miller RA, Spickard A III. 
Understanding' medical school curriculum content using KnowledgeMap. J Am Med Inform Assoc. 2003;10(4):351–62. Zeng QT, Goryachev S, Weiss S, Sordo M, Murphy SN, Lazarus R. Extracting principal diagnosis, co-morbidity and smoking status for asthma research: evaluation of a natural language processing system. BMC Med Inform Decis Mak. 2006;6(1):30. Lipscomb CE. Medical subject headings (MeSH). Bull Med Libr Assoc. 2000;88(3):265. Hamosh A, Scott AF, Amberger JS, Bocchini CA, McKusick VA. Online Mendelian inheritance in man (OMIM), a knowledgebase of human genes and genetic disorders. Nucleic Acids Res. 2005;33(suppl_1):514–7. Leaman R, Lu Z. TaggerOne: Joint named entity recognition and normalization with semi-Markov models. Bioinformatics. 2016;32(18):2839–46. Lample G, Ballesteros M, Subramanian S, Kawakami K, Dyer C. Neural architectures for named entity recognition. In: Proc. of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. San Diego: Proc of the Human Language Technology Conference and the Annual Meeting of the North American Chapter of the Association for Computational Linguistics; 2016. p. 260–70. Wei Q, Chen T, Xu R, He Y, Gui L. Disease named entity recognition by combining conditional random fields and bidirectional recurrent neural networks. Database (Oxford). 2016:baw140. Gridach M. Character-level neural network for biomedical named entity recognition. J Biomed Inform. 2017;70:85–91. Kulick S, et al. Integrated annotation for biomedical information extraction. In: Proc of the Human Language Technology Conference and the Annual Meeting of the North American Chapter of the Association for Computational Linguistics; 2004. p. 61–8. Hinton GE, Mcclelland JL, Rumelhart DE. Distributed representations, parallel distributed processing: explorations in the microstructure of cognition, vol. 1. Cambridge, MA: foundations. MIT Press; 1986. Bengio Y, Courville A, Vincent P. Representation learning: a review and new perspectives. IEEE Trans Pattern Anal Mach Intell. 2013;35(8):1798–828. T. Mikolov, K. Chen, G. Corrado, and J. Dean. Efficient estimation of word representations in vector space. ArXiv Prepr. 2013; ArXiv13013781. Gutmann M, Hyvärinen A. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In: Proc. of the Thirteenth International Conference on Artificial Intelligence and Statistics; 2010. p. 297–304. Li K, et al. Hadoop recognition of biomedical named entity using conditional random fields. IEEE Trans. Parallel Distrib Syst. 2015;26(11):3040–51. Bird S. NLTK: the natural language toolkit. In: Proc. of the COLING/ACL on interactive presentation sessions; 2006. p. 69–72. Sohn S, Comeau DC, Kim W, Wilbur WJ. Abbreviation definition identification based on automatic precision estimates. BMC Bioinformatics. 2008;9:402–11. Xu K, Zhou Z, Hao T, Liu W. A bidirectional LSTM and conditional random fields approach to medical named entity recognition. Adv Intell Syst Comput. 2018;639:355–65. Wei CH, Leaman R, Lu Z. SimConcept: a hybrid approach for simplifying composite named entities in biomedical text. IEEE J Biomed Health Inform. 2015;19(4):1385–91. Chiu B, Crichton G, Korhonen A, Pyysalo S. How to train good word Embeddings for biomedical NLP. In: Proc. of the 15th Workshop on Biomedical Natural Language Processing, Berlin, Germany; 2016. p. 166–74. 
Publication of the article is supported by grants from the National Natural Science Foundation of China (61772146), Guangdong Innovative Research Team Program (2014ZT05G157), Guangzhou Science Technology and Innovation Commission (201803010063), Natural Science Foundation of Guangdong Province (2018A030310051), and the Science and Technology Plan of Guangzhou (201804010296). The datasets used and analyzed during the current study are available from https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/. About this supplement This article has been published as part of BMC Medical Informatics and Decision Making Volume 18 Supplement 5, 2018: Proceedings from the 2018 Sino-US Conference on Health Informatics. The full contents of the supplement are available online at https://bmcmedinformdecismak.biomedcentral.com/articles/supplements/volume-18-supplement-5. School of Computer Science and Technology, Guangdong University of Technology, Guangzhou, China Kai Xu & Wenyin Liu School of Information Science and Technology, Guangdong University of Foreign Studies, Guangzhou, China Zhanfan Zhou Educational Testing Service, Princeton, NJ, USA Tao Gong Center for Linguistics and Applied Linguistics, Guangdong University of Foreign Studies, Guangzhou, China School of Computer Science, South China Normal University, Guangzhou, China Tianyong Hao Kai Xu Wenyin Liu KX led the method design and experiment implementation. ZFZ took charge of data processing and labeling. TYH, TG, and WYL provided theoretical guidance, result review, and paper revision. All authors read and approved the final manuscript. Correspondence to Tianyong Hao or Wenyin Liu. Xu, K., Zhou, Z., Gong, T. et al. SBLC: a hybrid model for disease named entity recognition based on semantic bidirectional LSTMs and conditional random fields. BMC Med Inform Decis Mak 18 (Suppl 5), 114 (2018). https://doi.org/10.1186/s12911-018-0690-y Biomedical informatics
http://www.nyu.edu/ Public Documents (52) Modeling orientation detection Hormet Yiltiz Let θ₁, θ₂…θN ∈ [0, 2π) be the preferred orientations of a network of N neurons with circular Normal tuning curves and Poisson firing rates. Model input is a target orientation (θt, ct) superimposed on a mask orientation (θm, cm), where a pair (θ, c) represents an orientation θ at contrast c. Tuning response of the ith neuron to a set of orientations (θj, cj): $$\lambda_i := \sum_j g \cdot c_j e^{2\kappa \cos(\theta_j - \theta_i)},$$ where g is the gain of the network. The Poisson response is a random sample from the Poisson distribution with λ as the mean firing rate: $$r_i \sim \mathrm{Poisson}(\lambda_i)$$ The Poisson response acts as the input drive to divisive normalization, giving the normalized response: $$R_i^{(t)} = \frac{\lfloor r_i \rfloor^2}{\sigma^2 + \sum_j W_{ij}^{(t)} \lfloor r_j \rfloor^2}$$ Then the weights are updated based on the Hebbian learning rule: $$\mathbf{W}^{(t)} = \mathbf{W}^{(t-1)} + \alpha \left(\mathbf{R}\mathbf{R}^T - \mathbf{C}\right)$$ where α is the Hebbian learning rate and C is the Hebbian learning homeostasis target. This feedforward network runs until tsat such that network responses are stable. We initialize the network with initial weights W(0) and a homeostasis target C. The initial weights W(0) form a uniform matrix scaled by the network's mean tuning response to gratings that are at full contrast and at each neuron's preferred orientation. That is equivalent to the tuning response of a single neuron to a full-contrast grating at this neuron's preferred orientation. Thus: $$\mathbf{W}^{(0)} := \frac{\mathbf{1}_{NN}}{\lambda_i(\theta = \theta_i, c=1)}$$ The homeostasis target is set to the network's mean response product over some period t₀ to tmax for a set of orientations θ₁, θ₂…θk equally spaced within [0, 2π), normalized by the size of the network: $$\mathbf{C} := \overline{\mathbf{R}}(\theta_1, \theta_2 \ldots \theta_k)\,\overline{\mathbf{R}}(\theta_1, \theta_2 \ldots \theta_k)^T / N,$$ $$\overline{R_i} := \langle R_i \rangle(\theta_j) = \frac{\sum_{t=t_0}^{t_{max}} R_i(\theta_j)}{t_{max}},$$ where Ri(θj) is the normalized (but not Hebbian-weight-learned) response of the ith neuron to the orientation θj from the set of equally spaced orientations. Furthermore, since the input to the time average of the normalized responses is a Poisson process, the time average asymptotes to the Poisson rate λ. Thus, we could replace the time average by the asymptote for better precision and computational efficiency. That is: $$R_{ij} := \lim_{t_{max} \to +\infty} \overline{R_i} = \lim_{t_{max} \to +\infty} \langle R_i \rangle(\theta_j) = \lim_{t_{max} \to +\infty} \frac{\sum_{t=t_0}^{t_{max}} R_i(\theta_j)}{t_{max}} = \lim_{t_{max} \to +\infty} \frac{\sum_{t=t_0}^{t_{max}} \frac{\lfloor r_i(\theta_j) \rfloor^2}{\sigma^2 + \sum_k W_{ik} \lfloor r_k(\theta_j) \rfloor^2}}{t_{max}} = \frac{\lfloor \lambda_i(\theta_j) \rfloor^2}{\sigma^2 + \sum_k W_{ik} \lfloor \lambda_k(\theta_j) \rfloor^2}$$ Here is a simple model readout that takes the normalized response of the neuron whose preferred orientation θi (pre-learning) matches the target orientation θt: $$d := R_i, \quad \theta_i = \theta_t$$ Alternatively, we could normalize that w.r.t. the total network response: $$d := \frac{R_i}{\sum_j R_j}, \quad \theta_i = \theta_t$$ Or we can weight the network responses by their pre-learning sensitivity to the target: $$d := \mathbf{w}^T \mathbf{R},$$ $$ w_i := \lambda_i(\theta_t, c = 1)$$ City of New Orleans Emergency Medical Services Resource Optimization Matt Sloane NYU CENTER FOR URBAN SCIENCE AND PROGRESS CAPSTONE PROJECT SQUID-Bike to digitally measure citywide Bike Lane Infrastructure Varun Adibhatla Abstract / Executive Summary Urban bicycle usage has gained in importance across many cities with progressive transportation policies. According to \cite{hu_more_2017}, as of this paper's publication, "there are more than 450,000 daily bike trips in New York City, up from 170,000 in 2005, an increase that has outpaced population and employment growth". About one in five bike trips is by a commuter.
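A minimal numerical sketch of the orientation-detection model described above (tuning response, Poisson sampling, divisive normalization, and one Hebbian weight update); the parameter values N, g, κ, σ and α are illustrative assumptions rather than values from the original project, and the homeostasis target is computed in a simplified pre-learning pass.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameter values (assumptions, not taken from the original project).
N, g, kappa, sigma, alpha = 64, 10.0, 2.0, 1.0, 0.01
theta_pref = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)  # preferred orientations

def rate(inputs):
    """Mean Poisson rate lambda_i for a list of (orientation, contrast) pairs."""
    lam = np.zeros(N)
    for theta_j, c_j in inputs:
        lam += g * c_j * np.exp(2.0 * kappa * np.cos(theta_j - theta_pref))
    return lam

def normalize(drive, W):
    """Divisive normalization: R_i = drive_i^2 / (sigma^2 + sum_j W_ij drive_j^2)."""
    return drive ** 2 / (sigma ** 2 + W @ drive ** 2)

# Initial weights: uniform matrix scaled by a neuron's response to a
# full-contrast grating at its own preferred orientation.
W = np.ones((N, N)) / (g * np.exp(2.0 * kappa))

# Simplified homeostasis target from pre-learning responses to equally spaced orientations.
probes = [rate([(t, 1.0)]) for t in np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)]
R_bar = np.mean([normalize(lam, W) for lam in probes], axis=0)
C = np.outer(R_bar, R_bar) / N

# One simulated time step: target plus mask, Poisson spiking, normalization, Hebbian update.
lam = rate([(np.pi / 4, 0.5), (np.pi / 2, 0.5)])    # (theta_t, c_t) and (theta_m, c_m)
r = rng.poisson(lam).astype(float)
R = normalize(r, W)
W = W + alpha * (np.outer(R, R) - C)                # W <- W + alpha * (R R^T - C)

# Simple read-out: neuron whose preferred orientation matches the target orientation.
print(R[np.argmin(np.abs(theta_pref - np.pi / 4))])
```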
Biking serves as an important transportation option to many around the world and we argue that the need for effective bicycle lane maintenance should be a top concern for municipalities. Conventionally, street maintenance is an expensive, often inaccurate, and time-intensive process that either uses subjective data prone to error or uses very precise data that is very expensive to collect citywide. There exists a clear need for cities to adopt a cost-effective and data-intensive maintenance practice that can scale to the entire city and be performed frequently. These needs have been explored through the Street Quality Identification (SQUID) project \cite{adibhatla_digitizing_2016} to develop standardized methods for digital street inspection. This work extends the SQUID project and repurposes it for citywide bike lane measurements. In this paper, we describe the development of a data and analytics framework to measure bicycle lane quality using street imagery and accelerometer data obtained from an open source smartphone application, OpenStreetCam (OSC) \cite{telenav_openstreetmap_nodate}. This framework can be used in crowdsourced or situated settings with the overall purpose being the standardized measurement of citywide bike lane quality. Introduction In recent years, bike-based mobility in cities around the world has been growing rapidly. Compared to cars, bikes are cheaper, safer, more sustainable, allow for higher traffic flows, and require far less space for parking. Many city governments have invested in a permanent bike-sharing infrastructure and programs in an effort to improve transit conditions within the urban core. A Data-Driven Evaluation of Delays in Criminal Prosecution Hrafnkell Hjörleifsson ABSTRACT The District Attorney's office of Santa Clara County, California has observed long durations for their prosecution processes. It is interested in assessing the drivers of prosecutorial delays and determining whether there is evidence of disparate treatment of accused individuals in pre-trial detention and criminal charging practices. A recent report from the county's civil grand jury found that only 47% of cases from 2013 were resolved in less than a year, far less than the statewide average of 88%. We describe a visualization tool and analytical models to identify factors affecting delays in the prosecutorial process and any characteristics that are associated with disparate treatment of defendants. Using prosecutorial data from January through June of 2014, we find that the time to close the initial phase of prosecution (the entering of a plea), the initial plea entered, the type of court in which a defendant is tried and the main charged offense are important predictors of whether a case will extend beyond one year. Durations for prosecution are found not to differ significantly across racial and ethnic populations, and these characteristics do not appear as important features in our modeling to predict case durations longer than one year. Further, we find that, in this data, 81% of felony cases were resolved in less than one year, far greater than the value reported by the civil grand jury. WiFind: Analyzing Wi-Fi Density around NYCHA Housing Projects Charlie Mydlarz Hypertemporal Imaging: An alternative technology to monitor grid dynamics Victor Sette Gripp Context Motivation The motivation of this report is to explore the team's contribution to Dr.
Bianco's "Hypertemporal Imaging of NYC Grid Dynamics" proof-of-concept project, an alternative technology to monitor grid dynamics and energy consumption patterns, in contrast to a traditional approach which is based on in-situ monitoring of energy grids and buildings. By analysing the lights of the city landscape in images taken by a single camera, this approach can infer results similar to those obtained by sensors. In the traditional approach, in order to provide reliable and affordable energy distribution, cities have to monitor the health of the electric grid and energy consumption patterns. Those measurements are fundamental for providing good service during peak hours, guaranteeing that the electrical grid is working in a healthy condition, and supporting future plans for increasing demand as cities grow and citizens use more electrical equipment. Measurements of the electric grid are collected by a Phasor Measurement Unit (PMU), a synchrophasor measurement device that captures information about the voltage and phase angle of the system, which allows possible phase shifts to be identified and grid stability to be measured. Also, to access individualized energy consumption information, the deployment of smart meters is necessary: devices that have to be installed in buildings to report real-time energy consumption information. The deployment of sensors and equipment is expensive and not feasible for all cities worldwide. The overall unit cost of a PMU ranges from $40,000 to $180,000 (U.S. Department of Energy, 2009), and it is estimated that to monitor building energy consumption NYC will spend $1.5B for 1 million buildings during the next five years, values that can be prohibitive for many cities in the world. The hypertemporal imaging technology comes as an indirect, real-time, and affordable way to get electric grid health information and a non-intrusive and indirect way to achieve energy disaggregation and observe energy consumption patterns. In this context, this study will explore the expected improvement from using a new camera to capture images of the city landscape, together with a study of the cost and area covered by a single camera, estimating the ideal number of cameras needed to cover NYC. Previous Research \citet{bianco_hypertemporal_2016} showed a proof of concept that hypertemporal visible imaging can be used to monitor grid dynamics and identify phase changes in individual light sources from the city landscape. This technique relies on the fact that the United States grid provides electricity as an alternating current (AC) with a frequency of 60 Hz (some countries use a 50 Hz standard instead). The alternating current at 60 Hz induces a flickering twice as fast, at 120 Hz, in most of the lights in the city, including incandescent, halogen, and some fluorescent lights. LED and more modern fluorescent lights have a different behavior. Analyzing a signal with a frequency of 120 Hz would require at the very least a four times faster sampling rate, at 480 Hz, ideally eight times faster, which would require specialized and more expensive equipment. Since one of the main goals of developing this alternative technology is to provide an affordable way for cities in developing nations to monitor the dynamics of their electric grid, the equipment costs had to be kept low and, therefore, the solution was to use a liquid crystal shutter mounted at the lens aperture of the camera.
The shutter is then set to oscillate between the states 'open' and 'closed' at a frequency (119.75 Hz) close to the one corresponding to the flickering of the lights (120 Hz), which, in turn, down-converts the flicker to 0.25 Hz, the beat frequency given by Equation 1. PUI2016 Extra Credit Ozgur L. Akkas 2016 U.S. Election Exit Poll Results Modeling Xianbo Gao PUI2015 Extra Credit Project 2016 U.S. Election Exit Poll Results Modeling <Xianbo Gao, gaogxb, xg656> Abstract: Using PCA and Lasso regression to build a regression model for 2016 U.S. Election Exit Poll Results to find which factors contribute to the result and to what extent. Introduction: In this project, I aim to discover the main factors that influence the percentage of people voting for Trump and Clinton at the state level in the 2016 U.S. Election Exit Poll Result, how much each factor contributes to the percentage, and to build a model to fit the percentage of the voting result at the state level. Then I can explain why Trump won the election using the election exit poll results. Data: County-level election results and information about people provided by the United States Department of Agriculture Economic Research Service. Election results and information about people in Excel format provided by uselectionatlas.org. The data only have population figures for 2014. Besides, there is only information for 37 states, not all of them. There are 51 columns which are factors or variables. The names of these columns are codes which should be replaced by descriptions, so I rename these columns. I try to convert all the data into percentage format. 30 factors are or can be converted into percentages (such as the percentage aged under 18). The 21 factors that cannot be converted into percentages are normalized (such as mean time to work). After that, the data are aggregated to the state level by a weighted average based on the population of each county. The format of the data is shown below. PUI2016 Extra Credit Project Chunqing Xu Time Series Analysis of Beijing Air Pollution <Chunqing Xu, cx495, cx495> Vision Zero Crash Data Analysis Alexey Kalinin Abstract: The project is focused on exploring fatalities that occurred in New York City (NYC) among 3 major groups involved in traffic accidents between 2009 and 2016: pedestrians, bicyclists, and motor vehicle occupants (MVO). The project's results show that the overall trend in fatalities is declining, while trend analysis for each group shows that fatalities among bicyclists are increasing. Also, further analysis revealed where the highest number of fatalities occurred: for pedestrians, in the Lower East Side of downtown Manhattan, zip code 10002; for bicyclists, in East Harlem in uptown Manhattan, zip code 10029; for MVO, in East Flatbush, Brooklyn, zip code 11203. Original GitHub link for code: https://github.com/ak6129/PUI2016_ak6129/blob/master/ExtraCreditProject_ak6129/ExtraCreditsProject_ak6129.ipynb Property Tax Research in NYC [Report] Viola Zhong Student Name: Xinge Zhong (viola), Github account: xingezhong, NetID: xz1809 PUI2016 Extra Credit Project Report Yue Cai Impact of Uber on the traffic in Midtown Manhattan in New York City Nonie Mathur Note: Throughout this paper, 'taxis' include green and yellow taxis and 'other FHVs' include all the for hire vehicles other than Uber vehicles and taxis. Abstract: For Hire Vehicles have played an important role in New York City's transportation.
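As a numerical check of the down-conversion used in the hypertemporal-imaging project described above, the sketch below samples a 120 Hz flicker through a 119.75 Hz shutter modulation and recovers the 0.25 Hz beat; only the two frequencies are taken from the text, while the 30 frames-per-second camera rate and the sinusoidal waveforms are illustrative assumptions.

```python
import numpy as np

f_light, f_shutter = 120.0, 119.75   # flicker and shutter frequencies (Hz), from the text
fps = 30.0                            # assumed camera frame rate (illustrative)

# Record 20 seconds of the flicker seen through the oscillating shutter.
t = np.arange(0.0, 20.0, 1.0 / fps)
flicker = 0.5 * (1.0 + np.cos(2.0 * np.pi * f_light * t))
shutter = 0.5 * (1.0 + np.cos(2.0 * np.pi * f_shutter * t))
recorded = flicker * shutter

# The dominant slow component of the recorded signal sits at the 0.25 Hz beat frequency.
spectrum = np.abs(np.fft.rfft(recorded - recorded.mean()))
freqs = np.fft.rfftfreq(t.size, d=1.0 / fps)
print(abs(f_light - f_shutter), freqs[np.argmax(spectrum)])   # both are 0.25
```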
With the increasing number of platforms providing these services, the number of actors in the city's transportation network has increased, raising a wide range of concerns, including the role they play in the city's traffic congestion. This project was chosen in light of the debate between Uber and Mayor de Blasio. For my analysis, I used the 'Aggregate FHV Data' which was available on FiveThirtyEight's GitHub account; FiveThirtyEight has been analyzing the data for the same purposes. This data contains information on the number of pick ups per day by yellow and green taxis, Uber, Lyft and the other 'For Hire Vehicles' in Midtown Manhattan. For my analysis of the research question – does Uber have an impact on the traffic congestion of the city? – I performed a test of means (a Z test) to compare the mean of daily pick ups made by Uber vehicles to the mean of daily pick ups made by the other FHVs in Midtown Manhattan. With the calculated Z statistic, I rejected the null hypothesis, which indicated that Uber vehicles did not lead to traffic congestion in the city (at a significance level of 0.05). Introduction: According to an article published in the blog 'Hot Air' in August 2016, when Uber was launched in New York City in the year 2011, the taxi business in the city was booming, increasing the number of medallion licenses being issued. This led to an increase in the number of vehicles on the streets, resulting in traffic congestion. In Summer 2015, New York City Mayor Bill de Blasio raised his concerns about the increase in traffic congestion due to the increasing number of ride-hailing apps, the most popular of them being Uber. Mayor de Blasio decided to cap the number of Uber vehicles on the streets of the city, implying that the uncapped number of vehicles along with the number of taxis on the streets of the city may lead to 'urban gridlock' (FiveThirtyEight, October 2015). As a result, to study this further, in January 2016, the de Blasio administration released the 'For-Hire Vehicle Transportation Study' which highlighted that even though the number of Uber vehicles has increased in the city, they are not responsible for the increasing traffic congestion because they are replacing the yellow cabs. Similarly, a study done by FiveThirtyEight (a website involved in a number of poll analyses in the fields of politics, economics, sports, etc.) performed a similar statistical test and came up with the same conclusion as the report by the Mayor's administration. Based on the above-mentioned studies, I have attempted to answer the following research question: Does Uber have an impact on the traffic patterns in New York City? Null Hypothesis: The average number of Uber pick ups in a day on the streets in Midtown Manhattan is more than the average number of 'For Hire Vehicles' and taxi pick ups in a day on the streets in Midtown Manhattan. Alternate Hypothesis: The average number of Uber pick ups in a day on the streets in Midtown Manhattan is less than or equal to the average number of 'For Hire Vehicles' and taxi trips in a day on the streets in Midtown Manhattan. The significance level for this analysis is 0.05. To answer this question, I first specified my null and alternate hypotheses, followed by specifying the significance level. I decided to perform a Z test to answer this research question. The Z test compares the observed result with the expected distribution in units of its standard deviation. It tells us how many standard deviations from the mean an observation is, under the assumption of normality.
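A minimal sketch of the kind of two-sample Z test described above, comparing mean daily Uber pick ups with mean daily pick ups by taxis and other FHVs; the arrays below are made-up placeholder counts, not the FiveThirtyEight data.

```python
import numpy as np
from scipy import stats

# Placeholder daily pick-up counts (made up; not the FiveThirtyEight data).
uber = np.array([21000, 22500, 20800, 23000, 21900], dtype=float)
other = np.array([98000, 101000, 97500, 99000, 100500], dtype=float)  # taxis + other FHVs

# Two-sample Z statistic for the difference in means (large-sample approximation).
z = (uber.mean() - other.mean()) / np.sqrt(uber.var(ddof=1) / uber.size +
                                           other.var(ddof=1) / other.size)

# One-sided p-value for H0: mean Uber pick ups exceed mean pick ups by the others.
p_one_sided = stats.norm.cdf(z)
print(round(z, 2), p_one_sided)
```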
The logic behind using this test will be detailed further in the next section. Data: To answer my research question, I needed the following information: 1. the number of Uber pick ups; 2. the number of taxi pick ups; 3. the number of other For Hire Vehicle pick ups. Since Midtown Manhattan is one of the Central Business Districts of New York City, I realized that it would be a good area to analyze traffic in. I received data for the months of July, August and September 2014. I got all this information from FiveThirtyEight's GitHub account, which had this information along with other numbers such as: average trips per hour and day of week (Uber, Lyft and the other FHVs); Uber and Lyft pick ups per day within the Manhattan core, LaGuardia airport and JFK (2014); taxi pick ups per day within the Manhattan core, LaGuardia airport and JFK (2013 and 2014); change in daily Uber and Lyft trips in the Manhattan core (September 2014); change in daily yellow taxi trips in the Manhattan core (September 2013 compared with September 2014). However, I did not need this information to answer my research question, so I dropped these columns while data wrangling. Since the document had a lot of unfilled columns and rows, I dropped them all so that I could get a clean dataframe which was good for processing the data quickly. Methodology: To make the dataframe easy to understand, I made a new column in the dataframe which added the total number of FHVs (excluding Uber vehicles, yellow and green taxis) on the streets. Similarly, I combined both the green and yellow taxis. I also grouped the information given in groups of three (using groupby) – for the months of July, August and September – this gave me a clear look at the traffic patterns through the three months in summer 2014. I also plotted the total daily trips made by Uber, taxis and other For Hire Vehicles every day from July 1, 2014 to September 30, 2014 to look at the trends. Study of impact on mobility: The impact of construction sites on pedestrian traffic i... Ekaterina Levitskaya Ekaterina Levitskaya, github: el2666, NYU ID: el2666 Spatial Layouts of Playgrounds in New York City Kevin Han INTRODUCTION Our project is aimed at children growing up in NYC, investigating information on recreation facilities like playgrounds around zip code zones and residential neighborhoods. Through this project we want to supply information on the playgrounds within walking distance in neighborhood units, the amounts available for the average number of children living in communities, the crime rates around the playgrounds, the transportation situation, and the restaurants and schools around the playgrounds, for parents to consider when they plan to take children out. The Temporal and Weather Data Analysis on NYC Yellow Taxi Ridership Demands Le Xu Several studies have been done since the NYC Taxi & Limousine Commission released its detailed historical dataset covering over 1.1 billion individual taxi trips, from January 2009 through June 2016. Many data scientists have examined this dataset passionately, in order to discover this great city's neighborhoods, nightlife, airport traffic, and more. In this contribution, the likelihood of occurrence of long taxi trips during the day and night has been studied, as well as the relationship between weather and taxi demand. The present study has investigated NYC yellow taxi trips by looking at a two-month period of 2016 (January and June) based on temporal factors and weather conditions.
The results show that long trips were more likely to occur during the nighttime compared to the daytime, and that snow depth does greatly affect the demand for taxi trips, but precipitation does not display an evident correlation with the demand for taxi rides. Keywords: NYC, yellow taxi, data, demand Is the current disposition of LinkNYC kiosks contributing to the increasing internet... Santiago Carrillo The notion of a digital divide has been increasingly addressed by policy makers for the last two decades. In the year 2013 the Census Bureau included questions regarding internet access for the first time in the federal agency's history. The results of the survey highlighted the situation of hundreds of thousands of low-income Americans with no computer ownership and deficient access to broadband. For New York City, the American Community Survey (ACS) stressed estimates for its most vulnerable segments of the population located in all five boroughs. According to the Comptroller's Office's analysis of ACS results, 20% of the city's youth and 45% of its senior population lacked broadband at home. The deficient access was concentrated in the Bronx and Brooklyn with 34% and 30% of residents lacking internet access, respectively. Aiming to address this increasing divide, Mayor Bill de Blasio announced the LinkNYC program, a municipal Wi-Fi network that will eventually replace more than 7,000 phone booths with as many as 10,000 interactive kiosks with the capacity to provide New Yorkers with free high-speed internet within a 150-foot radius from each device. This project evaluates the spatial relationship between the current disposition of the self-funded LinkNYC kiosks (Links) and the areas of New York City hosting populations living below the poverty line. Specifically, this study was designed to examine whether or not low-income members of the community were more likely to be located far from Links, compared to high-income segments of the population. The study was done by utilizing American Community Survey population and internet use estimates for 2015 and 2014, respectively, and the LinkNYC locator provided by New York City's open data portal. Among the findings of this project, a strong presence of Links was identified in the borough of Manhattan, with 90% of total free Wi-Fi devices installed within community districts of higher median household income and the greatest number of internet subscriptions. Conversely, Brooklyn was the borough with lower-median-income households and the smallest number of internet subscriptions, and it had, at the time of the analysis, only two installed LinkNYC kiosks (representing less than 1% of the total installed Links). Findings indicated that the population below poverty was more likely to be located at a longer distance from their nearest LinkNYC kiosk than their higher-income neighbors. The limitations of the linear nature of the measurements and the results' implications for New York City's digital divide, however, will be explored in depth in the sections below.
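A small sketch of the kind of proximity calculation the LinkNYC study describes, finding the distance from each area centroid to its nearest kiosk; the coordinates and the use of projected metres are illustrative assumptions, not the study's actual data or method.

```python
import numpy as np
from scipy.spatial import cKDTree

# Illustrative projected coordinates in metres (made up; not the real LinkNYC or ACS data).
kiosks = np.array([[585000.0, 4511000.0], [586200.0, 4512500.0], [584300.0, 4509800.0]])
centroids = np.array([[585500.0, 4510500.0], [590000.0, 4515000.0]])  # e.g. area centroids

tree = cKDTree(kiosks)
dist, idx = tree.query(centroids)       # distance to, and index of, the nearest kiosk
for d, i in zip(dist, idx):
    print(f"nearest kiosk {i} at {d:.0f} m")
```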
The Extremely High Energy Cosmic Rays [PDF] Shigeru Yoshida,Hongyue Dai Physics , 1998, DOI: 10.1088/0954-3899/24/5/002 Abstract: Experimental results from Haverah Park, Yakutsk, AGASA and Fly's Eye are reviewed. All these experiments work in the energy range above 0.1 EeV. The 'dip' structure around 3 EeV in the energy spectrum is well established by all the experiments, though the exact position differs slightly. Fly's Eye and Yakutsk results on the chemical composition indicate that the cosmic rays are getting lighter over the energy range from 0.1 EeV to 10 EeV, but the exact fraction is hadronic interaction model dependent, as indicated by the AGASA analysis. The arrival directions of cosmic rays are largely isotropic, but interesting features may be starting to emerge. Most of the experimental results can best be explained with the scenario that an extragalactic component gradually takes over a galactic population as energy increases and cosmic rays at the highest energies are dominated by particles coming from extragalactic space. However, identification of the extragalactic sources has not yet been successful because of limited statistics and the resolution of the data. The influence of cosmic-rays on the magnetorotational instability [PDF] Fazeleh Khajenabi Physics , 2011, DOI: 10.1007/s10509-011-0829-0 Abstract: We present a linear perturbation analysis of the magnetorotational instability in the presence of the cosmic rays. Dynamical effects of the cosmic rays are considered by a fluid description and the diffusion of cosmic rays is only along the magnetic field lines. We show an enhancement in the growth rate of the unstable mode because of the existence of cosmic rays. But as the diffusion of cosmic rays increases, we see that the growth rate decreases. Thus, cosmic rays have a destabilizing role in the magnetorotational instability of the accretion discs. Origin and Propagation of Extremely High Energy Cosmic Rays [PDF] Pijushpani Bhattacharjee,Guenter Sigl Physics , 1998, DOI: 10.1016/S0370-1573(99)00101-5 Abstract: Cosmic ray particles with energies in excess of 10**(20) eV have been detected. The sources as well as the physical mechanism(s) responsible for endowing cosmic ray particles with such enormous energies are unknown. This report gives a review of the physics and astrophysics associated with the questions of origin and propagation of these Extremely High Energy (EHE) cosmic rays in the Universe. After a brief review of the observed cosmic rays in general and their possible sources and acceleration mechanisms, a detailed discussion is given of possible "top-down" (non-acceleration) scenarios of origin of EHE cosmic rays through decay of sufficiently massive particles originating from processes in the early Universe. The massive particles can come from collapse and/or annihilation of cosmic topological defects (such as monopoles, cosmic strings, etc.) associated with Grand Unified Theories or they could be some long-lived metastable supermassive relic particles that were created in the early Universe and are decaying in the current epoch. The highest energy end of the cosmic ray spectrum can thus be used as a probe of new fundamental physics beyond Standard Model. We discuss the role of existing and proposed cosmic ray, gamma-ray and neutrino experiments in this context. 
We also discuss how observations with next generation experiments of images and spectra of EHE cosmic ray sources can be used to obtain new information on Galactic and extragalactic magnetic fields and possibly their origin. Prospects for radio detection of extremely high energy cosmic rays and neutrinos in the Moon [PDF] J. Alvarez-Muñiz, E. Zas Physics , 2001, DOI: 10.1063/1.1398166 Abstract: We explore the feasibility of using the Moon as a detector of extremely high energy (>10^19 eV) cosmic rays and neutrinos. The idea is to use the existing radiotelescopes on Earth to look for short pulses of Cherenkov radiation in the GHz range emitted by showers induced just below the surface of the Moon when cosmic rays or neutrinos strike it. We estimate the energy threshold of the technique and the effective aperture and volume of the Moon for this detection. We apply our calculation to obtain the expected event rates from the observed cosmic ray flux and several representative theoretical neutrino fluxes. Extremely high energy cosmic rays and the Auger Observatory [PDF] Murat Boratav Physics , 1996, DOI: 10.1016/0920-5632(96)00300-3 Abstract: Over the last 30 years or so, a handful of events observed in ground-based cosmic ray detectors seem to have opened a new window in the field of high-energy astrophysics. These events have energies exceeding 5x10**19 eV (the region of the so-called Greisen-Zatsepin-Kuzmin spectral cutoff); they seem to come from no known astrophysical source; their chemical composition is mostly unknown; no conventional accelerating mechanism is considered as being able to explain their production and propagation to earth. Only a dedicated detector can bring in the high-quality and statistically significant data needed to solve this long-lasting puzzle: this is the aim of the Auger Observatory project around which a world-wide collaboration is being mobilized. Physics of Extremely High Energy Cosmic Rays [PDF] Xavier Bertou,Murat Boratav,Antoine Letessier-Selvon Physics , 2000, DOI: 10.1016/S0217-751X(00)00090-2 Abstract: Over the last third of the century, a few tens of events, detected by ground-based cosmic ray detectors, have opened a new window in the field of high-energy astrophysics. These events have macroscopic energies, unobserved sources, an unknown chemical composition and a production and transport mechanism yet to be explained. With a flux as low as one particle per century per square kilometer, only dedicated detectors with huge apertures can bring in the high-quality and statistically significant data needed to answer those questions. In this article, we review the present status of the field both from an experimental and theoretical point of view. Special attention is given to the next generation of detectors devoted to the thorough exploration of the highest energy ranges Cluster Analysis of Extremely High Energy Cosmic Rays in the Northern Sky [PDF] Y. Uchihori,M. Nagano,M. Takeda,M. Teshima,J. Lloyd-Evans,A. A. Watson Physics , 1999, DOI: 10.1016/S0927-6505(99)00119-X Abstract: The arrival directions of extremely high energy cosmic rays (EHECR) above $4\times10^{19}$ eV, observed by four surface array experiments in the northern hemisphere, are examined for coincidences from similar directions in the sky. The total number of cosmic rays is 92. A significant number of double coincidences (doublet) and triple coincidences (triplet) are observed on the supergalactic plane within the experimental angular resolution.
The chance probability of such multiplets from a uniform distribution is less than 1 % if we consider a restricted region within $\pm 10^{\circ}$ of the supergalactic plane. Though there is still a possibility of chance coincidence, the present results on small angle clustering along the supergalactic plane may be important in interpreting EHECR enigma. An independent set of data is required to check our claims. Top-Down Models and Extremely High Energy Cosmic Rays [PDF] O. E. Kalashev,V. A. Kuzmin,D. V. Semikoz Physics , 1999, Abstract: We developed numerical code for calculation of the extragalactic component of the spectra of leptons, nucleons and $\gamma$-rays resulting from ``top-down'' (non-acceleration) models for the case of uniform and isotropic source distribution. We explored two different classes of ``top-down'' scenarios: the wide earlier investigated class of X particles coming from collapse and/or annihilation of cosmic topological defects (such as cosmic strings, monopoles, etc.) and the models of super-heavy long-living X particles with lifetime of the order or much greater than the current Universe age. On Spectrum of Extremely High Energy Cosmic Rays through Decay of Superheavy Particles [PDF] Yūichi Chikashige,Jun-ichi Kamoshita Abstract: We propose a formula for flux of extremely high energy cosmic rays (EHECR) through decay of superheavy particles. It is shown that EHECR spectrum reported by AGASA is reproduced by the formula. The presence of EHECR suggests, according to this approach, the existence of superheavy particles with mass of about $7 \times 10^{11}$GeV and the lifetime of about $10^9$ years. Possibility to obtain a knowledge of $\Omega_0$ of the universe from the spectrum of EHECR is also pointed out. Solar panels as air Cherenkov detectors for extremely high energy cosmic rays [PDF] S. Cecchini,I. D'Antone,L. Degli Esposti,G. Giacomelli,M. Guerra,I. Lax,G. Mandrioli,A. Parretta,A. Sarno,R. Schioppo,M. Sorel,M. Spurio Abstract: Increasing interest towards the observation of the highest energy cosmic rays has motivated the development of new detection techniques. The properties of the Cherenkov photon pulse emitted in the atmosphere by these very rare particles indicate low-cost semiconductor detectors as good candidates for their optical read-out. The aim of this paper is to evaluate the viability of solar panels for this purpose. The experimental framework resulting from measurements performed with suitably-designed solar cells and large conventional photovoltaic areas is presented. A discussion on the obtained and achievable sensitivities follows.
Health shocks and child time allocation decisions by households: evidence from Ethiopia Yonatan Dinku1, David Fielding1 & Murat Genç1 Little is currently known about the effects of shocks to parental health on the allocation of children's time between alternative activities. Using longitudinal data from the Ethiopian Young Lives surveys of 2006 and 2009, we analyse the effect of health shocks on the amount of children's time spent in work, leisure and education. One key contribution of the paper is that we distinguish between child labour as defined by organisations such as the International Labour Organisation and other types of child work, such as light domestic chores. We find that paternal illness increases the time spent in income-generating work but maternal illness increases the time spent in domestic work. Moreover, maternal illness has a relatively large effect on daughters while paternal illness has a relatively large effect on sons. Overall, parental illness leads to large and significant increases in the amount of child labour. JEL Classification: D13, I12, I21, O15 In developing countries, the opportunity cost of children's time is likely to be higher when the parents' income-generating capacity is lower, so negative household income shocks will reduce children's education and play time and increase their work time (Basu and Van 1998). Evidence for such an effect has been found in studies of agricultural productivity shocks (Beegle et al. 2006; Colmer 2013; Guarcello et al. 2008) and employment shocks (Duryea et al. 2007; Guarcello et al. 2010).Footnote 1 Fallon and Tzannatos (1998) and Udry (2006) argue that child labour is a consequence of chronic poverty, and there is some evidence for such a link from cross-country studies (Edmonds and Pavcnik 2005), country-specific studies (Jensen and Nielsen 1997; Edmonds 2005), and cash-transfer experiments (Edmonds 2006; Edmonds and Schady 2009; Bourguignon et al. 2003). This paper focuses on the effects on children's time allocation of shocks to parental health. Parental health shocks could have an especially large effect on children's time, because a child is required not only to provide a substitute for adult labour but also to care for the parent. The child's education could be adversely affected because the household can no longer afford to pay for it, or because the child has no time to study (Haile and Haile 2012; Rosati and Rossi 2001; Rosenzweig and Evenson 1977; Udry 2006), or because the child is fatigued by strenuous or hazardous employment (Duryea et al. 2007; Heady 2003). There is already a small literature on the links between parental illness, child labour and education, but only two papers (Dillon 2012; Alam 2015) which estimate the impact of adult health shocks on the allocation of children's time across a range of activities, rather than just on time spent in school. Our analysis embodies a number of distinctive features. Firstly, we distinguish between different types of child work using two alternative types of disaggregation. We distinguish between different kinds of activity (education, play, domestic chores inside the home and work outside the home), and we also distinguish between (i) time spent on innocuous household chores or light work outside the home and (ii) child labour as defined by organisations such as the International Labour Organisation (ILO) and the United Nations Children's Emergency Fund (UNICEF). 
Depending on their age and the type of task they perform, children might benefit from light work: they might acquire skills useful in future life or earn income that can help to finance their own education and health (Cigno and Rosati 2002; Moehling 2005). We believe that this distinction enhances the relevance of our results to policymakers, who may be concerned primarily with child labour as defined by the ILO and UNICEF—i.e. work that is harmful to the child's wellbeing and personal development. Secondly, we estimate the effects of parental illness using a panel dataset that allows us to control for unobserved heterogeneity at the household level. Such heterogeneity could arise if, for example, there are some parents who put relatively little value on human capital and so invest in neither their own health nor their children's education. Thirdly, while we do control for shocks to the health of adults in the household other than the mother and father, our main focus is on parental health shocks, and on asymmetries in (i) the effect of maternal health shocks on girls and boys and (ii) the effect of paternal health shocks on girls and boys. Our study uses data for two cohorts of children in the Ethiopian Young Lives survey. One of the key original contributions of this paper is that we estimate the impact of parental health shocks on the allocation on children's time in a way that allows for a distinction between child work and child labour. This distinction is based on the definition of child labour developed by UNICEF (United Nations 1989), which is consistent with the guidelines in the ILO's Minimum Age Convention (ILO 1973) and the resolutions of the 18th International Conference of Labour Statisticians (ILO 2008). This definition takes into account work intensity and the child's age. We also distinguish between the effects of paternal and maternal illness, and between the effects on sons and the effects on daughters. Our results are based on fixed-effects estimates that allow for unobserved heterogeneity. We find that parental illness has a large and statistically significant effect on the allocation of children's time, but that there are asymmetries between maternal and paternal illness. Paternal illness reduces time spent in school while increasing time spent in income-generating work, but maternal illness reduces time spent in play and income-generating work while increasing time spent in domestic work. Moreover, maternal illness has a larger impact on daughters while paternal illness has a larger impact on sons. In this way, the effects of parental illness appear to reflect traditional gender roles within the household. There is also some heterogeneity in the effects of parental illness on the prevalence of child labour. Overall, serious maternal illness is associated with a ten percentage point increase in prevalence while serious paternal illness has a smaller effect. However, maternal illness has a relatively large effect on prevalence among girls and paternal illness a relatively large effect on prevalence among boys. Data and descriptive statistics Our data are taken from the Ethiopian Young Lives surveys of 2006 and 2009 (www.younglives.org.uk/content/ethiopia).Footnote 2 As with Young Lives surveys in other countries, the Ethiopian sample comprises two cohorts of children in a stratified sample of villages: one cohort was aged 0.5–1.5 years in 2002 while the other was aged 7.5–8.5 years in 2002. 
It is the existence of two cohorts that provides much of the variation in the child age variable in our sample. Before attrition, the younger cohort comprises 2000 children and the older cohort 1000 children.Footnote 3 After attrition due to mortality and other factors,Footnote 4 and after excluding children aged under five or living in single-parent households at the time of the survey, we have sample sizes of 1299 and 970 respectively. However, only one child is sampled in each household, so there is no distinction between child fixed effects and household fixed effects. Table 1 shows summary statistics for children's daily time allocation between play, schooling (including homework), domestic chores and income-generating work, and Figs. 1, 2, 3 and 4 contain the corresponding histograms. Income-generating work includes activities such as street vending, work on the farm or serving in the family store. Domestic chores include activities such as washing, cooking, cleaning and caregiving; these definitions are consistent with those of the United Nations (2009). The table shows a marked upward trend in schooling time and downward trend in play time; this corresponds to an increase in the school enrolment rate from 56% in 2006 to 85% in 2009.Footnote 5 There is also a substantial percentage increase in income-generating work time. On average, income-generating work only makes up a small proportion of children's time, but the low mean is accompanied by a high standard deviation, so there are some children who are spending a substantial proportion of their time in income-generating work. Note that this work is not motivated by a need to pay for schooling, because Ethiopian public schools do not charge fees. The table also shows some gender asymmetries. Although boys and girls spend roughly the same amount of time on average in play, schooling and work, the girls' work time is much more dominated by domestic chores while boys spend a substantial amount of time in income-generating work. This may reflect cultural norms relating to gender roles: for example, boys do not normally cook, and it is sometimes unacceptable for a girl to leave home unaccompanied. The figures show that the distributions of all four activities are left-skewed, but the skewness is more marked for schooling and income-generating work, with many children either not attending school or not going out to work. Finally, the table includes mean values for each activity disaggregated by the health status of the parents. It can be seen that parental illness is generally associated with an increase time spent in domestic chores and income-generating work and a decrease in time spent in play and schooling. Note, however, that these are unconditional associations which do not necessarily correspond to a causal effect. Table 1 Descriptive statistics Histogram of play hours (2006 and 2009 surveys combined) Histogram of schooling hours (2006 and 2009 surveys combined) Histogram of domestic chore hours (2006 and 2009 surveys combined) Histogram of income-generating work hours (2006 and 2009 surveys combined) Neither of the work categories in Table 1 corresponds to standard definitions of child labour. In this paper, we will analyse both the work categories in Table 1 and child labour as defined by UNICEF. 
For children aged 5–11 years, child labour is defined as domestic chores in excess of 28 h per week or any income-generating work; for children aged 12–14 years, child labour is defined as domestic chores in excess of 28 h per week or income-generating work in excess of 14 h per week (there are no children in our sample over the age of 14). Using these definitions, the incidence of child labour across the two rounds of the survey is 54% for both age groups. The under-11s account for two thirds of the sample and therefore two thirds of the cases of child labour. Table 1 also reports the proportion of children whose mothers or fathers report having been ill in the 3 years prior to the survey.Footnote 6 The incidence of maternal illness is higher than the incidence of paternal illness; moreover, the incidence of paternal illness is the same across the two surveys while the incidence of maternal illness has risen. These asymmetries will matter if the effects of parental illness are gender-specific. In estimating the effects of parental illness, we will need to control for other negative shocks to the household that might affect child labour: illness among other members of the household, the death of livestock, crop failure, theft, the loss of paid employment and forced eviction. Three-year incidence rates for these shocks are also reported in Table 1. Other control variables in our model include measures of the age and highest school grade previously attained by the child, the child's mother and the child's father; whether the child has a step-mother or step-father; the household's size, wealth level and access to risk-sharing institutions; the sex of the household head and a household power index for the mother; the local community's level of access to healthcare and microfinance services; the incidence of community-level droughts and floods, and whether the community is rural or urban. Definitions and summary statistics for these variables appear in the Appendix; note that the questions in the Young Lives survey relate to serious illnesses only: the results that follow should be interpreted as estimates of the effect of a serious parental illness within the past 3 years on the current allocation of the child's time.Footnote 7 Modelling the determinants of child work and child labour Modelling child work We first estimate the effect of parental illness on the number of hours of a child's time that are allocated to the different activities in Table 2. Our estimates are based on a fixed-effects Poisson model with errors clustered at the community level.Footnote 8 For each activity j, the dependent variable (\( y_{ijt} \)) is the amount of time that child i records spending on that activity in survey round t. This variable is assumed to have a Poisson distribution with a mean equal to: Table 2 Determinants of time spent in different activities $$ \mathrm{E}\left({y}_{ij t}\right)=\exp \left({\gamma}_{1j}{h}_{it}^m+{\gamma}_{2j}{h}_{it}^f+{x}_{it}^{\prime }{\beta}_j+{\eta}_{ij}\right) $$ Here, \( {h}_{it}^m \) and \( {h}_{it}^f \) are indicator variables for the incidence of maternal and paternal illness in the previous 3 years, \( x_{it} \) is a vector comprising the control variables listed above, and \( \eta_{ij} \) is a child-specific fixed effect; the β and γ terms are parameters to be estimated.
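A sketch of how a fixed-effects Poisson regression of the general form of Eq. (1) might be estimated with community-clustered standard errors; the simulated data frame, the variable names, and the use of child dummy variables to absorb the fixed effects are illustrative assumptions and not the authors' actual code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated child-by-round panel with hypothetical variable names.
rng = np.random.default_rng(1)
n_children, n_rounds = 100, 2
df = pd.DataFrame({
    "child": np.repeat(np.arange(n_children), n_rounds),
    "community": np.repeat(rng.integers(0, 20, n_children), n_rounds),
    "mother_ill": rng.integers(0, 2, n_children * n_rounds),
    "father_ill": rng.integers(0, 2, n_children * n_rounds),
    "hours_chores": rng.poisson(4, n_children * n_rounds),
})

# Poisson regression with child fixed effects (dummy variables) and standard errors
# clustered at the community level, in the spirit of Eq. (1).
model = smf.poisson("hours_chores ~ mother_ill + father_ill + C(child)", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["community"]}, disp=0)
print(result.params[["mother_ill", "father_ill"]])
```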
The model is fitted to the full sample of 2269 children, except in the case of schooling, where we exclude children initially aged 5–6 years because the first year of primary education is for children aged 7–8 years.Footnote 9 Before discussing the estimates of the parameters in Eq. (1), we should comment on the assumption that \( {h}_{it}^m \) and \( {h}_{it}^f \) are exogenous to y ijt . There are several different potential sources of endogeneity. Firstly, there could be some household-level characteristics that are associated both with an unhealthy adult lifestyle and with decisions about the allocation of children's time. Secondly, there could be household-level characteristics that are associated both with decisions about fertility (which could influence parental health) and the allocation of children's time. Thirdly, the amount of work a child is doing for her parents could affect their subsequent health. In relation to the first two points, we note that the vector x it includes a wide range of child- and household-level characteristics, which are listed in the Appendix. For unobserved heterogeneity to affect the consistency of our estimates of the γ parameters, this heterogeneity would have to be uncorrelated with the elements of x it ; we suggest that there is unlikely to be a large amount of such heterogeneity. Moreover, we have fitted a fixed-effects model, so unobserved heterogeneity that is time-invariant will have no effect on our estimates. In relation to the third point, we acknowledge that the ideal approach would be to fit an instrumental variable (IV) model, but there is no obvious IV for parental health in the dataset. Instead, we explore the possibility of reverse causality by fitting a model of \( {h}_{i2009}^m \) and \( {h}_{2009}^f \) conditional on yij2006. Such a model appears in the Appendix: it shows that child time allocation in 2006 has no significant effect on parental health in 2009, and the point estimates of the effects are very close to zero. This gives us some reason to believe that there will not be a large amount of endogeneity bias in our estimates; nevertheless, the absence of an IV should be noted as a caveat in a causal interpretation of the γ parameters in Eq. (1). A second caveat in the interpretation of our results is that our self-reported parental illness measures do not include any disaggregation by type of illness or by severity of illness, and the γ parameters should be interpreted as mean effects across all types of illness. Table 2 includes estimates of the γ parameters, which can be interpreted as the percentage change in the number of hours worked, on average, in the case of maternal or paternal illness. The table indicates some asymmetries between the effects of maternal and paternal illness. Maternal illness is associated with a 30% increase in the amount of time spent on domestic chores; correspondingly, there is a 10% reduction in the amount of time spent in play and a 17% reduction in the amount of time spent in income-generating work; all of these effects are significant at the 5% level. The estimated effect of maternal illness on time in school is very small and insignificantly different from zero. Paternal illness is associated with a 28% increase in the amount of time spent in income-generating work; correspondingly, there is a 9% reduction in the amount of time spent in school; both of these effects are significant at the 1% level. 
The effects on maternal illness on domestic chore time and of paternal illness on income-generating work time are unsurprising, given the traditional gender roles of adults in most Ethiopian households (Haile and Haile 2012).Footnote 10 However, it is more surprising that only paternal illness reduces time in school. One possible explanation is that the extra income-generating work resulting from paternal illness takes up whole days of a child's time, making it impossible to go to school; the extra domestic chores resulting from maternal illness might more easily be fitted around the school day. Previous studies have also found a relatively small effect of parental mortality and morbidity on school hours. In Tanzania, for example, Ainsworth et al. (2005) find that the mortality of one parent (either the mother or the father) has a small and statistically insignificant effect on the number of school hours, while Alam (2015) finds that maternal illness has no significant effect on the probability of a child attending school. In a study of ten Sub-Saharan African countries, Case et al. (2004) find significant effects of the death of one parent on the probability of school attendance, but the fall in probability is typically only about five percentage points.Footnote 11 Estimates of the β parameters in Eq. (1) appear in the Appendix, along with a discussion of the effects of all of the control variables. Table 2 also includes parameter estimates for two control variables of particular interest. These show that firstly, the effects of illness of a member of the household other than the mother or father are very small and insignificantly different from zero, and secondly, domestic chore time is reduced by 22% when the household lives in a location with a local health centre. This effect is significant at the 1% level, and suggests that extension of access to primary healthcare would be an effective way to mitigate the impact of parental illness on children, as well as reducing the burden of illness on adults. The Appendix contains further results that focus on the extensive margin of the Table 2 effects, showing the impact of parental illness on the probability that children will be spending any time on domestic chores or income-generating work. It turns out that there are also large effects at the extensive margin: maternal illness raises the probability of involvement in some chores by five percentage points and reduces the probability of involvement in income-generating work by the same amount; paternal illness raises the probability of involvement in income-generating work by four percentage points. Moreover, proximity to a local health centre reduces the probability of involvement in domestic chores by about five percentage points. It is important to remember that the γ parameters measure percentage changes in the time allocated to a particular activity, not the absolute number of hours. Nevertheless, absolute changes can be computed at specific values of the explanatory variables, for example, at their mean values. Across the two rounds of the survey, the mean number of play hours is 5.1 and the mean number of hours of schooling is 5.5; the corresponding figure for domestic chores is 2.9 and the corresponding figure for income-generating work is 1.3. 
The Table 2 results imply that at these mean values, maternal illness results in a decrease in play time of 5.1 × 0.098 ≈ 0.5 h per day, a decrease in schooling time of 5.5 × 0.030 ≈ 0.2 h, and a decrease in income-generating work time of 1.3 × 0.170 ≈ 0.2 h; the corresponding increase in chore time is 2.9 × 0.299 ≈ 0.9 h.Footnote 12 The model does not constrain the effects to sum to zero, because the diary does not ask children to account for all 24 h in a day, but the effects do approximately sum to zero at the mean; this is also true of paternal illness. The results in Table 2 are based on the assumption that the effects of parental illness and of the control variables are linearly separable. There are two main reasons why this assumption might not hold. Firstly, when only one parent is ill, part of the family's coping strategy might be for the other parent to reduce his or her leisure time, but when both parents are ill, this will not be possible, and the effect on the children will be magnified. We can explore this possibility by adding an interaction term \( {h}_{it}^m\times {h}_{it}^f \) to the right-hand side of Eq. (1). Secondly, the effects of parental illness on the allocation of a child's time might depend on the characteristics of the child. For example, the effect of illness on the value of the marginal hour allocated to education might depend on the child's existing level of educational attainment, or the effect of illness on the value of the marginal hour allocated to work might depend on the child's age. We can explore this possibility by adding interaction terms in \( {h}_{it}^m \) and educational attainment and in \( {h}_{it}^f \) and educational attainment, or in \( {h}_{it}^m \) and age and in \( {h}_{it}^f \) and age. One challenge in the interpretation of a model with interaction terms is that these terms are necessarily correlated with each other, and the correlations will bias the standard errors upwards. When we fit a model with more than one type of interaction term, almost none of the individual parameters is significantly different from zero. Nevertheless, we can explore the separability assumption by fitting a set of models, each one of which includes a different type of interaction term. The results of such an exercise are reported in Tables 3 and 4. Table 3 The effects of parental illness: models with an interaction term Table 4 The effects of parental illness: models with interaction terms in child characteristics Table 3 shows the results of adding \( {h}_{it}^m\times {h}_{it}^f \) to the model. Here, the interaction term in the play and income-generating work equations is insignificantly different from zero, and the addition of the interaction term makes little difference to the γ parameter estimates. However, the coefficient on the interaction term is significantly less than zero in the schooling equation and significantly greater than zero in the domestic chores equation, indicating that when both parents are ill, the effect on the substitution out of schooling and into chores is magnified. The addition of the interaction term makes the γ parameter estimates in the schooling and chores equations larger in absolute value, but these differences are not statistically significant. Table 4 shows the results of adding interaction terms in educational attainment (measured as the highest school grade attained) or in age. 
Here, the one large and significant interaction effect is that the impact of paternal illness on time allocated to income-generating work is smaller for older children or for more educated children. Age and educational attainment are so highly correlated that it makes little sense to include both interaction terms in a single model, but the most plausible interpretation of the effect is that when a child is closer to completing school, the parents are more reluctant to remove the child from school in order to make up for income lost through paternal illness. There is a qualitatively similar result for the effect of paternal illness on time allocated to domestic chores, but here the effect is much smaller and of marginal statistical significance. Finally, Table 5 shows estimates of the γ parameters when the model (without interaction terms) is fitted to a sample of boys only and a sample of girls only. There are some asymmetries in the effects of parental illness on girls and boys. For boys, the effects of paternal illness are similar to (but somewhat larger than) the aggregate effects in Table 2: when his father is ill, a boy can be expected to spend 29% more time in income-generating work and 14% less time in school. The estimated effects of paternal illness on girls are all much smaller and insignificantly different from zero. By contrast, the effects of maternal illness are larger for girls than for boys: the increase in girls' domestic chore time is 31% (versus 26% for boys) and the reduction in girls' play time is 12% (versus 8% for boys). The most marked asymmetry in the effect of maternal illness relates to income-generating work time, which falls by 34% for girls but only 8% for boys: there is thus some evidence that whatever income-generating work girls are doing can be sacrificed if the mother needs more help in the home, but this is not the case for boys.Footnote 13 Note that the effects of parental illness on girls' time do not depend on whether she has any brothers, and the effects on boys' time do not depend on whether he has any sisters: when the illness variables are interacted with indicator variables for whether there are no boys in the house (in the case of girls) or no girls in the house (in the case of boys), the coefficients on these interaction terms are insignificantly different from zero (p > 0.1). Recall also that children in single-parent households are excluded from the sample, so the estimated effects of maternal (paternal) illness are for households in which the father (mother) is present. Table 5 The effects of parental illness on time spent in different activities (sub-samples) Modelling child labour The second part of our empirical analysis involves estimation of the determinants of the prevalence of child labour. The harm to individual children from being subjected to child labour will depend on a number of factors, including both the total number of labour hours and the type of work involved. Measuring the extent of harm is a topic for future research, and here we follow previous studies (Baland and Robinson 2000; Basu and Van 1998; Beegle et al. 2006; Ranjan 1999) in focussing on a binary variable (z it ) which indicates whether child i is subjected to any child labour in survey period t. Assume that the data-generating process for z it takes the form of a fixed-effects Probit model: $$ \mathrm{P}\left({z}_{it}=1\right)=\Phi \left({\alpha}_1{h}_{it}^m+{\alpha}_2{h}_{it}^f+{x}_{it}^{\prime}\varphi +{\mu}_i\right) $$ Here Ф(.) 
is the standard normal cumulative distribution function, μ i is a child-specific fixed effect, and the other variables are as in Eq. (1). Although this model cannot be estimated directly, consistent estimates of the α and φ parameters can be obtained by replacing μ i with a linear function of the child-specific mean values of \( {h}_{it}^m \), \( {h}_{it}^f \) and x it plus a random effect ε(i): $$ \mathrm{P}\left({z}_{it}=1\right)=\Phi \left({\alpha}_1{h}_{it}^m+{\alpha}_2{h}_{it}^f+{x}_{it}^{\prime}\varphi +{\pi}_1{\overline{h}}_i^m+{\pi}_2{\overline{h}}_i^f+{\overline{x}}_i^{\prime}\omega +\varepsilon (i)\right),\quad \varepsilon (i)\sim \mathrm{N}\left(\delta, {\sigma}^2\right) $$ This is the correlated random-effects (CRE) model (Wooldridge 2011). For comparison, we also estimate a simple random-effects Probit model in which the π and ω parameters are set to zero. Table 6 shows the average partial effects of maternal and paternal illness on the probability of child labour, i.e. Φ′(.) ∙ α1 and Φ′(.) ∙ α2 evaluated at the mean value of Φ(.), along with the corresponding standard errors. Estimates of the other parameters in the model are available on request. It can be seen that the CRE and simple random-effects estimates are quite similar, although the restrictions implicit in the latter can be rejected at the 1% level using a χ2 test. Maternal illness has a relatively large effect on the probability of child labour: in the CRE model, this probability is estimated to increase by about ten percentage points when the mother is ill, an effect that is significant at the 1% level. The effect of paternal illness is much smaller and in the CRE model is insignificantly different from zero. Table 6 Average partial effects of illness on the probability of child labour The asymmetry in the effects of maternal and paternal illness is somewhat surprising: one might have expected the father's illness to increase child labour through its effect on household income (Basu and Van 1998; Fallon and Tzannatos 1998; Udry 2006). However, it is consistent with the results regarding child time allocation discussed above. Maternal illness mainly affects time spent on domestic chores while paternal illness mainly affects time spent in income-generating work. The percentage increase in domestic chores following maternal illness is approximately equal to the percentage increase in income-generating work following paternal illness (see Table 2), but the average amount of time spent in domestic chores is much higher than the average amount of time spent in income-generating work (see Table 1). Therefore, maternal illness is associated with larger absolute increases in total child work time, and is more likely to take the child's work hours over the threshold that defines child labour. One possible reason for the relatively large absolute effects of maternal illness is that children's labour is a closer substitute for women's labour than it is for men's, either for cultural reasons or because men's labour often requires upper body strength that children lack, whereas women's labour involves stamina that children do have. In the Appendix, we explore this idea by looking at the effects of illness on household consumption. We show that paternal illness leads to a significant reduction in household expenditure. 
This suggests that the average household's response to paternal illness is a combination of reduced spending and a moderate increase in the children's income-generating work time: the consumption financed by the marginal hour which a healthy father spends in income-generating work seems not to be essential. However, we also find that maternal illness leads to no significant reduction in household expenditure. The lost maternal labour hours are probably mainly in domestic chores, but the household does not respond by reducing paternal income-generating work time and the consumption it finances: rather, the children must make up for the mother's lost hours. Table 7 shows average partial effects from the CRE model fitted to girl-only and boy-only sub-samples. It can be seen that the effect of maternal illness on girls is larger than that on boys. On average, maternal illness raises the probability of child labour for girls by 13 percentage points (an effect significant at the 1% level) and the probability of child labour for boys by only five percentage points (an effect not quite significant at the 5% level). In contrast, paternal illness raises the probability of child labour for boys by seven percentage points (an effect significant at the 5% level) while having no significant impact on girls. Taken together, the results in Tables 5 and 7 suggest that girls' labour is a very close substitute for women's labour but no substitute for men's labour, while boys' labour is a moderately close substitute for both men's and women's labour. This is consistent with evidence that child labour is a closer substitute for women's labour than it is for men's (Diamond and Fayed 1998; Ray 2000). Table 7 Average partial effects of illness on the probability of child labour (sub-samples) Health shocks are among the most unpredictable and costly causes of economic hardship in developing countries. When a family member is ill, households face loss of income and large, out-of-pocket payments for medical care (Sparrow et al. 2014). This paper contributes to the growing body of evidence that many households cope with such shocks by reallocating the time of family members (Gertler and Gruber 2002; Wagstaff 2007). We show that in at least one country—Ethiopia—parental health shocks have a large effect on the allocation of children's time.Footnote 14 Our results are based on estimates from fixed-effects models applied to longitudinal data from the Ethiopian Young Lives survey. Interpreting our estimates as causal effects (subject to the caveat noted in Section 3.1), we find that paternal illness reduces children's time spent in school and increases their time spent in income-generating work, while maternal illness reduces time spent in play and increases time spent in domestic work. Maternal illness has a relatively large effect on girls while paternal illness has a relatively large effect on boys, which suggests that the allocation of both adult and child time is influenced by traditional gender roles. Moreover, parental illness has significant effects on the prevalence of child labour, i.e. the proportion of children engaged in work detrimental to their personal development. Here the effects of maternal illness are larger than those of paternal illness, which reflects the fact that maternal illness has a relatively large absolute effect on the number of hours that children work. These results suggest that existing studies may underestimate the size of the association between household welfare and child labour. 
Measures of welfare are often based on poverty indices related to household income or wealth, and these measures are more strongly correlated with the income-generating work of men than with the domestic work of women in traditional societies. Negative shocks to the supply of labour for domestic work can nevertheless lead to substantial reductions in household welfare, and Ethiopian households' coping strategy for such shocks seems to entail effects on children that are at least as large as the effects of lost income-generating capacity through paternal illness. A further implication is that when estimating the return to public investment in adult (and particularly women's) health, it is important to account for the effect of adult health on children's time allocation. How can the effects of parental illness on child labour be mitigated? Firstly, as shown in Table 2, access to local healthcare services can reduce the impact of parental illness on some types of child work, and in some contexts, the extension of microfinance facilities can help household to insure themselves against illness (Gertler et al. 2009). However, some programmes to extend healthcare have had no significant effect on child labour (Rocha and Soares 2010), and it may be that the effects of maternal illness cannot be fully mitigated unless someone else can be found to do the housework. Some households might be able to rely on extended family members or friends to help out when the mother is ill, but others will have to buy in help; this means that development outcomes for many children will depend crucially on the existence of an efficient market for domestic help. In addition, results from studies with data on siblings indicate that family-specific shocks explain a large proportion of the variation in the allocation of children's time. See for example Hull (2017). These are rounds 2 and 3 of the survey. Round 1 does not include the child time diary that is used to measure our key dependent variables, and round 4 incorporates a cohort who are aged 18–19, i.e. they are already adults. About 75% of children are from food-insecure areas, so there is some concern about the representativeness of the sample. However, when we interact the parental illness variables discussed below with a measure of household wealth, the interaction effects are insignificantly different from zero (p > 0.1), and so the over-sampling of poor households is unlikely to affect the relevance of our results to Ethiopia as a whole. Outes-Leon and Dercon (2008) find that the attrition is purely random. This trend in the Young Lives data is similar to the trend in the population: see http://data.worldbank.org/indicator/SE.PRM.NENR?locations=ET. Table 1 shows a net increase in the total time devoted to all four activities of 0.4 h per day. The time diary does not ask children to account for all 24 h in a day, and there is a residual time category for time spent eating, sleeping and washing. The amount of time allocated to this residual category appears to have diminished. Evidence from previous studies suggests that self-reported measures of illness are a reliable indicator of the individual's true health status; see for example Butler et al. (1987). Note also that the effect of permanent illnesses (or illnesses of slowly changing severity) will wash out in the fixed effects. Our estimates relate to the effects of transitions into (or out of) serious illness. 
One alternative to this model is a Fixed Effects Negative Binomial model, which does not impose the restriction that the mean of the distribution is equal to the variance. However, the restrictions embodied in this model mean that the estimator is unlikely to account properly for individual fixed effects (Guimaraes 2008). Also, when we attempted to fit this model to the data, it failed to converge. Another alternative is a linear model with ln(1 + y) on the left-hand side of the equation. We include estimates of the parameters in such a model in the appendix: they are very similar to the ones reported in the main text. Note that an observation of zero hours in both survey rounds will be perfectly predicted by the fixed effect, so such observations are excluded from the sample. This means that in most cases the reported sample sizes are somewhat smaller than the total of 2×2269 = 4538. Gendered social norms prevail in much of Ethiopian society, and the responsibilities of men and women are culturally constructed. Activities such as cleaning, cooking and childcare are almost exclusively the responsibility of women and girls, while household repairs, lawn mowing and work outside the home are largely the responsibility of men and boys. However, women do sometimes work outside the home in order to supplement household income: see for example Ogato et al. (2009). Case et al. (2004) and Mishra et al. (2007) find much larger effects when both parents are dead, but with only a handful of orphans in our sample, it is not possible for us to make a direct comparison using the Ethiopian data. Note that our results are conditional on whether the child is living with its biological mother, and on whether it is living with its biological father, the effects of which are reported in the appendix. The effect on chore time is larger than in the Tanzanian results reported by Alam (2015), who finds that maternal illness leads to an extra 1.3 h of chores per week (i.e. about 0.2 h per day), while paternal illness has no significant effect. Dillon (2012) finds no significant effect in Mali. However, the standard errors in Table 4 are generally not small enough to establish a statistically significant difference between boys and girls: the only significant difference between the boys' effect and the girls' effect is with regard to paternal illness and schooling. Even some of the smaller parental health effects that we estimate are of a similar magnitude to the effects of child health reported elsewhere. For example, Alderman et al. (2006) find that in Zimbabwe (where the mean total number of years in school is about 8.5), a one standard deviation increase in the child's height-for-age z-score delays the start of schooling by about 5 months, while Glewwe et al. (2001) find an effect in the Philippines that is about half as large. Expressed as a percentage of the total number of years in school, the effects of a three or four standard deviation difference in height in these studies are of a similar magnitude to the 9% reduction in school time caused by a paternal health shock in our sample. One further significant effect is that adult unemployment leads to a reallocation of children's time from play to schooling. The cause of this puzzling effect is a subject for future research. Ainsworth M, Beegle K, Koda G (2005) The impact of adult mortality and parental deaths on primary schooling in north-western Tanzania. 
J Dev Stud 41(3):412–439 Alam SA (2015) Parental health shocks, child labour and educational outcomes: evidence from Tanzania. J Health Econ 44:161–175 Alderman H, Hoddinott J, Kinsey B (2006) Long term consequences of early childhood malnutrition. Oxf Econ Pap 58(3):450–474 Baland JM, Robinson JA (2000) Is child labor inefficient? J Polit Econ 108(4):663–679 Basu K, Van PH (1998) The economics of child labor. Am Econ Rev 88(3):412–427 Beegle K, Dehejia RH, Gatti R (2006) Child labor and agricultural shocks. J Dev Econ 81(1):80–96 Bourguignon F, Ferreira FH, Leite PG (2003) Conditional cash transfers, schooling, and child labor: micro-simulating Brazil's bolsa scola program. World Bank Econ Rev 17(2):229–254 Butler J, Burkhauser RV, Mitchell JM, Pincus TP (1987) Measurement error in self-reported health variables. Review of Economics and Statistics 69(4):644–650 Case A, Paxson C, Ableidinger J (2004) Orphans in Africa: parental death, poverty, and school enrollment. Demography 41(3):483–508 Cigno A, Rosati FC (2002) Child labour education and nutrition in rural India. Pac Econ Rev 7(1):65–83 Colmer J (2013) Climate variability, child labour and schooling: evidence on the intensive and extensive margin, (working paper no. 132). Grantham research institute on climate change and the environment. London School of Economics and Political Science, London Diamond C, Fayed T (1998) Evidence on substitutability of adult and child labour. J Dev Stud 34(3):62–70 Dillon A (2012) Child labour and schooling responses to production and health shocks in northern Mali. J Afr Econ 22(2):276–299 Duryea S, Lam D, Levison D (2007) Effects of economic shocks on children's employment and schooling in Brazil. J Dev Econ 84(1):188–214 Edmonds E (2005) Does child labor decline with improving economic status? J Hum Resour 40(1):77–99 Edmonds E (2006) Child labor and schooling responses to anticipated income in South Africa. J Dev Econ 81(2):386–414 Edmonds E, Pavcnik N (2005) Child labor in the global economy. J Econ Perspect 19(1):199–220 Edmonds E, Schady N (2009) Poverty alleviation and child labor, NBER working paper no.15345. National Bureau of Economic Research, Cambridge Fallon P, Tzannatos Z (1998) Child labour: issues and directions for the World Bank social protection. Human Development Network, World Bank, Washington Gertler P, Gruber J (2002) Insuring consumption against illness. Am Econ Rev 92(1):51–70 Gertler P, Levine DI, Moretti E (2009) Do microfinance programs help families insure consumption against illness? Health Econ 18(3):257–273 Glewwe P, Jacoby HG, King EM (2001) Early childhood nutrition and academic achievement: a longitudinal analysis. J Public Econ 81(3):345–368 Guarcello L, Kovrova I, Rosati FC (2008) Child labour as a response to shocks: evidence from Cambodian villages. In: UCW working paper, understanding Children's work project, Faculity of economics. University of Rome, Rome Guarcello L, Mealli F, Rosati FC (2010) Household vulnerability and child labor: the effect of shocks, credit rationing, and insurance. J Popul Econ 23(1):169–198 Guimaraes P (2008) The fixed effects negative binomial model revisited. Econ Lett 99(1):63–66 Haile G, Haile B (2012) Child labour and child schooling in rural Ethiopia: nature and trade-off. Educ Econ 20(4):365–385 Heady C (2003) The effect of child labor on learning achievement. 
World Dev 31(2):385–398 Hoddinott J, Berhane G, Gilligan DO, Kumar N, Taffesse AS (2012) The impact of Ethiopia's productive safety net programme and related transfers on agricultural productivity. J Afr Econ 21(5):761–786 Hull MC (2017) The time-varying role of the family in student time use and achievement. IZA Journal of Labor Economics 6:10 ILO (1973). Minimum age convention (no.138). Retrieved from www.ilo.org/dyn/normlex/en/f?p=NORMLEXPUB:12100:0::NO::P12100_ILO_CODE:C138. Accessed 1 Feb 2018 ILO (2008) 18th international conference of labour statisticians (report of the conference). International Labor Organization, Geneva Jensen P, Nielsen HS (1997) Child labour or school attendance? Evidence from Zambia. J Popul Econ 10(4):407–424 Krishnan P, Sciubba E (2009) Links and architecture in village networks. Economic Journal 119(537):917–949 Mishra V, Arnold F, Otieno F, Cross A, Hong R (2007) Education and nutritional status of orphans and children of HIV-infected parents in Kenya. AIDS Education & Prevention 19(5):383–395 Moehling CM (2005) "She has suddenly become powerful": youth employment and household decision making in the early twentieth century. J Econ Hist 65(2):414–438 Ogato GS, Boon EK, Subramani J (2009) Gender roles in crop production and management practices: a case study of three rural communities in Ambo district, Ethiopia. J Hum Ecol 27(1):1–20 Outes-Leon I, Dercon S (2008) Survey attrition and attrition bias in young lives. Technical note no.5, young lives, Department of International Development. University of Oxford, Oxford Ranjan P (1999) An economic analysis of child labor. Econ Lett 64(1):99–105 Ray R (2000) Analysis of child labour in Peru and Pakistan: a comparative study. J Popul Econ 13(1):3–19 Rocha R, Soares RR (2010) Evaluating the impact of community-based health interventions: evidence from Brazil's family health program. Health Econ 19(S1):126–158 Rosati FC, Rossi M (2001) Children's working hours, school enrolment and human capital accumulation: evidence from Pakistan and Nicaragua, UCW working paper, understanding children's work project, Faculity of Economics. University of Rome, Rome Rosenzweig MR, Evenson R (1977) Fertility, schooling, and the economic contribution of children of rural India: an econometric analysis. Econometrica 45(5):1065–1079 Sparrow R, Poel EV, Hadiwidjaja G, Yumna A, Warda N, Suryahadi A (2014) Coping with the economic consequences of ill health in Indonesia. Health Econ 23(6):719–728 Udry C (2006) Child labor. In: Banerjee AV, Benabou R, Mookherjee D (eds) Understanding Poverty. Oxford University Press, New York, pp 243–258 United Nations (1989). Convention on the rights of the child. Retrieved from www.ohchr.org/en/professionalinterest/pages/crc.aspx. Accessed 1 Feb 2018 United Nations (2009). Systems of national accounts 2008. Retrieved from http://unstats.un.org/unsd/nationalaccount/docs/SNA2008.pdf. Accessed 1 Feb 2018 Wagstaff A (2007) The economic consequences of health shocks: evidence from Vietnam. J Health Econ 26(1):82–100 Woldehanna T, Mekonnen A, Alemu T (2008) Young lives: Ethiopia round 2 survey report, Young lives country report, Department of International Development. University of Oxford, Oxford Wooldridge JM (2011) A simple method for estimating unconditional heterogeneity distributions in correlated random effects models. Econ Lett 113(1):12–15 We would like to thank the anonymous referees and the editor for the useful remarks. Responsible editor: Pierre Cahuc. 
Department of Economics, University of Otago, Dunedin, 9054, New Zealand Yonatan Dinku, David Fielding & Murat Genç Correspondence to David Fielding. The IZA Journal of Labor Economics is committed to the IZA Guiding Principles of Research Integrity. The authors declare that they have observed these principles. Definitions and summary statistics for the additional control variables Table 8 provides summary statistics for the additional control variables in the model. These variables are defined as follows. Table 8 Summary statistics for the additional explanatory variables Child characteristics Child education is the highest school grade completed by the child. Child age is the child's age in years. Female equals one if the child is a girl, and zero otherwise. Parental characteristics Mother's education is the highest school grade completed by the mother. Biological mother equals one if the child's mother is the biological mother, and zero otherwise. Mother's age is the mother's age in years. Father's education is the highest school grade completed by the father. Biological father equals one if the child's father is the biological father, and zero otherwise. Father's age is the father's age in years. Mother's power is an index of the relative influence of the mother in household decision-making. Male head equals one if the household head is male, and zero otherwise. Household size is the number of people in the household. Wealth index is the household wealth index described in Woldehanna et al. (2008). Owns animal equals one if the household owns any livestock, and zero otherwise. Land size is the surface area of the household's land, in hectares. Member of social group equals one if the household is a member of a risk-sharing institution and zero otherwise. These institutions are the iddir, eqqub and debbo. The iddir is a funeral association with contributions that fund expenses when a family member dies; in recent years, iddirs have started making loans or grants to members experiencing other types of shock that entail a loss of income. The eqqub is a rotating credit and saving association which can prioritise payments to members facing financial difficulties. The debbo is an agricultural labour sharing arrangement that can provide extra help to members who are ill (Hoddinott et al. 2012; Krishnan and Sciubba 2009). Community characteristics Urban equals one if the community is in an urban area, and zero otherwise. Microfinance equals one if there is a microfinance organisation in the community, and zero otherwise. Health centre access equals one if there is a health centre in the community, and zero otherwise. Drought events equals one if the community has experienced a drought in the last 3 years, and zero otherwise. Flood events equals one if the community has experienced a flood in the last 3 years, and zero otherwise. The full set of parameter estimates in the Table 3 Model Table 9 corresponds to Table 3 of the main text but includes the estimated effects of all of the explanatory variables in the model. Among the statistically significant effects, it can be seen that the existing education level of the child (highest grade) is positively associated with current time in school and negatively associated with time spent in income-generating work, while older children spend more time in income-generating work but also more time in play. 
A child living with its biological mother can be expected to spend less time in domestic chores but more time in income-generating work, while a child living with its biological father can be expected to spend less time in income-generating work and more time in other activities. The mother effect might reflect a preference of mothers to have their biological children working with them in the home, while the father effect suggests that fathers put more weight on the welfare of biological children than on the welfare of step-children. Children of older mothers tend to spend less time in school and children of older fathers tend to spend less time in play, while children of households with a male head spend less time in both of these activities and more time in income-generating work, so it appears that younger parents and mothers attach more weight to the welfare of their children than older parents and fathers. Droughts lead to a reallocation of children's time from domestic work to income-generating work, as does the presence of a microfinance facility, which raises concerns about the possible unintended consequences of such facilities. Finally, as noted in the conclusion to the paper, the presence of a health centre reduces the amount of time spent in domestic work, suggesting that access to healthcare services does mitigate the effect of maternal illness; however, the presence of a health centre has no significant effect on the time spent in play, schooling or income-generating work.Footnote 15 Table 9 Estimated effects of all variables on time spent in different activities Estimating the effects of parental illness in a linear model One alternative to the Poisson model in Eq. (1) is a linear model of the following form: $$ \mathrm{E}\left(\ln \left(1+{y}_{ijt}\right)\right)={\gamma}_{1j}{h}_{it}^m+{\gamma}_{2j}{h}_{it}^f+{x}_{it}^{\prime }{\beta}_j+{\eta}_{ij} $$ The notation here corresponds to the notation in Eq. (1); note that 1 + y ijt is required on the left-hand side of the equation instead of y ijt, because sometimes y ijt = 0. Table 10, which matches Table 2 in the main text, presents estimates of the γ parameters in Eq. (4). The parameters in the two tables cannot be compared directly, because those in Table 2 measure the effect of illness in terms of a percentage change in the number of hours allocated to an activity, while those in Table 10 measure the effect of illness in terms of a percentage change in one plus the number of hours allocated to an activity. Nevertheless, the sign and significance level of each parameter in Table 10 matches that in Table 2; moreover, the parameters are of roughly equal magnitude. This gives us some confidence in the robustness of our results. Table 10 The effects of parental illness on time spent in different activities: linear model estimates The extensive margin: determinants of the probability that a child does any work Table 11 reports average marginal effects in CRE Probit models of two dependent variables: (i) whether the child spends any time on income-generating work and (ii) whether the child spends any time on household chores. The equations have the following form, analogous to Eq. 
(3) of the main text: $$ P\left({y}_{ijt}>0\right)=\Phi \left({\rho}_1{h}_{it}^m+{\rho}_2{h}_{it}^f+{x}_{it}^{\prime}\tau +{\vartheta}_1{\overline{h}}_i^m+{\vartheta}_2{\overline{h}}_i^f+{\overline{x}}_i^{\prime}\theta +u(i)\right),\quad u(i)\sim N\left(\delta, {\sigma}^2\right) $$ Table 11 Determinants of the probability that a child does any work The variables in this equation have the same definition as in Eqs. (1–3). The marginal effects in the table are calculated as Φ′(.) ∙ ρ1 and Φ′(.) ∙ ρ2 and Φ′(.) ∙ τ, evaluated at the mean value of Φ(.); the corresponding standard errors are also shown. Maternal illness is estimated to lead to a five percentage point reduction in the probability of any income-generating work and a five percentage point increase in the probability of any household chores; these effects are significant at the 1% level. Paternal illness is estimated to lead to a four percentage point increase in the probability of any income-generating work and a three percentage point increase in the probability of any household chores; the latter effect is significant at the 1% level. Note also that access to a health centre reduces the probability of involvement in household chores by five percentage points, an effect that is significant at the 1% level. This effect at the extensive margin reinforces the evidence on the link between domestic chores and access to healthcare that is discussed in the main text. Determinants of household expenditure In Table 12, we report coefficients from a fixed-effects model of (i) the logarithm of total household food expenditure and (ii) the logarithm of total household non-food expenditure. The explanatory variables are the same as in Table 9, except that the child-specific variables are omitted. Table 12 shows that ceteris paribus, wealthier households, larger households, urban households and households with access to a health facility have significantly higher levels of both types of expenditure. Theft necessitates higher expenditure on non-food items, the illness of a household member other than the mother or father necessitates higher expenditure on food, and forced eviction necessitates higher expenditure of both types. Access to a microfinance facility is associated with lower food expenditure, which again raises some concerns about the unintended consequences of such facilities. Conditional on these effects, paternal illness is associated with a 15% reduction in non-food expenditure, but none of the other parental illness effects is significantly different from zero. Our interpretation of the asymmetry in the expenditure effects of maternal and paternal illness is discussed in Section 3 of the main text. Table 12 Estimated effects on household expenditure Exploring reverse causality: parental health in 2009 and child time allocation in 2006 Table 13 includes estimates from Probit models of parental health in 2009. These models are of the following form: $$ P\left({h}_{i2009}^k=1\right)=\Phi \left({\sum}_j{\delta}_j{y}_{ij2006}+\lambda\ {h}_{i2006}^k+{x}_{i2006}^{\prime}\zeta \right) $$ Table 13 The effect of child time allocation in 2006 on parental health status in 2009 Here, the parameter δ j measures the effect of the amount of time that the child spends in activity j in 2006 on the probability that the mother (when k = m) or father (when k = f) will be ill in 2009. 
This effect is estimated conditional on the health status of the parent in 2006 (with a persistence parameter λ) and on the other control variables appearing in Eq. (1) of the main text. Table 13 shows estimates of the marginal effects Φ′δ j evaluated at the mean values of all explanatory variables; further results are available on request. It can be seen that all of the estimates are very close to (and insignificantly different from) zero. This suggests that the allocation of a child's time has no substantial effect on the subsequent health status of her mother or father, so reverse causality is unlikely to be a serious concern when estimating the parameters in Eq. (1) of the main text. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Dinku, Y., Fielding, D. & Genç, M. Health shocks and child time allocation decisions by households: evidence from Ethiopia. IZA J Labor Econ 7, 4 (2018) doi:10.1186/s40172-018-0064-9 Accepted: 15 March 2018
Study protocol
A fixed nitrous oxide/oxygen mixture as an analgesic for patients with postherpetic neuralgia: study protocol for a randomized controlled trial
Hai-Xiang Gao1,2, Jun-Jun Zhang1, Ning Liu3, Yi Wang4, Chun-Xiang Ma4, Lu-Lu Gao5, Qiang Liu6, Ting-Ting Zhang1, Yi-Ling Wang1,7, Wen-Qiang Bao4 & Yu-Xiang Li1
The pain management of postherpetic neuralgia (PHN) remains a major challenge, with no immediate relief. Nitrous oxide/oxygen mixture has the advantages of a rapid analgesic effect and good tolerability. The purpose of this study is to investigate the analgesic effect and safety of nitrous oxide/oxygen mixture in patients with PHN. This study is a single-center, two-group (1:1), randomized, placebo-controlled, double-blind clinical trial. A total of 42 patients with postherpetic neuralgia will be recruited and randomly divided into the intervention group and the control group. The control group will receive routine treatment plus oxygen, and the intervention group will receive routine treatment plus nitrous oxide/oxygen mixture. Data collectors, patients, and clinicians are all blind to the therapy. The outcomes of each group will be monitored at baseline (T0), 5 min (T1), and 15 min (T2) after the start of the therapy and at 5 min after the end of the therapy (T3). The primary outcome measure will be the pain intensity. Secondary outcomes include physiological parameters, adverse effects, patients' acceptance of analgesia, and patient satisfaction. Previous studies have shown that nitrous oxide/oxygen mixture can effectively relieve breakthrough pain in cancer patients. This study will explore the analgesic effect of nitrous oxide/oxygen mixture on PHN. If beneficial to patients with PHN, it will contribute to the pain management of PHN. Chinese Clinical Trial Register ChiCTR1900023730. Registered on 9 June 2019
Postherpetic neuralgia (PHN) is a major complication of herpes zoster (HZ), and the definition of PHN is arbitrary but involves pain that lasts for 90 days or more from the onset of HZ [1]. PHN is relatively common; one study found that the overall incidence of PHN was 3.9–42.0/100,000 person-years [2]. Age is a major risk factor for PHN, with a 4% risk of developing PHN under age 50 and a 34% risk over age 80 [3]. The proportion of the world's population over 60 is expected to double in the next 50 years [4]. With the rapid increase in life expectancy, the incidence of PHN may rise sharply [5]. Spontaneous pain is common in PHN and can be intermittent or persistent. The pain has a variety of qualities, such as burning, throbbing, aching, stabbing, or electric-shock-like pain. Besides, allodynia and hyperalgesia have also been reported [6, 7]. PHN seriously impairs patients' physical and psychological well-being and their daily activities. Patients lack appetite, lose weight, have trouble sleeping, and are reluctant to communicate with others. Some even suffer from depression and anxiety [8]. PHN also has a serious negative impact on an individual's work; one study found that 64% of 88 participants missed work and 76% were less productive on account of HZ and PHN [9]. As mentioned above, PHN is not only a source of physical pain but also causes far-reaching harm to many aspects of patients' lives. Effective treatment for PHN is urgently needed. However, there is no cure for PHN, and the main goal of current treatment is pain relief [10, 11]. 
The Neuropathic Pain Special Interest Group of the International Association for the Study of Pain (NeuPSIG) proposed some suggestions for the treatment of PHN. They advised gabapentin, pregabalin, and tricyclic antidepressants (TCAs) as first-line therapy; 5% lidocaine as second-line treatment; and opioids as third-line treatment [12]. But even with the most effective drugs, less than 50% of patients experience a significant reduction (> 50%) in pain [13]. Gabapentin and pregabalin analgesia work slowly and take weeks to titrate to the therapeutic dose [14, 15]. Patients receiving the lowest dose of gabapentin continued treatment for an average of 17 weeks. The average daily dose of pregabalin is 187 mg, the average maximum dose is 222 mg, and it takes an average of 30 days to reach the maximum dose [14]. TCAs show a relatively slow onset of action with potentially troublesome adverse effects, such as cognitive impairment, excessive sedation, and cardiotoxicity (myocardial ischemia and arrhythmia) [16,17,18]. Five percent lidocaine takes hours (≤ 4 h) to act as an analgesic, acting faster than gabapentin or pregabalin [19]. These are the drugs most commonly used to relieve PHN pain in clinical practice, but they require a relatively long treatment period and cannot quickly relieve patients' severe acute pain. Strong opioids such as oxycodone and morphine are comparable to TCAs in reducing pain in PHN patients and can quickly relieve severe acute pain [20]. Presumably for these reasons, one study found that 21.6% of PHN patients frequently used opioids as first-line therapy [21]. But the current guidelines for treating PHN with opioids are controversial [16]. This is not only related to the side effects of opioids such as constipation, nausea, and sedation, but PHN patients treated with opioids also have a higher medical burden [21, 22]. Besides, any physician who prescribes opioids should assess each patient's personal and family history of substance abuse. Prescribers must also commit to regular monitoring of patients' urine drug tests and discontinue opioids if there are signs of abuse [22]. That is, not all patients with PHN can use opioids for pain relief. Therefore, there is an urgent need to find a treatment that can relieve severe acute pain in PHN patients instead of opioids. Nitrous oxide/oxygen mixture has been shown to have a fast and safe analgesic effect. In 1799, Humphry Davy discovered that nitrous oxide had anesthetic and analgesic effects [23]. Nitrous oxide does not irritate the respiratory tract, has very low solubility in the blood, and does not bind to proteins [24]. Nitrous oxide plays an analgesic role by stimulating the beta endorphin system and antagonizing the N-methyl-D-aspartate (NMDA) receptor [25]. Studies have shown that 30% nitrous oxide produces an analgesic effect equivalent to 10–15 mg of morphine [26, 27]. In clinical practice, nitrous oxide and oxygen are mixed in different proportions for anesthesia or analgesia [28]. The nitrous oxide/oxygen mixture is stored in a pre-mixed cylinder and can be inhaled through a facemask or nasal catheter for self-administered analgesia. When inhaling nitrous oxide/oxygen mixture for analgesia, the patient is conscious, and adverse reactions are uncommon [29]. 
Now, it has been used in many countries around the world to relieve a wide variety of pain, such as childbirth [30], dental procedures [31], burn dressing [32], cancer patients with breakthrough pain [33], lumbar punctures [34], and other painful conditions. Another advantage of nitrous oxide/oxygen mixture is that it works quickly and recovers quickly [35]. Also, nitrous oxide/oxygen mixture is easily controlled and does not require a professional anesthesiologist to operate [36]. Side effects were mild and quickly resolved after cessation of inhalation [29]. Taking these advantages into account, this study will be conducted to explore the analgesic efficacy and safety of a fixed nitrous oxide/oxygen mixture (Patent no. ZL 2013 1 0053336.X) on PHN through a randomized controlled trial. This study hypothesizes that the inhalation of nitrous oxide/oxygen mixture will be able to relieve pain in patients with PHN. It will be possible to replace opioids if nitrous oxide/oxygen mixture can effectively and rapidly relieve pain in patients with PHN, thereby reducing the risk of opioid abuse and lightening the financial burden on patients. The main aim of the study is to evaluate the analgesic effect of the fixed nitrous oxide/oxygen mixture on patients with PHN. There are three secondary aims of the study: To evaluate the safety of the fixed nitrous oxide/oxygen mixture by monitoring patients' physiological parameters and adverse effects To determine the satisfaction of patients with the analgesic effect of the fixed nitrous oxide/oxygen mixture To investigate the acceptance of patients to the fixed nitrous oxide/oxygen mixture analgesia This study is a single-center, two-group (1:1), randomized, placebo-controlled, double-blind clinical efficacy trial. We intend to investigate the analgesic effect and safety of nitrous oxide/oxygen mixture for PHN patients. Eligible participants will be randomly assigned to either the intervention group (nitrous oxide/oxygen mixture) or the control group (oxygen) based on computer-generated random numbers. The whole study design is shown in Fig. 1. Study design framework. PHN postherpetic neuralgia, NRS numerical rating scale, SpO2 percutaneous oxygen saturation, HR heart rate, BP blood pressure, RTO routine treatment plus oxygen, RTN routine treatment plus nitrous oxide/oxygen mixture The protocol was approved by the Ethics Committee of the General Hospital of Ningxia Medical University (Reference No: 2018-373). The trial was registered with the Chinese Clinical Trial Registry (ChiCTR1900023730) on 9 June 2019. We adhered to Standard Protocol Items Recommendations for Interventional Trials (SPIRIT) for the design of the test (see Additional file 1). Study setting This study will be conducted in the pain department of the General Hospital of Ningxia Medical University. This hospital is a tertiary hospital with functions of medical treatment, teaching, and scientific research. Approximately 100 PHN inpatients are admitted to the pain unit each year. This is sufficient to meet the sample size requirement. All patients with postherpetic neuralgia admitted to the pain unit will be invited to participate in this study. 
Patients who meet the following inclusion criteria will be enrolled in the trial:
The patient met the Chinese PHN clinical diagnostic criteria (PHN is defined as pain persisting for ≥ 1 month after HZ rash healing [37]);
Over the age of 18;
Pain score ≥ 4 according to the Numerical Rating Scale (NRS); and
Volunteered to participate and signed the informed consent.
Exclusion criteria
Patients who have one of the following conditions are not allowed to participate in the trial:
Women who are pregnant;
The patient diagnosed with intestinal obstruction, air embolism, pneumothorax, and obstructive respiratory disease;
The patient's clinical history includes epilepsy;
The patient suffers from otorhinolaryngologic diseases, such as sinusitis, middle ear disease, and eardrum transplantation;
Patient who is unable to report pain; and
Critically ill patients: (1) patients in intensive care, (2) patients after surgery, (3) patients with severe trauma or extensive burns, (4) patients with ventilator-assisted breathing, and (5) patients with life-threatening conditions requiring monitoring of vital signs.
This study will be conducted in the pain department of the General Hospital of Ningxia Medical University. The number of PHN patients admitted to the pain department each year is enough to meet the sample size of this study. We will recruit participants through the pain department's WeChat public platform, posters, and medical staff introductions. For patients who wish to participate in the study, researchers will elaborate on the purpose, methods, possible risks, and rights of participants. If the patient is fully aware of the study and agrees to participate and meets the inclusion criteria, the investigator will have the patient sign the informed consent. Randomization, allocation concealment, and blinding Patients who meet the inclusion criteria will be randomly assigned to the intervention group and control group at a 1:1 allocation ratio after recruitment. The randomly assigned list will be computer-generated and performed by a statistician who is not involved in the trial and will be stored in a sealed, opaque envelope kept by the project manager. To ensure that the trial is double-blind, no investigator or patient will know about the randomization, only the project manager responsible for the gas distribution. The nitrous oxide/oxygen mixture and the oxygen will be supplied by the Ningfeng Oxygen Company, and the cylinder packaging of the two gases is identical. The letters A and B will be used to identify the two gases, and only the project manager will know what type of gas each letter represents. The project manager will distribute the gas according to the randomly assigned list. A clinician is responsible for assessing the patient's condition on the ward, determining whether the patient meets the inclusion criteria, and informing the patient of the purpose of the trial, methods, possible risks, and rights of participants. Patients who meet the inclusion criteria will sign an informed consent form after agreeing to participate in the trial. All patients in the trial will receive routine treatment (oral pregabalin 75 mg twice daily). Patients in the intervention group will inhale nitrous oxide/oxygen mixture with a facemask, which is a visible intervention measure. 
To eliminate the influence of psychological factors on the experimental results of patients in the control group, patients in the control group will inhale oxygen by a facemask. Patients in the intervention group will receive routine treatment plus a fixed nitrous oxide/oxygen mixture (contains 65% nitrous oxide and 35% oxygen), while patients in the control group will receive routine treatment plus oxygen. Before inhaling the gas, the data collector will teach the patient to use the NRS to report pain intensity. Besides, the data collector will inform patients that they have the right to stop or quit the trial at any time if they experience discomfort during the study. Before the trial begins (T0), the data collector will record the patient's demographic and baseline characteristics; assess the patient's pain intensity; measure heart rate, blood pressure, and percutaneous oxygen saturation; and teach the patient to inhale gas using a facemask. Patients in both groups will inhale gas using an oral-nasal mask, which is disposable and has a one-way valve. According to our previous research, the gas intake will last for 15 min [33, 34]. When the data collector opens the valve of the cylinder and the patient puts on the mask to inhale the gas, the trial will start timing. The data collector will assess the patient's pain intensity and measure blood pressure, heart rate, and percutaneous oxygen saturation at the beginning of the trial at 5 min (T1) and 15 min (T2). To assess the patient's pain, the data collector will ask the patient: How bad is the pain now? Patients will point to the corresponding pain score with their fingers and without having to remove the facemask. Five minutes after the end of the trial (T3), in addition to collecting the data mentioned above, the data collector will ask patients about their satisfaction with pain relief and acceptance of analgesic methods. Throughout the trial, the data collector will closely observe and ask the patient whether there are adverse effects. Nitrous oxide/oxygen mixture has been widely used for pain relief with mild adverse effects and quick recovery [29]. Adverse effects of nitrous oxide/oxygen mixture have been reported in the literature, including nausea, vomiting, dizziness, drowsiness, headache, hypotension, and oxygen desaturation [29]. These adverse reactions usually do not require medication and recover quickly [29]. If any adverse effects occur or the patient requests that the trial be discontinued, the data collector can determine whether the trial should continue. The reason for trial stopped will be recorded by the data collector. The project manager will accompany the patient to participate in the trial but does not involve in the collection of trial data. Only the project manager knows what kind of gas the patient is breathing. During the trial, the project manager will take treatment measures if the patient has adverse effects. Participant retention and withdrawal To improve patients' ability to adhere to the intervention, researchers will do the following strategies. First, the investigator will explain to the patient in detail the purpose of the study, the methodology, the possible risks, and the rights of the participants. Second, the researchers will closely monitor patients' physiological parameters during the trial to ensure the safety of their lives. If patients have any discomfort during the trial, they will be available for medical treatment at any time. 
Third, patients who complete the trial will receive a free visit from a pain specialist. The investigator will create a chart to record the patient's adherence to the trial. Patients can ask to withdraw from the trial at any time without affecting subsequent treatment. The data collector will record and analyze the reasons for the withdrawal. Data collection, management, and confidentiality Members of the research team will be trained before the study. The training includes protocols for trial design (e.g., how to achieve double blindness), use of gas devices (e.g., use of facemask), data collection (e.g., how to assess a patient's pain intensity), and experimental contingency planning (e.g., management of adverse effects in patients). Besides, data collectors and investigators will be fixed during the trial, and data will be double-entered to ensure accuracy. The Data Monitoring Committee (DMC) members will periodically review the database to improve data quality (see the "Data monitoring" section). Patients will participate anonymously and their personal information will not be disclosed. The patient's anonymous code will be randomly generated by the computer and kept by the project manager. The trial data will be stored in a folder on the computer that can only be accessed through password authentication. Researchers can check this data only with the authorization of the project manager. When the experiment is complete, the researchers will have access to the data for statistical analysis. We will use a form to collect the baseline characteristics of patients, including age, gender, weight, height, education level, employment status, medical burden (no burden at all, basically no burden, some burden, heavy burden), location of pain, and course of the disease. Besides, the patients' pain intensity, physiological parameters (blood pressure, heart rate, percutaneous oxygen saturation), patients' acceptance of the analgesia, patients' satisfaction with pain relief, and adverse effects will be recorded. NRS will be used to assess the intensity of pain, ranging from 0 to 10, where 0 means "painless" and 10 means "worst pain imaginable" [38]. In most cases, clinicians and patients tend to use NRS to assess pain levels. Patients choose a number to represent their pain degree according to their feelings, which is easy to be understood by patients, is easy to use, and even can obtain scores in oral use, with high reliability and validity [39]. The reliability of the NRS has been proven to be moderate to high, with a maximum of 0.96. Besides, the convergence validity of NRS is 0.79 to 0.95 [40]. The patients' physiological parameters will be monitored with an electronic sphygmomanometer (OMRON, HEM-7120) and oximeter (PC-60B). Pain intensity and the physiological parameters will be monitored simultaneously at baseline (T0), 5 min (T1), and 15 min (T2) after the trial began and at 5 min (T3) after the trial ended. Patients' satisfaction with pain relief will be measured by a 5-point Likert scale (5, very satisfied; 4, satisfied; 3, uncertain; 2, dissatisfied; 1, very dissatisfied) [41]. The 5-point Likert scale is one of the Likert scales. Likert scale plays an important role in psychological research, which is the main method to measure attitude and personality. Besides, the Likert scale is easy to implement [42]. Furthermore, the data collectors will inquire into the patient's acceptance of analgesia by asking the patient whether they would accept the gas inhalation to ease the pain (yes/no). 
Satisfaction and acceptance will be investigated at T3. Starting with gas inhalation, the data collectors will carefully observe and record any adverse effects on the patient. The expected adverse effects of nitrous oxide/oxygen mixture therapy include nausea, vomiting, dizziness, drowsiness, headache, hypotension, and oxygen desaturation [29]. If any adverse effects occur, the study will be terminated immediately, the patient will be given oxygen, and the adverse effects will be fully reversible within 5 min [43]. Any adverse effects concerning the intervention will be recorded throughout the trial period. Outcome measure The primary outcome measure is pain intensity, which will be measured by the NRS. The data collector will assess pain intensity at baseline, T1, T2, and T3. Before the trial, patients will be taught to use the NRS to report pain intensity by pointing to the appropriate number (range = 0 to 10) without having to take off the facemask, thus ensuring continued inhalation of the gas. The secondary outcomes are patients' satisfaction with pain relief, patients' acceptance of analgesia, and physiological parameters (blood pressure, heart rate, percutaneous oxygen saturation). Data collectors will collect data on patients' satisfaction with pain relief, patients' acceptance of analgesia, and adverse effects at T3. Patients' physiological parameters will be collected at baseline, T1, T2, and T3 (Fig. 2). SPIRIT figure: schedule of enrolment, interventions, and assessments (PHN, postherpetic neuralgia; T0, baseline; T1, 5 min after starting the therapy; T2, 15 min after starting the therapy; T3, 5 min after finishing the therapy). The DMC will be set up at the same time the research project is established. The DMC includes a pain management specialist, a chief pharmacist, a chief nurse, and a statistical specialist who chairs the DMC. During the implementation of the study, members of the DMC will review the original trial data every month, check the standardization of the study, ensure the completeness and accuracy of the data, and guarantee the credibility and dependability of the study results. If the study is not performed as planned, the DMC will make corrective recommendations or stop the study. Any deviations from the trial protocol will be recorded in a breach report form, and the study team will discuss and propose a modification plan; clinical trial registration information will be updated as necessary. Paper copies of the original trial data will be stored in a locked filing cabinet in the pain department office of the General Hospital of Ningxia Medical University. All data will be entered into the computer by two researchers, and access passwords will be set to keep the data secure. Personally identifiable subject information related to the data will be replaced with anonymous numbers. To ensure data quality, an independent clinical research assistant will review the original data every month to check whether the data are correct and complete. The authors will follow the guidelines recommended by the International Committee of Medical Journal Editors (ICMJE). Sample size calculation We performed a pilot trial to calculate the sample size. In line with our previous study on breakthrough pain [44], the sample size is calculated based on the patients' pain intensity, because pain intensity is the primary outcome measure. Ducassé et al. [43] conducted a study on the analgesic effects of pre-mixed nitrous oxide and oxygen on patients with out-of-hospital trauma. 
The study confirmed that pain intensity in the intervention group (nitrous oxide/oxygen mixture) was significantly lower than that in the control group (medical air) at 5 min after administration (p < 0.001). Therefore, we used the pain intensity recorded at T1 as the reference in calculating the sample size. Our pilot trial involved 14 patients, with a 1:1 ratio between the intervention group and the control group. The sample size of this study was calculated by the following formula: $$ n_{1}=n_{2}=2\left[\frac{u_{\alpha}+u_{\beta}}{\delta/\sigma}\right]^{2}+\frac{1}{4}u_{\alpha}^{2} $$ where n1 and n2 represent the sample sizes of the two groups, δ is the difference between the two population means, and σ represents the population standard deviation. According to the pilot trial, the pain severity during the test reported by the control group patients (n = 7) had a mean (SD) of 6.71 (1.28) out of 10, while patients in the intervention group (n = 7) reported a mean (SD) of 4.50 (1.80). This gives δ = |μ1 − μ2| = 2.21. The standard deviation of the entire sample (n = 14) was 1.93, so σ = 1.93. α is the significance level of 5% for a two-sided test, giving uα = 1.96; β is the type II error, set to 0.1, giving uβ = 1.282. Substituting these values into the formula gives n1 = n2 = 16.9, so the sample size of each group is about 17. Taking into account a 20% drop-out rate before the end of the study, we decided to enroll a total of 42 participants (21 per group). The trial data will be analyzed by a statistical expert using SPSS version 22.0 (Chicago, IL, USA). The statistical expert is not involved in the experiment and is only responsible for the analysis of the data. We will use descriptive statistical methods to summarize demographics and baseline clinical features. The difference in the primary outcome between the two groups will be examined using the t test. A rank sum test will be used to compare satisfaction with pain relief between the two groups. A chi-square test will be used to analyze the patients' acceptance of analgesia, and adverse effects will also be tested by the chi-square test. Physiological parameters will be compared using repeated measures analysis of variance. In addition to the complete-case analyses, multiple imputation will be used to replace missing values in the outcome parameters. Data will be analyzed on an intention-to-treat basis. Results with p < 0.05 will be considered statistically significant. PHN is a potentially debilitating neuropathic pain that is often undertreated. Viral damage to central and peripheral nerves may be the cause of PHN pain, which may be spontaneous, intermittent, or chronic, with no pattern [17]. PHN often affects the elderly, who have poor immunity and suffer from a variety of diseases. A survey of elderly PHN patients showed that pain severely interfered with their normal lives. Thirty-nine percent of patients with mild pain reported at least moderate depression; the corresponding proportions among patients with moderate and severe pain were 49% and 60%, respectively. Only 14% of patients reported being "very satisfied" or "fairly satisfied" with the medication for pain relief [45]. At present, clinicians pay more attention to the long-term treatment of PHN while neglecting rapid relief of pain. Patients must endure pain because analgesics work slowly. Therefore, rapid relief of pain in patients with PHN is particularly important. 
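As a quick check on the sample-size arithmetic above, the short script below (our illustration, not part of the protocol; the variable names are ours) reproduces the calculation from the pilot-trial values.

```python
import math

# Pilot-trial inputs reported above
delta = 6.71 - 4.50   # difference between the two group means (= 2.21)
sigma = 1.93          # standard deviation of the whole pilot sample
u_alpha = 1.96        # two-sided significance level of 5%
u_beta = 1.282        # type II error beta = 0.1 (90% power)

# n1 = n2 = 2 * [(u_alpha + u_beta) / (delta / sigma)]^2 + u_alpha^2 / 4
n_per_group = 2 * ((u_alpha + u_beta) / (delta / sigma)) ** 2 + u_alpha ** 2 / 4
print(round(n_per_group, 1))   # -> 17.0, i.e., about 17 patients per group

# Allowing for an anticipated 20% drop-out rate gives the enrolment target of 21 per group (42 in total)
print(math.ceil(17 * 1.2))     # -> 21
```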
In previous research, Liu et al. [33] investigated the analgesic effect of a nitrous oxide/oxygen mixture on cancer patients with breakthrough pain. The results showed that the pain intensity of patients treated with the nitrous oxide/oxygen mixture decreased significantly at 5 min after the start of treatment (2.8 ± 1.3 versus 5.5 ± 1.2, p < 0.01). Furthermore, there were no serious adverse events associated with the nitrous oxide/oxygen mixture. The results of Liu et al.'s study indicated that the nitrous oxide/oxygen mixture can provide rapid and safe analgesia. Therefore, in this study, we will attempt to explore the analgesic efficacy and safety of a fixed nitrous oxide/oxygen mixture for PHN. To date, this study is the first randomized controlled trial to investigate the effects of a nitrous oxide/oxygen mixture for pain relief in PHN patients. If this study shows that the nitrous oxide/oxygen mixture is beneficial for PHN patients, it could be used as an emergency therapy for rapid relief of severe acute pain. PHN could be treated with the nitrous oxide/oxygen mixture to compensate for the slow onset of systemic treatment and to reduce opioid use. There are some limitations in this study. First, the definition of PHN used in this study differs from that of other studies: because this study is conducted in a hospital in China, the Chinese diagnostic criteria for postherpetic neuralgia are adopted (pain persisting for ≥ 1 month after HZ rash healing). The results of this study should therefore be analyzed and discussed in relation to other research on postherpetic neuralgia. Second, this is a single-center study with a relatively limited number of participants, so the results may not be fully generalizable; in the future, we will conduct multi-center research. Recruitment of patients began on 8 October 2019. At the time of manuscript submission, recruitment for this study is ongoing. Due to the prevalence of COVID-19, completion of this trial has been delayed and is now expected by April 2021 instead of the originally scheduled August 2020. At present, the trial has already recruited 20 participants. This is protocol version 3.0, dated 22 July 2019. After this study is completed, the final dataset and statistical codes will be accessible from the corresponding authors on reasonable request, except for patients' personal information. The results will be published in peer-reviewed journals. Findings will be shared with the academic community, policymakers, and the general public. PHN: postherpetic neuralgia HZ: herpes zoster NeuPSIG: The Neuropathic Pain Special Interest Group of the International Association for the Study of Pain TCAs: tricyclic antidepressants SPIRIT: Standard Protocol Items Recommendations for Interventional Trials NRS: Numerical Rating Scale DMC: Data Monitoring Committee ICMJE: International Committee of Medical Journal Editors Johnson RW, Rice AS. Clinical practice. Postherpetic neuralgia. N Engl J Med. 2014;371:1526–33. Van Hecke O, Austin S, Khan R, et al. Neuropathic pain in the general population: a systematic review of epidemiological studies. Pain. 2014;155(9):1907. Delaney A, Colvin LA, Fallon MT, et al. Postherpetic neuralgia: from preclinical models to the clinic. Neurotherapeutics. 2009;6(4):630–7. Arnold N, Messaoudi I. Herpes zoster and the search for an effective vaccine. Clin Exp Immunol. 2017;187(1):82–92. Chen LK, Arai H, Chen LY, et al. Looking back to move forward: a twenty-year audit of herpes zoster in Asia-Pacific. BMC Infect Dis. 2017;17(1):213. Dworkin RH, Gnann JW, Oaklander AL, et al. 
Diagnosis and assessment of pain associated with herpes zoster and postherpetic neuralgia. J Pain. 2008;9(1):S37–44. Johnson RW, Wasner G, Saddier P, et al. Herpes zoster and postherpetic neuralgia: optimizing management in the elderly patient. Drugs Aging. 2008;25(12):991–1006. Gupta R, Smith PF. Post-herpetic neuralgia. Contin Educ Anaesth Crit Care Pain. 2012;12:181–5. Drolet M, Levin MJ, Schmader KE, et al. Employment related productivity loss associated with herpes zoster and postherpetic neuralgia: a 6-month prospective study. Vaccine. 2012;30(12):2047–50. Hempenstall K, Nurmikko TJ, Johnson RW, et al. Analgesic therapy in postherpetic neuralgia: a quantitative systematic review. PLoS Med. 2005;2(7):e164. van Wijck AJ, Opstelten W, Moons KG, et al. The PINE study of epidural steroids and local anaesthetics to prevent postherpetic neuralgia: a randomised controlled trial. Lancet. 2006;367:219–24. Finnerup NB, Attal N, Haroutounian S, et al. Pharmacotherapy for neuropathic pain in adults: a systematic review and meta-analysis. Lancet Neurol. 2015;14(2):162–73. Johnson RW, Rice ASC. Postherpetic neuralgia. N Engl J Med. 2014;371:1526–33. Johnson P, Becker L, Halpern R, et al. Real-world treatment of post-herpetic neuralgia with gabapentin or pregabalin. Clin Drug Investig. 2013;3(1):35–44. Bockbrader HN, Wesche D, Miller R, et al. A comparison of the pharmacokinetics and pharmacodynamics of pregabalin and gabapentin. Clin Pharmacokinet. 2010;49(10):661–9. Sampathkumar P, Drage LA, Martin DP. Herpes zoster (shingles) and postherpetic neuralgia. Mayo Clin Proc. 2009;84(3):274–80. Argoff CE. Review of current guidelines on the care of postherpetic neuralgia. Postgrad Med. 2011;123(5):134–42. Saarto T, Wiffen PJ. Antidepressants for neuropathic pain: a Cochrane review. J Neurol Neurosurg Psychiatry. 2010;81(12):1372–3. Nalamachu S, Morley-Forster P. Diagnosing and managing postherpetic neuralgia. Drugs Aging. 2012;29(11):863–9. Raja SN, Haythornthwaite JA, Pappagallo M, et al. Opioids versus antidepressants in postherpetic neuralgia: a randomized, placebo-controlled trial. Neurology. 2002;59(7):1015–21. Gudin J, Fudin J, Wang E, et al. Treatment patterns and medication use in patients with postherpetic neuralgia. J Manag Care Spec Pharm. 2019;25(12):1–10. Harden RN, Kaye AD, Kintanar T, et al. Evidence-based guidance for the management of postherpetic neuralgia in primary care. Postgrad Med. 2013;125(4):191–202. West JB. Humphry Davy, nitrous oxide, the pneumatic institution, and the royal institution. Am J Physiol Lung Cell Mol Physiol. 2014;307(9):L661–7. Evans JK, Buckley SL, Alexander AH, et al. Analgesia for the reduction of fractures in children: a comparison of nitrous oxide with intramuscular sedation. J Pediatr Orthop. 1995;15(1):73–7. de Vasconcellos K, Sneyd JR. Nitrous oxide: are we still in equipoise? A qualitative review of current controversies. Br J Anaesth. 2013;111(6):877–85. Chapman WP, Arrowood JG, Beecher HK. The analgesic effects of low concentrations of nitrous oxide compared in man with morphine sulphate. J Clin Invest. 1943;22(6):871–5. Parbrook GD, Rees GA, Robertson GS. Relief of post-operative pain: comparison of a 25 per cent nitrous-oxide and oxygen mixture with morphine. Br Med J. 1964;2(5407):480–2. Tunstall ME. Obstetric analgesia: the use of a fixed nitrous oxide and oxygen mixture from one cylinder. Lancet. 1961;278(7209):964. 
Faddy SC, Garlick SR. A systematic review of the safety of analgesia with 50% nitrous oxide: can lay responders use analgesic gases in the prehospital setting? Emerg Med J. 2005;22(12):901–8. Likis FE, Andrews JC, Collins MR, et al. Nitrous oxide for the management of labor pain: a systematic review. Anesth Analg. 2014;118(1):153–67. Yokoe C, Hanamoto H, Sugimura M, et al. A prospective, randomized controlled trial of conscious sedation using propofol combined with inhaled nitrous oxide for dental treatment. J Oral Maxillofac Surg. 2015;73(3):402–9. Li YX, Han WJ, Tang HT, et al. Nitrous oxide-oxygen mixture during burn wound dressing: a double-blind randomized controlled study. CNS Neurosci Ther. 2013;19(4):278–9. Liu Q, Gao LL, Dai YL, et al. Nitrous oxide/oxygen mixture for analgesia in adult cancer patients with breakthrough pain: a randomized, double-blind controlled trial. Eur J Pain (London, England). 2018;22(3):492–500. Liu Q, Chai XM, Zhang JJ, et al. A fixed nitrous oxide and oxygen mixture for analgesia in children with leukemia with lumbar puncture-induced pain: a randomized, double-blind controlled trial. J Pain Symptom Manag. 2019;57(6):1043–50. Drummond GB, Fisher L, Pumphrey O, et al. Direct measurement of nitrous oxide kinetics. Br J Anaesth. 2012;109:776–81. Moisset X, Sia MA, Pereira B, et al. Fixed 50:50 mixture of nitrous oxide and oxygen to reduce lumbar-puncture-induced pain: a randomized controlled trial. Eur J Neurol. 2016;0:1–7. Yu S, Wan Y, Wan Q, et al. Chinese expert consensus on diagnosis and treatment of postherpetic neuralgia. Chin J Pain Med. 2016;22:161–7. Annequin D, Carbajal R, Chauvin P, et al. Fixed 50% nitrous oxide oxygen mixture for painful procedures: a French survey. Pediatrics. 2000;105(4):E47. Ferreira-Valente MA, Pais-Ribeiro JL, Jensen MP. Validity of four pain intensity rating scales. Pain. 2011;152(10):2399–404. Kahl C, Cleland JA. Visual analogue scale, numeric pain rating scale and the McGill Pain Questionnaire: an overview of psychometric properties. Phys Ther Rev. 2005;10(2):123–8. Juang CM, Yen MS, Horng HC, et al. Treatment of primary deep dyspareunia with laparoscopic uterosacral nerve ablation procedure: a pilot study. J Chin Med Assoc. 2006;69(3):110–4. Böckenholt U. Measuring response styles in Likert items. Psychol Methods. 2017;22(1):69–83. Ducassé JL, Siksik G, Durand-Béchu M, et al. Nitrous oxide for early analgesia in the emergency setting: a randomized, double-blind multicenter prehospital trial. Acad Emerg Med. 2013;20(2):178–84. Liu Q, Wang Y, Luo XJ, Wang NJ, Chen P, Jin X, et al. A fixed inhaled nitrous oxide/oxygen mixture as an analgesic for adult cancer patients with breakthrough pain: study protocol for a randomized controlled trial. Trials. 2017;18(1):13. Oster G, Harding G, Dukes E, et al. Pain, medication use and health-related quality of life in older persons with postherpetic neuralgia: results from a population-based study. J Pain. 2005;6(6):356–63. The research team would like to thank the Science and Technology Department of Ningxia for funding this study. Thanks to the medical staff and patients of the pain department of the General Hospital of Ningxia Medical University for their support. Besides, we appreciate the experts who made suggestions and comments on this study. The study was supported by the Science and Technology Department of Ningxia (grant number: 2019BEG03009). 
The funder did not play a role in the design or execution of the study; the collection, management, analysis, and interpretation of data; the writing of the report; or the decision to submit the report for publication. The funder provides only financial support for the study. Audits will be conducted annually to verify the utilization of funds, the trial progress, and the participants' medical records. The name and contact information for the trial sponsor are as follows: Science and Technology Department of Ningxia, Tel: 0951-5032404, Address: Yinchuan, 95 Jie Fang street. Hai-Xiang Gao, Jun-Jun Zhang, Ning Liu, Yi Wang and Chun-Xiang Ma contributed equally to this work. School of Nursing, Ningxia Medical University, 1160 Sheng Li Street, Yinchuan, 750004, China Hai-Xiang Gao, Jun-Jun Zhang, Ting-Ting Zhang, Yi-Ling Wang & Yu-Xiang Li Intensive Care Unit, The Second People's Hospital of Yinchuan, 684 Bei Jing Street, Yinchuan, 750011, China Hai-Xiang Gao Department of Pharmacology, Ningxia Medical University, 1160 Sheng Li Street, Yinchuan, 750004, China Ning Liu Pain Department, General Hospital of Ningxia Medical University, 804 Sheng Li Street, Yinchuan, 750004, China Yi Wang, Chun-Xiang Ma & Wen-Qiang Bao School of Public Health and Management, Ningxia Medical University, 1160 Sheng Li Street, Yinchuan, 750004, China Lu-Lu Gao School of Preclinical Medical Sciences, Ningxia Medical University, 1160 Sheng Li Street, Yinchuan, 750004, China Nursing Department, The First People's Hospital of Yinchuan, 2 Li Qun Street, Yinchuan, 750004, China Yi-Ling Wang Jun-Jun Zhang Yi Wang Chun-Xiang Ma Ting-Ting Zhang Wen-Qiang Bao Yu-Xiang Li W-QB and Y-XL conceived and designed the trial and requested funding from the Science and Technology Department of Ningxia. H-XG drafted the first version of the manuscript together with J-JZ and NL. YW and L-LG provided significant input to the study design. C-XM and T-TZ contributed to patient enrollment and data collection. QL and L-YW analyzed the data. All authors read and approved the submitted version of the final manuscript. Correspondence to Wen-Qiang Bao or Yu-Xiang Li. Ethical approval was obtained from the Ethics Committee of the General Hospital of Ningxia Medical University (2018-373). The study was registered with the Chinese Clinical Trial Registry (ChiCTR1900023730). Prior to the trial, patients who meet the inclusion criteria will be briefed on the purpose of the study, the intervention, the possible risks, and their rights in the trial. Patients are given 24 h to consider and will sign an informed consent form if they agree to participate. These materials are available from the corresponding author. SPIRIT 2013 Checklist: Recommended items to address in a clinical trial protocol and related documents. Gao, HX., Zhang, JJ., Liu, N. et al. A fixed nitrous oxide/oxygen mixture as an analgesic for patients with postherpetic neuralgia: study protocol for a randomized controlled trial. Trials 22, 29 (2021). https://doi.org/10.1186/s13063-020-04960-5 Severe acute pain Randomized control trial
CommonCrawl
Transactions of the American Mathematical Society Published by the American Mathematical Society, the Transactions of the American Mathematical Society (TRAN) is devoted to research articles of the highest quality in all areas of pure and applied mathematics. ISSN 1088-6850 (online) ISSN 0002-9947 (print) The 2020 MCQ for Transactions of the American Mathematical Society is 1.43. Bounded holomorphic functions on bounded symmetric domains by Joel M. Cohen and Flavia Colonna, Trans. Amer. Math. Soc. 343 (1994), 135-156. Let D be a bounded homogeneous domain in ${\mathbb {C}^n}$, and let $\Delta$ denote the open unit disk. If $z \in D$ and $f:D \to \Delta$ is holomorphic, then ${\beta _f}(z)$ is defined as the maximum ratio $|{\nabla _z}(f)x|/{H_z}{(x,\bar x)^{1/2}}$, where x is a nonzero vector in ${\mathbb {C}^n}$ and ${H_z}$ is the Bergman metric on D. The number ${\beta _f}(z)$ represents the maximum dilation of f at z. The set consisting of all ${\beta _f}(z)$ for $z \in D$ and $f:D \to \Delta$ holomorphic is known to be bounded. We let ${c_D}$ be its least upper bound. In this work we calculate ${c_D}$ for all bounded symmetric domains having no exceptional factors and give an indication of how to handle the general case. In addition we describe the extremal functions (that is, the holomorphic functions f for which ${\beta _f} = {c_D}$) when D contains $\Delta$ as a factor, and show that the class of extremal functions is very large when $\Delta$ is not a factor of D. 
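For readability, the two quantities described in the abstract can be restated in displayed form (this restatement is ours, using only the notation introduced above):

$$ \beta_f(z)=\max_{x\neq 0}\frac{|\nabla_z(f)\,x|}{H_z(x,\bar{x})^{1/2}}, \qquad c_D=\sup\left\{\beta_f(z): z\in D,\ f:D\to\Delta\ \text{holomorphic}\right\}. $$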
Journal: Trans. Amer. Math. Soc. 343 (1994), 135-156 MSC: Primary 32A37; Secondary 32M15, 46E15 DOI: https://doi.org/10.1090/S0002-9947-1994-1176085-6
CommonCrawl
An enhanced low overhead and stable clustering scheme for crossroads in VANETs Yan Huo1,2, Yuejia Liu1, Liran Ma3, Xiuzhen Cheng2 & Tao Jing1 In this paper, we study the clustering problem for crossroads in Vehicular Ad hoc Networks (VANETs). Considering the load balancing of both the whole network and each cluster based on multiple metrics, an Enhanced Low Overhead and Stable Clustering (EnLOSC) scheme is presented to ensure the stability and security of clusters and to reduce the communication overhead in this scenario. The proposed capability metric, designed to find vehicles with similar directions and better channel quality, is exploited in the formation and maintenance processes to determine which node is suitable to serve as a cluster head. Based on this, a Cluster Head Electing in Advance Mechanism (CHEAM) is developed to fairly select a new head for "isolated" vehicles that may not belong to a cluster. Meanwhile, additional metrics related to node density and cluster size are exploited in the Cluster Merging and Splitting Mechanisms to keep the system load balanced and to improve communication quality. Furthermore, the proposed Discovery and Elimination Scheme (DES) is designed to tackle malicious nodes that may harm cluster communication. Accordingly, an enhanced cluster maintenance strategy with multiple metrics and a secure scheme is proposed so as to reduce the number of isolated vehicles, keep the load on each cluster head appropriate, and protect the entire cluster communication link. Numerical results and discussion indicate that cluster stability, communication overhead, load balance, and security can be significantly improved by our proposed scheme. Communication in Vehicular Ad hoc Networks (VANETs), which are considered a special class of Mobile Ad hoc Networks (MANETs), has become an important research topic with the allocation of spectrum for Intelligent Transportation Systems (ITS) and the development of Dedicated Short Range Communication (DSRC) standards [1, 2]. In particular, DSRC is an important technology designed for ITS that requires only a short-range wireless link to transmit signals for Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communication. Naturally, the multi-hop and relay technologies typically exploited in MANETs are introduced in this type of network. However, the existing methods that enable communication in MANETs cannot be directly applied in VANETs due to the following characteristics [3, 4]. First, the fast movement of vehicles can lead to a highly dynamic and frequently disconnected network topology. Second, the trajectories of the vehicles in VANETs are strictly restricted by the layout of roads. Clustering-based methods that divide vehicles into clusters by taking advantage of the layout-determined trajectories are considered effective ways to facilitate communication in VANETs. Stable communication can be achieved in highly dynamic VANETs through cluster-based communication, where a leader is selected within each cluster to handle intra-cluster and inter-cluster traffic. As is well known, clustering, which has already been extensively researched in the past [5, 6], is the task of grouping a set of nodes (mobile devices, vehicles, etc.) with similar properties based on predefined rules. However, there are various difficult challenges in designing reliable communication in the VANET scenario, many of which can be addressed by a clustered network [7]. 
The main source of this difficulty is the rapidly changing network topology caused by the highly mobile environment, which may result in data congestion [8] and low Quality of Service (QoS) [9]. Additionally, inevitable situations such as traffic jams and crossroads also lead to contention and the hidden terminal problem, especially in a dense network. For clustering-based communication in VANETs, the goal of the rules designed in clustering algorithms is to achieve stable, easy, quick, and efficient communication that meets the necessary QoS requirements. Accordingly, much work has been done to develop effective clustering algorithms for VANETs, with most of it focusing on highway or straight-lane scenarios [4, 10–19]. However, the performance of these schemes turns out to be unsatisfactory when it comes to a city scenario with crossroads. This is because a large number of vehicles can become isolated at crossroads. As a result, considerable communication overhead and congestion can result from the routing discovery processes for the isolated vehicles. To deal with these problems, we proposed a novel clustering scheme for crossroads in VANETs in [20], with the objective of stabilizing the clusters, minimizing the number of isolated nodes, and reducing the communication overhead. However, our former scheme only focused on the efficiency of the formation and maintenance methods using vehicle mobility and transmission quality, without considering the system load and network congestion. Furthermore, there may be malicious vehicles that interfere with or even destroy cluster communication security. Thus, it is also important to design an effective mechanism for the crossroad scenario in our scheme. Accordingly, in order to enhance security and system load balancing, we present additional metrics and introduce a malicious vehicle discovery and elimination scheme in the clustering strategy. The main contributions of this paper are threefold: We propose a novel clustering scheme named Enhanced Low Overhead and Stable Clustering (EnLOSC) for crossroads in VANETs, which includes a cluster formation algorithm and a cluster maintenance scheme. In order to implement load balancing for both the network and each cluster during the maintenance phase, we propose the cluster size and node density metrics to analyze the network load so that clusters have similar sizes and densities. At the same time, we also introduce the capability metric to select and update a cluster head by considering both the mobility and the transmission power loss of the vehicle. A Cluster Head Electing in Advance Mechanism (CHEAM) is designed to help a cluster member select a new cluster by predicting its stay time, while the Cluster Merging and Splitting Mechanisms are proposed to keep the network load balanced by observing node density and cluster size. Meanwhile, a secure method, called the Discovery and Elimination Scheme (DES), is presented to find and remove malicious nodes in a cluster. To the best of our knowledge, this is the first work to combine multiple metrics and a security mechanism with a clustering algorithm for the crossroad situation in VANETs. The rest of the paper is organized as follows. Section 2 and Section 3 describe our problem formulation and the metrics for the clustering scheme, respectively. 
Accordingly, we show the details of the cluster formation and maintenance algorithms designed to achieve low-overhead, secure, stable, and load-balanced communication in Section 4 and Section 5. Subsequently, numerical analysis and discussion are presented in Section 6 to evaluate the performance of our scheme. Finally, related work and the conclusion are summarized in Section 7 and Section 8, respectively. Our system model is based on a bidirectional multi-lane city road scenario with a crossroad, as illustrated in Fig. 1. We assume that all vehicles are equipped with GPS so that each vehicle is aware of its own location (represented by Cartesian coordinates), velocity, and direction (represented by a direction vector) at any time. We further assume that the precise time is known and traceable to Coordinated Universal Time (UTC). We also assume that all vehicles send packets with a unified transmitting power P_t and can decode received packets above the threshold P_r. As shown in Fig. 1, there exist a number of clusters. Nodes that are in the same dashed box belong to the same cluster. The cluster heads are red nodes, and the cluster members are black ones. In addition, the nodes in gray are called hopping cluster members because they are about to leave the current cluster and hop to a new one. The undecided nodes in white are the isolated nodes. In a cluster, there are one cluster head, several cluster members, and hopping cluster members. The cluster head is responsible for handling the intra-cluster communication and relaying the inter-cluster communication among clusters. Note that we use node and vehicle interchangeably in the rest of this paper. Similar to [15], only nodes that are moving in the same direction can be clustered together. Once a node joins a cluster and becomes a cluster member, a timer named TimerS, which is related to its predicted stay time in the cluster, starts. The definition of TimerS for member j in cluster i is: $$ \text{TimerS}(i,j)=T_{j,i}^{\text{stay}}-T_{f}, $$ where \(T_{j,i}^{\text {stay}}\) is the predicted stay time of member j in cluster i, which is detailed in Section 5.1.1, and T_f is the ideal duration of a cluster formation procedure, which includes the packet transmission cost and the capability metric comparison cost. Because of the dynamic topology of VANETs, a cluster member may become a hopping cluster member. When this change happens, the hopping member starts searching for a new cluster head to join even though it may still belong to the current cluster. In addition, the isolated nodes continuously search for clusters to join. Note that if there are too many isolated nodes in a dense network, the total communication overhead increases significantly, which can lead to poor network performance. Therefore, it is important to design a clustering algorithm that reduces the number of isolated nodes as much as possible. Metrics for clustering strategy This section describes three metrics (capability, cluster size, and node density), which capture node and network properties and are exploited to design the formation and maintenance algorithms for the crossroad scenario in VANETs. The capability metric Taking into account both a node's mobility and its transmission power loss, we first design a metric to measure a node's capability of acting as a cluster head. Nodes can obtain their position, velocity, and direction information from the data provided by GPS. 
Let \(\vec {D_{i}}=D_{ix}\vec {x}+D_{iy}\vec {y}\) be the direction vector of node i, where \(\vec {x}\) and \(\vec {y}\) are the unit vectors of the X and Y axes. The angle between the directions of two nodes i and j can be calculated as: $${} \theta_{i,j}=\arccos\frac{\vec{D_{i}}\cdot\vec{D_{j}}}{\left|\vec{D_{i}}\right| \left|\vec{D_{j}}\right|}=\arccos\frac{D_{ix}D_{jx}+D_{iy}D_{jy}}{\sqrt{D_{ix}^{2}+D_{iy}^{2}}\sqrt{D_{jx}^{2}+D_{jy}^{2}}}. $$ We consider that node i and node j are moving in the same direction when θ≤π/4 and that they are moving in different directions when θ>π/4. In this way, we avoid mistakenly treating a node as merely changing lanes when it is actually turning at the crossroads. Similarly, let v_i and v_j denote the velocities of nodes i and j obtained from the GPS. The relative velocity of node i and node j can be calculated as follows: $$ v_{i,j}^{\text{rel}}=\left|v_{i}-v_{j}\right|. $$ We use the Relative Velocity Metric (RVM) to indicate the relative mobility between two moving nodes: $$ \text{RVM}(i,j)=\log\frac{v_{\text{max}}}{v_{\text{max}}-v_{i,j}^{\text{rel}}}. $$ Here, v_max is the upper bound of the velocity. When node i has n one-hop (direct) neighbors, the RVM value of node i can be calculated as: $$ \text{RVM}(i)=\frac{1}{n}\sum^{n}_{j=1}\text{RVM}(i,j)=\frac{1}{n}\sum^{n}_{j=1}\log\frac{v_{\text{max}}}{v_{\text{max}}-v_{i,j}^{\text{rel}}}. $$ Clearly, RVM(i) is non-negative. A smaller RVM(i) indicates that node i's velocity is more similar to that of its direct neighbors. That is, a node with a smaller RVM is more likely to stay with its direct neighbors for a longer time due to their similar velocity. Therefore, a node with a lower RVM value is preferred to act as the cluster head, making the cluster more stable. As described in Section 2, P_t is the unified transmission power of all nodes and P_r(i,j) denotes the power received by node i from node j. We define the transmission Power Loss Metric (PLM) between node i and node j as: $$ \text{PLM}(i,j)=\log\frac{P_{t}}{P_{r}(i,j)}. $$ When node i has n direct neighbors, the PLM of node i can be presented as: $$ \text{PLM}(i)=\frac{1}{n}\sum^{n}_{j=1}\log\frac{P_{t}}{P_{r}(i,j)}. $$ PLM(i), which is likewise non-negative, reflects the average channel quality and the distances between a node and its direct neighbors. A node with a smaller PLM value is more likely to have a shorter communication distance and better channel quality with its direct neighbors. Taking both RVM(i) and PLM(i) into consideration, we define a capability metric M to describe the capability of a node to be a cluster head: $$ M(i)=\text{RVM}(i)+\text{PLM}(i). $$ A node with a smaller M value has mobility more similar to that of its direct neighbors and better channel quality. In other words, a more stable cluster can be formed by selecting a node with a smaller M value as the cluster head. The cluster size metric A general challenge for clustering algorithms concerns the number of cluster members. When a cluster contains too many nodes, its scale becomes too large to maintain and update, which may result in a heavy load on the cluster head and worse communication quality. 
To deal with this, we take into account the communication load on each cluster head, represented by the number of members in its cluster, so that the clustering scheme remains load-balanced and performs well in high-node-density scenarios. Each cluster has its own head and members after the initialization and formation stage. For clarity, let N_member(i) denote the number of members of head i, which can be recorded from the periodic Cluster Member Announcement (CMA) broadcast packets used during the maintenance process described in Section 5. Accordingly, we define the cluster size metric as follows: $$ k_{s}(i)=\frac {N_{\text{member}}(i)} {N_{\text{max}}(i)} $$ where the cluster size metric k_s(i) is related to the number of i's members. Intuitively, if the number of members is greater than N_max(i), the cluster needs to be reformed and updated in order to keep the communication load of head i bearable. Here, N_max is an ideal upper limit on the number of a cluster head's members, representing the maximum number of members a cluster head can manage and handle without disconnection and congestion. The value of N_max(i) for cluster head i is determined by two factors: the maximum bandwidth B(i) and the bandwidth allocation method. For simplicity, we consider two kinds of bandwidth allocation. The first one, equal bandwidth allocation for each cluster member, is shown in Fig. 2, in which every cluster member is assumed to occupy a unit bandwidth B_0. Therefore, the value of N_max can be described as $$ N_{\text{max}}=\frac {B(i)} {B_{0}} $$ Fig. 2 Bandwidth allocated equally The other method, depicted in Fig. 3, allocates bandwidth hierarchically based on the demand and priority of the members in a cluster. In this case, some special nodes that are in charge of forwarding massive or important messages require more bandwidth than others. Classifying the cluster members into high, medium, and low priority, we assume that in one cluster there is only one node with high priority, two with medium priority, and the rest with low priority, whose bandwidths are denoted B_high, B_medium, and B_low, respectively. Note that nodes with high or medium priority occupy more bandwidth than those with low priority. Without loss of generality, the relationship among the high-, medium-, and low-priority bandwidths is given by B_high = 2B_low and B_medium = 1.5B_low. Thus, the maximum number of cluster head i's members can be calculated as: $$\begin{array}{*{20}l} \begin{aligned} N_{\text{max}} & =N_{\text{high}}+N_{\text{medium}}+N_{\text{low}}\\ & =1+2+\frac {B(i)-B_{\text{high}}-2 B_{\text{medium}}} {B_{\text{low}}}\\ & =\frac {B(i)} {B_{\text{low}}}-2 \end{aligned} \end{array} $$ Fig. 3 Bandwidth allocated hierarchically The node density metric Similar to the cluster size, each node should be aware of its direct neighbors in order to help determine which node around it is suitable to be the head and how the cluster size should be restricted. Letting N_neighbors(i) be the number of node i's direct neighbors, we use the node density metric k_d(i) to describe the ratio between the number of i's neighbors and N_max: $$ k_{d}(i)=\frac {N_{\text{neighbors}}(i)} {N_{\text{max}}(i)} $$ Obviously, when 0<k_d(i)<1, the density is low enough that node i is capable of relaying and communicating within the cluster and among clusters. In short, the larger k_d(i) is, the denser the nodes are distributed within the radio range of the detected node i. 
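To make the three metrics concrete, the sketch below (our illustration, not from the paper; the logarithm base, sample values, and function names are assumptions) shows how a vehicle could compute M, k_s, and k_d from locally available information.

```python
import math

def capability_metric(v_self, neighbor_speeds, received_powers, v_max, p_tx):
    """M(i) = RVM(i) + PLM(i); a smaller M indicates a better cluster-head candidate.
    The paper does not fix the logarithm base, so base 10 is an assumption here."""
    n = len(neighbor_speeds)
    rvm = sum(math.log10(v_max / (v_max - abs(v_self - v_j))) for v_j in neighbor_speeds) / n
    plm = sum(math.log10(p_tx / p_rx) for p_rx in received_powers) / n
    return rvm + plm

def cluster_size_metric(n_members, n_max):
    """k_s(i): current members divided by the maximum the head can handle (Eq. (9))."""
    return n_members / n_max

def node_density_metric(n_neighbors, n_max):
    """k_d(i): direct neighbors divided by the same maximum (Eq. (12))."""
    return n_neighbors / n_max

# Example: a vehicle at 40 km/h with three same-direction neighbors (illustrative values).
M = capability_metric(40.0, [38.0, 42.0, 45.0], [2e-6, 1e-6, 5e-7], v_max=60.0, p_tx=0.1)
print(M, cluster_size_metric(12, 30), node_density_metric(18, 30))
```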
Cluster formation scheme According to the metrics defined in the previous section, we present the EnLOSC scheme, which contains a cluster formation algorithm and a cluster maintenance scheme. For convenience of description, we assume every node uses its own ID as its identifier and that the cluster ID is represented by the cluster head's ID. Before the cluster-based network topology is formed or changed, there are undecided nodes or hopping cluster members that want to join a new cluster. For this purpose, these nodes broadcast HELLO packets, which contain their position, velocity, and direction information, to their direct neighbors. At the same time, when there are cluster heads in the network, the heads also broadcast their information in Cluster Head Announcement (CHA) packets that contain the cluster head's ID, position, velocity, and direction. Accordingly, when node j receives HELLO packets or CHA packets, it adds the senders' IDs to its Direct Neighbors List (DNL). Then, it uses (2) to calculate the angle θ between its direction and each direct neighbor's direction. If θ>π/4, the corresponding neighbor is considered to be moving in a different direction and is deleted from its DNL. After checking θ and updating the DNL, node j computes the M value based on (8) and sends it to the other nodes in its DNL. If there is only one cluster head in node j's DNL, it sends a ClusterJoin packet including its ID to the head and becomes a member of the cluster. If there is more than one cluster head in node j's DNL, it selects the head with the smallest M value and sends a ClusterJoin packet to that head. If there is no cluster head in node j's DNL, it compares its M value with that of its direct neighbors. When node j finds that its M value is smaller than that of any node in its DNL, it is elected as a cluster head and changes its state to cluster head. After that, it broadcasts a ClusterInvite packet, containing the cluster ID and its M value, to its direct neighbors. Another node that receives this ClusterInvite packet will reply with a ClusterJoin packet to node j if j's M value is smaller than that of any other received ClusterInvite packet. Once a cluster is formed, the cluster head periodically (with period T_c) broadcasts CHA packets. Similarly, the cluster members regularly broadcast Cluster Member Announcement (CMA) packets containing the cluster member's ID, position, velocity, and direction. In this way, the cluster head and the cluster members know each other and maintain the cluster. Additionally, because undecided nodes broadcast HELLO packets with period T_c until they join a cluster, the DNL of every node is updated periodically as well. The details of the cluster formation algorithm are shown in Algorithm 1; a simplified sketch of the join and head-election decision is given below. 
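The following sketch is our simplification; packet handling and timer management are omitted, and the function names are ours. It captures the decision an undecided node makes after pruning and updating its DNL.

```python
import math

def same_direction(d_i, d_j, threshold=math.pi / 4):
    """DNL pruning test: True if the angle between two direction vectors is at most pi/4 (Eq. (2))."""
    dot = d_i[0] * d_j[0] + d_i[1] * d_j[1]
    norm = math.hypot(*d_i) * math.hypot(*d_j)
    return math.acos(max(-1.0, min(1.0, dot / norm))) <= threshold

def formation_decision(my_M, neighbor_Ms, head_Ms):
    """Decide the next action of an undecided node.

    neighbor_Ms : {node_id: M} for the same-direction direct neighbors (the DNL)
    head_Ms     : {head_id: M} for the cluster heads among those neighbors
    """
    if head_Ms:                                   # join the head with the smallest M in range
        return ("send ClusterJoin", min(head_Ms, key=head_Ms.get))
    if all(my_M < m for m in neighbor_Ms.values()):
        return ("become cluster head and broadcast ClusterInvite", None)
    return ("wait for a ClusterInvite from a better candidate", None)

print(same_direction((1.0, 0.0), (0.9, 0.3)))                 # True: roughly the same heading
print(formation_decision(1.2, {"a": 1.5, "b": 2.0}, {}))      # becomes a cluster head
print(formation_decision(1.2, {"a": 0.9}, {"a": 0.9}))        # joins head "a"
```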
Cluster maintenance scheme In most clustering schemes, nodes are only allowed to cluster together when they are moving in the same direction. Nevertheless, a cluster member that has to leave its current cluster and join another one always falls into the undecided state. Because every undecided node in the network has to restart the formation algorithm described in Section 4 frequently, a larger number of undecided nodes leads to higher network overhead. The problem is especially serious in the crossroad scenario when traditional schemes are used, and can even degrade or interrupt intra-cluster and inter-cluster communication. Therefore, a Cluster Head Electing in Advance Mechanism (CHEAM) is proposed in this section to reduce the number of undecided nodes. Besides, node density is still not taken into consideration in existing clustering strategies, although the cluster size grows with increasing node density. To a certain degree, disconnection and congestion occur in cluster-based VANETs when the cluster size exceeds the upper limit that the head can bear, which deteriorates the Quality of Service of the communication. Conversely, a cluster that is too small wastes communication resources. Considering both the cluster size metric and the node density metric, a Cluster Merging and Splitting Mechanism is presented to alleviate and avoid the poor network performance caused by uneven node density. Furthermore, because there may be malicious nodes that interfere with communication and damage network performance, we also design a Discovery and Elimination Scheme (DES) to exclude malicious nodes and to protect users' privacy. Finally, a secure cluster maintenance algorithm with load balancing and low overhead is introduced based on the above mechanisms. Cluster Head Electing in Advance Mechanism Stay time prediction Before proposing CHEAM, we first introduce how to predict the ideal stay time of a cluster member in its cluster. The stay time of a cluster member plays a key role in our cluster maintenance algorithm; therefore, we study the stay time prediction problem before presenting the cluster maintenance algorithm. Assuming \(v_{i}^{\text{Ins}}\), \(v_{j}^{\text{Ins}}\), \(\left(x_{i}^{\text{Ins}},y_{i}^{\text{Ins}}\right)\), and \(\left(x_{j}^{\text{Ins}},y_{j}^{\text{Ins}}\right)\) are the instantaneous velocities and positions of the cluster head i and its member j, which are contained in the CHA and the CMA packets, respectively, the instantaneous distance between head i and member j can be represented by: $$ D^{\text{Ins}}=\sqrt{\left(x_{i}^{\text{Ins}}-x_{j}^{\text{Ins}}\right)^{2}+\left(y_{i}^{\text{Ins}}-y_{j}^{\text{Ins}}\right)^{2}}. $$ Comparing the position and the velocity of head i with those of member j, four different stay time predictions for member j in cluster i, \(T_{j,i}^{\text{stay}}\), can be obtained: $$ T_{j,i}^{\text{stay}}= \left\{ \begin{array}{ll} \frac{R+D^{\text{Ins}}}{v_{j}^{\text{Ins}}-v_{i}^{\text{Ins}}} & \text{if head}~i~\text{is in front of member}~j~\text{and}~ v_{i}^{\text{Ins}}<v_{j}^{\text{Ins}}\\ \frac{R+D^{\text{Ins}}}{v_{i}^{\text{Ins}}-v_{j}^{\text{Ins}}} & \text{if head}~i~\text{is in front of member}~j~\text{and}~ v_{i}^{\text{Ins}}>v_{j}^{\text{Ins}}\\ \frac{R-D^{\text{Ins}}}{v_{j}^{\text{Ins}}-v_{i}^{\text{Ins}}} & \text{if head}~i~\text{is behind member}~j~\text{and}~ v_{i}^{\text{Ins}}<v_{j}^{\text{Ins}}\\ \frac{R-D^{\text{Ins}}}{v_{i}^{\text{Ins}}-v_{j}^{\text{Ins}}} & \text{if head}~i~\text{is behind member}~j~\text{and}~ v_{i}^{\text{Ins}}>v_{j}^{\text{Ins}}, \end{array} \right. $$ where R is the communication radius of a mobile node. 
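A minimal computational sketch of this prediction (ours; it implements Eqs. (13) and (14) literally and assumes the head's and member's GPS data have already been exchanged via CHA and CMA packets) is given below.

```python
import math

def instantaneous_distance(head_pos, member_pos):
    """D^Ins from the head's and member's GPS coordinates (Eq. (13))."""
    return math.dist(head_pos, member_pos)

def predicted_stay_time(head_in_front, v_head, v_member, d_ins, R=100.0):
    """T^stay following the four cases of Eq. (14): R + D^Ins when the head is in front of the
    member, R - D^Ins when it is behind, divided by the absolute relative speed."""
    dv = abs(v_head - v_member)
    if dv == 0:
        return math.inf   # equal speeds are not covered by Eq. (14); treated here as "never leaves"
    return (R + d_ins) / dv if head_in_front else (R - d_ins) / dv

d = instantaneous_distance((130.0, 0.0), (100.0, 0.0))   # head is 30 m ahead of the member
print(predicted_stay_time(True, 12.0, 15.0, d))          # (100 + 30) / 3 ~ 43.3 s
# TimerS for this member would then be T_stay - T_f, as in Eq. (1).
```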
Considering the definition of the predicted stay time and the hopping cluster members mentioned above, the details of CHEAM can be described as follows. The main idea of CHEAM is to select the most stable (optimal) head for the hopping cluster member as a substitute in advance. In this procedure, the scheme needs to detect the direction and predict the stay time of all members in the cluster. Additionally, the substitute head could be the current head if there are no other candidates with a smaller M. Once a substitute is selected, the hopping cluster member hops into the new cluster and becomes a cluster member. In this way, the number of undecided nodes can be significantly reduced so that the cluster-based network overhead is minimized. Accordingly, CHEAM is introduced in Algorithm 2. As mentioned in Section 4, node j starts TimerS and computes the predicted stay time as soon as it joins cluster i. In Algorithm 2, j becomes a hopping cluster member and executes the formation algorithm if TimerS expires or the angle between the directions of j and i becomes larger than π/4. In short, hopping cluster members are always ready to shift from one cluster to another; a member starts the cluster head selection procedure as soon as it changes into the hopping state. Cluster Merging and Splitting Mechanism Cluster merging The Cluster Merging Mechanism is used to combine two nearby clusters whose cluster size metrics are smaller than 0.5. In this way, the merged cluster avoids wasting network resources. The Cluster Merging Mechanism is started when cluster head i detects that k_d(i)>1 and k_s(i)<0.5. Then i broadcasts a CMerge packet to its direct neighbors in the DNL to request merging. Once another cluster head i′ replies to the request, the two clusters are combined and a new cluster head with the lowest M is selected. The Cluster Merging Mechanism is illustrated in Algorithm 3. Cluster splitting The main idea of our Cluster Splitting Mechanism is that the cluster head informs some members to become new cluster heads when both of the following conditions are met: (1) the cluster size is beyond the maximum value and (2) the node density in the radio range of the cluster head is high. These new heads then invite neighbor nodes into their own clusters based on the Cluster Formation scheme. In this way, the average size of clusters can be reduced, and the problem of a cluster head bearing high communication overhead in high-node-density scenarios can be easily solved. Specifically, when the node density and cluster size metrics of cluster head i satisfy k_d(i)≥k_s(i)>1, head i searches for n=⌈k_s(i)⌉+1 candidate nodes with lower M values than the others in its cluster and sends Cluster Head Transformation (CHT) packets to them. Here, ⌈·⌉ represents the ceiling function. Nodes that receive CHT packets change their state to cluster head and broadcast transformed cluster head invitation (TCHInvite) packets to the cluster members as soon as possible. A cluster member may receive N_received invitation packets, whether TCHInvite or ClusterInvite, and numbers these candidate heads from 1 to N_received. To select one of the heads to join, a member generates a random integer N_random in [1, N_received] and joins the cluster of head N_random. Algorithm 4 shows the Cluster Splitting Mechanism; a short sketch of the merge/split decision appears below. 
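The merge and split triggers can be summarized in a few lines (our sketch; the thresholds follow the description above, and the packet exchange is omitted).

```python
import math
import random

def maintenance_action(k_d, k_s):
    """Decide whether a cluster head should trigger merging, splitting, or do nothing."""
    if k_d > 1 and k_s < 0.5:
        return "broadcast CMerge"                       # small cluster in a dense area: try to merge
    if k_d >= k_s > 1:
        n_new_heads = math.ceil(k_s) + 1                # split: promote the lowest-M members
        return f"send CHT packets to {n_new_heads} candidate nodes"
    return "no action"

def pick_invitation(received_heads):
    """A member picks uniformly at random among the TCHInvite/ClusterInvite packets it received."""
    return random.choice(received_heads)

print(maintenance_action(k_d=1.4, k_s=0.3))    # -> broadcast CMerge
print(maintenance_action(k_d=2.0, k_s=1.6))    # -> send CHT packets to 3 candidate nodes
print(pick_invitation(["head_1", "head_2", "head_3"]))
```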
Discovery and Elimination Scheme In this section, a malicious node is assumed to be a node that greedily occupies network resources, e.g., bandwidth or forwarding time slots, by sending packets too frequently. There are also external attackers attempting to paralyze both the network and the clusters by spreading viruses all over the network or installing trojans on nodes without permission. In our adversary model, neither malicious nodes nor external attackers can overhear, eavesdrop on, or tamper with the plaintext of other nodes. Nevertheless, we still need to prevent these malicious nodes and attackers from damaging the network, and we design a security protocol, the Discovery and Elimination Scheme (DES), to protect the privacy of cluster nodes. We assume that all cluster heads act as managers that are aware of their cluster members' IDs, while the members in a cluster do not know each other's IDs. The DES procedure is as follows. Generate and broadcast hash ID: Every packet broadcast by a cluster member must include a control word, which is a hash value computed from the member's ID by a non-invertible hash function, e.g., an SHA hash function. Verify hash ID and forward data: When the cluster head receives a packet, it immediately compares the hash(ID) control word with its own hash table. If the hash(ID) in the packet matches one of the items in the head's hash table, the cluster head considers the transmitter to be one of its members and forwards the packet to the destination. In particular, once a member whose hash(ID) occupies too high a proportion of resources is detected, it is deemed a malicious node and is expelled from the current cluster to avoid congestion. Communication among members: Because a cluster member does not know the other members' IDs, a packet it receives must be plaintext, so it can learn the content of the packet but not its source. Thus the members' privacy is protected. Maintenance scheme With all of the mechanisms described above, we now present the cluster maintenance scheme. In Algorithm 5, the Discovery and Elimination Scheme runs whenever data are transmitted during maintenance, in order to protect cluster communication. After the broadcast packets are received, the capability metric is recalculated to identify which node should become the cluster head so as to maintain cluster stability. To keep the load balanced, Algorithm 5 also introduces Merging and Splitting in lines 8 to 15, based on the determination of node density and cluster size. Simulation and discussion In this section, we first carry out an extensive simulation study on the MATLAB platform to evaluate the performance of the proposed scheme in a crossroad scenario. The LOSC scheme in [20] and a variant of the Lowest-ID algorithm, the MOBIC clustering algorithm in [10], are also implemented for comparison with our scheme. The second subsection analyzes the security of EnLOSC. Numerical evaluation The simulation scenario is a two-lane crossroad as shown in Fig. 1. The communication between two vehicles follows the free-space path loss: \(\text {FSPL}=\left (\frac {4\pi df}{c}\right)^{2}\), where f is the signal frequency, c is the speed of light in a vacuum, and d is the distance between the transmitter and the receiver. Without loss of generality, the transmitting and receiving antenna gains are assumed to be 1, and the communication radius is 100 m. 
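As an illustration of this link model (our sketch; the 5.9 GHz carrier frequency and the transmit power are assumed example values, not parameters stated in the paper), the received power and the in-range test can be computed as follows.

```python
import math

C = 299792458.0    # speed of light in a vacuum (m/s)

def fspl(d, f):
    """Free-space path loss (as a linear factor) at distance d and frequency f."""
    return (4 * math.pi * d * f / C) ** 2

def received_power(p_tx, d, f=5.9e9):
    """Received power with unit antenna gains, as assumed in the simulation."""
    return p_tx / fspl(d, f)

# A packet is decodable only if the received power is above the threshold P_r; choosing the
# threshold as the power received at 100 m reproduces the stated communication radius.
p_tx = 0.1                                      # transmit power in watts (example value)
p_r_threshold = received_power(p_tx, 100.0)
print(received_power(p_tx, 80.0) >= p_r_threshold)    # True: within the 100 m radius
print(received_power(p_tx, 120.0) >= p_r_threshold)   # False: out of range
```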
In addition, the number of vehicles N_n is varied from 50 to 200, and the vehicle speed is selected randomly between 30 and 50 km/h. In Fig. 4, we illustrate the effect of the Cluster Merging and Splitting Mechanism on cluster size control. The maximum number of cluster members in EnLOSC is set to 30 in our simulation. As depicted, the average cluster size under the LOSC and MOBIC schemes grows significantly as the number of vehicles in the network increases, while under the EnLOSC algorithm it changes little with the number of vehicles. Compared with the LOSC and MOBIC schemes, the figure shows that the merging and splitting mechanism in EnLOSC controls the cluster size better in both high- and low-density scenarios. In other words, the overhead on cluster heads caused by high node density and the waste of resources caused by low node density can both be significantly reduced when using EnLOSC. Fig. 4 Analysis of cluster size Figure 5 shows cluster stability, represented by the average number of cluster head changes per second. Intuitively, the cluster-based network is not stable if this average value is large, because of the frequent head handoffs. As depicted in Fig. 5, the number of head changes in the LOSC scheme is lower than that of MOBIC for all numbers of vehicles, which means the stability of the clusters formed by our previous algorithm in [20] is better than that of the clusters formed by MOBIC. The number of head changes of the proposed EnLOSC is slightly higher than that of LOSC; this slight increase is the cost of keeping every cluster load-balanced through the merging and splitting mechanisms. Besides, it can also be inferred that this average value in MOBIC is sensitive to large numbers of vehicles. In other words, compared with MOBIC, both the LOSC and EnLOSC schemes are more suitable for the crossroad scenario in VANETs. Fig. 5 Analysis of cluster stability To illustrate the communication overhead, we examine the average number of undecided nodes, calculated over the periodic broadcasting interval T_c. During each T_c, every node in a cluster broadcasts either a CHA or a CMA packet. In contrast to MOBIC, the results in Fig. 6 show that the previous work in [20] and our enhanced scheme achieve great improvements in reducing the average number of undecided nodes thanks to the proposed CHEAM. Moreover, because of the scattered cluster coverage in VANETs and the restricted communication radius, there may exist some undecided nodes between two clusters. The smaller clustering structure resulting from the cluster splitting scheme in EnLOSC narrows those gaps, which reduces the probability that such nodes do not belong to any cluster. Accordingly, the congestion and overhead caused by the undecided nodes can be cut down significantly at the crossroad when using either LOSC or EnLOSC. Fig. 6 Analysis of undecided nodes Security analysis According to the rule of DES, every node in a cluster that wants to transmit its own data must add a hash(ID) to the plaintext. Because the cluster head holds the hash table containing all the members' hash(ID) values, it knows the source of each packet. In this subsection, we investigate the security of our proposed DES from the following aspects. First, any external attackers who want to paralyze the cluster would have to follow this protocol. 
Since the attackers' IDs are not stored in the cluster head's hash table, if the attackers broadcast messages, they cannot pass the cluster's verification and their packets will be dropped. Meanwhile, the cluster head can also refuse to forward a member's data if this member does not follow the protocol; for example, if the member does not provide its hash(ID), the cluster head can decline the request. Secondly, when a cluster member constantly sends packets to the head, the network resource is occupied exclusively by this member, and the cluster head, as a relay, cannot transmit other members' data. To overcome this problem, the head can set different resource-utilization thresholds for each member and then calculate each member's share of the resource utilization based on the number of hash(ID) occurrences. If a member's hash(ID) frequency exceeds the threshold, it is regarded as a malicious node and is denied relay service. Thirdly, because the broadcast packets carry hash(ID), which hides the real ID of each member, the DES also provides simple anonymous communication: other members cannot know which member broadcast those packets. What is more, even though we introduce the DES, the computational cost of the proposed clustering maintenance scheme does not increase significantly, because of the low-complexity hash algorithm. Cluster head selection plays an important role in clustering algorithms. Various metrics have been proposed to describe a vehicle's capability of functioning as a cluster head in VANETs. In this section, we briefly review the work related to clustering methods that are based on these different metrics. In [16], the metric is defined by considering the traffic flow on the lane. A vehicle on the lane with the most traffic flow is selected as the cluster head. The clustering algorithm proposed in [17] defines the metric as a function of the path loss. A vehicle with a smaller path loss from other vehicles has a higher metric value. It is concluded in [14] that the performance of cluster-based communication can be further improved by exploiting geographic information for cluster head selection. Based on this conclusion, [13] and [21] combine the geographic information with the traffic information and the task information to define their metrics. To further extend reliable clustering methods to highway scenarios in VANETs, Ibrahim exploited dense traffic to design the CASCADE scheme in [22] in order to enable both safety (collision warning) and information (congestion notification) applications. The aforementioned clustering schemes usually cause frequent re-affiliation and cluster head changes since they do not consider the effects of the fast movement of vehicles in VANETs. To solve these problems, mobility-based clustering algorithms have been put forward. In [15], Song et al. use the moving direction of vehicles together with the location information to design a clustering algorithm. Only the vehicles moving in the same direction can form clusters, and the cluster head is selected according to the location information. In [11], Basu et al. design a mobility metric by measuring the fluctuation of a vehicle's received power during successive transmissions. A vehicle with a smaller fluctuation is considered a vehicle with a smaller relative speed with respect to others, and it is more likely to be selected as a cluster head.
The performance of this scheme degrades significantly when the vehicle's speed varies sharply and frequently, because the vehicle's acceleration is not considered in the mobility metric. Basu et al. [10] solve this problem by designing a mobility metric consisting of both the relative velocity and the relative acceleration to represent a vehicle's ability to be the cluster head. Besides, a metric called the Aggregate Local Mobility (ALM) measure is used to design a criterion that triggers a cluster re-organization strategy with a contention-based scheme in [17]. However, that metric is only defined by the relative mobility calculated from the current and previous distances between a node and its neighbor. To the best of our knowledge, the existing schemes referred to above consider either highway scenarios or straight-lane scenarios. None of them considers the complicated and challenging crossroad scenarios, where large numbers of vehicles can become isolated and, thus, considerable communication overhead and network congestion can be generated by these existing schemes. In this paper, we tackle the challenge of designing a low-overhead and stable clustering scheme for crossroads in VANETs. Based on our previous studies in [20], we present an Enhanced LOSC that focuses not only on stability and network overhead but also on load balancing and security in the crossroad scenario. A new capability metric M, which is related to the relative velocity and the power loss, is introduced to describe a node's capability of being a cluster head and is exploited in the maintenance algorithm for cluster head election. Meanwhile, in order to maintain load balancing of the head in a clustering-based network, we also use additional metrics expressed by node density and cluster size to adjust the number of nodes in a cluster. Furthermore, the proposed security method, DES, can protect the cluster from attackers and malicious nodes and also provides simple anonymous communication to preserve nodes' privacy. Compared with the existing MOBIC clustering algorithm and the previous LOSC scheme, the simulation results show that there are fewer isolated nodes in VANETs when using the EnLOSC scheme, which ensures more stable and better load-balanced clusters. For future research, we will consider how to design a secure strategy for more complex VANETs to deal with the wiretapping problem caused by eavesdroppers. JB Kenney, Dedicated short-range communications (DSRC) standards in the United States. Proc. IEEE. 99(7), 1162–1182 (2011). G Xin, H Yan, C Zhipeng, O Tomoaki, Intersection-based forwarding protocol for vehicular ad hoc networks. Telecommun. Syst, 1–10 (2015). doi:10.1007/s11235-015-9983-y. Z Rawashdeh, S Mahmud, A novel algorithm to form stable clusters in vehicular ad hoc networks on highways. EURASIP J. Wirel. Commun. Netw. 2012(1), 15 (2012). W Fan, Y Shi, S Chen, L Zou, in IET International Conference on Communication Technology and Application. A mobility metrics based dynamic clustering algorithm for VANETs, (2011), pp. 752–756, doi:10.1049/cp.2011.0769. S Vodopivec, J Bester, A Kos, in Telecommunications and Signal Processing (TSP) 2012 35th International Conference On. A survey on clustering algorithms for vehicular ad-hoc networks, (2012), pp. 52–56, doi:10.1109/TSP.2012.6256251. L Guo, C Ai, X Wang, Z Cai, Y Li, in Performance Computing and Communications Conference (IPCCC), 2009 IEEE 28th International.
Real time clustering of sensory data in wireless sensor networks, (2009), pp. 33–40, doi:10.1109/PCCC.2009.5403841. C Shea, B Hassanabadi, S Valaee, Mobility-based clustering in VANETs using affinity propagation (IEEE, 2009). W Chen, S Cai, Ad hoc peer-to-peer network architecture for vehicle safety communications. IEEE Commun. Mag. 43(4), 100–107 (2005). doi:10.1109/MCOM.2005.1421912. R Ramanathan, M Steenstrup, Hierarchically-organized, multihop mobile wireless networks for quality-of-service support. Mob. Netw. Appl. 3(1), 101–119 (1998). doi:10.1023/A:1019148009641. P Basu, N Khan, TDC Little, in International Conference on Distributed Computing Systems Workshop. A mobility based metric for clustering in mobile ad hoc networks, (2001), pp. 413–418, doi:10.1109/CDCS.2001.918738. Y Gunter, B Wiegel, HP Grossmann, in IEEE Intelligent Transportation Systems Conference. Cluster-based medium access scheme for VANETs, (2007), pp. 343–348, doi:10.1109/ITSC.2007.4357651. M Sood, S Kanwar, in Communication and Information Technology Applications (CSCITA). Clustering in MANET and VANET: a survey.in International Conference on Circuits, Systems, (2014), pp. 375–380, doi:10.1109/CSCITA.2014.6839290. T Song, W Xia, T Song, L Shen, in IEEE International Conference on Communication Technology (ICCT). A cluster-based directional routing protocol in VANET, (2010), pp. 1172–1175, doi:10.1109/ICCT.2010.5689132. Y Harikrishnan, J He, in International Conference on Computing Networking and Communications (ICNC). Clustering algorithm based on minimal path loss ratio for vehicular communication, (2013), pp. 745–749, doi:10.1109/ICCNC.2013.6504181. RA Santos, RM Edwards, NL Seed, in International Workshop on Mobile and Wireless Communications Network. Using the cluster-based location routing (CBLR) algorithm for exchanging information on a motorway, (2002), pp. 212–216, doi:10.1109/MWCN.2002.1045724. Z Wang, L Liu, M Zhou, N Ansari, A position-based clustering technique for ad hoc intervehicle communication. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 38(2), 201–208 (2008). doi:10.1109/TSMCC.2007.913917. E Souza, I Nikolaidis, P Gburzynski, in IEEE International Conference on Communications (ICC). A new aggregate local mobility (alm) clustering algorithm for VANETs, (2010), pp. 1–5, doi:10.1109/ICC.2010.5501789. B Zhou, Z Cao, M Gerla, in International Conference on Wireless On-Demand Network Systems and Services, 2009, 2009. Cluster-based inter-domain routing (CIDR) protocol for MANETs (WONS, 2009), pp. 19–26, doi:10.1109/WONS.2009.4801843. Y Huang, M Chen, Z Cai, X Guan, T Ohtsuki, Y Zhang, in Global Communications Conference (GLOBECOM), 2015. Graph theory based capacity analysis for vehicular ad hoc networks (IEEE, 2015). Y Huo, Y Liu, X Xing, X Cheng, L Ma, T Jing, in Wireless Algorithms Systems, and Applications (WASA), 2015. A low overhead and stable clustering scheme for crossroads in VANETs, (2015), pp. 232–242. Y Luo, W Zhang, Y Hu, in International Conference on Networks Security Wireless Communications and Trusted Computing (NSWCTC), 1. A new cluster based routing protocol for VANET, (2010), pp. 176–180, doi:10.1109/NSWCTC.2010.48. K Ibrahim, MC Weigle, in GLOBECOM Workshops, 2008. Cascade: cluster-based accurate syntactic compression of aggregated data in VANETs (IEEE, 2008), pp. 1–10, doi:10.1109/GLOCOMW.2008.ECP.59. We are very grateful to all reviewers who have helped improve the quality of this paper. Thanks also to Beijing Review copyeditor Yu Nan for improving the language. 
This work was supported by the National Natural Science Foundation of China (Grant No. 61172074 and 61471028), the Fundamental Research Funds for the Central Universities (2015JBM016), the Specialized Research Fund for the Doctoral Program of Higher Education (Grant No. 20130009110015), the NSF award CNS-1352726, and the financial support from the China Scholarship Council. School of Electronics and Information Engineering, Beijing Jiaotong University, Beijing, 100044, China: Yan Huo, Yuejia Liu & Tao Jing. Department of Computer Science, The George Washington University, Washington, 20052, DC, USA: Xiuzhen Cheng. Department of Computer Science, Texas Christian University, Fort Worth, 76129, TX, USA: Liran Ma. Correspondence to Yan Huo. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Huo, Y., Liu, Y., Ma, L. et al. An enhanced low overhead and stable clustering scheme for crossroads in VANETs. J Wireless Com Network 2016, 74 (2016). doi:10.1186/s13638-016-0573-9. Keywords: VANETs, Cluster formation and maintenance, Cluster stability.
Lorentz force in superposition of two magnetic fields When an electron with charge $q$ travels with velocity $v$ perpendicular to a magnetic field generated between two permanent magnets with field strength $B$ and no electric field, it experiences a Lorentz force equal to $$F = qv \times B$$ The resulting change in momentum for the electron will be transferred through the magnetic field to the magnets. For example in a setup like this: the electron would experience a change in momentum upwards and the magnets would experience an equal and opposite change in momentum downwards, due to conservation of momentum. My question is, does this "reaction force" on the magnets also apply when you have a magnetic field inside a magnetic field, such that the superposition of the two fields results in no magnetic field at the electron's position? For example, let's say you have the following thought experiment with 4 magnets and a moving electron: Where the red oval represents zero (or essentially zero) magnetic field. The magnetic field lines pointing right from the small magnets exactly cancel out with the magnetic field lines pointing left from the big magnets. If you just had the big magnets the electron would experience a force down (into the screen), and if you just had the small magnets the electron would experience a force up (out of the screen), but these cancel out, so there is no net force on the electron. If you think there must still be a field between the two small magnets, either mentally increase the strength of the two larger magnets or mentally move the smaller magnets further apart. Here is a zoomed-in visualization of the magnetic field lines to help visualize this. Sorry for the image quality: This might seem counterintuitive, but it is possible because the strength of a magnetic field is proportional to the inverse cube of the distance to the magnet. Imagine how a compass still points to earth's magnetic north, even if it is between 2 magnets spaced 100 meters apart. So which of the following happens? 1) The big magnets experience a change in momentum up and the small magnets experience a change in momentum down. 2) None of the magnets experience a change in momentum. This would be the case if the experiment results in the magnets not moving. There is no violation of conservation of momentum here; in both 1 and 2, the net momentum change is 0. I expect there is a knowable concrete answer (1 or 2) because this could be tested in the real world with a relatively uncomplicated experiment. I am more interested in a concrete 1 or 2 and less interested in a why, but a general reasoning of why would be nice. I won't be able to follow a math explanation if it uses more than simple derivatives or integrals. I tried to look for duplicate questions that might already answer this, and there are a lot of related questions, but I couldn't conclude a concrete answer to this question from them. The closest I found was this question, which might hold the math to get the answer, but unfortunately I couldn't follow all the details. electromagnetism forces magnetic-fields conservation-laws charge Andrew "The resulting change in momentum for the electron will be transferred through the magnetic field to the magnets." What you describe has never been observed. Magnets, being involved in the phenomenon of the Lorentz force, don't experience a momentum change, nor does their field strength weaken. The influence of the magnet is comparable to that of a catalyst in chemistry; it is not consumed.
So we need a different explanation of how the Lorentz force works in detail. Perhaps you know that the deflection of the moving electron in the magnetic field is accompanied by the emission of electromagnetic radiation and the loss of kinetic energy of the electron. A photon has a momentum, and that is the reason why the moving electron gets deflected and moves in a spiral path until its kinetic energy is exhausted. "you have a magnetic field inside a magnetic field, such that the superposition of the two fields results in no magnetic field at the electron's position." The magnetic field between the inner magnets still exists, even with the stronger magnets outside. Magnetic fields, imagined as field lines, always form closed loops (even passing through the source), and oppositely directed magnetic fields displace each other. For permanent magnets it is clear that the source of the field is the aligned (and "frozen") magnetic dipoles of the involved subatomic particles. With a very strong magnetic field you are able to destroy the alignment of the smaller magnet, but this leads again to a resulting magnetic field at the position of your electron. edited Mar 9 at 8:19 HolgerFiedler I am unable to see how the first two sentences of your answer are to be understood. First, momentum conservation is fundamental. Second, fields mediate interactions between matter. One electron will always feel the repulsive force exerted by another electron and vice versa. In such an interaction, momentum is conserved. I see the Lorentz force as the result of the relativistic transformation of the electric field seen in the electron's rest frame into the lab frame in which the magnets are at rest. I do not think that there is doubt that momentum is conserved in EM interactions. – flaudemus Mar 8 at 8:13 Concerning the observation: perhaps the effect was not observed in this particular setting, because it is very small. But we know, for example, that two parallel current-carrying wires (which are otherwise charge neutral) do interact and there is a mutual force due to the magnetic field between them. – flaudemus Mar 8 at 8:18 "What you describe never was observed" I believe that is incorrect. Have you ever felt the kickback of a powerful electric drill when you turn it on in the air? This is because of conservation of angular momentum. The drill's rotational force is entirely generated by electrons travelling (through a wire) in a magnetic field and can be explained with only the Lorentz force. – Andrew Mar 8 at 10:07 "The magnetic field between the inner magnets still exist" When I say "no magnetic field" I elaborate later saying "the red oval represents zero (or essentially zero) magnetic field". This means there is a magnetic field, but the value of the field strength is zero (or essentially zero). I didn't draw the field lines because they would have cluttered an already busy diagram, but see here ece.neu.edu/fac-ece/nian/mom/img/How%20Magnets%20Work/… for an example of no magnetic field at a point between two magnets. – Andrew Mar 8 at 10:12 @Andrew Please compare your sketch with N and S against each other and the sketch from your link. – HolgerFiedler Mar 8 at 19:57
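To make the role of field superposition in the question concrete, here is a small illustrative Python sketch (not from the original thread; all numerical values are arbitrary assumptions). It only shows that the Lorentz force depends on the net field at the electron's position, so two contributions that cancel there give zero force; it does not settle the question of whether the magnets themselves recoil.

```python
import numpy as np

q = -1.602e-19                    # electron charge (C)
v = np.array([1.0e6, 0.0, 0.0])   # electron velocity along x (m/s)

# Assumed field contributions at the electron's position (T):
B_big_magnets   = np.array([0.0, -0.02, 0.0])   # contribution of the big magnets (assumed)
B_small_magnets = np.array([0.0,  0.02, 0.0])   # opposing contribution of the small magnets (assumed)

for B in (B_big_magnets, B_small_magnets, B_big_magnets + B_small_magnets):
    F = q * np.cross(v, B)        # Lorentz force F = q v x B
    print(B, "->", F)             # the superposed (last) case gives zero net field and zero force
```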
Shell resource partitioning as a mechanism of coexistence in two co-occurring terrestrial hermit crab species Sebastian Steibl & Christian Laforsch BMC Ecology volume 20, Article number: 1 (2020) Coexistence is enabled by ecological differentiation of the co-occurring species. One such mechanism is resource partitioning, where each species utilizes a distinct subset of the most limited resource. This resource partitioning is difficult to investigate using empirical research in nature, as only a few species are primarily limited by solely one resource, rather than a combination of multiple factors. One exception is the shell-dwelling hermit crabs, which are known to be limited under natural conditions and in suitable habitats primarily by the availability of gastropod shells. In the present study, we used two co-occurring terrestrial hermit crab species, Coenobita rugosus and C. perlatus, to investigate how resource partitioning is realized in nature and whether it could be a driver of coexistence. Field sampling of eleven separated hermit crab populations showed that the two co-occurring hermit crab species inhabit the same beach habitat but utilize a distinct subset of the shell resource. Preference experiments and principal component analysis of the shell morphometric data thereby revealed that the observed utilization patterns arise out of different intrinsic preferences towards two distinct shell shapes. While C. rugosus displayed a preference towards a short and globose shell morphology, C. perlatus showed preferences towards an elongated shell morphology with narrow aperture. The two terrestrial hermit crab species occur in the same habitat but have evolved different preferences towards distinct subsets of the limiting shell resource. Resource partitioning might therefore be the main driver of their ecological differentiation, which ultimately allowed these co-occurring species to coexist in their environment. As the preferred shell morphology of C. rugosus maximizes reproductive output at the expense of protection, while the preferred shell morphology of C. perlatus maximizes protection against predation at the expense of reproductive output, shell resource partitioning might reflect different strategies to respond to the same set of selective pressures occurring in beach habitats. This work offers empirical support for the competitive exclusion principle and demonstrates that hermit crabs are an ideal model organism to investigate resource partitioning in natural populations. Throughout all ecosystems, species can be found that are closely related to each other, occupy the same trophic level within the food web and share the same habitat, thus fulfilling similar ecological roles for the ecosystem [1]. When two or more species overlap to a certain degree in their biology and share a common and essential resource that is limited in supply, these species experience competition [2, 3]. This interspecific competition can occur in two forms, either via direct interference competition (i.e. fighting over resources) or via indirect exploitative competition (i.e. consumption of resources by one species makes them unavailable for the second species). In ecological research, evidence for competition between two species can be provided by comparing which resources are used and which are intrinsically preferred [4].
When investigating resource utilization between co-occurring species, studies have shown that some animals that presumably compete over the same resource, actually partition the resource [5, 6]. According to the competitive exclusion principle, this resource partitioning, as a form of ecological differentiation between species, can thereby be the mechanism that allows co-occurring species to coexist in the same environment [7]. This coexistence can only be realized when each species uses a discrete subset of the limiting resource, which differs qualitatively from those of the co-occurring species [8, 9]. This premise for resource partitioning is described in the concept of limiting similarity, which states that there needs to be a limit to how similar two species can be to each other in order to stably coexist, rather than compete [5]. Such theoretical hypotheses are difficult to test using empirical research, as most animals in nature are not limited by only a single resource, but rather by a multitude of abiotic and biotic factors [10]. There exist, however, some co-occurring species, where enough evidence has been collected to suggest that they are indeed primarily limited by only one resource. Shell-dwelling hermit crabs are limited under natural conditions and in suitable habitats only by the availability of the shell resource, while food and habitat are not considered as a limiting factor [10,11,12,13]. Therefore, they appear to be suitable model organisms to investigate competition theory in empirical research. Hermit crabs (Superfamily: Paguroidea) are characterized by an uncalcified and reduced abdomen, which they protect by utilizing mainly gastropod shells [14, 15]. As a well-fitting shell optimizes growth and maximizes clutch size [16], offers protection against predators and mechanical disruption [17, 18], and decreases the risk of desiccation in the intertidal and terrestrial species [19], hermit crabs are under constant pressure to find a well-fitting shell. The availability of empty and well-fitting shells thereby depends on the gastropod population and their mortality and hence is the limiting resource of hermit crab populations [10, 14, 20]. Co-occurring species of hermit crabs experience direct interference competition by fighting over shells in a highly ritualized behaviour and indirect exploitative competition, as the utilization of an empty shell makes it unavailable for other individuals [11, 13, 14, 21,22,23]. This competition can force hermit crabs to utilize shells outside their optimal fit range, resulting in a reduced fitness [10, 20, 24]. A number of studies, however, were able to demonstrate, that, contrary to the proposed shell competition, at least some co-occurring hermit crab species partition the shell resource [10, 25,26,27]. In these studies, the utilized gastropod shells and their morphometric parameters (e.g. size, weight) of co-occurring hermit crab species in the field were investigated and compared. It was thereby shown that co-occurring hermit crabs utilize indeed shells of different gastropod species or with different shell parameters [8, 25], although other studies suggested that the observed differences in shell utilization arise not out of different preferences [11, 21]. Therefore, it is discussed whether shell resource partitioning is indeed the mechanism of coexistence in co-occurring hermit crab species [10, 23]. 
One major limitation of many research approaches that investigate shell resource partitioning in hermit crabs is that the proposed preferences are based on the species identities of the gastropod shells [e.g. 20, 26]. The utilization of different shell species depends on the gastropod communities in the particular habitat and gastropod species vary between different regions [19, 24, 28, 29]. Proposing that co-occurring hermit crab species partition the shell resource by preferring different shell species is an uninformative and not universally applicable approach, because the available set of utilizable gastropod species varies between regions and does not reflect the actual preference of a hermit crab species, i.e. the same hermit crab species can prefer two completely different shell species in two different populations but in both cases select for the same morphological shell parameters. A better approach is the comparison of preferences for different shell parameters. Determining the shell partitioning mechanism based on single shell parameters, however, is restricted, as the various shell variables are all highly intercorrelated, making it impossible to characterize a single parameter on which preferences could be based upon [30]. Using morphometric data, it was demonstrated that co-occurring hermit crab species have distinct preferences towards e.g. large shells or narrow apertures [25]. To deepen our understanding of resource partitioning as a possible driver of coexistence using empirical research on hermit crabs, it would be essential to incorporate (I) a large-scale sampling effort to pool data of multiple distinct hermit crab and gastropod populations, (II) a comparison between shell utilization patterns in the natural habitat and the intrinsic preferences towards distinct subsets of the resource and (III) a statistical analysis of the overall morphology of the different subsets of the resources, rather than a single parameter-approach. The present study complies with the three abovementioned criteria by conducting an atoll-wide sampling that covered eleven distinct hermit crab and gastropod populations and by comparing the field data with laboratory shell preference experiments. A principal component analysis (PCA) of the shell morphometrics was then applied to compare the decisive criteria of the shell morphology between the co-occurring species. As research organisms to test competition theory, the only terrestrial hermit crab genus, Coenobita, was chosen, because it has already been established that the two co-occurring hermit crab species in the investigated system, C. rugosus and C. perlatus, are both primarily beach associated and unspecialized detritus feeders with no clear food preferences [31,32,33]. They are therefore an ideal system to test for the effect of the shell resource on coexistence, because other potentially limiting factors can be excluded upfront. The overall shell utilization in land hermit crabs has received only limited research focus in comparison to their well-studied marine counterparts [34, 35]. As terrestrial hermit crabs are restricted to one island, they inhabit and obtain the shell resource only from the surrounding coastal water [19]. Therefore, sampling multiple islands covers distinct hermit crab and gastropod populations and decreases the effect of predominant species in one island ecosystem. Of the 876 collected hermit crabs, 700 were identified as C. rugosus and 176 as C. perlatus. The proportion of C. rugosus and C. 
perlatus varied significantly between the eleven investigated islands (F = 6.2536, df = 10, p < 0.001). On nine out of the eleven investigated islands within the Atoll, the mean proportion of C. rugosus was 86.47 ± 11.64%. On one island however, only 37.05% of the collected crabs were identified as C. rugosus, while 62.95% were C. perlatus. On another island, C. perlatus was completely absent from the investigated plots. The proportion of C. rugosus (80.28 ± 7.10%) and C. perlatus (19.72 ± 7.10%) was not significantly different between the four investigated beach habitat types (F = 1.9196, df = 3, p = 0.147). The collected C. rugosus and C. perlatus had a carapace length of 6.50 ± 2.23 mm and 6.46 ± 2.71 mm, respectively. The mean carapace length of the two species did not differ statistically (Wilcoxon W = 56,344, p = 0.291). The collected C. rugosus inhabited gastropod shells of 90 different species (in 21 different families), while the collected C. perlatus inhabited gastropod shells of 41 different species (in 14 different families; see Additional file 1: Table S1). The shell species diversity index, i.e. the diversity of shell species inhabited by the two investigated hermit crab species, of C. rugosus was H = 3.644 and of C. perlatus H = 3.039. The niche width in respect to utilizable shell species was therefore B = 23.870 for C. rugosus and B = 12.869 for C. perlatus (Table 1). Table 1 Comparison of the shell utilization and preferences of the two co-occurring hermit crab species The proportional utilization of the investigated shell types differed significantly between C. rugosus and C. perlatus (Table 1). Proportionally more C. rugosus inhabited naticid shells than C. perlatus (p = 0.003), while proportionally more C. perlatus inhabited cerithiid (p < 0.001) and strombid shells (p < 0.001). No differences were found in the number of inhabited nassariid shells between C. rugosus and C. perlatus (p = 0.237; Table 1). Shell preference experiments The mean carapace length of the 150 tested C. rugosus was 6.25 ± 1.43 mm and of the 150 tested C. perlatus 6.42 ± 1.42 mm (mean ± standard deviation). The size of the tested hermit crab in the laboratory experiment did not differ statistically between the two species (Wilcoxon W = 12,207, p = 0.199). The two terrestrial hermit crabs C. rugosus and C. perlatus had significantly different shell preferences for the tested gastropod shells (Table 1, Additional file 2: Table S2). C. perlatus selected strombid shells significantly more often than C. rugosus (p < 0.001) and C. rugosus selected naticid significantly more often than C. perlatus (p < 0.001). No differences existed for the number of selected cerithiid (p = 1.000) and nassariid shells (p = 1.000) between the two hermit crab species. Morphometric analysis of gastropod shells The five investigated morphometric parameters (shell length, shell width, aperture length, aperture width, shell weight) of the utilized gastropod shells differed significantly between the four investigated gastropod shell types (F = 71.505, df = 3, p < 0.001) and between the two hermit crab species (F = 16.080, df = 1, p < 0.001). The first three principal components of the PCA, comparing the morphometric parameters, explained 96.47% of the total variance and were therefore used for further analysis (Fig. 1). Principal component 1 (PC1) correlates with all five morphometric parameters, suggesting that all five parameters vary together. 
PC2 is primarily a measure of shell length (correlation 0.784) and aperture width (correlation − 0.526) and can be viewed as an overall descriptor of the shell shape, with high values of PC2 indicating an elongated and narrow shell shape, while low values of PC2 indicate a short and bulbous shell shape. PC3 negatively correlates with aperture length (correlation − 0.851) and can be viewed as a measure of how elongated the shell aperture is (Table 2). Fig. 1 The shell morphology of the four most utilized gastropod shell types. The principal component analysis is based on the five log-transformed morphometric parameters (AL aperture length, AW aperture width, L length, W width, WT weight). Each data point represents a single shell; colours resemble the different shell types. Table 2 Comparison of the shell morphology of the four most utilized gastropod shell types and the two hermit crab species. The four gastropod shell types differed significantly in PC1 (F = 60.96, df = 3, p < 0.001), PC2 (F = 548.1, df = 3, p < 0.001) and PC3 (F = 307.8, df = 3, p < 0.001). Tukey HSD post hoc tests indicated significant differences in PC1 between all pairwise comparisons (p < 0.001), apart from nassariid-cerithiid (p = 0.997) and strombid-naticid shells (p = 0.999). PC2 was significantly different in all pairwise comparisons (p < 0.001 in all comparisons). PC3 was significantly different in all comparisons (p < 0.001), apart from one non-significant difference in the pairwise comparison of nassariid and cerithiid shells (p = 0.051; Table 2). All three principal components of the shell parameters differed significantly between the two hermit crab species (PC1: F = 9.819.3, df = 1, p = 0.001; PC2: F = 57.01, df = 1, p < 0.001; PC3: F = 92.14, df = 1, p < 0.001; Additional file 3: Fig. S1).
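As an illustration of the kind of analysis described above, the following Python sketch (with made-up example measurements, not the authors' data or their original R workflow) runs a principal component analysis on log-transformed shell morphometrics (length, width, aperture length, aperture width, weight).

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical shell measurements: one row per shell, columns =
# length, width, aperture length, aperture width (mm) and weight (g).
shells = np.array([
    [42.1, 18.3, 25.4,  8.2, 9.1],   # elongated, narrow-aperture shell
    [15.2, 13.8, 11.0,  7.9, 2.3],   # short, globose shell
    [31.5, 10.2, 14.1,  4.8, 3.7],
    [18.9, 15.1, 12.6,  8.5, 2.9],
    [39.8, 16.9, 23.8,  7.6, 8.4],
])

X = np.log(shells)                    # log-transform, as in the study
pca = PCA(n_components=3)
scores = pca.fit_transform(X)         # principal component scores per shell
print(pca.explained_variance_ratio_)  # share of variance explained per component
print(scores)
```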
As both species are known to be primarily beach-associated and rarely occurring in the densely vegetated inland [40,41,42,43,44], the high overlap of both species in the beach habitats suggests that habitat partitioning is not a driver of coexistence in these two species. Partitioning of or competition over the food resource can also be excluded as a driver for coexistence, as previous studies demonstrated that C. rugosus and C. perlatus are both unspecific detritus feeders with no clear food preference [32, 43] and not limited by food availability [10, 14, 22]. As habitat and food resource partitioning appears to play a minor role for C. rugosus and C. perlatus, the possible mechanism for coexistence might arise out of shell resource partitioning. The morphometric analysis of the utilized shells in the field suggests that C. rugosus utilizes shells with a small and globose morphology, while C. perlatus utilizes shells with a large, elongated and narrow morphology. These utilization patterns arise indeed out of different intrinsic preferences towards the respective shell morphology, as C. rugosus selected for the short and globose naticid shells, while C. perlatus selected for the large and elongated strombid shells in the laboratory experiments. The determined preferences towards a certain shell morphology lay in concordance with previous studies, which reported C. rugosus to utilize mainly Muricidae, Neritidae or Turbinidae shells, which also have a globose morphology, and C. perlatus to utilize mainly the elongated cerithiid shells [35, 40, 43,44,45]. This overall similarity further underlines that not the shell species itself is the decisive criteria in the shell selection process, but rather the overall morphology of the present shell, described by the principal components of the morphometric data. The utilized shells found in the natural populations were overall fairly eroded and showed no striking variations in colour or ornamentation but appeared rather uniform pale and smooth, independent of the gastropod species. Therefore, preferences towards certain shell colours or ornamental features like spines can be excluded as further decisive factors in shell selection of the investigated hermit crab species. As gastropod communities vary between different regions, the adaptive mechanism in shell selection behaviour is therefore not the evolution of preferences towards species (although at least one hermit crab species is known utilizing only one shell species, Calcinus seurati [14, 20]), but rather of preferences towards certain shell morphologies [46]. The two investigated hermit crab species apparently have evolved different shell preferences towards distinct subsets of the shell resource. These intrinsic preferences could hint towards differing strategies of the two hermit crab species to respond to the same overall selective pressures [47, 48]. Heavy and elongated shells with a narrow aperture, like the strombid shells, offer optimal protection against desiccation and predation, but limit clutch size and increase energy expenditure during locomotion due to a reduced internal volume and increased weight [8, 16, 20, 25]. Light-weight and voluminous shells, like the naticid shells, allow a greater dispersal and are advantageous for burrowing, but cannot retain water efficiently and offer less protection against predation [27, 40, 49]. As different shell preferences might represent different strategies to respond to selective pressures from the same environment, C. 
perlatus might have evolved a strategy to reduce desiccation- and predation-related mortality at the expense of an increased energy expenditure and limited clutch size [48]. C. rugosus has evolved a strategy to maximize reproductive output at the expense of an increased susceptibility to desiccation and predation. Further research is needed to test whether the observed shell resource partitioning in the two co-occurring hermit crab species is the cause or the effect of the proposed ecological differentiation with respect to their life-history strategy, and whether the utilization of different subsets of the shell resource can even be a driver of speciation in hermit crabs. Either way, it is shown that the utilization of distinct subsets of the limiting resource can drive ecological differentiation, which then ultimately enables two species to coexist [7, 9]. It is thereby demonstrated that co-occurring hermit crabs are indeed suitable model organisms to empirically investigate competition and coexistence theory, as their limitation by primarily one resource offers controllable and empirically testable conditions for investigating natural and intrinsic behaviour of resource partitioning. Overall, our research investigated the mechanism of resource partitioning as a driver of coexistence and demonstrated that two co-occurring species of terrestrial hermit crabs have evolved intrinsic preferences towards distinct subsets of the shell resource, which attenuates interspecific competition over the limiting resource in natural populations. As the preferred shell morphologies of the two hermit crab species either maximize reproductive output or minimize predation risk, the two hermit crab species might have evolved different strategies to respond to the overall selective pressures in their natural habitat. These findings offer empirical support for theoretical hypotheses on competition theory and mechanisms of coexistence in ecology. By discussing different life-history strategies associated with the observed resource partitioning, the presented model system using hermit crabs can form the basis for future research on mechanisms of coexistence and speciation. Hermit crabs were collected on the beaches of eleven coral islands, distributed over the Lhaviyani (Faadhippolhu) Atoll, Republic of Maldives. Sampling was carried out between 03/02/2017 and 10/03/2017, always in the time from 2 h before low tide until absolute low tide. On each island, hermit crabs were collected in six plots with 10 m length (measured along the current drift line) and 2 m width (measured perpendicular to the current drift line). The habitat structure of each plot was assigned to one of four different beach habitat types: (1) fine sand beach, (2) fine sand beach interspersed with small coral and rock fragments, (3) fine sand beach interspersed with larger boulders and (4) predominantly rock-covered beach. The collected hermit crabs were transferred to the laboratory and removed from their shell by carefully heating the apex of the shell above an open flame. This is a standard procedure when investigating hermit crabs and leaves the animal without injuries [27, 49]. Afterwards, each hermit crab and its corresponding shell were photographed on millimetre paper (Nikon D5000 mounted with Nikon AF-S Nikkor 18–105 mm, 1:3.5–5.6, Nikon Corp., Tokyo, Japan) and identified using identification keys [50,51,52,53,54]. The weight of the shell was measured using a fine scale (TS-300 300 g × 0.01 g, G&G GmbH, Neuss, Germany).
The carapace length of the hermit crabs and the morphometric parameters of their corresponding shells were determined using ImageJ 1.49b (Rasband, W.S., ImageJ, U. S. National Institutes of Health, Bethesda, Maryland, USA, http://imagej.nih.gov/ij/, 1997–2015). Shell length was measured from the shell's apex to the siphonal notch—if present—or otherwise to the lower end of the aperture. Shell width was measured perpendicular to the longitudinal axis of the shell at the broadest section. Shell aperture length was measured from the anterior to the posterior canal of the aperture, and aperture width was measured perpendicular to the aperture length between the outer lip and the columellar fold at the broadest section. Statistical analysis was performed using R 3.5.1 [55]. Differences in the number of shells utilized for a given shell species between C. rugosus and C. perlatus were tested for the four most abundant gastropod families in the plots, i.e. strombid shells (246 specimens), nassariid shells (196 specimens), cerithiid shells (166 specimens) and naticid shells (141 specimens; Fig. 2). Statistical comparisons of the number of utilized shells of each of the four shell types between the two collected hermit crab species were analysed using Fisher's exact test [56]. Levels of significance were adjusted using Bonferroni–Holm correction. The relative abundance of the two hermit crab species was calculated and statistically compared between the four investigated beach habitat types and between the eleven investigated coral islands using non-parametric multivariate analysis (PERMANOVA) with 999 permutations, implemented in the vegan package of R [57]. The diversity of shell species occupied by the two hermit crab species was calculated using the Shannon index H. Based on the number of inhabited shells from the two hermit crab species, the niche breadth (B) with respect to shell species inhabited was calculated using $$B = \frac{1}{\sum p_{i}^{2}}$$ where $p_i$ is the proportion of crabs (C. rugosus or C. perlatus) found in shells of the gastropod species $i$ [13]. The sizes of the two sampled hermit crab species were statistically compared using the Wilcoxon test. Fig. 2 The two co-occurring hermit crab species and the four most commonly utilized gastropod shell types. On the top, the two tested hermit crab species, Coenobita rugosus (a) and C. perlatus (b), and below the four different shell types utilized, i.e. nassariid (c; here depicted: Nassarius variciferus), naticid (d; here depicted Polinices mammilla), cerithiid (e; here depicted Rhinoclavis aspera) and strombid shells (f; here depicted Gibberulus gibberulus). 150 hermit crabs of each of the two species C. rugosus and C. perlatus and 150 cerithiid, nassariid, naticid and strombid shells were collected on the beaches of Naifaru, Lhaviyani (Faadhippolhu) Atoll, Republic of Maldives from 16/03 to 20/03/2017. The collected hermit crabs were transferred into the laboratory and removed from their shells. After removing the crab out of its shell, the carapace length was measured using a ruler and the size of the crab with its corresponding shell was noted. One hermit crab (without its shell) of a given size was then transferred into a 45-cm diameter test arena, filled 2 cm deep with sand from the adjacent beaches, and left to acclimatise for 5 min. After acclimatisation, two of the four tested shell types were placed next to each other at a random place inside the test arena with the aperture facing upwards.
For each tested hermit crab of a given size, two empty gastropod shells were presented that were formerly inhabited by a hermit crab with the same size of the one tested in the arena (e.g. a 1 cm-sized hermit crab was offered two shells that were formerly inhabited by 1 cm-sized crabs). This procedure was conducted to ensure that both presented shells were principally utilizable for the tested hermit crab of a given size. For C. rugosus and C. perlatus each combination of two shell species (strombid vs. naticid, strombid vs. nassariid, strombid vs. cerithiid, naticid vs. nassariid, naticid vs. cerithiid, nassariid vs. cerithiid) was tested 25 times (n = 25). One hour after presenting the two empty gastropod shells, the utilized shell type was noted and the hermit crab together with both shells transferred back to its original habitat. If no shell had been utilized by the tested hermit crab after 1 h, the experiment was terminated and the crab, as well as both shells, excluded from the experiment and transferred back to the original habitat. The carapace lengths between the two tested hermit crab species was statistically compared using the Wilcoxon test. Preferences for the investigated shell species, between the two hermit crab species were analysed using Fisher's exact test. Levels of significance were adjusted using Bonferroni–Holm-correction. Differences in the five morphometric parameters between the four different gastropod types and the two hermit crab species were compared using non-parametric multivariate analysis (PERMANOVA) with 999 permutations. One principal component analysis (PCA) was performed with log-transformed values of the five morphometric parameters. Statistical differences between the principal components of the four shell types and the two hermit crab species were analysed using ANOVA and Tukey HSD post hoc tests. The datasets generated during this study are available from the corresponding author on reasonable request. Barnes DKA. Local, regional and global patterns of resource use in ecology: hermit crabs and gastropod shells as an example. Mar Ecol Prog Ser. 2003;246:211–23. Birch LC. The meanings of competition. Am Nat. 1957;91:5–18. Klomp H. The concepts "similar ecology" and "competition" in animal ecology. Arch Neerl Zool. 1961;14:90–102. Abrams P. Shell selection and utilization in a terrestrial hermit crab, Coenobita compressus (H. Milne Edwards). Oecologia. 1978;34:239–53. Roughgarden J. Resource partitioning among competing species—a coevolutionary approach. Theor Popul Biol. 1976;9:388–424. Schoener TW. Resource partitioning in ecological communities. Science (80-). 1974;185:27–39. Hardin G. The competitive exclusion principle. Science (80-). 1960;131:1292–7. Gherardi F, Nardone F. The question of coexistence in hermit crabs: population ecology of a tropical intertidal assemblage. Crustaceana. 1997;70:608–29. MacArthur R, Levins R. Competition, habitat selection, and character displacement in a patchy environment. PNAS. 1964;51:1207–10. Vance RR. Competition and mechanism of coexistence in three sympatric species of intertidal hermit crabs. Ecology. 1972;53:1062–74. Abrams PA. Resource partitioning and interspecific competition in a tropical hermit crab community. Oecologia. 1980;46:365–79. Fotheringham N. Hermit crab shells as a limiting resource (Decapoda, Paguridea). Crustaceana. 1976;31:193–9. Hazlett BA. Interspecific shell fighting in three sympatric species of hermit crabs in Hawaii. Pac Sci. 1970;24:472–82. Hazlett BA. 
The behavioral ecology of hermit crabs. Annu Rev Ecol Syst. 1981;12:1–22. Kavita J. Spatial and temporal variations in population dynamics of few key rocky intertidal macrofauna at tourism influenced intertidal shorelines. Saurashtra University; 2010. Bertness MD. The influence of shell-type on hermit crab growth rate and clutch size (Decapoda, Anomura). Crustaceana. 1981;40:197–205. Borjesson DL, Szelistowski WA. Shell selection, utilization and predation in the hermit crab Clibanarius panamensis stimpson in a tropical mangrove estuary. J Exp Mar Biol Ecol. 1989;133:213–28. Vance RR. The role of shell adequacy in behavioral interactions involving hermit crabs. Ecology. 1972;53:1075–83. Völker L. Zur Gehäusewahl des Land-Einsiedlerkrebses Coenobita scaevola Forskal vom Roten Meer. J Exp Mar Biol Ecol. 1967;1:168–90. Reese ES. Behavioral adaptations of intertidal hermit crabs. Am Sci. 1969;9:343–55. Bach C, Hazlett BA, Rittschof D. Effects of interspecific competition on fitness of the hermit crab Clibanarius tricolor. Ecology. 1976;57:579–86. Childress JR. Behavioral ecology and fitness theory in a tropical hermit crab. Ecology. 1972;53:960–4. Grant WC, Ulmer KM. Shell selection and aggressive behavior in two sympatric species of hermit crabs. Biol Bull. 1974;146:32–43. https://doi.org/10.2307/1540395. Scully EP. The effects of gastropod shell availability and habitat characteristics on shell utilization by the intertidal hermit crab Pagurus longicarpus Say. J Exp Mar Biol Ecol. 1979;37:139–52. Bertness MD. Shell preference and utilization patterns in littoral hermit crabs of the bay of Panama. J Exp Mar Biol Ecol. 1980;48:1–16. Gherardi F, McLaughlin PA. Shallow-water hermit crabs (Crustacea: Decapoda: Anomura: Paguridea) from Mauritius and Rodrigues Islands, with the description of a new species of Calcinus. Raffles Bull Zool. 1994;42:613–56. Reddy T, Biseswar R. Patterns of shell utilization in two sympatric species of hermit crabs from the Natal Coast (Decapoda, Anomura, Diogenidae). Crustaceana. 1993;65:13–24. Blackstone NW. The effects of shell size and shape on growth and form in the hermit crab Pagurus longicarpus. Biol Bull. 1985;168:75–90. Wilber TPJ, Herrnkind W. Rate of new shell acquisition by hermit crabs in a salt marsh habitat. J Crustac Biol. 1982;2:588–92. Mitchell KA. Shell selection in the hermit crab Pagurus bernhardus. Mar Biol. 1976;35:335–43. Hsu C-H, Soong K. Mechanisms causing size differences of the land hermit crab Coenobita rugosus among eco-islands in Southern Taiwan. PLoS ONE. 2017;12:e0174319. https://doi.org/10.1371/journal.pone.0174319. Nigro KM, Hathaway SA, Wegmann AS, Miller-ter Kuile A, Fisher RN, Young HS. Stable isotope analysis as an early monitoring tool for community-scale effects of rat eradication. Restor Ecol. 2017;25:1015–25. Page HM, Willason SW. Distribution patterns of terrestrial hermit crabs at Enewetak Atoll, Marshall Islands. Pac Sci. 1982;36:107–17. http://scholarspace.manoa.hawaii.edu/handle/10125/412. Sallam WS, Mantelatto FL, Hanafy MH. Shell utilization by the land hermit crab Coenobita scaevola (Anomura, Coenobitidae) from Wadi El-Gemal, Red Sea. Belgian J Zool. 2008;138:13–9. Willason SW, Page HM. Patterns of shell resource utilization by terrestrial hermit crabs at Enewetak Atoll, Marhsall Islands. Pac Sci. 1983;37:157–64. Greenaway P. Terrestrial adaptations in the Anomura (Crustacea: Decapoda). Mem Mus Vic. 2003;60:13–26. Kadmon R, Allouche O. 
Integrating the effects of area, isolation, and habitat heterogeneity on species diversity: a unification of island biogeography and niche theory. Am Nat. 2007;170:443–54. https://doi.org/10.1086/519853. McMahon BR, Burggren WW. Respiration and adaptation to the terrestrial habitat in the land hermit crab Coenobita clypeatus. J Exp Biol. 1979;79:265–81. Morrison LW, Spiller DA. Land hermit crab (Coenobita clypeatus) densities and patterns of gastropod shell use on small Bahamian islands. J Biogeogr. 2006;33:314–22. Barnes DKA. Hermit crabs, humans and Mozambique mangroves. Afr J Ecol. 2001;39:241–8. Burggren WW, McMahon BR. Biology of the land crabs. Cambridge: Cambridge University Press; 1988. Gross WJ. Water balance in anomuran land crabs on a dry atoll. Biol Bull. 1964;126:54–68. Hsu C-H, Otte ML, Liu C-C, Chou J-Y, Fang W-T. What are the sympatric mechanisms for three species of terrestrial hermit crab (Coenobita rugosus, C. brevimanus, and C. cavipes) in coastal forests ? PLoS ONE. 2018;13:e0207640. Vannini M. Researches on the coast of Somalia. The shore and the dune of Sar Uanle 10. Sandy beach decapods. Monit Zool Ital. 1976;8:255–86. Barnes DKA. Ecology of tropical hermit crabs at Quirimba Island, Mozambique: shell characteristics and utilisation. Mar Ecol Prog Ser. 1999;183:241–51. Lively CM. A graphical model for shell-species selection by hermit crabs. Ecology. 1988;69:1233–8. Bertness MD. Conflicting advantages in resource utilization: the hermit crab housing dilemma. Am Nat. 1981;118:432–7. Conover MR. The importance of various shell characteristics to the shell-selection behavior of hermit crabs. J Exp Mar Biol Ecol. 1978;32:131–42. Bertness MD. Predation, physical stress, and the organization of a tropical rocky intertidal hermit crab community. Ecology. 1981;62:411–25. https://doi.org/10.2307/1936715. Abbott RT, Dance SP. Compendium of seashells. New York: E.P. Dutton Inc.; 1983. Bosch DT, Dance SP, Moolenbeek RG, Oliver PG. Seashells of Eastern Arabia. Dubai: Motivate Publishing; 1995. Hogarth P, Gherardi F, McLaughlin PA. Hermit crabs (Crustacea Decapoda Anomura) of the Maldives with the description of a new species of Catapagurus A. Milne Edwards 1880. Trop Zool. 1998;11:149–75. https://doi.org/10.1080/03946975.1998.10539358. Okutani T. Marine mollusks in Japan. Tokyo: Tokai University Press; 2000. Steger J, Jambura PL, Mähnert B, Zuschin M. Diversity, size frequency distribution and trophic structure of the macromollusc fauna of Vavvaru Island (Faadhippolhu Atoll, northern Maldives). Ann des naturhistorischen Museums Wien. 2017;119:17–54. Team R. R: a language and environment for statistical computing. 2013. https://www.r-project.org/. Fisher RA. The logic of inductive inference. J R Stat Soc. 1935;98:39–82. Oksanen J. Multivariate analysis of ecological communities in R: vegan tutorial. R Doc. 2015:43. We thank the "Atoll Marine Centre" and "Naifaru Juvenile" for accommodation during the field sampling and Mr. Enrico Schwabe (Zoologische Staatssammlung München) for helping to identify the gastropod shells. Financial support for accommodation during the field study by the "Max Weber-Programm"-scholarship. The funding body played no role in design of the study, data collection, analysis interpretation of data and writing the manuscript. Department Animal Ecology I, University of Bayreuth and BayCEER, Universitaetsstr. 30, 95440, Bayreuth, Germany Sebastian Steibl & Christian Laforsch Sebastian Steibl Christian Laforsch SS and CL designed the study. 
Geometric Progression Calculator

Geometric Progression - work with steps

Number of Terms ($n$) = 10
First Term ($a$) = 2
Common Ratio ($r$) = 2

Find the $n^{th}$ term and the $n^{th}$ partial sum of the geometric series.

$n^{th}$ Term: $T_n = a\cdot r^{n-1}$
$n^{th}$ Partial Sum: $S_n = \frac{a(r^n-1)}{r-1}$

$T_{10} = 2\cdot 2^{10-1} = 2\cdot 2^{9} = 2\cdot 512 = 1024$

$S_{10} = \frac{2(2^{10}-1)}{2-1} = \frac{2(1024-1)}{1} = \frac{2\cdot 1023}{1} = 2046$

$n^{th}$ Term = 1024
$n^{th}$ Partial Sum = 2046

The geometric progression calculator calculates the $n^{th}$ term and the $n^{th}$ partial sum of a geometric progression. It is necessary to follow the next steps: enter the number of terms of the geometric progression in the box (this value must be a positive integer); enter the first term and the common ratio in the boxes (the first term can be a real number or a variable, and the common ratio must be a nonzero real number); press the "Generate Work" button to make the computation. The calculator will then give the $n^{th}$ term and the $n^{th}$ partial sum of the geometric progression.

Input: There are three inputs: the number of terms, the first term, and the common ratio of a geometric progression. The number of terms must be a positive integer, the first term can be a real number or a variable, and the common ratio must be a nonzero real number.

Output: The outputs (the $n^{th}$ term and the sum of the first $n$ terms of the geometric progression) are expressed in terms of real numbers or variables.

$n^{th}$ term of a geometric progression:

Explicit Formula: The $n^{th}$ term, $g_n$, of a geometric progression $(g_n)_{n\in N}$ is $$g_n = g_1 r^{n-1},\;\mbox{for}\;n\geq2$$ where $r\ne 0$ is the common ratio and $g_1$ is the initial term of the geometric progression.

Recurrent Formula: The $n^{th}$ term of a geometric progression $(g_n)_{n\in N}$ is $$g_n = g_{n-1} r,\;\mbox{for}\;n\geq2$$ where $r\ne 0$ is the common ratio and $g_{n-1}$ is the $(n-1)^{th}$ term of the geometric progression.

Formula for the $n^{th}$ partial sum of a geometric progression: The sum of the first $n$ terms, $S_n$, of a geometric progression $(g_n)_{n\in N}$ is $$S_n=\left\{ \begin{array}{ll} \frac{g_1(1-r^n)}{1-r}, & r\ne1; \\ n g_1, & r=1 \end{array} \right.$$ where $g_1$ and $r$ are the initial term and the common ratio of the geometric progression, respectively, and $n$ is the number of terms in the geometric progression.
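These two formulas, together with the worked example above, translate directly into a few lines of code. The following Python sketch (the function names are illustrative, not part of the calculator itself) reproduces the $n = 10$, $a = 2$, $r = 2$ computation:

```python
def nth_term(a, r, n):
    """n-th term of a geometric progression: T_n = a * r^(n - 1)."""
    return a * r ** (n - 1)

def partial_sum(a, r, n):
    """Sum of the first n terms: a*(r^n - 1)/(r - 1) when r != 1, otherwise n*a."""
    if r == 1:
        return n * a
    return a * (r ** n - 1) / (r - 1)

# Worked example from the page: n = 10, a = 2, r = 2
print(nth_term(2, 2, 10))     # 1024
print(partial_sum(2, 2, 10))  # 2046.0
```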
What is a Geometric Progression? A geometric progression $(g_n)_{n\in N}$, or geometric sequence, is a sequence of real numbers or variables in which each term is obtained from the preceding one by multiplying by a fixed nonzero real number. The first term of a geometric progression is denoted by $g_1$, the second term by $g_2$, and so on, up to the $n^{th}$ term $g_n$. This means that any geometric progression $(g_n)_{n\in N}$ has the form $$g_1, g_1r, g_1r^2, g_1r^3, g_1r^4,\ldots$$ where the nonzero constant $r$ is the common ratio and the first term $g_1$ is called the initial term. Note that the common ratio $r$ cannot be zero. Conversely, a progression $(g_n)_{n\in N}$ is a geometric progression with common ratio $r$ if the ratios between consecutive terms are all equal, i.e. $$\frac{g_2}{g_1}=\frac{g_3}{g_2}=\ldots=\frac{g_n}{g_{n-1}}=r$$

If $r>1$, then the geometric progression is an increasing progression and it holds that $$g_1\lt g_2\lt \ldots\lt g_{n-1}\lt g_n$$ If $0\lt r\lt 1$, then the geometric progression is a decreasing progression and it holds that $$g_1>g_2>\ldots>g_{n-1}>g_n$$ If $r=1$, then the geometric progression is a constant progression and it holds that $$g_1=g_2=\ldots=g_{n-1}=g_n$$ The constant progression is the only progression that is both arithmetic and geometric.

The terms between two nonconsecutive terms of a geometric progression $(g_n)_{n\in N}$ are called the geometric means of these terms. For example, the geometric means between $g_1$ and $g_5$ are $g_2$, $g_3$ and $g_4$. If two nonconsecutive terms of a geometric progression $(g_n)_{n\in N}$ and the number of geometric means between them are given, then the geometric progression is fully determined.

A geometric series is the sum of the terms of a geometric progression. A geometric series can be finite or infinite. The $n^{th}$ partial sum, usually denoted by the symbol $S_n$, represents the sum of the first $n$ terms of a series. This means that the finite geometric series is the sum $$S_n=g_1+g_1r+g_1r^2+\ldots+g_1r^{n-1},$$ where $g_1$ is the initial term and $r\ne0$ is the common ratio.

How to Calculate the $n^{th}$ Term or the Sum of $n$ Terms of a Geometric Progression? The $n^{th}$ term of a geometric progression $(g_n)_{n\in N}$ can be defined recursively. By definition, the $n^{th}$ term $g_n$ is equal to $g_{n-1}r$, where $g_{n-1}$ is the $(n-1)^{th}$ term and $r$ is the common ratio. Therefore, $$g_n=g_{n-1}r$$ Since successive terms of a geometric progression can be determined as the product of the common ratio $r$ and the previous term, it follows that each term can be determined as the product of $g_1$ and a corresponding power of $r$. The formula for the $n^{th}$ term of a geometric progression $(g_n)_{n\in N}$ is $$g_n=g_{n-1}\cdot r=g_{n-2}\cdot r\cdot r=\ldots=g_{1}\cdot\underset{n-1}{\underbrace{r\cdot r\cdot\ldots\cdot r}}=g_1r^{n-1}$$ Hence, the $n^{th}$ term of a geometric progression can also be determined from the first term $g_1$ and the common ratio $r$. Any two terms $g_n$ and $g_m$ $(n>m>0)$ of a geometric progression $(g_n)_{n\in N}$ are related by the formula $$g_n=g_mr^{n-m}$$

To develop a formula for the $n^{th}$ partial sum of a finite geometric series, the series can be written in the following way: $$S_n=g_1+g_1r+g_1r^2+\ldots+g_1r^{n-1}$$ Multiplying this equation by $r\ne 0$ gives $$S_n\cdot r =g_1r+g_1r^2+g_1r^3+\ldots+g_1r^{n-1}+g_1r^{n}$$ By subtracting these two equations, all the middle terms cancel and we obtain the formula for the sum of the first $n$ terms of a geometric progression with first term $g_1$ and common ratio $r$: $$S_n=\left\{ \begin{array}{ll} \frac{g_1(1-r^n)}{1-r}, & r\ne1; \\ n g_1, & r=1 \end{array} \right.$$

If a series has an infinite number of terms, it is an infinite series. The sum of the first $n$ terms of an infinite series is the $n^{th}$ partial sum of the series, $S_n$. In the following way we can check whether an infinite geometric series converges or diverges: if $|r|<1$, the series converges and we can find its sum, which is a finite number; if $|r|\geq1$, the series diverges and its sum is not a finite number.
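The recursive and explicit definitions, and the convergence statement for $|r|<1$, can be checked numerically. Here is a minimal Python sketch; the values $g_1=3$, $r=0.5$ are chosen purely for illustration, and the limit $g_1/(1-r)$ used in the last line is the standard sum of a convergent infinite geometric series (a fact this page does not state explicitly):

```python
# Recursive rule g_n = g_{n-1} * r versus the closed form g_n = g_1 * r^(n - 1)
def terms_recursive(g1, r, n):
    terms = [g1]
    for _ in range(n - 1):
        terms.append(terms[-1] * r)  # each term is the previous one times r
    return terms

g1, r, n = 3, 0.5, 12
recursive = terms_recursive(g1, r, n)
explicit = [g1 * r ** (k - 1) for k in range(1, n + 1)]
assert all(abs(x - y) < 1e-12 for x, y in zip(recursive, explicit))

# For |r| < 1 the partial sums S_k = g1*(1 - r^k)/(1 - r) approach g1/(1 - r)
partial_sums = [g1 * (1 - r ** k) / (1 - r) for k in range(1, n + 1)]
print(partial_sums[-1])  # 5.99853515625, already close to the limit
print(g1 / (1 - r))      # 6.0, the sum of the infinite series
```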
The geometric progression work with steps shows the complete step-by-step calculation for finding the $n^{th}$ term and the $n^{th}$ partial sum of a geometric progression in which there are $10$ terms, the first term is $2$, and the common ratio is $2$. For any other combination of the number of terms, the first term, and the common ratio, just supply the numbers as inputs and click on the "GENERATE WORK" button. Grade school students may use this geometric progression calculator to generate the work, verify their results, or do their homework problems efficiently.

Real World Problems Using Geometric Progression

A widespread application of geometric progression can be found in financial mathematics. Banks and financial companies use geometric progressions to determine earnings in accounts or how much to charge for loans. For example, if we deposit $\$100$ at the bank and the bank offers an annual return of $3\%$ on the investment, the deposited sum increases by $3\%$ each year. The balances at the end of each year form a geometric progression: the $n^{th}$ term of this progression satisfies $g_n=1.03g_{n-1}$ and the initial term is $g_1=100$.

A geometric progression can represent growth or decay. If the common ratio $r$ is greater than $1$, $r>1$, then a geometric progression may model growth, for instance population growth. If the common ratio $r$ is positive and less than $1$, $0\lt r\lt 1$, then a geometric progression may model decay. For example, suppose that a new car loses one-fifth of its value each year. What is the value of this car after 3 years if it costs $\$30,000$ now? (Both of these examples are checked numerically in the short sketch after the practice problems below.)

Geometric progressions and exponential functions are closely related. The difference between the two concepts is that a geometric progression is discrete while an exponential function is continuous. For example, if we consider the geometric sequence $g_n=\frac{1}{3}\cdot {2}^n$ and the exponential function $f(x)=\frac{1}{3}\cdot 2^x,\;x>0$, the terms of the sequence are exactly the values of the function at the positive integers, so the points of the sequence lie on the graph of the function.

Geometric Progression Practice Problems

Practice Problem 1: We have $\$12$ and go to the bank to deposit money. The bank gives us the following option: the first month we receive $\$18$, the second month we receive $\$27$, and so on. How much money will we receive after $10$ months?

Practice Problem 2: Given the sequence defined by the recurrence relation $g_{n+1}=6g_n$ with $g_1=3$, find the sum of the first $10$ terms of the geometric sequence.

The geometric progression calculator, the formulas for the $n^{th}$ term of a geometric sequence and the sum of its first $n$ terms, the example calculation (work with steps), the real-world problems, and the practice problems should be very useful for grade school students (K-12 education) in studying series and sequences and in solving problems in banking, biology, and other real-life fields.
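Returning to the two real-world examples above, the bank deposit and the depreciating car, a few lines of Python make the numbers concrete. This is only a sketch under the assumptions stated in the text (a $\$100$ deposit at $3\%$ per year, and a $\$30,000$ car losing one-fifth of its value per year):

```python
# Bank example: initial deposit g_1 = 100 and g_n = 1.03 * g_{n-1}, i.e. r = 1.03
balances = [round(100 * 1.03 ** k, 2) for k in range(6)]  # balance after 0..5 years
print(balances)  # [100.0, 103.0, 106.09, 109.27, 112.55, 115.93]

# Car example: the car keeps four fifths of its value each year, so r = 4/5
value_after_3_years = 30000 * (4 / 5) ** 3
print(value_after_3_years)  # 15360.0
```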